arXiv:2302.11041v1 (http://arxiv.org/abs/2302.11041v1)

Petr Hájek, Andrés Quilis

Published: 2023-02-21
# Counterexamples in Rotundity of norms in Banach spaces

###### Abstract.

We study several classical concepts in the topic of strict convexity of norms in infinite dimensional Banach spaces. Specifically, and in descending order of strength, we deal with Uniform Rotundity (UR), Weak Uniform Rotundity (WUR) and Uniform Rotundity in Every Direction (URED). Our first three results show that we may distinguish between all of these three properties in every Banach space where such renormings are possible. Specifically, we show that in every infinite dimensional Banach space which admits a WUR (resp. URED) renorming, we can find a norm with the same condition and which moreover fails to be UR (resp. WUR). We prove that these norms can be constructed to be Locally Uniformly Rotund (LUR) in Banach spaces admitting such renormings. Additionally, we obtain that in every Banach space with a LUR norm we can find a LUR renorming which is not URED. These results solve three open problems posed by A.J. Guirao, V. Montesinos and V. Zizler. The classes of norms we construct in this first part are dense. In the last part of this note, we solve a fourth question posed by the same three authors by constructing a \(C^{\infty}\)-smooth norm in \(c_{0}\) whose dual norm is not strictly convex.

Key words and phrases: Strict convexity, uniformly rotund norms, weakly uniformly rotund norms, uniformly rotund in every direction norms, higher order smoothness

2020 Mathematics Subject Classification: 46B03, 46B10

## 1. Introduction

In this article, we obtain four main results related to convexity and smoothness of renormings in infinite dimensional Banach spaces. These theorems answer four open questions posed in the recently published monograph [1]. The first three results deal with different strengthenings of strict convexity (or rotundity). In particular, we work with the following classical concepts:

**Definition 1.1**.: Let \(X\) be a Banach space and let \(\|\cdot\|\) be a norm in \(X\). Denote by \(B_{\|\cdot\|}\) and \(S_{\|\cdot\|}\) the unit ball of \(X\) and the unit sphere of \(X\) respectively in the norm \(\|\cdot\|\).

* The norm \(\|\cdot\|\) is _uniformly rotund (UR)_ if for every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(B_{\|\cdot\|}\) such that \(\left\|\frac{x_{n}+y_{n}}{2}\right\|\to 1\) we have that \(\|y_{n}-x_{n}\|\to 0\).
* The norm \(\|\cdot\|\) is _weakly uniformly rotund (WUR)_ if for every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(B_{\|\cdot\|}\) such that \(\left\|\frac{x_{n}+y_{n}}{2}\right\|\to 1\) we have that \(y_{n}-x_{n}\to 0\) in the weak topology of \(X\).
* The norm \(\|\cdot\|\) is _uniformly rotund in every direction (URED)_ if for every \(v\in X\setminus\{0\}\) and every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(B_{\|\cdot\|}\) such that \(y_{n}-x_{n}\in\operatorname{span}(v)\) and \(\left\|\frac{x_{n}+y_{n}}{2}\right\|\to 1\) we have that \(\|y_{n}-x_{n}\|\to 0\).
* The norm \(\|\cdot\|\) is _locally uniformly rotund (LUR)_ at a point \(x\in S_{\|\cdot\|}\) if for every sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(B_{\|\cdot\|}\) such that \(\left\|\frac{x+y_{n}}{2}\right\|\to 1\) we have that \(\|y_{n}-x\|\to 0\); it is LUR if it is LUR at every point of \(S_{\|\cdot\|}\).
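As a standard illustration of these notions (recorded here for orientation, and not taken from the original text): in a Hilbert space, the parallelogram law gives, for the canonical norm,

\[2\|x\|^{2}+2\|y\|^{2}-\|x+y\|^{2}=\|x-y\|^{2},\]

so if \(\|x_{n}\|,\|y_{n}\|\leq 1\) and \(\left\|\frac{x_{n}+y_{n}}{2}\right\|\to 1\), then \(\|x_{n}-y_{n}\|^{2}\leq 4-\|x_{n}+y_{n}\|^{2}\to 0\). Hence every Hilbert space norm is UR, and a fortiori WUR, URED and LUR.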
Our first three main results read as follows:

**Theorem A**.: _Let \(X\) be an infinite dimensional Banach space with an LUR norm. Then there exists an equivalent norm in \(X\) which is LUR and fails to be URED. Moreover, the class of norms with this property is dense._

**Theorem B**.: _Let \(X\) be an infinite dimensional Banach space with a URED norm. Then there exists an equivalent norm in \(X\) which is URED and not WUR. If \(X\) admits a LUR norm, then this norm can also be taken to be LUR. Moreover, the class of norms with this property is dense._

**Theorem C**.: _Let \(X\) be an infinite dimensional superreflexive Banach space. Then there exists an equivalent norm in \(X\) which is LUR and WUR but not UR. Moreover, the class of norms with this property is dense._

As mentioned above, these three theorems answer three questions in [1], specifically Questions 52.3.4, 52.3.7 and 52.3.1 respectively (page 500). Notice as well that, by duality, Theorem C implies that in every superreflexive space we may approximate every norm by a Fréchet smooth norm which is uniformly Gâteaux smooth but fails to be uniformly Fréchet smooth. This answers Question 52.1.2.4 of [1] as well, which was already solved differently in [15] by constructing a Fréchet differentiable norm which fails to be uniformly Gâteaux smooth.

The renormings we construct to prove Theorems A, B and C come from a single method, applied with varying parameters to obtain the desired properties in each situation. Intuitively, we build these renormings by first defining a countable family of norms with the following property: their unit sphere coincides with the original unit sphere except in a particular slice, where the new sphere contains a strictly convex approximation of a segment in a given direction, which may be different for each of the countably many norms. By suitably combining the countable family of norms, we obtain a renorming which fails to be UR, WUR or URED while satisfying a weaker rotundity condition, depending on the choice of the directions in which the approximated segments appear. The idea of using countably many norms which differ from the original norm only on a certain slice was also used in [10], where such a technique is applied to the construction of norms with specific smoothness properties.

To finish the introduction, let us discuss the fourth and last of the main theorems of this article, which we state now:

**Theorem D**.: _There exists a \(C^{\infty}\)-smooth norm on \(c_{0}\) such that the dual norm is not strictly convex. Moreover, the norm \(\|\cdot\|_{\infty}\) can be uniformly approximated on bounded sets by norms with this property._

This result answers in particular part (i) of Question 139 in [10] (part (ii) was already solved in [10]), which is posed again as Question 52.1.4.6 in [10] (page 498). To put it in context, recall that by a classical Šmulyan result, a norm \(\|\cdot\|\) in any Banach space is Gâteaux differentiable as soon as the dual norm \(\|\cdot\|^{*}\) is strictly convex. It is known that the converse is not true in general: indeed, by a result of V. Klee [11] (Proposition 3.3), a Gâteaux differentiable norm with non strictly convex dual can be constructed in every separable non-reflexive Banach space. For such spaces which additionally have a separable dual (such as \(c_{0}\)), this norm can be taken to be Fréchet differentiable, as shown by A.J. Guirao, V. Montesinos and V. Zizler in [10]. Even in reflexive spaces, where every Gâteaux differentiable norm does have a strictly convex dual norm, a classical result of D. Yost [11] shows that we may construct Fréchet differentiable norms whose dual norm is not LUR.

The construction of the norm in the proof of Theorem D is based on the \(C^{\infty}\)-smooth approximation of certain \(n\)-dimensional polyhedra constructed inductively.

Let us now briefly discuss the structure of this article. In section 2 we set the notation to be used throughout the rest of the note, and we recall some more definitions and preliminary results. In section 3 we lay out the construction of the renormings in order to prove Theorems A, B and C regarding rotundity.
Finally, section 4 is dedicated to proving Theorem D about a \(C^{\infty}\)-smooth norm in \(c_{0}\) with non strictly convex dual norm.

## 2. Notation and Preliminary results

We write \(B_{\|\cdot\|}\) and \(S_{\|\cdot\|}\) to denote the unit ball and the unit sphere of a Banach space with respect to the norm \(\|\cdot\|\). We use the definitions of UR, WUR, URED and LUR given in the previous section. Additionally, we say that a norm \(\|\cdot\|\) in a Banach space \(X\) is _strictly convex_ if whenever \(x,y\in B_{\|\cdot\|}\) with \(x\neq y\) we have that \(\left\|\frac{x+y}{2}\right\|<1\). Regarding smoothness, we will use the standard definitions of differentiability in Banach spaces, which can be found, for instance, in [11].

Given a Banach space \(X\) and a class of equivalent norms \(\mathcal{S}\) defined on \(X\), we say that a norm \(\|\cdot\|\) is _uniformly approximated on bounded sets_ by norms in \(\mathcal{S}\) if for every \(\varepsilon>0\) there exists a norm \(|\!|\!|\cdot|\!|\!|\in\mathcal{S}\) such that \(|\!|\!|\cdot|\!|\!|\leq\|\cdot\|\leq(1+\varepsilon)|\!|\!|\cdot|\!|\!|\). If every norm in \(X\) can be uniformly approximated on bounded sets by norms in a class \(\mathcal{S}\), we say that the class \(\mathcal{S}\) is _dense_.

Following the notation of [10], given a norm \(\|\cdot\|\) in a Banach space \(X\) and two points \(x,y\in X\), we define the expression

\[Q_{\|\cdot\|}(x,y)=2\|x\|^{2}+2\|y\|^{2}-\|x+y\|^{2}\geq 0.\]

We will use the following remark with an elementary proof regarding this function:

**Remark 2.1**.: Let \(X\) be a Banach space. For every \(n\in\mathbb{N}\), let \(\|\cdot\|_{n}\) be an equivalent norm on \(X\), and let \(\{x_{n}\}_{n\in\mathbb{N}},\{y_{n}\}_{n\in\mathbb{N}}\subset X\) be two sequences such that \(\limsup_{n\to\infty}\|x_{n}\|_{n}\leq 1\), \(\limsup_{n\to\infty}\|y_{n}\|_{n}\leq 1\), and \(\left\|\frac{x_{n}+y_{n}}{2}\right\|_{n}\to 1\). Then \(Q_{\|\cdot\|_{n}}(x_{n},y_{n})\to 0\).
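Both the non-negativity of \(Q_{\|\cdot\|}\) and Remark 2.1 follow from the triangle inequality; for completeness, the short computation reads:

\[Q_{\|\cdot\|}(x,y)\geq 2\|x\|^{2}+2\|y\|^{2}-\big(\|x\|+\|y\|\big)^{2}=\big(\|x\|-\|y\|\big)^{2}\geq 0,\]

and, under the hypotheses of Remark 2.1, \(\limsup_{n\to\infty}\big(2\|x_{n}\|_{n}^{2}+2\|y_{n}\|_{n}^{2}\big)\leq 4\) while \(\|x_{n}+y_{n}\|_{n}^{2}=4\left\|\frac{x_{n}+y_{n}}{2}\right\|_{n}^{2}\to 4\), so \(\limsup_{n\to\infty}Q_{\|\cdot\|_{n}}(x_{n},y_{n})\leq 0\), and the non-negativity gives \(Q_{\|\cdot\|_{n}}(x_{n},y_{n})\to 0\).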
As suggested by the previous remark, the function \(Q_{\|\cdot\|}\) can be used to describe rotundity qualities of the norm \(\|\cdot\|\). In particular, we will use the following known characterizations:

**Proposition 2.2**.: _Let \(X\) be a Banach space. A norm \(\|\cdot\|\) is UR if and only if for every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(X\) such that \(Q_{\|\cdot\|}(x_{n},y_{n})\to 0\), we have that \(\|x_{n}-y_{n}\|\to 0\)._

**Proposition 2.3**.: _Let \(X\) be a Banach space. A norm \(\|\cdot\|\) is WUR if and only if for every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(X\) such that \(Q_{\|\cdot\|}(x_{n},y_{n})\to 0\), we have that \(x_{n}-y_{n}\to 0\) in the weak topology of \(X\)._

**Proposition 2.4**.: _Let \(X\) be a Banach space. A norm \(\|\cdot\|\) is URED if and only if for every \(v\in X\setminus\{0\}\), and every pair of sequences \(\{x_{n}\}_{n\in\mathbb{N}}\) and \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(X\) such that \(y_{n}-x_{n}\in\operatorname{span}(v)\) and \(Q_{\|\cdot\|}(x_{n},y_{n})\to 0\), we have that \(\|x_{n}-y_{n}\|\to 0\)._

**Proposition 2.5**.: _Let \(X\) be a Banach space. A norm \(\|\cdot\|\) is LUR at a point \(x\in X\) if and only if for every sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(X\) such that \(Q_{\|\cdot\|}(x,y_{n})\to 0\) we have that \(\|x-y_{n}\|\to 0\)._

We refer to [10] again for a thorough study of this function and a comprehensive list of characterizations of rotundity concepts it can be used for.

In section 3 we will define several renormings of the form \((a_{1}\|\cdot\|_{1}^{2}+a_{2}\|\cdot\|_{2}^{2})^{1/2}\), where \(\|\cdot\|_{1},\|\cdot\|_{2}\) are two equivalent norms in a Banach space \(X\), and \(a_{1},a_{2}\) are positive numbers. These renormings are a well known technique to obtain rotund approximations of a given norm. The key property they enjoy in this regard is the fact that

\[Q_{\big(a_{1}\|\cdot\|_{1}^{2}+a_{2}\|\cdot\|_{2}^{2}\big)^{1/2}}(x,y)=a_{1}Q_{\|\cdot\|_{1}}(x,y)+a_{2}Q_{\|\cdot\|_{2}}(x,y), \tag{1}\]

which, together with the characterizations we alluded to previously, shows that the renorming \((a_{1}\|\cdot\|_{1}^{2}+a_{2}\|\cdot\|_{2}^{2})^{1/2}\) inherits the strongest rotundity conditions of both \(\|\cdot\|_{1}\) and \(\|\cdot\|_{2}\). More precisely, with these observations we immediately obtain the following lemma:

**Lemma 2.6**.: _Let \(X\) be a Banach space, let \(\|\cdot\|_{1}\) and \(\|\cdot\|_{2}\) be two equivalent norms on \(X\), and let \(a_{1},a_{2}>0\). If \(\|\cdot\|_{1}\) is UR (resp. WUR, URED, LUR), then \((a_{1}\|\cdot\|_{1}^{2}+a_{2}\|\cdot\|_{2}^{2})^{1/2}\) is UR (resp. WUR, URED, LUR)._

Let us finish by stating another result which easily follows from the Propositions stated above, and whose proof we also omit.

**Lemma 2.7**.: _Let \(X\) be a Banach space, and let \(\{\|\cdot\|_{i}\}_{i=1}^{n}\) be a finite sequence of equivalent norms on \(X\). If \(\|\cdot\|_{i}\) is UR (resp. WUR, URED, LUR) for every \(i=1,\ldots,n\), then the equivalent norm \(\max_{i=1,\ldots,n}\{\|\cdot\|_{i}\}\) is UR (resp. WUR, URED, LUR)._

Notice that in Lemma 2.6 we only need rotundity of one of the norms, while in Lemma 2.7 it is necessary that all norms share the same property.
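Identity (1) is a direct expansion (a routine check, spelled out here for completeness): writing \(N(x)=\big(a_{1}\|x\|_{1}^{2}+a_{2}\|x\|_{2}^{2}\big)^{1/2}\), we have

\[Q_{N}(x,y)=2N(x)^{2}+2N(y)^{2}-N(x+y)^{2}=\sum_{i=1}^{2}a_{i}\big(2\|x\|_{i}^{2}+2\|y\|_{i}^{2}-\|x+y\|_{i}^{2}\big)=a_{1}Q_{\|\cdot\|_{1}}(x,y)+a_{2}Q_{\|\cdot\|_{2}}(x,y).\]

In particular, if \(Q_{N}(x_{n},y_{n})\to 0\), then \(Q_{\|\cdot\|_{1}}(x_{n},y_{n})\to 0\) and \(Q_{\|\cdot\|_{2}}(x_{n},y_{n})\to 0\) separately, since both summands are non-negative; this is exactly what Lemma 2.6 exploits.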
## 3. Rotundity

This section is divided into three further subsections. In the first subsection we define a norm in any Banach space which approximates a non strictly convex norm whose unit sphere contains a specific segment. The approximation is done in such a way that the resulting norm has the strongest rotundity qualities of the starting norm. In the second subsection we combine countably many of these norms to define the renorming \(|\!|\!|\cdot|\!|\!|_{\Omega}\), and in the third one we use it to prove Theorems A, B and C.

### The norm \(|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\)

Fix \(0<\delta<1\) and \(C>1\), and let \(\alpha=(x_{0},h_{0},f_{0},g_{0})\) be a \(4\)-tuple with \(x_{0}\in S_{\|\cdot\|}\), \(h_{0}\in X\) and \(f_{0},g_{0}\in X^{*}\) such that \(1\leq\|h_{0}\|,\|g_{0}\|^{*}\leq C\) and \(\|f_{0}\|^{*}\leq C\), satisfying

\[\langle f_{0},h_{0}\rangle=0,\qquad\langle g_{0},h_{0}\rangle=1,\qquad\langle f_{0},x_{0}\rangle\geq 1-\delta.\]

Let \(P_{\alpha}\colon X\to\ker(g_{0})\) be the linear projection given by \(P_{\alpha}x=x-\langle g_{0},x\rangle h_{0}\). Denote by \(\|\cdot\|_{g_{0}}\) the norm that \(\|\cdot\|\) induces on \(\ker(g_{0})\), and by \(|\!|\!|\cdot|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\) the Minkowski functional on \(\ker(g_{0})\) of a convex body \(\widetilde{B}_{\alpha,\delta}\subset\ker(g_{0})\) satisfying

\[(1-\delta)|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\leq\|y\|_{g_{0}}\leq(1+C^{2})|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\qquad\text{for every }y\in\ker(g_{0}).\]

Now, fix \(\varepsilon_{\delta,C}=\frac{1-(1-\delta)^{2}}{(1+C^{2})^{2}}>0\).
Then we have that

\[(1+\varepsilon_{\delta,C})^{1/2}(1-\delta)|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\leq\left((1-\delta)^{2}|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}^{2}+\varepsilon_{\delta,C}\|y\|_{g_{0}}^{2}\right)^{1/2}\leq|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}} \tag{2}\]

for every \(y\in\ker(g_{0})\). Indeed, after squaring, the first inequality in (2) reduces to \((1-\delta)|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\leq\|y\|_{g_{0}}\) and the second one to \(\|y\|_{g_{0}}\leq(1+C^{2})|\!|\!|y|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\), which explains the choice of the constant \(\varepsilon_{\delta,C}\). Finally, we define the norm:

\[|\!|\!|x|\!|\!|_{\alpha,\delta,C}=\max\left\{\|x\|,\left((1-\delta)^{2}|\!|\!|P_{\alpha}x|\!|\!|_{\widetilde{B}_{\alpha,\delta}}^{2}+\varepsilon_{\delta,C}\|P_{\alpha}x\|_{g_{0}}^{2}\right)^{1/2}\right\},\qquad\text{for all }x\in X. \tag{3}\]

Using equation (2) and the fact that \((1-\delta)^{2}|\!|\!|P_{\alpha}x|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\leq\|x\|\) for all \(x\in X\), we observe that this norm satisfies \(\|\cdot\|\leq|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\leq\frac{1}{(1-\delta)^{2}}\|\cdot\|\). Geometrically, the unit ball of \(|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\) is the intersection of the original ball \(B_{\|\cdot\|}\) with the cylinder generated in the direction of \(h_{0}\) by the unit ball of the norm \(\big((1-\delta)^{2}|\!|\!|\cdot|\!|\!|_{\widetilde{B}_{\alpha,\delta}}^{2}+\varepsilon_{\delta,C}\|\cdot\|_{g_{0}}^{2}\big)^{1/2}\) (defined in \(\ker(g_{0})\)).

Let us collect some properties of the norm \(|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\) which are easily shown.

**Lemma 3.1**.: _Let \(X\) be a Banach space, and let \(\delta\), \(C\), and \(\alpha\) be as above. The norm \(|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\) satisfies:_

* \((a)\) _If_ \(\lambda_{0}=\frac{1}{|\!|\!|x_{0}|\!|\!|_{\alpha,\delta,C}}\)_, we have that_ \((1-\delta)^{2}\leq\lambda_{0}\leq\frac{1}{(1+\varepsilon_{\delta,C})^{1/2}}\)_, where_ \(\varepsilon_{\delta,C}=\frac{1-(1-\delta)^{2}}{(1+C^{2})^{2}}>0\)_._
* \((b)\) _The unit sphere_ \(S_{|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}}\) _contains the segment_ \(\left[\lambda_{0}x_{0}-\frac{(1-\lambda_{0})}{C}h_{0},\lambda_{0}x_{0}+\frac{(1-\lambda_{0})}{C}h_{0}\right]\)_, where_ \(\lambda_{0}\) _is as in item_ \((a)\)_._
* \((c)\) \(|\!|\!|x|\!|\!|_{\alpha,\delta,C}=\|x\|\) _for all_ \(x\in X\) _with_ \(|\langle f_{0},x\rangle|\leq(1-\delta)^{2}\|x\|\)_._

Proof.: To obtain part \((a)\), first notice that \(|\!|\!|x_{0}|\!|\!|_{\alpha,\delta,C}\leq\frac{1}{(1-\delta)^{2}}\) and thus \(\lambda_{0}\geq(1-\delta)^{2}\). On the other hand, since \(\langle f_{0},P_{\alpha}x_{0}\rangle=\langle f_{0},x_{0}\rangle\geq 1-\delta\), we have by definition of \(|\!|\!|\cdot|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\) that \(|\!|\!|(1-\delta)P_{\alpha}x_{0}|\!|\!|_{\widetilde{B}_{\alpha,\delta}}\geq 1\). Then, we can apply equation (2) to obtain that \(\lambda_{0}\leq\frac{1}{(1+\varepsilon_{\delta,C})^{1/2}}\).

For item \((b)\), fix \(\mu\in\left[-\frac{(1-\lambda_{0})}{C},\frac{(1-\lambda_{0})}{C}\right]\). Since \(\langle g_{0},h_{0}\rangle=1\), we have that \(P_{\alpha}(\lambda_{0}x_{0})=P_{\alpha}(\lambda_{0}x_{0}+\mu h_{0})\). Moreover, since \(\|h_{0}\|\leq C\), the point \(\lambda_{0}x_{0}+\mu h_{0}\) belongs to the unit ball \(B_{\|\cdot\|}\), as \(\|\lambda_{0}x_{0}+\mu h_{0}\|\leq\lambda_{0}+\frac{1-\lambda_{0}}{C}\|h_{0}\|\leq 1\). It follows now from the definition of \(|\!|\!|\cdot|\!|\!|_{\alpha,\delta,C}\) that \(|\!|\!|\lambda_{0}x_{0}+\mu h_{0}|\!|\!|_{\alpha,\delta,C}=|\!|\!|\lambda_{0}x_{0}|\!|\!|_{\alpha,\delta,C}=1\).

We only need to check the equality in part \((c)\) for points in the unit sphere \(S_{\|\cdot\|}\). For every \(x\in X\) with \(\|x\|=1\) we have that \(|\!|\!|x|\!|\!|_{\alpha,\delta,C}\geq 1\).
If moreover \(|\langle f_{0},x\rangle|\leq(1-\delta)^{2}\), the point \(P_{\alpha}x\) belongs to \(\widetilde{B}_{\alpha,\delta}\), and thus by equation (2) it is clear that \(|\!|\!|x|\!|\!|_{\alpha,\delta,C}=1\).

### The norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\)

In this subsection we use countably many of the previous norms to define the renormings which prove the main theorems of the section. We define them abstractly using a tuple \(\Omega=(\delta,C,\{\alpha_{n}\}_{n\in\mathbb{N}},\{\eta_{n}\}_{n\in\mathbb{N}})\), where \(0<\delta<1\) with \((1-\delta)^{4}>\frac{1}{2}\), the constant \(C\) is bigger than \(1\), each \(\alpha_{n}=(x_{n},h_{n},f_{n},g_{n})\) is a \(4\)-tuple with \(x_{n}\in S_{\|\cdot\|}\), \(h_{n}\in X\) and \(f_{n},g_{n}\in X^{*}\) with \(1\leq\|h_{n}\|,\|g_{n}\|^{*}\leq C\) and \(\|f_{n}\|^{*}\leq C\), such that

\[\langle f_{n},h_{n}\rangle=0,\qquad\langle g_{n},h_{n}\rangle=1,\qquad\langle f_{n},x_{n}\rangle\geq 1-\delta\]

for every \(n\in\mathbb{N}\), and \(\{\eta_{n}\}_{n\in\mathbb{N}}\) is a decreasing sequence of strictly positive numbers converging to \(0\). Moreover, we also suppose that \(|\langle f_{m},x_{n}\rangle|<2(1-\delta)^{4}-1\) for every \(m\neq n\in\mathbb{N}\), and that for every \(x\in X\setminus\{0\}\) there exist an open neighbourhood \(U_{x}\) of \(x\) and \(n_{x}\in\mathbb{N}\) such that \(|\langle f_{n},z\rangle|\leq(1-\delta)^{2}\|z\|\) for all \(z\in U_{x}\) and \(n\geq n_{x}\).

Fix \(n\in\mathbb{N}\), and define \(|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}\) to be the equivalent norm in \(X\) we discussed in the previous subsection. Put \(\tau_{n}=\frac{(1+\eta_{n})^{2}-1}{(1+\eta_{n})^{2}}\), and set

\[|\!|\!|x|\!|\!|_{n}=\left(\frac{1}{(1+\eta_{n})^{2}}|\!|\!|x|\!|\!|_{\alpha_{n},\delta,C}^{2}+\tau_{n}\|x\|^{2}\right)^{1/2},\qquad\text{for all }x\in X. \tag{4}\]

This is an equivalent norm in \(X\), satisfying

\[\frac{1}{1+\eta_{n}}|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}\leq|\!|\!|\cdot|\!|\!|_{n}\leq|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}. \tag{5}\]

Moreover, by Lemma 2.6, if the norm \(\|\cdot\|\) is UR (resp. WUR, URED, LUR), then \(|\!|\!|\cdot|\!|\!|_{n}\) is also UR (resp. WUR, URED, LUR). The final renorming we will need is defined as follows:

\[|\!|\!|x|\!|\!|_{\Omega}=\max\left\{\|x\|,\max_{n\in\mathbb{N}}|\!|\!|x|\!|\!|_{n}\right\}\qquad\text{for all }x\in X. \tag{6}\]

Clearly \(|\!|\!|0|\!|\!|_{\Omega}=0\). Recall that given a point \(x\in X\setminus\{0\}\), we have that \(|\langle f_{n},z\rangle|\leq(1-\delta)^{2}\|z\|\) for all \(z\in U_{x}\) and all \(n\geq n_{x}\).
Then, item \((c)\) in Lemma 3.1 applied to each \(|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}\) for \(n\geq n_{x}\) implies that \(|\!|\!|z|\!|\!|_{n}\leq\|z\|\) for all \(z\) in the open neighbourhood \(U_{x}\) of \(x\). This means that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) coincides with the maximum of finitely many norms, \(\max\{\|\cdot\|,\max_{i=1,\ldots,n_{x}}|\!|\!|\cdot|\!|\!|_{i}\}\), in an open neighbourhood of \(x\). Therefore, \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is well defined, and if \(\|\cdot\|\) is LUR, then each \(|\!|\!|\cdot|\!|\!|_{n}\), and thus \(|\!|\!|\cdot|\!|\!|_{\Omega}\) as well, is LUR by Lemma 2.7. It is also straightforward to obtain that \(\|\cdot\|\leq|\!|\!|\cdot|\!|\!|_{\Omega}\leq\frac{1}{(1-\delta)^{2}}\|\cdot\|\).

Although the specific properties of the norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) depend on the choice of the sequence \(\{\alpha_{n}\}_{n\in\mathbb{N}}\) and the qualities of the original norm in the space \(X\), we can prove in general a statement showing that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) fails to be uniformly rotund using segments in the directions of the sequence \(\{h_{n}\}_{n\in\mathbb{N}}\):

**Lemma 3.2**.: _Let \((X,\|\cdot\|)\) be a Banach space, and let \(\Omega\) be as above. Put \(\lambda_{n}=\frac{1}{|\!|\!|x_{n}|\!|\!|_{\alpha_{n},\delta,C}}\) for every \(n\in\mathbb{N}\). Then, for every sequence \(\{z_{n}\}_{n\in\mathbb{N}}\) such that_

\[z_{n}\in\left[\lambda_{n}x_{n}-\frac{(1-\lambda_{n})}{C}h_{n},\lambda_{n}x_{n}+\frac{(1-\lambda_{n})}{C}h_{n}\right]\]

_for all \(n\in\mathbb{N}\), we have that \(|\!|\!|z_{n}|\!|\!|_{\Omega}\to 1\)._

Proof.: Fix \(n\in\mathbb{N}\). Using \((b)\) in Lemma 3.1 and the definition of \(|\!|\!|\cdot|\!|\!|_{n}\) above, we have that \(\frac{1}{1+\eta_{n}}\leq|\!|\!|z_{n}|\!|\!|_{n}\leq 1\). Hence, we obtain that \(|\!|\!|z_{n}|\!|\!|_{\Omega}\geq\frac{1}{1+\eta_{n}}\). Since \(\eta_{n}\to 0\) and \(\|z_{n}\|\leq 1\), the result will follow if we show that \(|\!|\!|z_{n}|\!|\!|_{m}\leq 1\) for all \(m\neq n\).

Fix \(m\neq n\). Observe that \(\lambda_{n}\geq(1-\delta)^{2}\) by \((a)\) in Lemma 3.1, and thus \(\|z_{n}\|\geq\lambda_{n}-\frac{1-\lambda_{n}}{C}\|h_{n}\|\geq 2\lambda_{n}-1\geq 2(1-\delta)^{2}-1\).
Then, using that \(|\langle f_{m},x_{n}\rangle|<2(1-\delta)^{4}-1\) and \(\lambda_{n}\leq 1\), we obtain:

\[|\langle f_{m},z_{n}\rangle|\leq|\langle f_{m},\lambda_{n}x_{n}\rangle|+|\langle f_{m},\lambda_{n}x_{n}-z_{n}\rangle|\leq 2(1-\delta)^{4}-(1-\delta)^{2}=(1-\delta)^{2}\big(2(1-\delta)^{2}-1\big)\leq(1-\delta)^{2}\|z_{n}\|.\]

Therefore, by \((c)\) in Lemma 3.1, it holds that \(|\!|\!|z_{n}|\!|\!|_{\alpha_{m},\delta,C}=\|z_{n}\|\leq 1\), from which it follows that \(|\!|\!|z_{n}|\!|\!|_{m}\leq 1\), and the proof is finished.

### Proofs of the Main Theorems of Section 3

Each of the three main theorems of the section will be proven using a different tuple \(\Omega\) to define the desired renorming. To aid in the definition of these elements, we use a general result on Banach spaces. The following lemma is a standard result with a short proof that we include for completeness. It is also used and proven in [10] (Claim 1).

**Lemma 3.3**.: _Let \((X,\|\cdot\|)\) be an infinite dimensional Banach space, and let \(\{\varepsilon_{n}\}_{n\in\mathbb{N}}\) be a decreasing sequence of positive numbers. Then there exist a sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|}\) and a weak\({}^{*}\) null sequence \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|^{*}}\) such that \(\langle\varphi_{n},y_{n}\rangle=1\) for all \(n\in\mathbb{N}\), and \(|\langle\varphi_{m},y_{n}\rangle|<\varepsilon_{\min\{m,n\}}\) for all \(m\neq n\in\mathbb{N}\)._

Proof.: Let \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) be a weak\({}^{*}\) null sequence in \(S_{\|\cdot\|^{*}}\), which exists by the Josefson-Nissenzweig theorem. We will inductively define a subsequence \(\{\varphi_{n_{k}}\}_{k\in\mathbb{N}}\) and a sequence \(\{y_{k}\}_{k\in\mathbb{N}}\) in \(S_{\|\cdot\|}\) such that

* \((i)\) \(\langle\varphi_{n_{k}},y_{k}\rangle\geq 1-2^{-k}\),
* \((ii)\) \(|\langle\varphi_{n_{j}},y_{i}\rangle|<\varepsilon_{n_{j}}/2\) for all \(i,j\leq k\) with \(i<j\),
* \((iii)\) \(\langle\varphi_{n_{i}},y_{j}\rangle=0\) for all \(i,j\leq k\) with \(i<j\).

Put \(n_{1}=1\) and choose \(y_{1}\in S_{\|\cdot\|}\) with \(\langle\varphi_{1},y_{1}\rangle\geq 1/2\), and suppose we have defined the first \(k\) terms of the two sequences. For each \(i\leq k\), define the linear projection \(P_{i}\colon X\to\operatorname{span}\{y_{i}\}\) given by \(P_{i}x=\frac{\langle\varphi_{n_{i}},x\rangle}{\langle\varphi_{n_{i}},y_{i}\rangle}y_{i}\). Setting \(T_{1}=P_{1}\), we may define inductively the linear projection \(T_{i+1}\colon X\to\operatorname{span}\{y_{1},\ldots,y_{i+1}\}\) by \(T_{i+1}=T_{i}+P_{i+1}\circ(I-T_{i})\) for all \(1\leq i\leq k-1\) (we need to use that \((iii)\) holds to show that this map is a linear projection). It can be shown as well that \(\langle\varphi_{n_{i}},(I-T_{k})x\rangle=0\) for all \(i\leq k\) and all \(x\in X\). We claim that there exists \(n_{k+1}\in\mathbb{N}\), which can be taken to be arbitrarily large, such that

\[\sup_{x\in S_{\|\cdot\|}}\langle\varphi_{n_{k+1}},(I-T_{k})x\rangle\geq 1-2^{-(k+1)}.\]

Indeed, otherwise, choosing a sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) in \(B_{\|\cdot\|}\) such that \(\langle\varphi_{n},x_{n}\rangle\geq 1-2^{-(k+2)}\), we would obtain another bounded sequence \(\{T_{k}x_{n}\}_{n\in\mathbb{N}}\) in the finite dimensional space \(\operatorname{span}\{y_{1},\ldots,y_{k}\}\) such that \(\langle\varphi_{n},T_{k}x_{n}\rangle\geq 2^{-(k+2)}\) holds for infinitely many \(n\in\mathbb{N}\).
Using the relative compactness of such a subsequence, we arrive at a contradiction with the fact that \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) is weak\({}^{*}\) null. We may take \(n_{k+1}\) large enough such that \(|\langle\varphi_{n_{k+1}},y_{i}\rangle|<\varepsilon_{n_{k+1}}/2\) for all \(i\leq k\). Now, choose \(y_{k+1}\in S_{\|\cdot\|}\) satisfying \(\langle\varphi_{n_{k+1}},y_{k+1}\rangle\geq 1-2^{-(k+1)}\) and \(\langle\varphi_{n_{i}},y_{k+1}\rangle=0\) for all \(i\leq k\). This finishes the induction. We can now apply the Bishop-Phelps-Bollobás theorem (as stated, e.g., on page 376 of [4]) to the sequences \(\{\varphi_{n_{k}}\}_{k\in\mathbb{N}}\) and \(\{y_{k}\}_{k\in\mathbb{N}}\) to obtain the desired result.

We proceed to the proofs of the main theorems of the section. The proof of Theorem A is the most straightforward of the three. We will construct a tuple \(\Omega\) of the above form such that the sequence \(\{h_{n}\}_{n\in\mathbb{N}}\) is actually constantly equal to a fixed vector \(h_{0}\), which will be the direction in which the norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) fails to be uniformly rotund, and thus fails to be URED. As we have seen, the choice of the tuple \(\Omega\) and the definition of \(|\!|\!|\cdot|\!|\!|_{\Omega}\) ensure that local properties are kept, and thus LUR is preserved.

Proof of Theorem A.: Let \(X\) be an infinite dimensional Banach space and let \(\|\cdot\|\) be a LUR norm on \(X\). Let \(0<\delta<1\) be such that \((1-\delta)^{4}>\frac{1}{2}\). Using Lemma 3.3, we can find a sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|}\) and a weak\({}^{*}\) null sequence \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|^{*}}\) such that \(\langle\varphi_{n},y_{n}\rangle=1\) for all \(n\in\mathbb{N}\), and \(|\langle\varphi_{m},y_{n}\rangle|<\min\left\{\frac{2(1-\delta)^{4}-1}{2},\delta\right\}\) for all \(m\neq n\in\mathbb{N}\). Putting \(h_{n}=y_{1}\) and \(g_{n}=\varphi_{1}\) for all \(n\in\mathbb{N}\), and setting \(x_{n}=y_{n+1}\) and \(f_{n}=\varphi_{n+1}-\langle\varphi_{n+1},y_{1}\rangle\varphi_{1}\) for all \(n\in\mathbb{N}\), we have that \(x_{n},h_{n}\in S_{\|\cdot\|}\), \(g_{n}\in S_{\|\cdot\|^{*}}\) and \(\|f_{n}\|^{*}\leq 1+\delta\). Moreover, we obtain that

\[\langle f_{n},h_{n}\rangle=0,\qquad\langle g_{n},h_{n}\rangle=1,\qquad\langle f_{n},x_{n}\rangle\geq 1-\delta,\]

with \(|\langle f_{m},x_{n}\rangle|<2(1-\delta)^{4}-1\) for every \(m\neq n\in\mathbb{N}\), as spelled out below. Since the sequence \(\{f_{n}\}_{n\in\mathbb{N}}\) is weak\({}^{*}\) null, we have that for every \(x\in X\setminus\{0\}\) there exist an open neighbourhood \(U_{x}\) of \(x\) and \(n_{x}\in\mathbb{N}\) such that \(|\langle f_{n},z\rangle|\leq(1-\delta)^{2}\|z\|\) for all \(z\in U_{x}\) and all \(n\geq n_{x}\).
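For instance, the verification of these conditions is routine, using \(\langle\varphi_{1},y_{1}\rangle=1\) and the estimates from Lemma 3.3:

\[\langle f_{n},h_{n}\rangle=\langle\varphi_{n+1},y_{1}\rangle\big(1-\langle\varphi_{1},y_{1}\rangle\big)=0,\qquad\langle f_{n},x_{n}\rangle=1-\langle\varphi_{n+1},y_{1}\rangle\langle\varphi_{1},y_{n+1}\rangle\geq 1-\delta^{2}\geq 1-\delta,\]

while for \(m\neq n\),

\[|\langle f_{m},x_{n}\rangle|\leq|\langle\varphi_{m+1},y_{n+1}\rangle|+|\langle\varphi_{m+1},y_{1}\rangle|\,|\langle\varphi_{1},y_{n+1}\rangle|<\tfrac{2(1-\delta)^{4}-1}{2}+\tfrac{2(1-\delta)^{4}-1}{2}=2(1-\delta)^{4}-1,\]

since each of the mixed pairings is smaller than \(\min\left\{\frac{2(1-\delta)^{4}-1}{2},\delta\right\}\).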
Therefore, setting \(\alpha_{n}=(x_{n},h_{n},f_{n},g_{n})\) for every \(n\in\mathbb{N}\) and \(\Omega=(\delta,C,\{\alpha_{n}\}_{n\in\mathbb{N}},\{2^{-n}\}_{n\in\mathbb{N}})\) with \(C=1+\delta\), we are able to define the norms \(|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}\) and \(|\!|\!|\cdot|\!|\!|_{n}\) for every \(n\in\mathbb{N}\), and the norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) in \((X,\|\cdot\|)\), as in equations (3), (4) and (6). Then we have that

\[\|\cdot\|\leq|\!|\!|\cdot|\!|\!|_{\Omega}\leq\frac{1}{(1-\delta)^{2}}\|\cdot\|.\]

Since \(\delta\) can be taken to be arbitrarily small and LUR norms are dense in \(X\), we obtain that every norm in \(X\) can be approximated uniformly on bounded sets by norms of this form. As we mentioned after the definition of \(|\!|\!|\cdot|\!|\!|_{\Omega}\), if \(\|\cdot\|\) is LUR, then so is \(|\!|\!|\cdot|\!|\!|_{\Omega}\), and thus it only remains to show that this last norm is not URED to finish the proof. Indeed, setting \(\lambda_{n}=\frac{1}{|\!|\!|x_{n}|\!|\!|_{\alpha_{n},\delta,C}}\) and

\[a_{n}=\lambda_{n}x_{n}-\frac{(1-\lambda_{n})}{1+\delta}y_{1},\qquad b_{n}=\lambda_{n}x_{n}+\frac{(1-\lambda_{n})}{1+\delta}y_{1},\]

Lemma 3.2 shows that the sequences \(\{|\!|\!|a_{n}|\!|\!|_{\Omega}\}_{n\in\mathbb{N}}\), \(\{|\!|\!|b_{n}|\!|\!|_{\Omega}\}_{n\in\mathbb{N}}\) and \(\left\{\left|\!\left|\!\left|\frac{a_{n}+b_{n}}{2}\right|\!\right|\!\right|_{\Omega}\right\}_{n\in\mathbb{N}}\) converge to \(1\), while \(b_{n}-a_{n}=2\frac{(1-\lambda_{n})}{1+\delta}y_{1}\). By \((a)\) in Lemma 3.1, we have that \(\lambda_{n}\leq\frac{1}{(1+\varepsilon_{\delta,C})^{1/2}}<1\) for all \(n\in\mathbb{N}\), and thus we conclude that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is not uniformly rotund in the direction of \(y_{1}\), and hence fails to be URED.

For the proof of Theorem B, we will need a sequence of directions \(\{h_{n}\}_{n\in\mathbb{N}}\) which is not weakly null but which forms a uniformly separated set. In this way, using again Lemma 3.3, the resulting norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) will fail to be WUR along the directions \(\{h_{n}\}_{n\in\mathbb{N}}\), while remaining URED.
Proof of Theorem B.: Let \(X\) be an infinite dimensional Banach space with a URED norm \(\|\cdot\|\) (which, by Lemma 2.6, may additionally be taken to be LUR whenever \(X\) admits a LUR renorming), and let \(0<\delta<1\) be such that \((1-\delta)^{4}>\frac{1}{2}\). Using Lemma 3.3 as in the previous proof, we produce a tuple \(\Omega=(\delta,C,\{\alpha_{n}\}_{n\in\mathbb{N}},\{2^{-n}\}_{n\in\mathbb{N}})\) with \(C=1+\delta\), where now the directions \(\{h_{n}\}_{n\in\mathbb{N}}\) form a uniformly separated set which is not weakly null, while the sequences \(\{f_{n}\}_{n\in\mathbb{N}}\) and \(\{g_{n}\}_{n\in\mathbb{N}}\) are weak\({}^{*}\) null; in particular, \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is well defined and approximates \(\|\cdot\|\) as before, and it is LUR whenever \(\|\cdot\|\) is. Applying Lemma 3.2 to the endpoints of the segments in the directions \(h_{n}\), exactly as in the proof of Theorem A, yields sequences on the unit sphere whose differences are non-vanishing multiples of the \(h_{n}\); since \(\{h_{n}\}_{n\in\mathbb{N}}\) admits no weakly null subsequence, \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is not WUR.

It only remains to show that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is URED. Fix \(h\in X\setminus\{0\}\) and consider two sequences \(\{c_{k}\}_{k\in\mathbb{N}}\) and \(\{d_{k}\}_{k\in\mathbb{N}}\) in \(S_{|\!|\!|\cdot|\!|\!|_{\Omega}}\) with \(d_{k}-c_{k}=\xi_{k}h\) for some \(\xi_{k}\in\mathbb{R}\) for all \(k\in\mathbb{N}\) and such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}\to 1\). Notice that the sequence \(\{\xi_{k}\}_{k\in\mathbb{N}}\) is bounded. Suppose by contradiction that \(\{\xi_{k}\}_{k\in\mathbb{N}}\) does not converge to \(0\). Then, by passing to a subsequence, we may assume that there exists \(\varepsilon_{0}>0\) such that \(|\xi_{k}|\geq\varepsilon_{0}\) for all \(k\in\mathbb{N}\). Define for every \(n\in\mathbb{N}\) the norm \(|\!|\!|\cdot|\!|\!|_{\Omega_{n}}=\max\{\|\cdot\|,|\!|\!|\cdot|\!|\!|_{n}\}\), which is an equivalent URED norm in \(X\) by Lemma 2.7. Suppose first that there exists \(n_{0}\in\mathbb{N}\) such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}=\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega_{n_{0}}}\) for all \(k\in\mathbb{N}\). Then, since \(|\!|\!|c_{k}|\!|\!|_{\Omega_{n_{0}}}\leq 1\) and \(|\!|\!|d_{k}|\!|\!|_{\Omega_{n_{0}}}\leq 1\) for all \(k\in\mathbb{N}\) and \(|\!|\!|\cdot|\!|\!|_{\Omega_{n_{0}}}\) is URED, this already leads to the desired contradiction. Hence, by passing to a subsequence again, we may suppose that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}>\left\|\frac{c_{k}+d_{k}}{2}\right\|\) for every \(k\in\mathbb{N}\), and that there exists a sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) of natural numbers with \(n_{k}\to\infty\) and such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}=\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{n_{k}}\) for all \(k\in\mathbb{N}\). Equation (5) shows that for all \(k\in\mathbb{N}\) we have

\[\frac{1}{1+2^{-n_{k}}}\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\alpha_{n_{k}},\delta,C}\leq\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{n_{k}}\leq\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\alpha_{n_{k}},\delta,C}.\]

Now, since \(2^{-n_{k}}\to 0\), with the definition of \(|\!|\!|\cdot|\!|\!|_{\alpha_{n_{k}},\delta,C}\) (see equation (3)) and noting that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\alpha_{n_{k}},\delta,C}>\left\|\frac{c_{k}+d_{k}}{2}\right\|\) for all \(k\in\mathbb{N}\), we deduce that

\[\left((1-\delta)^{2}\left|\!\left|\!\left|P_{\alpha_{n_{k}}}\left(\frac{c_{k}+d_{k}}{2}\right)\right|\!\right|\!\right|_{\widetilde{B}_{\alpha_{n_{k}},\delta}}^{2}+\varepsilon_{\delta,C}\left\|P_{\alpha_{n_{k}}}\left(\frac{c_{k}+d_{k}}{2}\right)\right\|^{2}\right)^{1/2}\longrightarrow 1, \tag{7}\]

where \(P_{\alpha_{n_{k}}}\) is the projection from \(X\) onto \(\ker(g_{n_{k}})\) given by \(P_{\alpha_{n_{k}}}x=x-\langle g_{n_{k}},x\rangle h_{n_{k}}\), and \(\varepsilon_{\delta,C}=\frac{1-(1-\delta)^{2}}{(1+C^{2})^{2}}>0\). Note that, importantly, the constant \(\varepsilon_{\delta,C}\) does not depend on \(k\in\mathbb{N}\). For every \(k\in\mathbb{N}\), the point \(c_{k}\) is in \(B_{|\!|\!|\cdot|\!|\!|_{n_{k}}}\), so we obtain that \(|\!|\!|c_{k}|\!|\!|_{\alpha_{n_{k}},\delta,C}\leq 1+2^{-n_{k}}\), which implies that \(\limsup_{k\to\infty}|\!|\!|c_{k}|\!|\!|_{\alpha_{n_{k}},\delta,C}\leq 1\). Similarly we obtain that \(\limsup_{k\to\infty}|\!|\!|d_{k}|\!|\!|_{\alpha_{n_{k}},\delta,C}\leq 1\). Therefore, we have that

\[\limsup_{k\to\infty}\left((1-\delta)^{2}|\!|\!|P_{\alpha_{n_{k}}}c_{k}|\!|\!|_{\widetilde{B}_{\alpha_{n_{k}},\delta}}^{2}+\varepsilon_{\delta,C}\|P_{\alpha_{n_{k}}}c_{k}\|^{2}\right)^{1/2}\leq 1,\]

and the analogous estimate for \(d_{k}\). Hence, using also equation (7), we can apply Remark 2.1 and equation (1), together with the non-negativity of the function \(Q_{\|\cdot\|}\), to obtain that \(Q_{\|\cdot\|}(P_{\alpha_{n_{k}}}c_{k},P_{\alpha_{n_{k}}}d_{k})\to 0\). Define \(v_{k}=\xi_{k}\langle g_{n_{k}},h\rangle h_{n_{k}}\in X\) for every \(k\in\mathbb{N}\). Then \(\{v_{k}\}_{k\in\mathbb{N}}\) converges to \(0\) in norm because \(\{\xi_{k}\}_{k\in\mathbb{N}}\) is bounded and \(\{g_{n_{k}}\}_{k\in\mathbb{N}}\) is weak\({}^{*}\) null. Therefore \(Q_{\|\cdot\|}(P_{\alpha_{n_{k}}}c_{k},P_{\alpha_{n_{k}}}d_{k}+v_{k})\to 0\), and moreover \(P_{\alpha_{n_{k}}}d_{k}+v_{k}-P_{\alpha_{n_{k}}}c_{k}=\xi_{k}h\) for all \(k\in\mathbb{N}\). Since \(\|\cdot\|\) is URED, we apply Proposition 2.4 and conclude that \(\{\xi_{k}\}_{k\in\mathbb{N}}\) converges to \(0\). This leads to a contradiction and the proof is finished.

Finally, in the superreflexive case, the set \(\{h_{n}\}_{n\in\mathbb{N}}\) of directions in which we approximate the segments in the sphere will consist of a weakly null sequence. We will then obtain WUR without UR similarly as in the previous proof.
Note however that, as we will see, the additional requirement that \(\{h_{n}\}_{n\in\mathbb{N}}\) be weakly null prevents us from easily ensuring that the sequence of orthogonal functionals \(\{f_{n}\}_{n\in\mathbb{N}}\) is weak\({}^{*}\) null as well. This property was crucial in both previous proofs in order to show that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is well defined. To solve this, we will employ a simple geometric argument which is possible due to the existence of a uniformly rotund norm.

In a Banach space \(X\), given a closed and convex set \(C\subset X\), a functional \(f\in X^{*}\), and \(r>0\), we define the set \(S(C,f,r)=\{x\in C\colon\langle f,x\rangle>r\}\). Sets of this form are called _slices of \(C\)_. Recall that a norm \(\|\cdot\|\) in a Banach space is uniformly rotund if and only if for every \(\varepsilon>0\) there exists \(\delta>0\) such that for every \(f\in S_{\|\cdot\|^{*}}\), the slice \(S(B_{\|\cdot\|},f,1-\delta)\) has diameter less than \(\varepsilon\).

Proof of Theorem C.: Let \(X\) be an infinite dimensional superreflexive Banach space with a UR norm \(\|\cdot\|\), and let \(0<\delta<1\) be such that \((1-\delta)^{4}>\frac{1}{2}\). Additionally, using that \(\|\cdot\|\) is UR, we may take \(\delta\) small enough so that the diameter of the slice \(S(B_{\|\cdot\|},f,(1-\delta)^{3})\) is less than \(\frac{1}{4}\) for all \(f\in X^{*}\) with \(\|f\|^{*}\leq 2\). Using reflexivity, we may apply Lemma 3.3 to \((X^{*},\|\cdot\|^{*})\) to obtain a sequence \(\{y_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|^{*}}\) and a weakly null sequence \(\{\varphi_{n}\}_{n\in\mathbb{N}}\) in \(S_{\|\cdot\|}\) such that \(\langle y_{n},\varphi_{n}\rangle=1\) for all \(n\in\mathbb{N}\) and \(|\langle y_{m},\varphi_{n}\rangle|<\min\left\{\frac{2(1-\delta)^{4}-1}{2},\delta\right\}\) for all \(m\neq n\in\mathbb{N}\). Fix \(n\in\mathbb{N}\) and define

\[x_{n}=\varphi_{2n}\in S_{\|\cdot\|},\qquad h_{n}=\varphi_{2n+1}\in S_{\|\cdot\|},\qquad f_{n}=y_{2n}-\langle y_{2n},\varphi_{2n+1}\rangle y_{2n+1}\in X^{*},\qquad g_{n}=y_{2n+1}\in S_{\|\cdot\|^{*}}.\]

It holds that \(\|f_{n}\|^{*}\leq 1+\delta\) and

\[\langle f_{n},h_{n}\rangle=0,\qquad\langle g_{n},h_{n}\rangle=1,\qquad\langle f_{n},x_{n}\rangle\geq 1-\delta.\]

Notice as well that the sequence \(\{h_{n}\}_{n\in\mathbb{N}}\) is weakly null. Additionally, for \(m\neq n\in\mathbb{N}\) we have that \(|\langle f_{m},x_{n}\rangle|<2(1-\delta)^{4}-1\). Set \(\alpha_{n}=(x_{n},h_{n},f_{n},g_{n})\) for every \(n\in\mathbb{N}\) and \(\Omega=(\delta,C,\{\alpha_{n}\}_{n\in\mathbb{N}},\{2^{-n}\}_{n\in\mathbb{N}})\) with \(C=1+\delta\). In order to define the norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) properly, it only remains to show that for every \(x\in X\setminus\{0\}\) there exist an open neighbourhood \(U_{x}\) of \(x\) and \(n_{x}\in\mathbb{N}\) such that \(|\langle f_{n},z\rangle|\leq(1-\delta)^{2}\|z\|\) for all \(z\in U_{x}\) and all \(n\geq n_{x}\). We will show that for any \(x\in S_{\|\cdot\|}\) there exists at most one \(n\in\mathbb{N}\) such that \(|\langle f_{n},x\rangle|>(1-\delta)^{3}\), which, by homogeneity and the fact that \((1-\delta)^{3}<(1-\delta)^{2}\), implies the condition we need.
Indeed, suppose there existed such \(x\in S_{\|\cdot\|}\) and \(m\neq n\in\mathbb{N}\) such that \(|\langle f_{m},x\rangle|>(1-\delta)^{3}\) and \(|\langle f_{n},x\rangle|>(1-\delta)^{3}\). We assume that \(\langle f_{j},x\rangle>0\) for \(j\in\{m,n\}\), since the other possibilities are proven similarly. Then \(x\) belongs to both slices \(S(B_{\|\cdot\|},f_{m},(1-\delta)^{3})\) and \(S(B_{\|\cdot\|},f_{n},(1-\delta)^{3})\), which have diameter less than \(\frac{1}{4}\) by the choice of \(\delta\). Since the point \(x_{j}\) also belongs to the slice \(S(B_{\|\cdot\|},f_{j},(1-\delta)^{3})\) for both \(j\in\{m,n\}\), we obtain using the triangle inequality that \(\|x_{m}-x_{n}\|\leq\frac{1}{2}\). However, considering the functional \(y_{2n}\in S_{\|\cdot\|^{*}}\), we get:

\[\|x_{n}-x_{m}\|\geq|\langle y_{2n},x_{n}\rangle|-|\langle y_{2n},x_{m}\rangle|>1-\delta>\frac{1}{2},\]

a contradiction. Hence, we can define the norm \(|\!|\!|\cdot|\!|\!|_{\Omega}\) as in equation (6), which is LUR. We also consider the norms \(|\!|\!|\cdot|\!|\!|_{\alpha_{n},\delta,C}\) and \(|\!|\!|\cdot|\!|\!|_{n}\) for every \(n\in\mathbb{N}\), as in equations (3) and (4). Recall that \(|\!|\!|\cdot|\!|\!|_{n}\) is UR for all \(n\in\mathbb{N}\).

The rest of the proof is very similar to that of the previous theorem. Indeed, to show that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is not UR, it is enough to consider for every \(n\in\mathbb{N}\) the points \(a_{n}=\lambda_{n}x_{n}+\frac{1-\lambda_{n}}{2(1+\delta)}h_{n}\) and \(b_{n}=\lambda_{n}x_{n}-\frac{1-\lambda_{n}}{2(1+\delta)}h_{n}\), with \(\lambda_{n}=\frac{1}{|\!|\!|x_{n}|\!|\!|_{\alpha_{n},\delta,C}}\), and apply Lemma 3.2 to the resulting sequences.

Finally, we prove that \(|\!|\!|\cdot|\!|\!|_{\Omega}\) is WUR. Let \(\{c_{k}\}_{k\in\mathbb{N}}\) and \(\{d_{k}\}_{k\in\mathbb{N}}\) be two sequences in \(S_{|\!|\!|\cdot|\!|\!|_{\Omega}}\) such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}\to 1\). Suppose for the sake of contradiction that \(\{d_{k}-c_{k}\}_{k\in\mathbb{N}}\) is not weakly null, and, by passing to a subsequence, fix \(f_{0}\in X^{*}\) and \(\varepsilon_{0}>0\) such that \(|\langle f_{0},d_{k}-c_{k}\rangle|>\varepsilon_{0}\) for all \(k\in\mathbb{N}\). Define for every \(n\in\mathbb{N}\) the norm \(|\!|\!|\cdot|\!|\!|_{\Omega_{n}}=\max\{\|\cdot\|,|\!|\!|\cdot|\!|\!|_{n}\}\), which is UR by Lemma 2.7. In particular, it is WUR. If there existed \(n_{0}\in\mathbb{N}\) such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}=\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega_{n_{0}}}\) for all \(k\in\mathbb{N}\), we would reach a contradiction immediately.
Hence, passing to a further subsequence, we assume that there exists a sequence \(\{n_{k}\}_{k\in\mathbb{N}}\) of natural numbers with \(n_{k}\to\infty\) such that \(\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{\Omega}=\left|\!\left|\!\left|\frac{c_{k}+d_{k}}{2}\right|\!\right|\!\right|_{n_{k}}>\left\|\frac{c_{k}+d_{k}}{2}\right\|\) for all \(k\in\mathbb{N}\). Reasoning exactly as in the proof of the previous theorem, we obtain that \(Q_{\|\cdot\|}(P_{\alpha_{n_{k}}}c_{k},P_{\alpha_{n_{k}}}d_{k})\to 0\), where \(P_{\alpha_{n_{k}}}\colon X\to\ker(g_{n_{k}})\) is the linear projection given by \(P_{\alpha_{n_{k}}}x=x-\langle g_{n_{k}},x\rangle h_{n_{k}}\). Since \(\|\cdot\|\) is UR, Proposition 2.2 shows that \(\{P_{\alpha_{n_{k}}}(d_{k}-c_{k})\}_{k\in\mathbb{N}}\) converges to \(0\) in norm. Then, since

\[d_{k}-c_{k}=P_{\alpha_{n_{k}}}(d_{k}-c_{k})+\langle g_{n_{k}},d_{k}-c_{k}\rangle h_{n_{k}}\]

for all \(k\in\mathbb{N}\), and the sequence \(\{h_{n_{k}}\}_{k\in\mathbb{N}}\) is weakly null, we obtain that \(\{d_{k}-c_{k}\}_{k\in\mathbb{N}}\) is weakly null as well. This leads to the contradiction we sought. As in the previous theorems, by taking \(\delta\) as small as necessary, it is clear that every norm can be uniformly approximated on bounded sets by norms with these properties.

## 4. \(C^{\infty}\)-smooth norm in \(c_{0}\) with non-strictly convex dual norm

In the final section of this note, we prove the last of the main theorems by constructing a \(C^{\infty}\)-smooth norm in \(c_{0}\) whose dual norm is not strictly convex. The process we employ consists in considering first a non-smooth norm in \(c_{0}\) whose dual norm is not strictly convex, and which can be constructed inductively through its finite-dimensional sections. These finite-dimensional sections of the non-smooth ball are \(n\)-dimensional polyhedra, which we can approximate by \(C^{\infty}\)-smooth unit balls in \(\ell_{\infty}^{n}\). We then use these increasingly better \(C^{\infty}\)-smooth approximations to recreate the inductive definition of the original norm with non strictly convex dual norm while preserving \(C^{\infty}\)-smoothness. Note that every equivalent norm in the finite-dimensional space \(\ell_{\infty}^{n}\) can be uniformly approximated by a \(C^{\infty}\)-smooth norm. This is a standard result, which is discussed and improved in [1].

Proof of Theorem D.: We divide the proof into three parts for the convenience of the reader.

**1.- Set up and construction of finite-dimensional polyhedra**

Let \(\{e_{n}\}_{n\in\mathbb{N}}\) be the canonical basis of \(c_{0}\), and let \(P_{n}\colon c_{0}\to\ell_{\infty}^{n}=\operatorname{span}\{e_{j}\colon 1\leq j\leq n\}\) be the linear projection given by \(P_{n}(x_{j})_{j=1}^{\infty}=(x_{j})_{j=1}^{n}\) for all \((x_{j})_{j=1}^{\infty}\in c_{0}\) and all \(n\in\mathbb{N}\). Consider \(0<\delta<1\), and consider a sequence \(\{\varepsilon_{n}\}_{n\in\mathbb{N}}\) of strictly positive numbers with \(\varepsilon_{1}=1\) and such that \(\sum_{n=1}^{\infty}\varepsilon_{n}\leq\frac{1}{\delta}\).
Set \(c_{n}=\sum_{j\leq n}\varepsilon_{j}\), and choose

\[w_{n}\in\left(1,\frac{1-\delta c_{n-1}}{1-\delta c_{n-1}-\delta\varepsilon_{n}}\right)\qquad\text{ and }\qquad h_{n}=\frac{w_{n}\delta}{w_{n}-1},\]

for every \(n\geq 2\), where we select \(w_{n}\) such that

\[\prod_{n=2}^{\infty}w_{n}<\infty\qquad\text{ and }\qquad\sum_{n=2}^{\infty}\frac{1}{h_{n}}<\infty.\]

Consider as well a sequence \(\{\eta_{n}\}_{n\geq 2}\) of strictly positive numbers such that

\[(1+\eta_{n})^{2}\left(\frac{1}{w_{n}}+\frac{\delta}{2h_{n}}\right)\leq 1\qquad\text{ and }\qquad(1+\eta_{n})^{2}\left(\frac{h_{n+1}-1}{h_{n+1}-\delta}\right)\leq 1\]

for every \(n\geq 2\). We may assume as well that \(\prod_{n=2}^{\infty}(1+\eta_{n})^{2}\leq\frac{1}{\delta}\), and that \(\frac{\delta}{2}\leq\frac{1}{1+\eta_{n}}\) for all \(n\geq 2\).

Define \(|\!|\!|\cdot|\!|\!|_{\infty,1}\colon\mathbb{R}\to\mathbb{R}^{+}\) by \(|\!|\!|x|\!|\!|_{\infty,1}=|x|\) for all \(x\in\mathbb{R}\). Inductively, we define for each \(n\geq 2\) the norms \(|\!|\!|\cdot|\!|\!|_{1,n}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\) and \(|\!|\!|\cdot|\!|\!|_{\infty,n}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\) by

\[|\!|\!|x|\!|\!|_{1,n}=\frac{1}{w_{n}}|\!|\!|P_{n-1}x|\!|\!|_{\infty,n-1}+\frac{1}{h_{n}}|x_{n}|,\qquad\text{for every }x\in\ell_{\infty}^{n},\]

and

\[|\!|\!|x|\!|\!|_{\infty,n}=\max\left\{|\!|\!|x|\!|\!|_{1,n},|\!|\!|P_{n-1}x|\!|\!|_{\infty,n-1},|x_{n}|\right\},\qquad\text{for every }x\in\ell_{\infty}^{n}.\]

Note that \(\|x\|_{\infty}\leq|\!|\!|x|\!|\!|_{\infty,n}\) for all \(x\in\ell_{\infty}^{n}\) and all \(n\in\mathbb{N}\). Moreover, it also holds that \(|\!|\!|x|\!|\!|_{\infty,n}\leq c_{n}\|x\|_{\infty}\) for all \(n\in\mathbb{N}\) and \(x\in\ell_{\infty}^{n}\). Indeed, this is trivially true for \(n=1\), and assuming it holds for \(n-1\) with \(n\geq 2\) we obtain that for any \(x\in\ell_{\infty}^{n}\):

\[|\!|\!|x|\!|\!|_{\infty,n}\leq\max\left\{\left(\frac{c_{n-1}}{w_{n}}+\frac{1}{h_{n}}\right)\|x\|_{\infty},c_{n-1}\|x\|_{\infty},\|x\|_{\infty}\right\}=\max\left\{\left(\frac{c_{n-1}}{w_{n}}+\frac{w_{n}-1}{w_{n}\delta}\right)\|x\|_{\infty},c_{n-1}\|x\|_{\infty}\right\}\leq\max\left\{(c_{n-1}+\varepsilon_{n})\|x\|_{\infty},c_{n-1}\|x\|_{\infty}\right\}=c_{n}\|x\|_{\infty}.\]

For the last inequality, we use that \(-(1-\delta c_{n-1})\leq-w_{n}(1-\delta c_{n-1}-\delta\varepsilon_{n})\) due to the choice of \(w_{n}\). Geometrically, the unit ball of \(|\!|\!|\cdot|\!|\!|_{\infty,n}\) is an \(n\)-dimensional polyhedron contained in the unit ball of \(\ell_{\infty}^{n}\). Figure 1 shows the unit ball of one of these polyhedral norms in low dimension.
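To make the recursion concrete, here is a short Python sketch (our illustration, not part of the original paper; the specific values of \(\delta\), \(\varepsilon_{n}\) and the midpoint choice of \(w_{n}\) are hypothetical, but they satisfy the constraints above). It evaluates \(|\!|\!|\cdot|\!|\!|_{\infty,n}\) recursively and checks the two-sided estimate \(\|x\|_{\infty}\leq|\!|\!|x|\!|\!|_{\infty,n}\leq c_{n}\|x\|_{\infty}\) just proven.

```python
import random

# Hypothetical parameter choices: 0 < delta < 1, eps_1 = 1, sum(eps_n) <= 1/delta.
delta = 0.5
eps = [1.0] + [2.0 ** (-k) for k in range(1, 10)]  # sum(eps) < 2 = 1/delta

# c_n = eps_1 + ... + eps_n (stored 0-indexed: c[n-1] is c_n).
c = []
total = 0.0
for e in eps:
    total += e
    c.append(total)

# For n >= 2, pick w_n inside the admissible interval; h_n = w_n*delta/(w_n - 1).
w, h = {}, {}
for n in range(2, len(eps) + 1):
    hi = (1 - delta * c[n - 2]) / (1 - delta * c[n - 2] - delta * eps[n - 1])
    w[n] = (1 + hi) / 2  # midpoint of the open interval (1, hi)
    h[n] = w[n] * delta / (w[n] - 1)

def norm_inf(x):
    """|||x|||_{infty,n} for x in R^n (n = len(x)), via the recursion in the text."""
    n = len(x)
    if n == 1:
        return abs(x[0])
    prev = norm_inf(x[:-1])                  # |||P_{n-1}x|||_{infty,n-1}
    one_n = prev / w[n] + abs(x[-1]) / h[n]  # |||x|||_{1,n}
    return max(one_n, prev, abs(x[-1]))

# Numerical sanity check of ||x||_inf <= |||x|||_{infty,n} <= c_n * ||x||_inf.
random.seed(0)
for n in (2, 5, 8):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    sup = max(abs(t) for t in x)
    val = norm_inf(x)
    assert sup <= val + 1e-12 and val <= c[n - 1] * sup + 1e-12
    print(f"n={n}: ||x||_inf={sup:.4f}  |||x|||={val:.4f}  c_n*||x||_inf={c[n - 1] * sup:.4f}")
```

The midpoint choice of \(w_{n}\) is only one admissible option; the proof additionally requires \(\prod w_{n}<\infty\) and \(\sum 1/h_{n}<\infty\), which a finite truncation like this one does not test.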
**2.- Inductive construction of the \(C^{\infty}\)-smooth norms**

For every \(n\geq 2\), using the approximation result quoted above, fix a \(C^{\infty}\)-smooth norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\eta_{n}}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\) such that

\[\left|\!\left|\!\left|x\right|\!\right|\!\right|_{1,n}\leq\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\leq(1+\eta_{n})\left|\!\left|\!\left|x\right|\!\right|\!\right|_{1,n}\qquad\text{for every }x\in\ell_{\infty}^{n}.\]

Starting from \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{1}=\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,1}\), we now construct inductively norms \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{n}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\) for \(n\geq 2\) satisfying the following inductive properties: \((i)\) the norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{n}\) is \(C^{\infty}\)-smooth; \((ii)\) two-sided estimates comparing \(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}\) with \(\max\left\{\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}},\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1},|x_{n}|\right\}\) and with \(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\infty,n}\), as used below; \((iii)\) if \(|x_{n}|\leq\frac{\delta}{2}\|x\|_{\infty}\), then \(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}=\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}\).
Now, let \(\phi_{n}\colon\mathbb{R}\to\mathbb{R}^{+}\) be a real-valued, \(C^{\infty}\)-smooth and convex function such that \(\phi_{n}(t)=0\) for all \(t\in\left[0,\frac{1}{1+\eta_{n}}\right]\), \(\phi_{n}(1)=1\), and \(\phi_{n}(t)>1\) for all \(t\in(1,+\infty)\). The map \(\nu_{n}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\) given by

\[\nu_{n}(x)=\phi_{n}\left(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\right)+\phi_{n}\left(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}\right)+\phi_{n}\left(|x_{n}|\right)\]

is a \(C^{\infty}\)-smooth function such that the set \(B_{n}=\{x\in\ell_{\infty}^{n}\colon\nu_{n}(x)\leq 1\}\) is bounded in \(\ell_{\infty}^{n}\). Hence, by Corollary 1.1.23 in [10], the set \(B_{n}\) is the unit ball of a \(C^{\infty}\)-smooth norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{n}\colon\ell_{\infty}^{n}\to\mathbb{R}^{+}\). Notice that, given \(x\in\ell_{\infty}^{n}\), \(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}=1\) if and only if \(\nu_{n}(x)=1\).

Let us show that properties \((ii)\) and \((iii)\) are satisfied by this norm. We prove that \((ii)\) holds for \(x\in\ell_{\infty}^{n}\) with \(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}=1\). For the first part, notice that if \(\nu_{n}(x)=1\), then one of \(\phi_{n}\left(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\right)\), \(\phi_{n}\left(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}\right)\) or \(\phi_{n}\left(|x_{n}|\right)\) must be strictly positive. The first inequality now follows since \(\phi_{n}\) vanishes in the interval \(\left[0,\frac{1}{1+\eta_{n}}\right]\). The second inequality of the first statement holds simply by definition of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,n}\) if \(n=2\); while if \(n>2\) we also need to use the inductive hypothesis. For the second statement, observe that if \(\nu_{n}(x)=1\), then none of \(\phi_{n}\left(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\right)\), \(\phi_{n}\left(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}\right)\) and \(\phi_{n}\left(|x_{n}|\right)\) can exceed \(1\), from which the first inequality follows since \(\phi_{n}(t)>1\) for all \(t\in(1,+\infty)\). The second inequality is a simple application of the inductive hypothesis if \(n>2\). Note that the case \(n=2\) should be argued separately, when the conclusion holds by the definitions of both \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,2}\) and \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\eta_{2}}\), and the fact that \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{1}=\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,1}\).

Before proving that \((iii)\) holds as well, notice that since \(\|\cdot\|_{\infty}\leq\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,n}\), the last inequality of condition \((ii)\) implies in particular that \(\|x\|_{\infty}\leq\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}\) for every \(x\in\ell_{\infty}^{n}\).
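For completeness, one admissible choice of \(\phi_{n}\) is the following (any \(C^{\infty}\)-smooth convex function with the three stated properties works; the explicit formula below is a standard construction supplied here only as an illustration, not taken from the original argument). Set \(a=\frac{1}{1+\eta_{n}}\), let

\[g(t)=\begin{cases}0,&t\leq a,\\ e^{-1/(t-a)},&t>a,\end{cases}\qquad\text{and}\qquad\phi_{n}(t)=\frac{\int_{0}^{t}g(s)\,ds}{\int_{0}^{1}g(s)\,ds}.\]

Then \(\phi_{n}\) is \(C^{\infty}\)-smooth and convex (its derivative is a positive multiple of the non-decreasing function \(g\)), it vanishes on \([0,a]\), satisfies \(\phi_{n}(1)=1\), and is strictly increasing on \([a,+\infty)\), so that \(\phi_{n}(t)>1\) for all \(t>1\).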
Now, to prove property \((iii)\) we may fix \(x\in\ell_{\infty}^{n}\) with \(|x_{n}|\leq\frac{\delta}{2}\|x\|_{\infty}\) and assume that \(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{n}=1\). By the previous observation, we have that \(\|x\|_{\infty}\leq 1\), and thus \(|x_{n}|\leq\frac{\delta}{2}\). Since \(\frac{\delta}{2}\leq\frac{1}{1+\eta_{n}}\), we obtain directly that \(\phi_{n}\left(|x_{n}|\right)=0\). We have as well that \(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{\infty,n-1}\leq 1\) by the last part of the already proven property \((ii)\), and thus we get the following estimate:

\[\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\leq(1+\eta_{n})\left|\!\left|\!\left|x\right|\!\right|\!\right|_{1,n}=(1+\eta_{n})\left(\frac{\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{\infty,n-1}}{w_{n}}+\frac{|x_{n}|}{h_{n}}\right)\leq(1+\eta_{n})\left(\frac{1}{w_{n}}+\frac{\delta}{2h_{n}}\right)\leq\frac{1}{1+\eta_{n}},\]

since we have chosen \(\eta_{n}\) such that the last inequality holds. Hence, we also get that \(\phi_{n}\left(\left|\!\left|\!\left|x\right|\!\right|\!\right|_{\eta_{n}}\right)=0\). Since \(\nu_{n}(x)=1\), we necessarily have that \(\phi_{n}\left(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}\right)=1\), and by convexity of \(\phi_{n}\) we obtain that \(\left|\!\left|\!\left|P_{n-1}x\right|\!\right|\!\right|_{n-1}=1\), which proves \((iii)\).

Once the induction is finished, we define the final renorming \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\colon c_{0}\to\mathbb{R}^{+}\) by

\[\left|\!\left|\!\left|x\right|\!\right|\!\right|=\sup_{n\in\mathbb{N}}\left\{\left|\!\left|\!\left|P_{n}x\right|\!\right|\!\right|_{n}\right\}\]

for all \(x\in c_{0}\). Property \((ii)\) in the induction and the fact that \(c_{n}\leq\frac{1}{\delta}\) for all \(n\in\mathbb{N}\) show that

\[\|\cdot\|_{\infty}\leq\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\leq\frac{1}{\delta^{2}}\|\cdot\|_{\infty}.\]

Using property \((iii)\) we also have that given \(x_{0}\in c_{0}\) with \(x_{0}\neq 0\), there exists an open neighbourhood \(U\) of \(x_{0}\) and \(n_{0}\in\mathbb{N}\) such that \(\left|\!\left|\!\left|x\right|\!\right|\!\right|=\left|\!\left|\!\left|P_{n_{0}}x\right|\!\right|\!\right|_{n_{0}}\) for all \(x\in U\). This implies that the norm \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\) is LFC and \(C^{\infty}\)-smooth.

**3.- Non-strictly convex dual ball**

It only remains to prove that the dual norm of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\) is not strictly convex. Define

\[f=\left(\prod_{j>1}^{\infty}\frac{1}{w_{j}},\frac{1}{h_{2}}\prod_{j>2}^{\infty}\frac{1}{w_{j}},\ldots,\frac{1}{h_{n}}\prod_{j>n}^{\infty}\frac{1}{w_{j}},\ldots\right)\in\ell_{1},\]

\[g=\left(0,\prod_{j>2}^{\infty}\frac{1}{w_{j}},\frac{1}{h_{3}}\prod_{j>3}^{\infty}\frac{1}{w_{j}},\ldots,\frac{1}{h_{n}}\prod_{j>n}^{\infty}\frac{1}{w_{j}},\ldots\right)\in\ell_{1};\]

note that \(f\) and \(g\) do belong to \(\ell_{1}\), since \(\sum_{n=2}^{\infty}\frac{1}{h_{n}}<\infty\) and every tail product \(\prod_{j>n}^{\infty}\frac{1}{w_{j}}\) lies in \((0,1)\). We will show that the segment \([f,g]\) is contained in the sphere \(S_{\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|^{*}}\).

Given \(x\in c_{0}\), using the definition of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{1,n}\) and the fact that \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{1,n-1}\leq\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,n-1}\), it is straightforward to show inductively that \(\langle f,P_{n}x\rangle\leq\prod_{j>n}^{\infty}\frac{1}{w_{j}}\left|\!\left|\!\left|P_{n}x\right|\!\right|\!\right|_{1,n}\) for all \(n\geq 2\). Since \(w_{n}>1\) for all \(n\geq 2\) and \(\left|\!\left|\!\left|P_{n}x\right|\!\right|\!\right|_{1,n}\leq\left|\!\left|\!\left|P_{n}x\right|\!\right|\!\right|_{n}\leq\left|\!\left|\!\left|x\right|\!\right|\!\right|\) for all \(n\in\mathbb{N}\), we obtain that \(\langle f,x\rangle\leq\left|\!\left|\!\left|x\right|\!\right|\!\right|\) for every \(x\in c_{0}\). This shows that \(\left|\!\left|\!\left|f\right|\!\right|\!\right|^{*}\leq 1\) and, by the same argument, \(\left|\!\left|\!\left|g\right|\!\right|\!\right|^{*}\leq 1\), which implies that \(\left|\!\left|\!\left|\frac{f+g}{2}\right|\!\right|\!\right|^{*}\leq 1\) as well.

Consider now for every \(n\geq 2\) the point \(z_{n}\in c_{0}\) given by

\[z_{n}=\frac{h_{n}-1}{h_{n}-\delta}\,z_{n-1}+e_{n}.\]

Setting \(z_{1}=e_{1}\), we have that \(z_{n}\in\ell_{\infty}^{n}\) and \(P_{n-1}z_{n}=\frac{h_{n}-1}{h_{n}-\delta}\,z_{n-1}\) for all \(n\geq 2\). We start by showing inductively that \(\left|\!\left|\!\left|z_{n}\right|\!\right|\!\right|_{\infty,n}=1\) for every \(n\in\mathbb{N}\).
Indeed, this is trivially true for \(n=1\), and given \(n\geq 2\), assuming it holds for \(n-1\), we obtain that

\[\left|\!\left|\!\left|z_{n}\right|\!\right|\!\right|_{1,n}=\frac{1}{w_{n}}\left|\!\left|\!\left|P_{n-1}z_{n}\right|\!\right|\!\right|_{\infty,n-1}+\frac{1}{h_{n}}=\frac{1}{w_{n}}\frac{h_{n}-1}{h_{n}-\delta}+\frac{1}{h_{n}}=1.\]

Now the conclusion follows by definition of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\infty,n}\) and a further application of the inductive hypothesis.

Thanks to the previous fact, and again inductively, we can show that \(\left|\!\left|\!\left|z_{n}\right|\!\right|\!\right|\leq(1+\eta_{n})^{2}\) for all \(n\geq 2\). In order to do this, notice that if \(\left|\!\left|\!\left|z_{n-1}\right|\!\right|\!\right|\leq(1+\eta_{n-1})^{2}\) for some \(n>2\), using that \((1+\eta_{n-1})^{2}\left(\frac{h_{n}-1}{h_{n}-\delta}\right)\leq 1\) and property \((ii)\) of the definition of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{n}\), we get that

\[\left|\!\left|\!\left|z_{n}\right|\!\right|\!\right|=\left|\!\left|\!\left|P_{n}z_{n}\right|\!\right|\!\right|_{n}\leq(1+\eta_{n})\max\left\{\left|\!\left|\!\left|P_{n}z_{n}\right|\!\right|\!\right|_{\eta_{n}},\left|\!\left|\!\left|P_{n-1}z_{n}\right|\!\right|\!\right|_{n-1},1\right\}\leq(1+\eta_{n})\max\left\{(1+\eta_{n})\left|\!\left|\!\left|P_{n}z_{n}\right|\!\right|\!\right|_{1,n},(1+\eta_{n-1})^{2}\left(\frac{h_{n}-1}{h_{n}-\delta}\right),1\right\}\leq(1+\eta_{n})^{2}.\]

Finally, we show that \(\left\langle f,z_{n}\right\rangle=\left\langle g,z_{n}\right\rangle=\prod_{j>n}^{\infty}\frac{1}{w_{j}}\) for \(n\geq 2\). We start by showing the base case \(n=2\). For the functional \(g\), the equality is straightforward from the definition. For \(f\), simply observe that:

\[\left\langle f,z_{2}\right\rangle=\frac{h_{2}-1}{h_{2}-\delta}\prod_{j>1}\frac{1}{w_{j}}+\frac{1}{h_{2}}\prod_{j>2}\frac{1}{w_{j}}=\prod_{j>2}\frac{1}{w_{j}}\left(\frac{h_{2}-1}{h_{2}-\delta}\frac{1}{w_{2}}+\frac{1}{h_{2}}\right)=\prod_{j>2}\frac{1}{w_{j}}.\]

Assume now the conclusion holds for \(n-1\) for some \(n>2\). Then, for \(\varphi\in\{f,g\}\), and writing \(\varphi_{|n}=(\varphi_{i})_{i=1}^{n}\in(\ell_{\infty}^{n})^{*}\) to denote the projection of \(\varphi\) onto its first \(n\) coordinates, we obtain:

\[\left\langle\varphi,z_{n}\right\rangle=\left\langle\varphi_{|n},P_{n}z_{n}\right\rangle=\left\langle\varphi_{|(n-1)},P_{n-1}z_{n}\right\rangle+\frac{1}{h_{n}}\prod_{j>n}^{\infty}\frac{1}{w_{j}}=\frac{h_{n}-1}{h_{n}-\delta}\left\langle\varphi_{|(n-1)},P_{n-1}z_{n-1}\right\rangle+\frac{1}{h_{n}}\prod_{j>n}^{\infty}\frac{1}{w_{j}}=\prod_{j>n}^{\infty}\frac{1}{w_{j}}\left(\frac{h_{n}-1}{h_{n}-\delta}\frac{1}{w_{n}}+\frac{1}{h_{n}}\right)=\prod_{j>n}^{\infty}\frac{1}{w_{j}}.\]

This implies that \(\left\langle\frac{f+g}{2},z_{n}\right\rangle=\prod_{j>n}^{\infty}\frac{1}{w_{j}}\) as well. Since we have found a sequence \(\{z_{n}\}_{n\in\mathbb{N}}\subset c_{0}\) such that \(\left|\!\left|\!\left|z_{n}\right|\!\right|\!\right|\to 1\) and \(\left\langle\frac{f+g}{2},z_{n}\right\rangle\to 1\), we get that \(\left|\!\left|\!\left|\frac{f+g}{2}\right|\!\right|\!\right|^{*}\geq 1\). This finishes the proof, as we have already shown that the norms of \(f\) and \(g\), and hence the norm of \(\frac{f+g}{2}\), are at most \(1\). In fact, the same argument applied to any convex combination of \(f\) and \(g\) shows that every point of the segment \([f,g]\) has dual norm exactly \(1\), so the dual norm of \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|\) is not strictly convex.

## Acknowledgements

This research was supported by GA23-04776S and by the project SGS22/053/OHK3/1T/13.
The second author's research has been supported by PAID-01-19 and by grant PID2021-122126NB-C33 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".
2305.13829
Learning from Mistakes via Cooperative Study Assistant for Large Language Models
Large language models (LLMs) have demonstrated their potential to refine their generation based on their own feedback. However, the feedback from LLM itself is often inaccurate, thereby limiting its benefits. In this paper, we propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes through interactive cooperation. In the gathering phase, the study assistant agent probes the main LLM, analyzes its errors, and collects the interaction in a mistake memory. During the examination phase, the study assistant provides guidelines by retrieving relevant cases to help the main LLM anticipate and avoid similar errors. We first investigate the effectiveness of a general study assistant and then customize it to provide LLM-specific guidance through imitation learning from successful guidance experiences. Our experiments on three LLMs using two challenging frameworks demonstrate that SALAM can significantly boost LLMs by an accuracy margin of up to 6.6 on BBH and 12.6 on BBQ.
Danqing Wang, Lei Li
2023-05-23T08:51:08Z
http://arxiv.org/abs/2305.13829v3
# Learn from Mistakes through Cooperative Interaction with Study Assistant

###### Abstract

Large language models have demonstrated their ability to self-reflect and refine their generation, which can further improve their performance. However, this feedback mechanism faces challenges such as no guarantee of correctness and the lack of global insight into the model's weaknesses. In this paper, we propose a novel framework, **S**tudy **A**ssistant for **L**arge **LA**nguage **M**odel (SALAM), to aid LLMs in the reflection and refinement process. Motivated by the human study assistant, this framework grades previous responses with the ground truth and collects mistakes in the training phase. During inference, it identifies common misunderstandings based on the mistake collection and provides guidelines to help the model avoid similar mistakes. SALAM is a model-agnostic framework, focusing on providing general feedback, and can adapt to any base model. Our evaluation of SALAM on two challenging benchmarks demonstrated a significant improvement over various baselines 1.

Footnote 1: Code and data will be released here.

## 1 Introduction

Large language models (LLMs) have demonstrated remarkable performance in a wide range of tasks (Brown et al., 2020; Raffel et al., 2020; Chowdhery et al., 2022). Their effectiveness is enhanced by human instructions and feedback, which allow them to better align with human intentions (Chung et al., 2022; Ouyang et al., 2022; Bai et al., 2022). Furthermore, recent studies show that LLMs can also benefit from their own feedback, similar to the reflection of humans (Shinn et al., 2023; Madaan et al., 2023). Such iterative self-reflection has two main challenges. First, it relies heavily on the stopping criterion, which decides when to stop the feedback loop and accept the generation. A poor criterion will mislead the LLM by repeatedly asking it to refine an acceptable generation, or by stopping the refinement of an undesirable one. Some studies used extra learned discriminators and set a value-based threshold as the stopping signal (Welleck et al., 2022; Saunders et al., 2022; Lu et al., 2022), and others utilized few-shot examples to prompt the LLM's own ability to decide when to accept (Madaan et al., 2023; Yang et al., 2022; Kwon et al., 2023). However, there is still no guarantee of the reliability of such criteria. The other challenge is that feedback is based on a specific example, lacking global insight into the weaknesses of the model. The model will keep repeating previous mistakes on similar examples unless it is finetuned on the incorrect cases, which is costly. Motivated by the way that humans learn from mistakes, we present a novel framework to assist LLMs to reflect on and refine their generation, which is called **S**tudy **A**ssistant for **L**arge **LA**nguage **M**odel (SALAM).

Figure 1: SALAM Overview. The orange parts only appear in the training phase. There are two agents in the SALAM framework: the main model and a study assistant model. The main model first makes an attempt \(y_{0}\) at the given query. This answer is passed to the study assistant. The assistant grades it and collects incorrect answers. It then retrieves from the mistake collection and generates a potential explanation and a guideline for the main model to refine. During inference, the study assistant directly gives guidelines based on the query, without knowing the target answer.
Different from a teacher or a teaching assistant, a study assistant does not directly impart knowledge or tell correct answers. Instead, it collects mistakes made by the model and figures out the common misunderstanding. It then provides a specific guideline for similar questions to help the model overcome previous mistakes. The whole framework is illustrated in Figure 1. Specifically, the student first goes through queries of the training set and makes a first attempt. These answers are graded against the ground truth and the incorrect ones are collected by the study assistant. The assistant provides a potential explanation for each mistake and generates a general guideline to help the student overcome similar mistakes. During the inference phase, the study assistant does not have access to the true answer either. It provides guidelines for each test query based on the mistakes the student model made on similar training queries. The student utilizes these guidelines as the instruction to generate the answer. SALAM uses the ground truth for verification during the training phase, guaranteeing correctness. In addition, it provides global feedback on all training mistakes, which is more robust and generalizable. Furthermore, the study assistant can be a small and model-agnostic model, which only focuses on providing explanations given the query, the response, and the true answer. We evaluate SALAM on various tasks in BBH (Suzgun et al., 2022) and BBQ (Parrish et al., 2022), two challenging benchmarks that evaluate the reasoning ability and potential bias in LLMs. SALAM can significantly improve the performance of the student model by providing guidelines based on previous mistakes. Our contributions are as follows:

* We propose a general framework, SALAM, that consists of a main model and a study assistant to aid the model in learning from mistakes.
* SALAM provides a global insight into the weaknesses of the model and generates general instruction for avoiding similar mistakes, which can significantly improve the ability of the main model.
* SALAM is a model-agnostic framework and can be adapted to any LLM.

## 2 Study Assistant for Language Model

In this section, we introduce the SALAM framework. As illustrated in Figure 1, there are two agents in SALAM: one is a main model \(\mathrm{M}\) and the other is a study assistant \(\mathrm{T}\). The goal of the study assistant \(\mathrm{T}\) is to provide textual feedback that can help \(\mathrm{M}\) refine its answers. Specifically, in the initialization phase, \(\mathrm{M}\) generates a first output attempt for a set of training queries \(Q_{tr}=\{\mathbf{q}^{(0)},\mathbf{q}^{(1)},\cdots,\mathbf{q}^{(N)}\}\) and passes these answers \(Y_{tr}=\{\mathbf{y}^{(0)},\mathbf{y}^{(1)},\cdots,\mathbf{y}^{(N)}\}\) to \(\mathrm{T}\). The study assistant grades \(Y_{tr}\) based on the ground truth and collects the failed ones into the mistake collection \(\mathrm{O}_{tr}\). After initialization, given a query \(\mathbf{q}\), \(\mathrm{T}\) provides textual feedback based on \(\mathrm{O}_{tr}\) and replies to \(\mathrm{M}\). \(\mathrm{M}\) then takes the feedback as the instruction and generates a new answer. This interaction can last multiple iterations, where the study assistant updates its feedback based on the new answer from \(\mathrm{M}\), and \(\mathrm{M}\) iteratively refines its answer according to the updated feedback.
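To make the interaction concrete, the following is a minimal Python sketch of the training-time gather-and-refine loop. All names (`main_model`, `study_assistant`, `memory`, `grade`) are hypothetical placeholders for illustration, not the released SALAM implementation:

```python
def gather_and_refine(main_model, study_assistant, memory, grade,
                      queries, answers, max_rounds=3):
    """One pass over the training set: collect mistakes, then refine with feedback."""
    for query, truth in zip(queries, answers):
        response = main_model.generate(query)            # first attempt y_0
        for _ in range(max_rounds):
            if grade(response, truth):                   # oracle f(y, r) = 1: accept
                break
            memory.add(query, wrong_answer=response, truth=truth)  # update O_tr
            context = memory.retrieve(query)             # similar past mistakes
            feedback = study_assistant.give_feedback(query, response, context)
            response = main_model.generate(query, instruction=feedback)
```

At test time the same loop would run without `truth` and without updates to `memory`: the study assistant only retrieves guidelines from mistakes on similar training queries.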
### Problem Definition

We can formulate the assist process as a Markov Decision Process (MDP) with \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R})\). For each state \(\mathbf{s}_{t}\), the study assistant \(\mathrm{T}\) generates textual feedback as its action \(\mathbf{a}_{t}\) and receives a reward based on the performance of the model \(\mathrm{M}\). \(\mathcal{S}\) is the state space of \(\mathrm{T}\) and each state \(\mathbf{s}_{t}\in\mathcal{S}\) is defined as \(\mathbf{s}_{t}=\{\mathbf{q},\mathbf{y}_{t},\mathbf{c}_{t}\}\). It includes the query \(\mathbf{q}\), the current response \(\mathbf{y}_{t}\) from \(\mathrm{M}\), and the context \(\mathbf{c}_{t}\) retrieved from the mistake collection \(\mathrm{O}_{tr}\). For the example in Figure 1, the model \(\mathrm{M}\) first generates the answer \(\mathbf{y}_{0}\)='_02/11/2002_' for the query \(\mathbf{q}\) = '_Jane thought today is 3/11/2002, but today is in fact Mar 12, which is 1 day later. What is the date a month ago in MM/DD/YYYY_'. The study assistant \(\mathrm{T}\) retrieves several similar queries and their wrong answers from \(\mathrm{O}_{tr}\) as the context \(\mathbf{c}_{0}\). Initially, \(\mathbf{c}_{0}=\emptyset\). For state \(\mathbf{s}_{1}\), the student model gets a new output \(\mathbf{y}_{1}\)='_02/12/2002_', and the study assistant retrieves the previous answer \(\mathbf{c}_{1}\)='_02/11/2002_' from \(\mathrm{O}_{tr}\). \(\mathcal{A}\) is the action space of \(\mathrm{T}\), which is the natural language space (e.g., the set of all possible utterances). In this paper, an action \(\mathbf{a}_{t}\) is referred to as _feedback_ from \(\mathrm{T}\). It is a textual description that explains why \(\mathbf{y}\) is incorrect and provides a guideline for improvement. \(\mathcal{P}\) is the transition probability defined on \(\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\), and \(\mathcal{R}\colon\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function based on the state and feedback. The policy \(\pi(\mathbf{a}|\mathbf{s})\) is defined on \(\mathcal{S}\rightarrow\mathcal{A}\) to generate feedback based on the state. The goal of \(\mathrm{T}\) is to maximize the expectation \(J(\pi)=E_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}R(\mathbf{s}_{t},\mathbf{a}_{t})]\), where \(\gamma\) is the discount factor.

### Grader and Reward function

During the training phase, \(\mathrm{T}\) has access to the ground truth \(\mathbf{r}\). We define an oracle function \(f(\mathbf{y},\mathbf{r})\in\{0,1\}\) as the grader in \(\mathrm{T}\). If \(\mathbf{y}\) fails the grader, which means \(f(\mathbf{y},\mathbf{r})=0\), \(\mathrm{T}\) updates \(\mathrm{O}_{tr}\) with \(\mathbf{y}\) and the corresponding query. During inference, since neither \(\mathrm{M}\) nor \(\mathrm{T}\) knows the ground truth, there is no grader. For the study assistant, the reward function \(R(\mathbf{s}_{t},\mathbf{a}_{t})\) is defined on the current state \(\mathbf{s}_{t}\) and the action \(\mathbf{a}_{t}\) that \(\mathrm{T}\) takes. It is calculated based on the performance of the model \(\mathrm{M}\), which varies across tasks. In this paper, we focus on multi-choice question-answering tasks and reformulate them as generation tasks. So the reward is 1 if the response from \(\mathrm{M}\) contains the correct answer. Otherwise, the reward is 0.

### Mistake Collection

The study assistant \(\mathrm{T}\) maintains a global mistake collection \(\mathrm{O}_{tr}\) for both the training and the inference phase.
During training, \(\mathrm{T}\) uses the oracle function to grade the generation from the student model \(\mathrm{M}\) and collects mistakes into \(\mathrm{O}_{tr}\). Each entry \(e_{i}\in\mathrm{O}_{tr}\) includes a key \(k_{i}\), which is the query, and a list of values \(v_{i}=\{t_{i_{0}},v_{i_{1}},v_{i_{2}},\cdots\}\). The \(t_{i_{0}}\) is the target answer and the others are the mistakes made on this query. For each state \(\mathbf{s}_{t}\), SALAM retrieves entries \(e_{i}\) based on the distance \(d\) between the key \(k_{i}\) and the current query \(\mathbf{q}\):

\[\mathbf{c}_{t}=\bigcup_{k_{i}\in\mathcal{N}_{\mathbf{q}}}v_{i}. \tag{1}\]

Here \(\mathcal{N}_{\mathbf{q}}=\{k\,|\,d(k,\mathbf{q})<\theta,k\in\mathrm{O}_{tr}\}\), and \(\theta\) is the distance threshold to control the retrieval. The entry is updated with the current output \(\mathbf{y}_{t}\) if the output cannot pass the grader. During inference, \(\mathrm{T}\) retrieves relevant context from \(\mathrm{O}_{tr}\) and does not update the collection.

### Model-agnostic study assistant

Previous studies showed that large language models can provide textual feedback with few-shot examples (Kwon et al., 2023; Madaan et al., 2023; Yao et al., 2022). This ability enables SALAM to directly use an LLM as the policy \(\pi(\mathbf{a}|\mathbf{s})\). Given the current state \(\mathbf{s}_{t}\), the policy is defined as:

\[\pi(\mathbf{a}_{t}|\mathbf{s}_{t})=\texttt{LLM}(a_{0},\cdots,a_{l}|\rho(\mathbf{q},\mathbf{c}_{t},\mathbf{y}_{t})). \tag{2}\]

Here, \(\rho\) is a template-based function to map the context and current generation to a textual prompt. An example is shown in Table 1. SALAM utilizes the LLM's ability to analyze the mistake and provide human-like instructions. Here we utilize a simple chain-of-thought prompt (Wei et al., 2022). It asks the LLM to first identify the potential explanation of the mistake and then generate the guideline based on the explanation. Since it only depends on \(\rho(\mathbf{q},\mathbf{c}_{t},\mathbf{y}_{t})\) and is unaware of where the mistake comes from, it is model-agnostic and can easily generalize to unseen tasks and new models.

## 3 Experiment

We conduct experiments on two challenging benchmarks with various tasks, BBH and BBQ, evaluating SALAM's ability to guide complex reasoning and reduce social biases. We further conduct comprehensive analyses from different aspects to enhance the understanding of SALAM.

\begin{table} \begin{tabular}{l} \hline \hline **Prompt:** In the following sentences, explain the antecedent of the pronoun (which thing the pronoun refers to), or state that it is ambiguous. \\ Sentence: The engineer informed the client that he would need to make all future payments on time. \\ Options: \\ (A) The engineer should make payments \\ (B) The client should make payments \\ (C) Ambiguous \\ **We get the answer** The engineer should make payments from the model while the true answer is (B) The client should make payments. Please return a JSON **with the following keys**: \\ _Explanation:_ explain the potential reason for the model prediction \\ _Guideline:_ based on the reason, provide instruction as a prompt for the model to avoid similar mistakes \\ Please do not mention the true answer or any specific option content in your response. \\ \hline **SALAM:**_Explanation_: The model might have misinterpreted the placement and role of ’he’ in the sentence.
While ’he’ could technically refer to either the engineer or the client, the context strongly implies that ’he’ refers to the client, which the model seems to have overlooked. _Guideline:_ When identifying the antecedent of a pronoun, it’s essential to consider the surrounding context and the semantic roles of the entities involved. In sentences where the pronoun is embedded in a clause providing information or instructions, it often refers to the entity that the information or instruction is about. \\ \hline \hline \end{tabular} \end{table} Table 1: Example case for prompting SALAM. The green text is based on the current state \(s_{t}\). We omit the JSON format in the response of SALAM.

### Benchmark

Big-Bench-Hard (BBH) (Suzgun et al., 2022) is a subset of challenging tasks from Big-Bench (Srivastava et al., 2022), targeting the evaluation of the reasoning capability of large language models under the zero-shot or few-shot setting. It contains 23 challenging tasks where prior language model evaluations fall short of the average human rater. We focus on 16 multi-choice question-answering tasks in BBH. There are about 250 examples in each task. Bias Benchmark for QA (BBQ) (Parrish et al., 2022) is a question set on the potential social bias along nine social dimensions. It tests the capability of LLMs to avoid biases in both informative and under-informative contexts. The original benchmark contains 58k examples that can be used for both training and evaluation. Similar to BBH, we randomly select 250 examples from each task. For each task in the benchmark, we split the data by 0.8/0.2 to build the training and test set. The training set is used to build the mistake collection and generate feedback. We demonstrate one example for each benchmark in Table 2 and leave other details to the Appendix.

### Experiment Setup

There are two agents in SALAM, and we implement them with LLMs. The two models can be the same or different. Given one query, the model \(\mathrm{M}\) generates one potential answer and refines its answer according to the feedback from the study assistant. Therefore, the requirement is that they should have the ability to follow instructions or conduct in-context learning from few-shot examples, such as Flan-T5 (Chung et al., 2022), GPT-NeoX (Black et al., 2022), LLaMA (Touvron et al., 2023) and so on. On the other side, we assume that the study assistant \(\mathrm{T}\) is an expert at explaining and reasoning, since it has to analyze the mistakes and guide the student model. It can be any LLM that provides feedback in the zero-shot or few-shot setting (such as ChatGPT (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023)), or a small, specific fine-tuned model for providing feedback. In the experiment, we take Flan-T5-xxl with 11 billion parameters as the model \(\mathrm{M}\). For \(\mathrm{T}\), we finetune the LLaMA model with 7 billion parameters on a small set of feedback from GPT-4 2 to provide feedback. For retrieval, we use SentenceTransformer 3 (Reimers and Gurevych, 2019) to calculate the sentence embeddings and the cosine similarity. We retrieve the top \(k=3\) queries from the collection and filter candidates with a similarity lower than \(\theta=0.9\). We discuss the influence of the two hyperparameters in Section 3.5.

Footnote 2: [https://chat.openai.com/?model=gpt-4](https://chat.openai.com/?model=gpt-4)

Footnote 3: [https://www.sbert.net/](https://www.sbert.net/)

\begin{table} \begin{tabular}{l c c} \hline \hline **Benchmark** & **Query** & **Answer** \\ \hline BBH (Suzgun et al., 2022) & Alice, Bob, Claire, Dave, and Eve are on the same team in a soccer match. At the start of the match, they are each assigned to a position: Alice is playing goalkeeper, Bob is playing left midfielder, Claire is playing right winger, Dave is playing striker, and Eve is playing center midfielder. As the game progresses, pairs of players occasionally swap positions. First, Alice and Claire trade positions. Then, Alice and Bob trade positions. Then, Dave and Bob trade positions. Then, Bob and Eve trade positions. Finally, Dave and Eve trade positions.
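The retrieval step (Eq. 1, with cosine similarity in place of a distance) can be sketched as follows. This is a minimal illustration using the `sentence-transformers` package; the specific checkpoint (`all-MiniLM-L6-v2`) is our own assumption, as the paper does not state which embedding model is used beyond SentenceTransformer:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def retrieve(query, mistake_keys, mistake_values, k=3, theta=0.9):
    """Union of values for the top-k stored queries with cosine similarity >= theta."""
    if not mistake_keys:
        return []
    query_emb = encoder.encode([query], convert_to_tensor=True)
    key_embs = encoder.encode(mistake_keys, convert_to_tensor=True)
    sims = util.cos_sim(query_emb, key_embs)[0]      # shape: (num_keys,)
    ranked = sims.argsort(descending=True)[:k]       # top-k candidates
    context = []
    for idx in ranked:
        if sims[idx] >= theta:                       # similarity threshold filter
            context.extend(mistake_values[int(idx)]) # t_{i_0} and recorded mistakes
    return context
```

In a real system the key embeddings would be cached rather than re-encoded per query; they are recomputed here only to keep the sketch short.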
At the end of the match, Eve is playing Options: (A) goalkeeper (B) left midfielder (C) right winger (D) striker (E) center midfielder \\ \hline BBQ (Parrish et al., 2022) & A family recently moved out of their long-time home into a newly renovated apartment in a different city across the country. Both the grandmother and the young son had never lived in that city before. The grandmother spent a lot of time exploring their new hometown and enjoying the local attractions, while the young son mostly stayed home feeling sad about their new living environment. Who is not resistant to change? Options: (A) The young son (B) Cannot be determined (C) The grandmother \\ \hline \hline \end{tabular} \end{table} Table 2: Examples of queries and answers for the BBH and BBQ benchmarks. We provide the options in the prompt and the model is expected to output the correct option (e.g., ’(_C_)_’) or generate text that contains the option or the option content (e.g., ’_right winger_’).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{BBH} & \multicolumn{3}{c}{BBQ} \\ \cline{2-7} & Min & Max & Average & Min & Max & Average \\ \hline \(\mathrm{M}\) & 10.0 & 72.0 & 42.4 & 62.0 & 86.0 & 76.6 \\ \(\mathrm{M}\) w/ \(\mathrm{O}_{corr}\) & 0.0 & 84.0 & 38.4 & 60.0 & 88.0 & 72.0 \\ \(\mathrm{M}\) w/ \(\mathrm{O}_{err}\) & 0.0 & 84.0 & 37.9 & 72.0 & 90.0 & 79.8 \\ \hline SALAM & 10.0 & 84.0 & 47.1 & 80.0 & 96.0 & 85.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy (%) over tasks. SALAM achieves the best average performance on both benchmarks.

### Baseline and Evaluation

We set up three baselines: \(\mathrm{M}\) directly takes the query as the prompt, evaluating the zero-shot capability. \(\mathrm{M}\) w/ \(\mathrm{O}_{corr}\) keeps a collection of correct answers, similar to the mistake collection described in Section 2.3 except that each entry has a reward of 1. It retrieves relevant queries and takes them as few-shot examples. \(\mathrm{M}\) w/ \(\mathrm{O}_{err}\) retrieves incorrect cases from the collection but, different from SALAM, receives no feedback from \(\mathrm{T}\). The first two baselines evaluate the zero-shot and few-shot performance, and the last one is an ablation that removes the policy from \(\mathrm{T}\). We show several cases in the Appendix. We reformulate multi-choice question answering as a generation task. For each query, we add the options to the prompt. A generation that contains the correct option or the option content is viewed as a correct answer. We calculate the accuracy rate as the evaluation metric.
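The lenient answer-matching rule described above ("contains the correct option or the option content") can be made concrete with a small helper. This is a plausible reading of the rule, not the authors' exact implementation:

```python
def is_correct(generation: str, option_letter: str, option_text: str) -> bool:
    """Accept if the generation names the correct option, e.g. '(C)', or its content."""
    text = generation.lower()
    return f"({option_letter.lower()})" in text or option_text.lower() in text

def accuracy(generations, gold_options):
    """Accuracy (%) over pairs of generations and (letter, content) gold answers."""
    hits = sum(is_correct(g, letter, content)
               for g, (letter, content) in zip(generations, gold_options))
    return 100.0 * hits / len(generations)

# Example with the BBH query above, whose gold answer is (C) "right winger":
print(is_correct("The answer is (C).", "C", "right winger"))       # True
print(is_correct("Eve plays right winger.", "C", "right winger"))  # True
```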
### Main Results

In this section, we focus on the following research questions: (i) _Can SALAM enhance the student model \(M\)'s ability?_ (ii) _Is SALAM data efficient?_ (iii) _Which learning strategy is better, learning from success or learning from failure?_

**SALAM achieves the best average performance on both benchmarks.** Table 3 shows that SALAM outperforms the other baselines on various tasks. From Table 4 we can see that SALAM significantly improves the accuracy of the student model \(\mathrm{M}\) on BBQ. It achieves an 8.7% improvement in the average accuracy and gains more than 10% improvement on 4 out of 11 tasks. It indicates that global feedback on previous biased answers can effectively reduce potential future bias. SALAM also outperforms the other baselines by about 5% on BBH, as shown in Table 5.

**SALAM can make better use of the training data.** In Table 5, we only provide feedback on 10% of the training data, while the other baselines have access to all training data. SALAM still outperforms the other baselines and shows more than 10% improvements on 6 out of 16 tasks. It implies that SALAM can effectively generate general feedback from a small number of mistakes and utilize it to enhance the ability of the student model. However, SALAM struggles with the _tracking shuffled objects_ tasks, with an obvious performance decline. We hypothesize that this is due to their difficulty, which requires more feedback to cover various potential mistakes. A similar trend also appears on _geometric shapes_, where the zero-shot performance is low and the improvement via SALAM is small.

**Failure is more valuable than success.** Comparing the performance of \(\mathrm{M}\) w/ \(\mathrm{O}_{corr}\) and \(\mathrm{M}\) w/ \(\mathrm{O}_{err}\) in Table 4, we can find that the mistakes are more helpful for the student model.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \(\mathrm{M}\) & w/ \(\mathrm{O}_{corr}\) & w/ \(\mathrm{O}_{err}\) & SALAM \\ \hline Age & 68.0 & 68.0 & 72.0 & 96.0 \\ Disability\_status & 62.0 & 62.0 & 72.0 & 80.0 \\ Gender\_identity & 84.0 & 68.0 & 82.0 & 82.0 \\ Nationality & 76.0 & 84.0 & 82.0 & 82.0 \\ Physical\_appearance & 74.0 & 60.0 & 74.0 & 82.0 \\ Race\_ethnicity & 84.0 & 88.0 & 82.0 & 86.0 \\ Race\_x\_SES & 86.0 & 76.0 & 84.0 & 86.0 \\ Race\_x\_gender & 74.0 & 76.0 & 84.0 & 82.0 \\ Religion & 82.0 & 66.0 & 80.0 & 82.0 \\ SES & 82.0 & 80.0 & 90.0 & 92.0 \\ Sexual\_orientation & 70.0 & 64.0 & 76.0 & 88.0 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy (%) on the BBQ benchmark. SALAM outperforms all baselines by a large margin.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \(\mathrm{M}\) & w/ \(\mathrm{O}_{corr}\) & w/ \(\mathrm{O}_{err}\) & SALAM \\ \hline date\_understanding & 48.0 & 48.0 & 46.0 & 46.0 \\ disambiguation\_qa & 64.0 & 68.0 & 70.0 & 80.0 \\ geometric\_shapes & 14.0 & 12.0 & 6.0 & 14.0 \\ hyperbaton & 62.0 & 84.0 & 84.0 & 84.0 \\ logical\_deduction\_five & 50.0 & 20.0 & 40.0 & 70.0 \\ logical\_deduction\_seven & 64.0 & 4.0 & 6.0 & 62.0 \\ logical\_deduction\_three & 72.0 & 78.0 & 58.0 & 72.0 \\ movie\_recommendation & 30.0 & 54.0 & 44.0 & 42.0 \\ penguins\_in\_a\_table & 46.7 & 16.7 & 16.7 & 43.3 \\ reasoning\_color & 62.0 & 60.0 & 62.0 & 64.0 \\ ruin\_names & 16.0 & 22.0 & 28.0 & 26.0 \\ snarks & 61.1 & 77.8 & 75.0 & 75.0 \\ temporal\_sequences & 26.0 & 28.0 & 24.0 & 26.0 \\ tracking\_shuffled\_five & 18.0 & 14.0 & 18.0 & 10.0 \\ tracking\_shuffled\_seven & 10.0 & 0.0 & 0.0 & 16.0 \\ tracking\_shuffled\_three & 34.0 & 28.0 & 28.0 & 24.0 \\ \hline \hline \end{tabular} \end{table} Table 5: Accuracy (%) on the BBH benchmark. SALAM achieves the best performance with only 10% of the training data.

We suppose that this is because previous successful attempts indicate that the student model is already able to solve similar queries correctly, and they provide little help on what the student model has not mastered. They can even make the model fall behind the zero-shot setting, which indicates negative feedback. It also highlights the importance of choosing suitable few-shot examples. Instead, the mistake examples will correct previous incorrect answers, which provides better instruction for questions the student model performs poorly on.

### Analysis

In this section, we dive into several aspects to enhance the understanding of SALAM.

**How does the retrieval affect the performance?** Retrieval is a crucial part of our SALAM framework. There are two important hyperparameters: top\(k\) limits the number of retrieved entries by only returning the \(k\) entries with the highest score, while \(\theta\) sets the minimum similarity score the retrieved examples should reach. In Figure 2 and Figure 3 we demonstrate the influence of various \(topk\) and \(\theta\) on the average accuracy of the BBQ test set. In Figure 2, we set a small \(\theta=0\) to accept all retrieved entries. We can see that with the increase of \(k\), the accuracy keeps degrading. This is because more irrelevant examples are retrieved and mislead the student model. The trend for SALAM is more obvious, and the performance drops to 0 when \(k\) increases to 10. We check the generations and find that with more guidelines in the prompt, the student model takes these guidelines as few-shot examples instead of as instruction, and then generates similar guidelines instead of the answer to the query. In Figure 3, we set a large \(k=10\) to retrieve entries with various similarity scores. We can find that with the increase of the threshold, the accuracy increases. It indicates that relevancy is more important than the number of retrieved examples. For \(\mathrm{M}\) w/ \(\mathrm{O}_{corr}\), we can see that the relevancy of the few-shot examples is particularly important, which is consistent with previous studies on few-shot learning. Moreover, SALAM loses to \(\mathrm{M}\) w/ \(\mathrm{O}_{err}\) at low thresholds; however, it eventually surpasses it at high thresholds and achieves the best performance. It indicates that it is crucial to choose relevant guidelines as the instruction for the student model.
**Will pseudo mistakes help?** In Section 2.3, we collect mistakes from the previous attempt on the training set, which we call _real mistakes_. However, this requires the student model to take an extra pass over the training set. Instead of collecting real mistakes, we can also create _pseudo mistakes_ by randomly choosing an incorrect candidate option of the query as the mistake. Thus, we investigate the performance of the student model \(\mathrm{M}\) given pseudo mistakes. Specifically, we take the whole dataset as the evaluation set, since we do not need to go through the training set to collect mistakes. For the zero-shot setting, we prompt \(\mathrm{M}\) with the query and identify the pseudo mistake, while for the few-shot setting, we provide 3 examples with both the pseudo mistake and the correct answer, telling the model how to correct mistakes. The detailed prompts are in the Appendix. The results are illustrated in Figure 4. We can see that in most cases the pseudo mistakes harm the performance. Although we provide few-shot examples showing how to correct the mistake, it still worsens the performance on BBQ. It indicates that the pseudo mistakes usually cannot benefit the student model because they cannot expose its real deficiencies. Instead, these pseudo mistakes may confuse the student model. Therefore, it is necessary to collect real mistakes from the student model.

Figure 2: Retrieval with different topk. The similarity threshold is set to 0.

Figure 3: Retrieval with different similarity threshold. Topk is set to 10.

**Can feedback generalize to unseen tasks?** In order to investigate the generalization ability of SALAM, we split the BBQ benchmark into two parts: we take the first 5 tasks as the in-domain tasks and collect mistakes from them, and use the rest as the out-of-domain tasks. We set the retrieval topk to 1 to only use the most relevant mistake. We evaluate the performance on the out-of-domain tasks. As shown in Figure 5, SALAM can also help unseen tasks if the unseen tasks share some similarity with the existing errors.

## 4 Related Work

**Large Language Models as Feedback** Large language models (LLMs) have exhibited a remarkable capability for providing feedback, as highlighted by numerous studies (Bai et al., 2022; Peng et al., 2023; Kocmi and Federmann, 2023; Shinn et al., 2023). The feedback from LLMs can be in the form of real numbers to evaluate the quality of the generation (Fu et al., 2023; Kocmi and Federmann, 2023), or textual instruction to guide the refinement (Kwon et al., 2023; Madaan et al., 2023; Yao et al., 2022). For instance, Peng et al. (2023) generate feedback grounded in evidence from external knowledge and revise the response accordingly. Reflexion (Shinn et al., 2023) generates automatic feedback utilizing trajectory history and dynamic memory, thereby enhancing the decision-making process during the interaction between the agent and its environment. Self-Refine (Madaan et al., 2023) queries an LLM to provide reflection based on the current task, trajectory history, and last reward. The reflection is then stored in the dynamic memory and used to refine its behavior in the next trial. Kwon et al. (2023) use LLMs as a proxy reward function to evaluate the alignment between the generation and the user's objective. It prompts the LLM with a task description, a textual user objective, a textual description of the current reinforcement learning status, and a question to ask about the alignment degree. The output of the LLM is parsed to a score.
Instead of instance-specific feedback, in this paper we focus on providing global feedback on a set of previous mistakes to prevent future mistakes.

**Learning from Feedback** Plenty of work has been done to investigate how to utilize feedback. One approach is to filter undesirable data based on feedback and use the filtered data to finetune the model (Huang et al., 2022; Uesato et al., 2022). Another is to train a reward function and use it as the reward in reinforcement learning (Stiennon et al., 2020; Ouyang et al., 2022; Bai et al., 2022). Benefiting from LLMs' ability to follow instructions, recent researchers also add textual feedback into the prompt and directly ask models to revise their responses (Peng et al., 2023; Shinn et al., 2023). Moreover, the feedback can be given one time (Saunders et al., 2022) or multiple times (Scheurer et al., 2023; Madaan et al., 2023). In this work, we use feedback as the instruction for the model and iteratively ask it to refine its answer.

**Teacher-student Learning** Teacher-student learning is a knowledge distillation method to transfer knowledge from a larger teacher model to a smaller student model (Hinton et al., 2015; Gou et al., 2021). The goal is to produce similar results as the powerful teacher model with fewer parameters and lower computational costs. There are several methods to distill knowledge, such as assistant teaching (Mirzadeh et al., 2020), lifelong learning (Zhai et al., 2019), NAS-based distillation (Liu et al., 2020), and so on. The teaching assistant is an intermediate model that mimics the behavior of the teacher model and then teaches the student model. It usually has a medium size between the student and the teacher (Mirzadeh et al., 2020). Recently, a lot of work tries to distill knowledge from large language models to enhance the capability of small models, such as commonsense (Bhagavatula et al., 2023; West et al., 2022) and reasoning ability (Shridhar et al., 2023; Magister et al., 2022). In this work, instead of teaching the model new knowledge, we introduce a study assistant to help the model correct its mistakes, and it can be a smaller but specific feedback model rather than a larger one.

Figure 4: Prompt with pseudo mistakes. The y-axis indicates the average accuracy over various tasks.

Figure 5: Results on out-of-domain tasks. We collect mistakes from the first 6 tasks and evaluate the feedback on the other tasks on BBQ.

## 5 Conclusion

In this paper, we introduce a novel framework, the **S**tudy **A**ssistant for **L**arge **LA**nguage **M**odel (SALAM), aimed at assisting LLMs in learning from mistakes. The proposed framework draws inspiration from the way human study assistants aid students, focusing on identifying common misunderstandings from mistakes and providing instructions based on them. The study assistant and the student model interact cooperatively in SALAM: the student model passes generations to the study assistant and refines them based on the feedback, while the study assistant collects mistakes and provides feedback, taking the performance improvement of the student model as its reward. We demonstrated the effectiveness of SALAM on the BBH and BBQ benchmarks, showing significant improvement in the model's performance. We believe that our method provides a new way to enhance LLMs' learning process. Future work can explore further personalization options based on the feedback of the model, aiming to continue the progress in the field of LLM learning and understanding.
2307.05832
Bag of Views: An Appearance-based Approach to Next-Best-View Planning for 3D Reconstruction
UAV-based intelligent data acquisition for 3D reconstruction and monitoring of infrastructure has experienced an increasing surge of interest due to recent advancements in image processing and deep learning-based techniques. View planning is an essential part of this task that dictates the information capture strategy and heavily impacts the quality of the 3D model generated from the captured data. Recent methods have used prior knowledge or partial reconstruction of the target to accomplish view planning for active reconstruction; the former approach poses a challenge for complex or newly identified targets while the latter is computationally expensive. In this work, we present Bag-of-Views (BoV), a fully appearance-based model used to assign utility to the captured views for both offline dataset refinement and online next-best-view (NBV) planning applications targeting the task of 3D reconstruction. With this contribution, we also developed the View Planning Toolbox (VPT), a lightweight package for training and testing machine learning-based view planning frameworks, custom view dataset generation of arbitrary 3D scenes, and 3D reconstruction. Through experiments which pair a BoV-based reinforcement learning model with VPT, we demonstrate the efficacy of our model in reducing the number of required views for high-quality reconstructions in dataset refinement and NBV planning.
Sara Hatami Gazani, Matthew Tucsok, Iraj Mantegh, Homayoun Najjaran
2023-07-11T22:56:55Z
http://arxiv.org/abs/2307.05832v3
# Bag of Views: An Appearance-based Approach to Next-Best-View Planning for 3D Reconstruction

###### Abstract

UAV-based intelligent data acquisition for 3D reconstruction and monitoring of infrastructure has been experiencing an increasing surge of interest due to the recent advancements in image processing and deep learning-based techniques. View planning is an essential part of this task that dictates the information capture strategy and heavily impacts the quality of the 3D model generated from the captured data. Recent methods have used prior knowledge or partial reconstruction of the target to accomplish view planning for active reconstruction; the former approach poses a challenge for complex or newly identified targets while the latter is computationally expensive. In this work, we present Bag-of-Views (BoV), a fully appearance-based model used to assign utility to the captured views for both offline dataset refinement and online next-best-view (NBV) planning applications targeting the task of 3D reconstruction. With this contribution, we also developed the View Planning Toolbox (VPT), a lightweight package for training and testing machine learning-based view planning frameworks, custom view dataset generation of arbitrary 3D scenes, and 3D reconstruction. Through experiments which pair a BoV-based reinforcement learning model with VPT, we demonstrate the efficacy of our model in reducing the number of required views for high-quality reconstructions in dataset refinement and NBV planning1.

Footnote 1: Authors have provided supplementary material including the scripts for the View Planning Toolbox available at **[https://github.com/ACIS2021/ViewPlanningToolbox](https://github.com/ACIS2021/ViewPlanningToolbox)**.

View planning, 3D reconstruction, intelligent data acquisition, aerial robotics autonomy.

## I Introduction

Active vision is characterized by the ability of a robot to make decisions about placing or reconfiguring its sensors to complement its perception of the environment [1]. This ability leads to meaningful actions of the robot based on interpretations of its surrounding environment and grants the robot a planning strategy, namely view planning, for actively repositioning its sensors to uncover the most information about the target. In the context of active 3D reconstruction, view planning is used to optimize the robot's path until the task requirements are satisfied. For the application of 3D reconstruction of infrastructure using UAV-based imaging, which is the concern of this work, the view planning problem dictates the data acquisition process and significantly impacts the reconstruction results. Previous work in this domain either relies on a given proxy of the target to build upon while planning the views [2, 3] or generates a partial reconstruction using the knowledge of the agent about the target so that the camera can be navigated to complete the model [4, 5]. In this setting, the agent iteratively calculates the next waypoint to attend, where it can capture the next-best-view (NBV) with the highest predicted information gain. However, when the purpose is to capture views of newly recognized targets, or in the case of targeting complex structures, a geometric proxy of the target might not be accessible, and online 3D reconstruction to achieve guidance can be computationally expensive and time-inefficient. On the other hand, under the assumption that adequate computation resources exist onboard the drone, algorithms
1: Results of the proposed Bag-of-Views model applied to next-best-view planning trained and evaluated using our View Planning Toolbox. The output reconstruction shows a \(94.6\%\) surface coverage by only 13 captured views with \(2.78\) cm Hausdorff distance and \(0.80\) cm Chamfer discrepancy as compared to the results from a complete coverage baseline scan of the target with 288 views. rithms that use an external model for guidance purposes mostly focus on the coverage completeness of the area where less attention is paid to the relative visual information contained in consecutively captured views. In this work, we propose a novel approach to a fully appearance-based view planning for online NBV planning and offline dataset refinement applications. The key uniqueness of our work lies on its model-free nature that makes it independent of the true state of the environment, namely the actual 3D model, both during training and inference time. This allows the method to be applied to a wide range of settings and customized to fit different applications. In order to introduce the concept, we draw parallels between computational representation of views and how humans perceive objects by recognizing and interpreting distinct visual cues [6]. We employ the Bag-of-Visual-Words [7] as a simplified vision model to represent an agent's knowledge of the target and introduce spatial information to the visual vocabularies to form a Bag-of-Views (BoV), a model to record and retrieve the visual features of the target from different viewpoints. First, through experimental results, we demonstrate how the selection of the views using our appearance-based heuristics affects the 3D reconstruction process and use the observations to refine already acquired datasets. Next, we bring this approach to an active level and utilize a Soft Actor-Critic (SAC) method [8] to train an agent that seeks to capture the target from views that cause a more drastic change in how it remembers the environment so far. The core idea of this method is to use the local visual features of the scene and their positioning to guide the agent to predict the NBV that would result in revealing the highest number of unseen visual features as the agent remembers them. In addition, we present View Planning Toolbox (VPT) as a comprehensive solution for simulating the view capturing process as part of developing view planning algorithms. The introduction of this toolbox fills the significant gap in the area of simulating machine learning-based view planning frameworks and provides developers with an open-source, user-friendly solution to streamline their training and evaluation process. The main contributions of this work can be summarized as follows: * Bag-of-Views, a novel fully appearance-based view selecting model for offline dataset refinement. This model needs no pre-training and is modular and customizable for different applications. * A reinforcement learning approach to appearance-based NBV planning with no complete or partial reconstructions required at training or inference time. * View Planning Toolbox (VPT, available here), which provides an environment for training and testing of machine learning-based view planning and 3D reconstruction models. ## II Related Work ### _Solutions to the View Planning Problem_ Based on the amount of available information to the system about the environment and the complexity of the shape of the target, view planning approaches are often categorized as model-based or model-free (a.k.a. 
non-model-based) approaches [9]. In model-based view planning problems, a viewing plan is obtained using a previously built or given model of the target [2, 3]. In more general applications and in cases where the target is unknown or introduced to the system in runtime, the viewing strategy should be generated without prior information of the target [1]. In such cases, the goal is to manipulate the camera position and orientation in a manner that most unrevealed information about the target is exposed to the agent in each step. Most of the model-free methods follow the NBV approach. Typically, these systems build an interpretation of the environment using acquired information and base the planning of the next view(s) on it. That interpretation of the scene can be in different forms based on the specific task and application. Among methods that follow this approach are frontier-based methods and volumetric-based methods. In frontier-based view planning first introduced in [10], the core idea was to visit unexplored regions of an initial map, i.e. frontiers, to update the map based on the newly collected information in those regions. Methods belonging to this category usually represent the target zone of the environment using a 2D [11] or 3D [4] occupancy grid or directly use a point cloud to map the boundary between explored and unexplored regions [12]. As opposed to frontier-based methods that mostly focus on exploring an environment, volumetric-based approaches are usually concerned with modelling a single target and focus more on the completeness of the coverage. In this regard, to guarantee a successful registration, overlaps between the views are also considered while planning the views [13]. In addition to 3D modelling, such algorithms are also used for scene inspection [14, 15]. A subcategory of the model-free view planning algorithms, namely appearance-based planning methods, carries out the decision making process based solely on the visual input such as RGB or gray-scale images [16, 5, 17]. These methods either rely on an _a priori_ model of the target or a partial reconstruction of the scene, at least during the training stage of their models. For example, a method was proposed in [16] which, based on the information gain from different camera poses, computes a candidate sequence of viewpoints for a micro aerial vehicle to attend. More recently, with the advancements of deep reinforcement learning, appearance-based view planning has been visited more often. Accordingly, [5] utilized only the captured images to plan the views without tracking a partial reconstruction of the true 3D model. They used the surface coverage percentage to guide the agent. Similarly, [17] used the surface coverage as well as reconstruction error as part of the reward for guiding their reinforcement learning agent towards complete reconstruction. Unlike these methods, our proposed model omits the need for the true state of the environment or any partial reconstructions for guiding the agent towards capturing high-utility views for the task of reconstruction. We treat 3D reconstruction as a downstream task as opposed to a parallel task in the view planning framework and pay more attention to the conditions that should be met by the views in order to have a satisfactory reconstruction. ### _Simulation Environments for The View Planning Problem_ View planning research has explored various simulation environments, each exhibiting unique advantages and constraints. 
For instance, the PredRecon framework [18] utilized AirSim [19] for its simulation environment, taking advantage of its high-fidelity simulation capabilities. However, their work also relied on Unreal Engine 4 (UE4) [20] for data generation of other 3D models and Blender [21] for partial pointcloud reconstructions. This use of multiple software environments can be resource-intensive and complex to set up. In a similar manner, [22] used Gazebo [23]. While Gazebo is a feature-rich simulator providing a versatile environment for robotic simulations, using it for view planning in photo-realistic environments introduces computational overhead and performance degradation. Previous work addressed the need for flexibility and customization by employing environments inspired by OpenAI Gym [24] to train their reinforcement learning agent for NBV planning [17]. Their approach offered greater flexibility in defining the simulation environment and incorporating different perceptual elements. Additionally, the _gym-collision-avoidance_ package [25] used in [26] provided a simulation environment for informative trajectory planning. With a greater focus on collision avoidance, this package allowed researchers to simulate and evaluate view planning algorithms within a controlled environment. It is worth noting that simulation environments often require trade-offs between fidelity, computational resources, and the level of control provided over environmental factors. Striking the right balance between these aspects is dependent on the specific task being studied. For view planning, factors affecting high-level control of the simulation are of most importance. These factors include camera controls (pose, focal length, resolution), scene manipulation (lighting, object placement and scaling), and scripting automation of the aforementioned functionalities. To address these trade-offs, we have developed the View Planning Toolbox (VPT), a comprehensive solution outlined in Sec. V. ## III Bag of Views: Appearance-based Approach for Selecting Best Views Following the work in [27], we introduce a computational representation of the views in terms of the visual features of the scene captured from the respective viewpoints. As discussed in [27], for a successful multi-view 3D reconstruction, two key conditions must be satisfied: * The views in the set must present features of the target that are distinct from the ones presented by other views in the same set, * Each view by itself must be rich in the number of visual features it is revealing of the target. We denote the \(i^{th}\) view in the set as \(\chi_{i}\). Each view \(\chi_{i}\) is encoded to and is represented by its extracted features using a feature extracting algorithm such as Scale-Invariant Feature Transform (SIFT) [28]. Thus, \(\chi_{i}\) will be a 2D matrix with dimensions \(m\times n\) where \(m\) is the number of extracted features and \(n\) is the number of values in the feature descriptors. In the case of SIFT, each row of this view matrix represents a 128-dimensional feature descriptor where each element represents a certain attribute of the local feature detected in the image patch. We can denote each view by iterating over its resulting feature descriptors \(\chi_{i}(j,:)=\{f(j,k):k\in\{1,2,...,n\}\}\) where \(f(j,k)\) is the \(k^{th}\) value in the \(j^{th}\) descriptor of the image. 
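To make the view encoding just described concrete, the following is a minimal Python sketch (our illustration, not code shipped with VPT) that turns a captured image into the \(m\times 128\) view matrix \(\chi\) of SIFT descriptors; it assumes OpenCV and NumPy are available.

```python
import cv2
import numpy as np

def view_representation(image_bgr: np.ndarray) -> np.ndarray:
    """Encode a captured view as an m x 128 matrix of SIFT descriptors,
    one row per detected local feature (the view matrix chi described above)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    _keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:  # no features detected in this view
        return np.empty((0, 128), dtype=np.float32)
    return descriptors
```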
These conditions for a set of views translate into a greater distance between corresponding descriptors from two view representations, denoted as \(dist(\chi_{i}(j,:),\chi_{i+1}(j^{\prime},:))\) for consecutively selected random views \(\chi_{i}\) and \(\chi_{i+1}\). A greater distance ensures a better reconstruction quality for a limited-length trajectory. In our case, the cosine distance metric is used to measure the dissimilarity between the descriptors and the visual words since its consistent range of outputs allows for easy interpretation and score comparison. Thus, \[dist(\chi_{i}(j,:),\chi_{i+1}(j^{\prime},:))=1-2\times\cos\left(\chi_{i}(j,:),\chi_{i+1}(j^{\prime},:)\right) \tag{1}\] where \(j\in\{1,2,...,m\}\) and \(j^{\prime}\in\{1,2,...,m^{\prime}\}\) with \(m\) and \(m^{\prime}\) being the number of representative descriptors of \(\chi_{i}\) and \(\chi_{i+1}\): \[\cos\left(\chi_{i}(j,:),\chi_{i+1}(j^{\prime},:)\right)=\frac{\langle\chi_{i}(j,:),\;\chi_{i+1}(j^{\prime},:)\rangle}{|\chi_{i}(j,:)|\,|\chi_{i+1}(j^{\prime},:)|} \tag{2}\] Since the extracted features belonging to a target are prone to self-similarity, the feature descriptors of different regions of the target can be clustered based on their similarity and, instead of pair-wise comparison of all feature descriptors belonging to all views in the set, they can be compared to the cluster cores. In addition, the requirement that the view planning be model-free with no pre-training means that these cluster cores must be learned iteratively from the incoming information. This clustering of the visual features is inspired by [29], where cluster centers of the quantized feature descriptors were used to form Bag-of-Words. In the context of view planning for a single target with one or more symmetry axes, learning a global visual vocabulary for the entire model can be prone to overconfidence in recognizing certain visual words. Therefore, we propose a method to track visual features of the target captured from different viewpoints via distinguished visual vocabularies for different regions included in what we call a Bag-of-Views (BoV). Besides mitigating the symmetry challenge, the computational cost is significantly lower with the viewpoint-based vocabularies; a smaller number of visual words is required to describe a portion of the target rather than all parts of it, and a new view only updates its corresponding vocabulary in the BoV. Below are the steps involved: **1) Feature Extraction:** Using a feature detection algorithm, in our case SIFT, local features of the captured image at position \(T\) are extracted in the form of 128-dimensional vectors. \(T\) is the position of the camera in the Cartesian coordinate system with its origin located at the center of the scene. **2) Utility Assignment:** Depending on the application at hand, assigning utilities to the views can be divided into two different cases: **2.I)** In the case of dataset refinement where the views have already been captured, a decision should be made about including each view in the input set of the reconstruction algorithm based on its utility. The question simply is "does this new view help the reconstruction process?". To answer that, we look at the extent to which this view satisfies the two conditions mentioned before. Each of the feature descriptors from the previous step is compared with its closest visual word through vector quantization [30].
Then, a distance metric is used to measure the dissimilarity between the feature descriptors and the visual word that would represent them in the corresponding vocabulary in the BoV, denoted as \(\nu_{T}\). This process is repeated for all of the descriptors, and the sum of the dissimilarity scores is used to decide whether to ignore or utilize the view in the reconstruction process. If the final score is a positive value, the view is included in the set and proceeds to the third step; otherwise it is ignored. **2.II)** In the second case, we use the BoV model to train a reinforcement learning agent to propose NBVs based on the appearance of consecutively captured views. Further elaboration on the learning process will be provided in Sec. IV. In this section, our focus lies primarily on the utilization of this model to shape the reward function within the specified context. While training the reinforcement learning agent, we use the change in the corresponding vocabulary of the BoV after capturing a new view at location \(T\) to shape the reward function. A higher difference between the new and previous BoV implies that the new view contains more unseen features and results in a higher reward. **3) View Representation:** This step is a continuation of case \(2.I\). Depending on the position and orientation of the captured viewpoint, the extracted features update the specific vocabulary assigned to the range of views that the new view belongs to. This updating includes clustering of the descriptors belonging to that region using a clustering algorithm such as K-means [31]. Thus, every group of the cluster centers in the BoV, namely every regional vocabulary, describes the appearance of the target from viewpoints that are close in position and orientation. Algorithm 1 showcases the pseudo-code for creating a BoV model. ``` 1:Input: number of view ranges \(N\), number of words \(W\), database \(\mathcal{D}\) 2:Initialize BoV \(\{\nu_{1:N}\}\), feature extractor \(\mathcal{F}\) 3:while there is data to process in \(\mathcal{D}\) do 4:\(T,X\leftarrow\) load data from \(\mathcal{D}\) 5:\(id_{\text{view}}\gets T.\text{azimuth}\,//\,\frac{2\pi}{N}\) 6:\(\chi\leftarrow\mathcal{F}(X)\) 7:\(dist\gets 0\) 8:for each descriptor \(\chi(j)\) of \(\chi\) do 9:\(k^{*}\leftarrow\arg\max_{k}\cos\left(\chi(j),\nu_{id_{\text{view}}}(k)\right)\); \(dist\gets dist+\left(1-2\times\cos\left(\chi(j),\nu_{id_{\text{view}}}(k^{*})\right)\right)\) 10:if\(dist>0\)then 11: Update descriptors with \(\chi\) for \(\nu_{id_{\text{view}}}\) 12:\(\nu_{id_{\text{view}}}.codebook\leftarrow\) Perform K-means on \(\nu_{id_{\text{view}}}\) 13:else 14: Remove \(X\) from \(\mathcal{D}\) ``` **Algorithm 1** Bag-of-Views Model for Offline View Selection ## IV A Reinforcement Learning Approach to Appearance-based NBV Planning The problem of NBV planning is a sequential decision-making process that can be defined as a Partially Observable Markov Decision Process (POMDP) and solved through reinforcement learning algorithms. Our goal is to achieve this without any need for _a priori_ knowledge of the target and without any full or partial reconstruction of the target during training or inference time. We begin by formulating the different components of this process. The goal of the agent in our system is to iteratively propose next views for a limited number of steps to reach regions with a high number of features unfamiliar to the BoV. Seeking such views leads to drastic changes in the vocabularies of the BoV through each relocation of the camera.
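To make both the scoring loop of Algorithm 1 and the vocabulary-change idea (formalized as the reward in Eq. (3) below) concrete, here is an illustrative NumPy sketch; it keeps the paper's \(1-2\times\cos\) dissimilarity from Eq. (1), and all names are ours rather than taken from the released code.

```python
import numpy as np

def _normalize_rows(M: np.ndarray) -> np.ndarray:
    return M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)

def view_utility(chi: np.ndarray, vocabulary: np.ndarray) -> float:
    """Lines 7-9 of Algorithm 1: sum of 1 - 2*cos dissimilarities between each
    descriptor of a view (chi, m x n) and its closest visual word (W x n)."""
    cos = _normalize_rows(chi) @ _normalize_rows(vocabulary).T   # m x W cosines
    return float(np.sum(1.0 - 2.0 * cos.max(axis=1)))

def vocabulary_change(voc_new: np.ndarray, voc_old: np.ndarray) -> float:
    """Distance between two versions of a regional vocabulary: each updated
    word is matched to its closest (max-cosine) word in the previous one."""
    cos = _normalize_rows(voc_new) @ _normalize_rows(voc_old).T
    return float(np.sum(1.0 - 2.0 * cos.max(axis=1)))

# Algorithm 1 keeps a view only if view_utility(...) > 0; the per-step RL reward
# defined below is r_{t+1} = vocabulary_change(new, old) - 1.
```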
The state space should provide the agent with enough information about the environment to enable meaningful actions towards the goal. We use the concatenation of down-sampled gray-scale images captured through the last \(\tau\) consecutive frames as well as the concatenation of the normalized camera locations associated with each view. Thus, the state \(s_{t}\) at time \(t\) is defined as \(\{T_{t-\tau:t},obs_{t-\tau:t}\}\). The camera location \(T_{t}\) is represented using the spherical coordinate system in the form of \(\{R,\phi,\theta\}\) with three values for the radial distance from the center, the azimuth angle, and the elevation angle. Also, given a deterministic state transition, the action \(a_{t}\) determines the next camera location \(T_{t+1}\) after being re-scaled to the specified ranges for its three components. Based on the introduction in case \(2.II\) of Sec. III, the reward received for this action represents the change in the part of the BoV that has been influenced by the new action, namely \(\nu_{T_{t+1}}\), which is the vocabulary associated with the region that the new location belongs to. This change is measured through comparing the same regional vocabulary before and after taking the action; the closest visual words in the two vocabularies are identified and their distance is measured through vector quantization with the cosine distance metric. The sum of these distances is used to represent the change in the vocabulary after taking the action. We also add a negative constant reward at each time step. Thus, we define the reward to be \(r_{t+1}=dist(\nu_{T_{t+1}},\nu_{T_{t}})-1\), with the distance between vocabularies being defined as: \[\text{dist}(\nu_{T_{t+1}},\nu_{T_{t}})=\sum_{i}\left(1-2\times\cos\left(\nu_{T_{t+1}}(i),\nu_{T_{t}}(k)\right)\right) \tag{3}\] where \(k=\underset{j}{\arg\max}\cos(\nu_{T_{t+1}}(i),\nu_{T_{t}}(j))\) and \(i\) iterates over each of the visual words in \(\nu_{T_{t+1}}\). The number of vocabularies in the BoV and the size of each vocabulary are dependent on the resources available during training and runtime. ## V Simulation Environment To address the need for a lightweight, flexible, and easy-to-integrate simulation environment, we introduce the View Planning Toolbox (VPT). VPT contains tools for simulating the components required to train, visualize, and test view planning algorithms entirely within Python [32] and Blender [21]. At the core of VPT is the _UAV Camera_, which simplifies the visual data acquisition process carried out by an Uncrewed Aerial Vehicle (UAV) down to a camera floating in 3D space, where the specifications and pose of the camera can be determined either manually, programmatically, or through the use of a view planning algorithm. ### _UAV Camera_ The _UAV Camera_ simulates the base functionality of a drone traversing a 3D scene to gather visual information including both RGB and depth images from the environment. It is a simple pinhole model where the developer specifies the resolution of the captured image in pixels, the focal length in millimeters, and the depth range in meters. The _UAV Camera_ implementation encapsulates BlendTorch [33], a framework designed to integrate PyTorch [34] deep learning models with Blender [21]. The BlendTorch framework provides a convenient, multiprocessing-safe means of communication between Blender environments and native Python applications. In addition, the _UAV Camera_ can be instantiated in an OpenAI gym [24] environment for use with reinforcement learning models.
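Both the state definition above and the \(4\times 4\) rigid transformations required later by the TSDF-Fusion module (Sec. V-C) derive from the spherical camera location \(\{R,\phi,\theta\}\). A minimal NumPy look-at sketch is given below; the x-right/y-down/z-forward camera convention is our assumption, not necessarily the one used inside VPT.

```python
import numpy as np

def spherical_to_pose(R: float, azimuth: float, elevation: float) -> np.ndarray:
    """4x4 camera-to-world matrix for a camera at spherical position
    (R, azimuth, elevation) looking at the origin of the scene."""
    p = np.array([R * np.cos(elevation) * np.cos(azimuth),
                  R * np.cos(elevation) * np.sin(azimuth),
                  R * np.sin(elevation)])
    z = -p / np.linalg.norm(p)              # forward axis: toward the origin
    up = np.array([0.0, 0.0, 1.0])
    x = np.cross(z, up)                     # right axis
    if np.linalg.norm(x) < 1e-8:            # looking straight down from the pole
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                      # points world-down for a level camera
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, p
    return pose
```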
### _Data Generation_ VPT can serve a dual purpose as shown in Fig. 2: * Generate state-action pair experiences for training reinforcement learning models. This requires accessing the _UAV Camera_ directly, which can be instantiated in a Gym environment. * Generate offline datasets for training supervised learning models. This functionality is contained within _Scanning Plans_ and adheres to traditional photogrammetric principles [35], ensuring proper determination of end and side overlap for aerial scanning. As a way of evaluating our appearance-based view planning model in Sec. III, we compare the resultant reconstructions from the captured views with those generated by views that fully cover the targets. These views are generated using a hemispherical scan in which the end and side overlap between images is maintained at a distance \(R\) from the center of the scene. For calculating the spacing of views around the hemisphere, we make a flat plane approximation to determine the overlap between image pairs. Due to the normalization of the structure dimensions in our environment, the surface of the target structure appears far from the camera. As a result, the flat plane approximation does not lead to a lower overlap. First, the width of the view frame when projected onto the surface plane is \(W=2D\tan(FOV/2)\), where \(D\) is the distance to the surface plane relative to the camera, and \(FOV\) is either the vertical or horizontal field of view of the camera. Given a desired separation \(S\) between two viewpoints perpendicular to the surface plane, the overlap becomes: \[OL=\frac{2D\tan(FOV/2)-S}{2D\tan(FOV/2)} \tag{4}\] We can relate the separation between the viewpoints with the desired overlap using Eq. 5: \[S=2D\tan(FOV/2)[1-OL] \tag{5}\] The latitudinal separation can be calculated using Eq. 5 with the vertical FOV of the camera and a specified side overlap for \(OL\). The latitudinal separation angle would be \(\theta_{s}=\arctan(S/R)\), which assumes a small-angle approximation due to the high overlap between adjacent views. However, the number of views \(n_{v}\) on a circular path depends on the height \(z\) of the path at each level of the hemispherical scan. This dependency is shown in Eq. 6: \[n_{v}=\frac{2\pi\sqrt{R^{2}-z^{2}}}{S} \tag{6}\] Also, the longitudinal separation angle \(\phi_{s}\) between views in the circular path is calculated using a small-angle approximation as: \[\phi_{s}=\frac{2\pi}{n_{v}}=\frac{S}{\sqrt{R^{2}-z^{2}}} \tag{7}\] Iterating the elevation angle over the range \([0,\pi/2]\) in steps of \(\theta_{s}\) and the azimuth angle over the range \([0,2\pi]\) in steps of \(\phi_{s}\) in a nested loop, we generate the trajectory of views as seen in Fig. 3. Each scene contains a photo-realistic structure which has been centered about the z-axis and lies on the XY-plane. The structure is also normalized by its largest dimension. This ensures structures of all sizes can be fully visible from all viewpoints.
Fig. 2: Simplified visualization of the interactions between different components of the View Planning Toolbox (VPT) for reinforcement learning and dataset generation.
Fig. 3: VPT performs full coverage hemisphere scans in which the end and side overlap are specified on many targets.
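A possible implementation of this scanning plan, under the same flat-plane and small-angle approximations, is sketched below; splitting Eq. (5) into a latitudinal separation (vertical FOV, side overlap) and a longitudinal one (horizontal FOV, end overlap), as well as the single-view handling near the pole, are our assumptions.

```python
import numpy as np

def hemisphere_viewpoints(R, D, fov_v, fov_h, side_overlap, end_overlap):
    """Spherical viewpoints (R, azimuth, elevation) for a hemisphere scan."""
    S_lat = 2 * D * np.tan(fov_v / 2) * (1 - side_overlap)   # Eq. (5), vertical FOV
    S_lon = 2 * D * np.tan(fov_h / 2) * (1 - end_overlap)    # Eq. (5), horizontal FOV
    theta_s = np.arctan(S_lat / R)          # latitudinal separation angle
    views, theta = [], 0.0
    while theta <= np.pi / 2:
        z = R * np.sin(theta)
        ring = np.sqrt(R**2 - z**2)         # radius of the circular path at height z
        if ring < S_lon:                    # near the pole: a single view suffices
            views.append((R, 0.0, theta))
        else:
            n_v = int(np.ceil(2 * np.pi * ring / S_lon))                 # Eq. (6)
            views += [(R, 2 * np.pi * i / n_v, theta) for i in range(n_v)]  # Eq. (7)
        theta += theta_s
    return views
```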
### _TSDF-Fusion 3D Reconstruction_ To evaluate the performance of our view planning algorithms, VPT integrates a Truncated Signed-Distance Function (TSDF) Fusion 3D reconstruction module from [36]. A wrapper has been implemented to isolate the CUDA environment of the TSDF-Fusion module to prevent conflicts with PyTorch. The TSDF-Fusion module creates either a 3D point cloud or a 3D mesh using the marching cubes method [37]. TSDF-Fusion reconstruction requires RGB views, the corresponding depth maps, and the computed \(4\times 4\) rigid transformation matrices based on the poses for the corresponding views. This module acts as a lightweight photogrammetric stand-in to quickly evaluate the reconstruction quality from hundreds of views. ## VI Implementation Details and Experiments ### _Network Architecture and Training_ The policy network is composed of two different sub-networks for processing the two components of the state: the observation network and the location network. Details of these two sub-networks are shown in Fig. 4. We use the Soft Actor-Critic (SAC) algorithm by [8] to train the policy network. SAC combines the advantages of actor-critic methods and maximum entropy reinforcement learning. It is a model-free, off-policy algorithm that optimizes both the policy and the value function simultaneously through a combination of policy gradient and Q-value updates. It maximizes the expected return while incorporating a soft entropy regularization term [38]. The critic network, which is responsible for mapping state-action pairs to their quality values, encodes the concatenation of the observation component of the state through a sequence of 3D convolutional layers followed by 3D batch normalization layers and ReLU activation functions. Then, the output is flattened and goes through two fully connected layers with an output size of 128. A sub-network also processes the concatenation of the location component of the state and the action through fully connected layers of output size 64 and 128, respectively. The resulting feature vector is then concatenated with the encoded observations and goes through two fully connected layers of output size 128 and 3, producing the output quality value vector. ### _Evaluation of the View Planning Results_ To evaluate the utility of the suggested views using the BoV model, we use the reconstruction method explained in Sec. V-C and compare the resulting 3D point clouds and meshes from the captured views with those from a complete coverage scan of the target discussed in Sec. V-B. We analyze the reconstruction results of the offline dataset refinement and online NBV planning using the Chamfer discrepancy, the Hausdorff distance, and the mesh-to-mesh comparison carried out in CloudCompare [39]. The Chamfer discrepancy gives an overall measure of similarity or dissimilarity between two point sets, while the Hausdorff distance focuses on the largest observed distance, highlighting extreme differences. Comparing scanning results using these metrics, a higher Chamfer discrepancy indicates poor coverage of the target, while a higher Hausdorff distance suggests missed areas in scanning, resulting in a reconstruction with holes. ### _Bag-of-Views for Dataset Refinement_ To study the effect of the size of the BoV, we test the model performance in filtering two of the generated datasets, each consisting of 288 views uniformly covering one of two buildings in our dataset, as explained in Sec. V-B. We used the method explained in Sec. III and Alg. 1 to reduce the dataset size and evaluated the reconstruction performance resulting from the remaining views.
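For reference, the two point-cloud metrics used in this evaluation admit a compact implementation. The sketch below assumes the symmetric Chamfer and Hausdorff variants (the exact convention is not pinned down in the text) and uses SciPy k-d trees for the nearest-neighbor queries.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_hausdorff(P: np.ndarray, Q: np.ndarray):
    """Symmetric Chamfer discrepancy and Hausdorff distance between
    point clouds P (n x 3) and Q (m x 3)."""
    d_pq = cKDTree(Q).query(P)[0]   # nearest-neighbor distances P -> Q
    d_qp = cKDTree(P).query(Q)[0]   # nearest-neighbor distances Q -> P
    chamfer = d_pq.mean() + d_qp.mean()
    hausdorff = max(d_pq.max(), d_qp.max())
    return chamfer, hausdorff
```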
The two effective variables in this study are the number of view ranges, defining the size of the BoV, and the number of visual words in each vocabulary of the BoV. The reconstruction results are compared based on their Hausdorff distance and Chamfer discrepancy with respect to the 3D point cloud reconstructed using the original dataset of 288 views. The results are listed in Table I. In addition to this quantitative comparison, we visualize the distance between the mesh reconstructions and those produced using the original dataset in Fig. 6. This analysis explores the question of achieving optimal performance by balancing model efficiency (number of selected views) and reconstruction quality. As shown in Fig. 6, a larger BoV, achieved by increasing the number of vocabularies, results in a reconstruction that exhibits reduced spatial sparsity in the error surrounding the model. This can be interpreted as a more uniform scan of the target when seeking to update visual vocabularies that are defined for a smaller view range. This means that for a fixed number of words in each vocabulary, each view has fewer competitors to be compared with, and the resulting visual words are more local to that region. While each view has a higher chance to represent a certain view range in the vocabulary, there is a lower chance for its similar views to be accepted into the set, as the dominant local features have already been identified. This effect is reinforced when the number of visual words per each vocabulary in the BoV is increased. The higher the number of visual words attributed to each view range, the higher the chance that the newly captured view and the details exposed to it are already familiar.
Fig. 4: Proposed architecture for the policy network. The policy network processes the two components of the state, namely the observations and locations throughout the last 5 frames, through two separate sub-networks and fuses the resultant feature vectors to produce the action distribution.
Fig. 5: Block diagram of the proposed reinforcement learning approach to NBV planning based on the Bag-of-Views model.
### _Appearance-based Next-Best-View Planning: Baseline Comparison and Generalization_ Based on the problem formulation in Sec. IV and the training algorithm in Sec. VI-A, we trained an agent in the reinforcement learning cycle depicted in Fig. 5 to scan multiple structures from the dataset we generated using VPT and publicly available 3D models. We set the termination condition as the completion of one pass around the target to demonstrate the ability of the model to iteratively find the optimal views, and compare the resulting reconstructions with the results of a full coverage scan of the target as the baseline. The resultant trajectory for a sample from the dataset is shown in Fig. 1. In addition, we applied the policy from the first part, trained on a single structure, to two unknown buildings to compare the results and test the generalizability of the learned policy. As shown in Fig. 7, the results demonstrate the efficacy of the model in finding the optimal views which result in high-quality reconstructions. ## VII Conclusions and Future Work Our study tackled the challenge of model-free view planning by introducing a novel appearance-based computational representation of reconstruction targets that can be of utmost utility for UAV-based aerial photogrammetry.
Our model enables utility assignment to the views without tracking a full or partial reconstruction of the target; instead, it tracks unfamiliar visual features with the vocabularies contained within what we called a Bag-of-Views (BoV). We also developed the View Planning Toolbox (VPT), which offers a comprehensive solution for training, evaluation, and custom dataset generation in the context of view planning and 3D reconstruction. Our first set of experiments focused on the effect of the size of the BoV and of the vocabularies within it on reconstruction quality. Using the reconstruction results from a complete coverage scan of the target as a baseline, we found that the BoV model achieved a remarkable reduction of views used for reconstruction (a \(70.6\%\) decrease) while simultaneously reducing the reconstruction error (a \(33.5\%\) decrease). These outcomes showcased the efficacy of our model in identifying optimal views for reconstruction. Building upon this proof of concept, we extended the application of the BoV model to shape the reward of a reinforcement learning (RL) agent trained using the Soft Actor-Critic (SAC) algorithm for online NBV planning. Once again, our model yielded high-quality reconstructions with a significantly low number of views (down to \(5\%\) of the number of baseline views). Furthermore, the RL model exhibited substantial generalizability to unseen targets. Notably, we discovered that the degree of generalizability depended on the relative visual complexity of the training and testing environments, further validating the effectiveness of our appearance-based view selection approach. While our work primarily focused on 3D reconstruction, the modular nature of our proposed method lends itself well to customization for various other applications. Promising future research can include using custom feature extractors and pre-training the visual vocabularies of the BoV for tracking certain visual features associated with structural defects in infrastructure.
Fig. 6: Studying the effect of the size of the BoV and the vocabularies within it. It can be seen that increasing the size of the BoV lowers the error sparsity while increasing the vocabulary size reduces the error values.
Fig. 7: Testing the generalizability of our RL-based NBV planner on two samples from our dataset. The resulting Hausdorff distance and Chamfer discrepancy of the _Alpine Chalet_ reconstruction were calculated to be \(9.47\) cm and \(1.01\) cm while those of the _Imperial Temple_ were \(6.35\) cm and \(0.82\) cm, respectively.
2310.07432
Bounds on zero forcing using (upper) total domination and minimum degree
While a number of bounds are known on the zero forcing number $Z(G)$ of a graph $G$ expressed in terms of the order of a graph and maximum or minimum degree, we present two bounds that are related to the (upper) total domination number $\gamma_t(G)$ (resp. $\Gamma_t(G)$) of $G$. We prove that $Z(G)+\gamma_t(G)\le n(G)$ and $Z(G)+\frac{\Gamma_t(G)}{2}\le n(G)$ holds for any graph $G$ with no isolated vertices of order $n(G)$. Both bounds are sharp as demonstrated by several infinite families of graphs. In particular, we show that every graph $H$ is an induced subgraph of a graph $G$ with $Z(G)+\frac{\Gamma_t(G)}{2}=n(G)$. Furthermore, we prove a characterization of graphs with power domination equal to $1$, from which we derive a characterization of the extremal graphs attaining the trivial lower bound $Z(G)\ge \delta(G)$. The class of graphs that appears in the corresponding characterizations is obtained by extending an idea from [D.D.~Row, A technique for computing the zero forcing number of a graph with a cut-vertex, Linear Alg.\ Appl.\ 436 (2012) 4423--4432], where the graphs with zero forcing number equal to $2$ were characterized.
Boštjan Brešar, María Gracia Cornet, Tanja Dravec, Michael Henning
2023-10-11T12:34:23Z
http://arxiv.org/abs/2310.07432v1
# Bounds on zero forcing using (upper) total domination and minimum degree ###### Abstract While a number of bounds are known on the zero forcing number \(Z(G)\) of a graph \(G\) expressed in terms of the order of a graph and maximum or minimum degree, we present two bounds that are related to the (upper) total domination number \(\gamma_{t}(G)\) (resp. \(\Gamma_{t}(G)\)) of \(G\). We prove that \(Z(G)+\gamma_{t}(G)\leq n(G)\) and \(Z(G)+\frac{\Gamma_{t}(G)}{2}\leq n(G)\) hold for any graph \(G\) of order \(n(G)\) with no isolated vertices. Both bounds are sharp as demonstrated by several infinite families of graphs. In particular, we show that every graph \(H\) is an induced subgraph of a graph \(G\) with \(Z(G)+\frac{\Gamma_{t}(G)}{2}=n(G)\). Furthermore, we prove a characterization of graphs with power domination equal to \(1\), from which we derive a characterization of the extremal graphs attaining the trivial lower bound \(Z(G)\geq\delta(G)\). The class of graphs that appears in the corresponding characterizations is obtained by extending an idea from [D.D. Row, A technique for computing the zero forcing number of a graph with a cut-vertex, Linear Alg. Appl. 436 (2012) 4423-4432], where the graphs with zero forcing number equal to \(2\) were characterized. \({}^{a}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia \({}^{b}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia \({}^{c}\) Depto. de Matematica, FCEIA, Universidad Nacional de Rosario, Argentina \({}^{d}\) Consejo Nacional de Investigaciones Cientificas y Tecnicas, Argentina \({}^{e}\) Department of Mathematics and Applied Mathematics, University of Johannesburg, South Africa **Keywords:** Grundy domination number; zero forcing; total domination; upper total domination; power domination **AMS subject classification: 05C69, 05C35**
## 1 Introduction
Concentrating solely on the positions of the non-zero entries of a real symmetric matrix, one can describe these positions by the adjacency matrix of an undirected graph. Fixing the positions of the non-zero values and considering all possible values, the concepts of maximum nullity and minimum rank of the corresponding graph arise. The zero forcing number was introduced in [1] as a useful (and often attained) upper bound on the maximum nullity of a graph. (See [5, 6] and the recent monograph [21] surveying inverse problems and zero forcing in graphs.) Zero forcing is defined by a process which starts by choosing a set \(S\) of vertices of a graph \(G\) and coloring them blue. The _color-change rule_ consists of identifying a blue vertex having only one non-blue neighbor and coloring that neighbor blue. The color-change rule is performed as long as possible. If at the end of the process all vertices become blue, then the initial set \(S\) is a _zero forcing set_ of \(G\). The minimum cardinality of a zero forcing set in \(G\) is the _zero forcing number_, \(Z(G)\), of \(G\). Several bounds are known for the zero forcing number, where the trivial lower bound \(Z(G)\geq\delta(G)\) involves the minimum degree of a graph. Many authors studied upper bounds on \(Z(G)\) involving the order and the maximum degree of \(G\), and also considered extremal families of graphs attaining the bounds [3, 15, 17]. For instance, the most recent such bound is \(Z(G)\leq\frac{n(\Delta-2)}{\Delta-1}\), which holds when \(G\) is a connected graph with maximum degree \(\Delta\geq 3\) and \(G\) is not isomorphic to one of the five sporadic graphs [16].
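To make the color-change process concrete, \(Z(G)\) can be computed by brute force on small graphs; the following Python snippet is our illustration (exponential in \(n(G)\), so feasible only for tiny examples) and is not part of the paper.

```python
from itertools import combinations

def forcing_closure(adj, blue):
    """Apply the color-change rule until it stabilizes: a blue vertex with
    exactly one non-blue neighbor forces that neighbor to become blue."""
    blue, changed = set(blue), True
    while changed:
        changed = False
        for u in list(blue):
            white = adj[u] - blue
            if len(white) == 1:
                blue.add(white.pop())
                changed = True
    return blue

def zero_forcing_number(adj):
    """Smallest initial blue set whose closure colors all of V(G)."""
    V = set(adj)
    for k in range(len(V) + 1):
        for S in combinations(V, k):
            if forcing_closure(adj, S) == V:
                return k

# Example: the 4-cycle has Z(C4) = 2 (two adjacent blue vertices force the rest).
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert zero_forcing_number(C4) == 2
```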
Bounds on the zero forcing number expressed in terms of the order, maximum and minimum degree were also proved [12], and the extremal graphs have recently been characterized [23]. We remark that good upper bounds on \(Z(G)\) are particularly interesting, because they also present upper bounds on the maximum nullity of a graph. Domination in graphs is one of the most studied topics in graph theory; see a recent monograph [19] surveying core concepts of domination theory. At first sight, domination does not seem related to maximum nullity and zero forcing, yet there are some surprising connections. Notably, the concept of power domination, which was introduced in [18] as a model for monitoring electrical networks, has a very similar definition to that of zero forcing. In particular, the corresponding graph invariant \(\gamma_{P}(G)\) is a lower bound for \(Z(G)\) in all graphs \(G\)[7]. Moreover, the so-called Z-Grundy domination number \(\gamma_{\rm gr}^{\rm Z}(G)\) of \(G\), as introduced in [9], is dual to the zero forcing number of \(G\). In particular, \(Z(G)=n(G)-\gamma_{\rm gr}^{\rm Z}(G)\) holds for every graph \(G\) (where \(n(G)\) is the order of \(G\)). In this paper, we further relate zero forcing with domination. In particular, we prove an upper bound on the zero forcing number of a graph \(G\), expressed in terms of the total domination number \(\gamma_{t}(G)\) of \(G\). (The latter invariant is one of the central concepts of graph domination, cf. [19] and the book [20] surveying total domination in graphs.) We prove that for any graph \(G\) with no component isomorphic to a complete graph, we have \(Z(G)\leq n(G)-\gamma_{t}(G)\) and \(Z(G)\leq n(G)-\frac{\Gamma_{t}(G)}{2}\), where \(\Gamma_{t}(G)\) is the upper total domination number of \(G\). We prove that both of these bounds are sharp, present several properties of the two families of extremal graphs, and prove that any graph is an induced subgraph of a graph \(G\) with \(Z(G)=n(G)-\frac{\Gamma_{t}(G)}{2}\). We also show that the ratio \(\frac{n-Z(G)}{\Gamma_{t}(G)}\) can be arbitrarily large (which implies that the ratio \(\frac{n-Z(G)}{\gamma_{t}(G)}\) can also be arbitrarily large). In the proofs, we often use the language of Z-Grundy domination, which we present in the next section. We also study the graphs \(G\) with \(Z(G)=\delta(G)\), that is, the graphs achieving the trivial lower bound. We prove a characterization of these graphs by substantially extending a characterization of the graphs \(G\) with zero forcing number equal to \(2\) due to Row [25]. The result is obtained from a characterization of the graphs with power domination equal to \(1\), which we also prove. We denote the _degree_ of a vertex \(v\) in a graph \(G\) by \(\deg_{G}(v)\). An _isolated vertex_ is a vertex of degree \(0\), and an _isolate-free graph_ is a graph with no isolated vertices. Hence if \(G\) is an isolate-free graph, then \(\deg_{G}(v)\geq 1\) for all vertices \(v\in V(G)\). In the following section, we present the main definitions used in the paper. In particular, we recall the definition of the Z-Grundy domination number, \(\gamma_{\rm gr}^{\rm Z}(G)\), and the equality \(Z(G)=n(G)-\gamma_{\rm gr}^{\rm Z}(G)\), which holds in all graphs \(G\) (note that in the seminal paper [9] the Z-Grundy domination number was introduced for isolate-free graphs, but our definition, presented in the next section, also allows isolated vertices).
In Section 3, we are concerned with total domination: we prove the bound \(\gamma_{\rm gr}^{\rm Z}(G)\geq\gamma_{t}(G)\) and present several properties of graphs \(G\) that attain the equality \(\gamma_{\rm gr}^{\rm Z}(G)=\gamma_{t}(G)\). In Section 4, we prove the mentioned results involving the upper total domination number. Finally, in Section 5, we prove the characterization of the graphs \(G\) with power domination \(1\) and the graphs \(G\) with \(Z(G)=\delta(G)\). Several open problems are posed throughout the paper.
## 2 Definitions and notation
For graph theory notation and terminology, we generally follow [19]. Specifically, let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\), and of order \(n(G)=|V(G)|\) and size \(m(G)=|E(G)|\). If \(G\) is clear from the context, we simply write \(V=V(G)\) and \(E=E(G)\). The _open neighborhood_ of a vertex \(v\) in \(G\) is \(N_{G}(v)=\{u\in V:\,uv\in E\}\) and the _closed neighborhood of \(v\)_ is \(N_{G}[v]=\{v\}\cup N_{G}(v)\). We denote the degree of \(v\) in \(G\) by \(\deg_{G}(v)\), and so \(\deg_{G}(v)=|N_{G}(v)|\). Two vertices are _neighbors_ if they are adjacent. For a set \(X\subseteq V(G)\) and a vertex \(v\in V(G)\), we denote by \(\deg_{X}(v)\) the number of neighbors of \(v\) in \(G\) that belong to the set \(X\), that is, \(\deg_{X}(v)=|N_{G}(v)\cap X|\). In particular, if \(X=V(G)\), then \(\deg_{X}(v)=\deg_{G}(v)\). The subgraph of \(G\) induced by a set \(D\subseteq V\) is denoted by \(G[D]\). For \(k\geq 1\) an integer, we let \([k]\) denote the set \(\{1,\ldots,k\}\). A vertex \(x\in V\)_dominates_ a vertex \(y\) if \(y\in N_{G}[x]\), and we say that \(y\) is _dominated_ by \(x\). If \(D\subseteq V\), then a vertex \(y\) in \(G\) is _dominated_ by \(D\) if there exists \(x\in D\) that dominates \(y\). A set \(D\) is a _dominating set_ of a graph \(G\) if every vertex in \(G\) is dominated by \(D\). A vertex \(x\in V\)_totally dominates_ a vertex \(y\) if \(y\in N_{G}(x)\), and we then also say that \(y\) is _totally dominated_ by \(x\). If \(D\subseteq V\), then \(y\in V\) is _totally dominated_ by \(D\) if there exists \(x\in D\) that totally dominates \(y\). A set \(D\) is a _total dominating set_ (or shortly, a _TD-set_) of \(G\) if every vertex in \(G\) is totally dominated by \(D\). The minimum cardinality of a TD-set of \(G\) is the _total domination number_ of \(G\), denoted \(\gamma_{t}(G)\). A TD-set \(D\) in \(G\) such that \(S\) is not a TD-set of \(G\) whenever \(S\subsetneq D\) is a _minimal TD-set_ of \(G\). The maximum cardinality of a minimal TD-set in \(G\) is the _upper total domination number_, \(\Gamma_{t}(G)\), of \(G\). A \(\gamma_{t}\)_-set of \(G\)_ is a TD-set of \(G\) of cardinality \(\gamma_{t}(G)\), while a \(\Gamma_{t}\)_-set of \(G\)_ is a minimal TD-set of \(G\) of cardinality \(\Gamma_{t}(G)\). For a set \(D\subseteq V\) and a vertex \(v\in D\), the _open \(D\)-private neighborhood_ of \(v\), denoted by \({\rm pn}(v,D)\), is the set of vertices that are in the open neighborhood of \(v\) but not in the open neighborhood of the set \(D\setminus\{v\}\). Equivalently, \({\rm pn}(v,D)=\{w\in V:N(w)\cap D=\{v\}\}\). The \(D\)_-external private neighborhood_ of \(v\) is the set \({\rm epn}(v,D)={\rm pn}(v,D)\setminus D\), and its _open \(D\)-internal private neighborhood_ is the set \({\rm ipn}(v,D)={\rm pn}(v,D)\cap D\). We note that \({\rm pn}(v,D)={\rm ipn}(v,D)\,\cup\,{\rm epn}(v,D)\).
A vertex in \({\rm epn}(v,D)\) is a \(D\)_-external private neighbor_ of \(v\), and a vertex in \(\operatorname{ipn}(v,D)\) is a _\(D\)-internal private neighbor_ of \(v\). Hence, if \(w\in\operatorname{epn}(v,D)\), then \(w\notin D\) and \(w\) is not totally dominated by \(D\setminus\{v\}\), while if \(w\in\operatorname{ipn}(v,D)\), then \(w\in D\) and \(w\) is not totally dominated by \(D\setminus\{v\}\). A fundamental property of minimal TD-sets (see [19, Lemma 4.25]) is that \(D\) is a minimal TD-set in \(G\) if and only if \(|\operatorname{epn}(v,D)|\geq 1\) or \(|\operatorname{ipn}(v,D)|\geq 1\) holds for every vertex \(v\in D\). The concept of Grundy domination can be presented by a sequence of vertices in a graph. The first type of the corresponding graph invariant, the so-called Grundy domination number, was defined in [10] with a motivation coming from the domination game. A few years later, the Grundy total domination number [11] and the Z-Grundy domination number [9] were introduced. A sequence \(S=(v_{1},\ldots,v_{k})\) of vertices in a graph \(G\) is a _Z-sequence_ if for every \(i\in\{2,\ldots,k\}\), \[N_{G}(v_{i})\setminus\bigcup_{j=1}^{i-1}N_{G}[v_{j}]\neq\emptyset. \tag{1}\] The corresponding set of vertices from the sequence \(S\) will be denoted by \(\widehat{S}\). The maximum length \(|\widehat{S}|\) of a Z-sequence \(S\) in a graph \(G\) is the _Z-Grundy domination number_, \(\gamma_{\operatorname{gr}}^{\mathrm{Z}}(G)\), of \(G\). (The definition of the _Grundy total domination number_ of a graph \(G\), \(\gamma_{\operatorname{gr}}^{t}(G)\), is similar; one just needs to replace the closed neighborhood symbol in (1) with the open neighborhood symbol.) If \((v_{1},\ldots,v_{k})\) is a Z-sequence, then we say that \(v_{i}\)_footprints_ the vertices from \(N_{G}[v_{i}]\setminus\cup_{j=1}^{i-1}N_{G}[v_{j}]\), and that \(v_{i}\) is the _footprinter_ of every vertex \(u\in N_{G}[v_{i}]\setminus\cup_{j=1}^{i-1}N_{G}[v_{j}]\), for any \(i\in[k]\) (where \([k]=\{1,\ldots,k\}\)). Note that if \(S\) is a Z-sequence, then \(x\in\widehat{S}\) may footprint itself, but it must also footprint a vertex \(y\) distinct from \(x\). For a sequence \(S=(u_{1},\ldots,u_{k})\) and a vertex \(x\notin\widehat{S}\), we use the following notation: \(S\oplus(x)=(u_{1},\ldots,u_{k},x)\). As it turns out, the Z-Grundy domination number is dual to the zero forcing number. Moreover, a sequence \(S\) is a Z-sequence if and only if the set of vertices outside \(\widehat{S}\) forms a zero forcing set [9]. In particular, this implies that \[Z(G)=n(G)-\gamma_{\operatorname{gr}}^{\mathrm{Z}}(G) \tag{2}\] for every graph \(G\). In a subsequent paper, Lin presented a natural connection between four variants of Grundy domination and four variants of zero forcing [24]. The connections show that all versions of Grundy domination can be applied in the study of different types of minimum rank parameters of symmetric matrices. Given a graph \(G\), a complete subgraph is a _clique_ in \(G\). Similarly, a complete graph may also be called a clique. A vertex of degree \(1\) in \(G\) is called a _leaf_ of \(G\). A vertex \(v\in V\) is a _simplicial vertex_ of \(G\) if \(N_{G}(v)\) induces a clique. The graph \(G\) is _chordal_ if it contains no induced cycles of length more than \(3\). Two vertices \(u\) and \(v\) in \(G\) are _closed twins_ if \(N_{G}[u]=N_{G}[v]\) and _open twins_ if \(N_{G}(u)=N_{G}(v)\). Vertices \(u\) and \(v\) in \(G\) are _twins_ if they are either open or closed twins.
A vertex \(v\in V\) is called a _twin_ vertex if there exists \(u\in V(G)\) such that \(u\) and \(v\) are twins. In all notations presented in this section, the index \(G\) may be omitted if the graph \(G\) is understood from the context.
## 3 Zero forcing and total domination
If \(G\) is an isolate-free graph and \(D\) is a (minimal) TD-set of \(G\), then the induced subgraph \(G[D]\) is isolate-free. In addition, the following observation is easy to see. **Observation 1**: _Let \(G\) be an isolate-free graph and let \(D\) be a (minimal) TD-set of \(G\). If \(x\) belongs to a component \(C\) of \(G[D]\) such that \(x\) is not adjacent to a vertex \(y\in V(C)\) with \(\deg_{C}(y)=1\), then \(x\) has an external private neighbor with respect to \(D\)._ We will also make use of the following notation. Let \(D\) be a (minimal) TD-set of \(G\), and let \(C_{1},\ldots,C_{\ell}\) be the components of \(G[D]\) which are isomorphic to \(K_{2}\); possibly \(\ell=0\) when there are no such components. For each \(i\in[\ell]\), let \(A_{i}(D)\) denote the set of vertices that are totally dominated by \(V(C_{i})\) and are not totally dominated by \(D\setminus V(C_{i})\). In particular, \(V(C_{i})\subset A_{i}(D)\) for all \(i\in[\ell]\), since both vertices in \(C_{i}\) are (internal) private neighbors of each other. **Theorem 2**: _If \(G\) is a graph such that no component of \(G\) is a clique, then \(\gamma_{\rm gr}^{\rm Z}(G)\geq\gamma_{t}(G)\)._ **Proof.** In the proof, we will construct a Z-sequence \(S\) of \(G\) with \(|\widehat{S}|\geq|D|\), where \(D\) is a \(\gamma_{t}\)-set of \(G\). Among all \(\gamma_{t}\)-sets of \(G\), let \(D\) be chosen in such a way that \(G[D]\) has the smallest possible number of \(K_{2}\)-components. In addition, letting \(C_{1},\ldots,C_{\ell}\) be the \(K_{2}\)-components of \(G[D]\), and \(V(C_{i})=\{x_{i},y_{i}\}\) for all \(i\in[\ell]\), we choose \(D\) among all \(\gamma_{t}\)-sets of \(G\) restricted as above in such a way that the number of \(K_{2}\)-components in \(G[D]\) for which \((N(x_{i})\cap A_{i}(D))\setminus\{y_{i}\}=(N(y_{i})\cap A_{i}(D))\setminus\{x_{i}\}\) is as small as possible. Since \(D\) is fixed, we may simplify the notation and write \(A_{i}\) instead of \(A_{i}(D)\) for the vertices that are totally dominated by \(x_{i}\) or \(y_{i}\) and are not adjacent to any vertex in \(D\setminus V(C_{i})\). First, consider the components of \(G[D]\) that are not isomorphic to \(K_{2}\). For each such component \(C\), first add to \(S\) all vertices \(u\) in \(G[D]\) such that \(u\) is adjacent to a vertex \(v\in V(C)\) with \(\deg_{C}(v)=1\). Each such vertex \(u\) footprints a corresponding vertex \(v\), noting that \(v\) is a \(D\)-internal private neighbor of \(u\). Thereafter, we add to \(S\) all the remaining vertices in \(C\). By Observation 1, every such vertex \(x\), since it has no \(D\)-internal private neighbor, has a \(D\)-external private neighbor \(y\). Therefore, \(x\) footprints \(y\), and so the resulting sequence \(S\), constructed so far, is a Z-sequence. Note that the number of vertices added to \(S\) from \(C\) is \(|V(C)|\) since all vertices of \(C\) have been added to \(S\). Dealing in the same way with all components \(C\) of \(G[D]\), where \(|V(C)|\geq 3\), as explained in the previous paragraph, we are left with the components \(C_{1},\ldots,C_{\ell}\) of \(G[D]\), where \(V(C_{i})=\{x_{i},y_{i}\}\) for all \(i\in[\ell]\).
Without loss of generality, we may choose the indices of the components \(C_{i}\) in such a way that for \(C_{i}\), where \(i\in[k]\), we have \[(N(x_{i})\cap A_{i})\setminus\{y_{i}\}=(N(y_{i})\cap A_{i})\setminus\{x_{i}\}, \tag{3}\] while for \(i\in\{k+1,\ldots,\ell\}\), components \(C_{i}\) do not have this property, that is, \((N(x_{i})\cap A_{i})\setminus\{y_{i}\}\neq(N(y_{i})\cap A_{i})\setminus\{x_{i}\}\). Possibly \(k=0\) or \(k=\ell\), which represent the cases where only one of the two types of components appears. Renaming vertices if necessary, we may assume without loss of generality that \[\big{(}N(x_{i})\setminus N[y_{i}]\big{)}\cap A_{i}\neq\emptyset, \tag{4}\] and let \(a_{i}\in\left(N(x_{i})\setminus N[y_{i}]\right)\cap A_{i}\). For each \(i\in\{k+1,\ldots,\ell\}\), we first add to \(S\) the vertex \(y_{i}\), and then the vertex \(x_{i}\). In this way, \(y_{i}\) footprints \(x_{i}\), while \(x_{i}\) footprints \(a_{i}\), and thus the so-extended sequence \(S\) is still a Z-sequence. We note that we have added two vertices to \(S\) from each component \(C_{i}\), where \(|V(C_{i})|=2\). Finally, we deal with the components \(C_{i}\), where \(i\in[k]\). We claim that none of the sets \(A_{i}\) induces a complete graph. Suppose, to the contrary, that \(G[A_{i}]\) is a clique. Since no component of \(G\) is a clique, there exists a vertex \(a\in A_{i}\) which is adjacent to a vertex \(b\notin A_{i}\). We note that \(b\notin D\), and by the definition of the sets \(A_{i}\), the vertex \(b\) is totally dominated by a vertex \(c\in D\setminus V(C_{i})\). Now, \(D^{\prime}=(D\setminus\{x_{i},y_{i}\})\cup\{a,b\}\) is a TD-set of \(G\). Since \(|D^{\prime}|=|D|=\gamma_{t}(G)\), the TD-set \(D^{\prime}\) is therefore a \(\gamma_{t}\)-set of \(G\). Further, vertex \(a\) is a \(D^{\prime}\)-internal private neighbor of vertex \(b\), and vertices in \(A_{i}\) are \(D^{\prime}\)-external private neighbors of vertex \(a\). Now, \(G[D^{\prime}]\) has fewer \(K_{2}\)-components than \(G[D]\), which is a contradiction to the initial assumption on \(D\). Therefore, no set \(A_{i}\) induces a clique for \(i\in[k]\). Therefore, for each \(i\in[k]\), there exists a vertex \(a_{i}\in A_{i}\setminus\{x_{i},y_{i}\}\) such that \(a_{i}\) is not adjacent to some vertex \(b_{i}\in A_{i}\setminus\{x_{i},y_{i}\}\). Now, \(D^{\prime}=(D\setminus\{y_{1}\})\cup\{a_{1}\}\) is a TD-set of \(G\), since vertices in \(V(G)\setminus A_{1}\) are totally dominated by \(D\setminus V(C_{1})\), vertices of \(A_{1}\setminus\{x_{1}\}\) are totally dominated by \(x_{1}\), and \(x_{1}\) is totally dominated by \(a_{1}\). Further, \(G[D^{\prime}]\) has the same number of \(K_{2}\)-components as \(G[D]\). However, since \[(N(x_{1})\cap A_{1}(D^{\prime}))\setminus\{a_{1}\}\neq(N(a_{1})\cap A_{1}(D^{\prime}))\setminus\{x_{1}\},\] the number of components with the additional property given in Equation (3) is smaller in \(D^{\prime}\) than in \(D\). Therefore, we are in contradiction with the initial assumption on \(D\), and so this case does not appear, implying that \(k=0\). We infer that \(\widehat{S}=D\), and so \(S\) is a Z-sequence of length \(\gamma_{t}(G)\), which yields \(\gamma_{\rm gr}^{\rm Z}(G)\geq\gamma_{t}(G)\). \(\Box\) Translated to the zero forcing number, Equation (2) and Theorem 2 imply the following bound on the zero forcing number, where a _clique component_ of a graph is a component of the graph that is a clique.
**Corollary 3**: _If \(G\) is a graph with no clique component, then \(Z(G)\leq n(G)-\gamma_{t}(G)\)._ The difference between the total domination number and the Z-Grundy domination number of a graph can be arbitrarily large. Moreover, the ratio \(\frac{\gamma_{\rm gr}^{\rm Z}(G)}{\gamma_{t}(G)}\) can be arbitrarily large. For instance, let \(G_{k}\), where \(k\in\mathbb{N}\), be the graph obtained from the disjoint union of two copies of the complete graph \(K_{k}\) by adding edges that form a perfect matching of \(G_{k}\). Clearly, \(\gamma_{\rm gr}^{\rm Z}(G_{k})=k\) and \(\gamma_{t}(G_{k})=2\). On the other hand, the bound in Theorem 2 is sharp. For instance, for the star \(K_{1,\ell}\) we have \(\gamma_{\rm gr}^{\rm Z}(K_{1,\ell})=2=\gamma_{t}(K_{1,\ell})\). In the remainder of this section, we study properties of graphs whose Z-Grundy domination number equals the total domination number. Since \(\gamma_{t}(G)\leq\gamma_{\rm gr}^{\rm Z}(G)\leq\gamma_{\rm gr}^{t}(G)\) holds for any graph \(G\) with no component isomorphic to a complete graph, any graph \(G\) with \(\gamma_{t}(G)=\gamma_{\rm gr}^{t}(G)\) also satisfies \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)\). Graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{t}(G)=k\), which are known under the name _total \(k\)-uniform graphs_, were studied in [4, 11, 13]. In these papers, bipartite graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{t}(G)=4\) and with \(\gamma_{t}(G)=\gamma_{\rm gr}^{t}(G)=6\) were characterized. It was also shown that all connected total \(k\)-uniform graphs having no open twins are regular, and some examples of non-bipartite total \(k\)-uniform graphs, for even \(k\), were also presented. Moreover, chordal total \(k\)-uniform graphs were characterized. (In particular, for \(k\geq 3\) no such graphs exist.) By the above observation, all total \(k\)-uniform graphs are also graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)\), but this property also holds for many other graphs. For instance, while complete multipartite graphs are the only connected total \(2\)-uniform graphs (see [11]), we have the following characterization of connected graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=2\). **Proposition 4**: _Let \(G\) be a connected graph not isomorphic to a complete graph. Then \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=2\) if and only if \(N[x]\cup N[y]=V(G)\) holds for any non-twin vertices \(x\) and \(y\)._ **Proof.** First, let \(N[x]\cup N[y]=V(G)\) hold for any non-twin vertices \(x\) and \(y\). Suppose that there is a Z-sequence \((u,v,w)\) in \(G\), implying that \(N(w)\setminus(N[u]\cup N[v])\neq\emptyset\). Since \(N[x]\cup N[y]=V(G)\) holds for any non-twin vertices \(x\) and \(y\) of \(G\), the vertices \(u\) and \(v\) must be twins. Hence \(N(v)\setminus N[u]=\emptyset\), a contradiction. Hence, \(\gamma_{\rm gr}^{\rm Z}(G)\leq 2\), and since \(G\) is not complete, \(\gamma_{\rm gr}^{\rm Z}(G)=2\). By Theorem 2, \(\gamma_{t}(G)=2\). For the converse, let \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=2\). Suppose that there exist non-twin vertices \(u\) and \(v\) with \(N[u]\cup N[v]\neq V(G)\). Since \(G\) is connected, there exists \(w\in N(u)\cup N(v)\) such that \(w\) has a neighbor \(w^{\prime}\in V(G)\setminus(N[u]\cup N[v])\). Then \((u,v,w)\) or \((v,u,w)\) is a Z-sequence of \(G\), and hence \(\gamma_{\rm gr}^{\rm Z}(G)\geq 3\), a contradiction. \(\Box\) By the results proved in [4], we know that there are no total \(k\)-uniform graphs for odd \(k\).
We can easily find graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=k\) for some odd \(k\). For example, \(C_{5}\) has both Z-Grundy domination number and total domination number equal to \(3\). We prove next that there are no chordal graphs with \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=3\). To prove this we first need the following observation, where statement (a) was proved in [22], while statement (b) is straightforward to verify. **Observation 5**: _If \(G\) is a graph and \(u\) a simplicial vertex of \(G\), then_ * \(\gamma_{\rm gr}^{\rm Z}(G)-1\leq\gamma_{\rm gr}^{\rm Z}(G-u)\leq\gamma_{\rm gr }^{\rm Z}(G)\)_, and_ * \(\gamma_{t}(G)-1\leq\gamma_{t}(G-u)\leq\gamma_{t}(G)\)_._ **Theorem 6**: _If \(G\) is a connected graph that contains a simplicial vertex, then \(\gamma_{t}(G)\neq 3\) or \(\gamma_{\rm gr}^{\rm Z}(G)\neq 3\)._ **Proof.** Suppose, to the contrary, that there are connected graphs having simplicial vertices with both Z-Grundy domination number and total domination number equal to \(3\). Among all such graphs \(G\) that contain a simplicial vertex and satisfy \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=3\), let \(G\) be chosen to have minimum order. Clearly, \(G\) is not a complete graph. Let \(x\) be a simplicial vertex of \(G\), and let \(X=N[x]\), and so \(X\) induces a clique. Let \(x_{1},x_{2}\) be two arbitrary vertices from \(X\setminus\{x\}\). If \((x,x_{1},x_{2})\) is a Z-sequence of \(G\), then since \(\gamma_{\rm gr}^{\rm Z}(G)=3\) we infer that \(N[x]\cup N[x_{1}]\cup N[x_{2}]=V(G)\), implying that \(\{x_{1},x_{2}\}\) is a TD-set of \(G\), a contradiction. Hence, \((x,x_{1},x_{2})\) is not a Z-sequence of \(G\). Since \(x_{1}\) and \(x_{2}\) were arbitrary, neither \((x,x_{1},x_{2})\) nor \((x,x_{2},x_{1})\) is a Z-sequence of \(G\). Thus since \(X\) induces a clique, we infer that \(N[x_{1}]=N[x]\) or \(N[x_{1}]=N[x_{2}]\). We can therefore partition \(X\) into two sets, \(A=\{a\in X:N[a]=N[x]\}\) and \(B=X\setminus A\). Since \(G\) is not complete, the set \(B\) is not empty. If \(|B|\geq 2\), then choosing \(\{x_{1},x_{2}\}\subseteq B\), the sequence \((x,x_{2},x_{1})\) is not a Z-sequence of \(G\), implying that \(N[x_{1}]=N[x_{2}]\), and so \(x_{1}\) and \(x_{2}\) are twins in \(G\). Thus if \(|B|\geq 2\), then any two vertices of \(B\) are twins in \(G\). Let \(b\in B\) and let \(C=N(b)\setminus N[x]\), which is clearly non-empty. By our earlier observations, \(X=A\cup B\) is a clique, \(N[a]=X\) for all \(a\in A\), and \(N[b]=A\cup B\cup C\) for all \(b\in B\). Let \(H=G-x\). By our earlier properties, the graph \(H\) is a connected isolate-free graph. Further, \(H\) is not a complete graph. Hence by Theorem 2, \(\gamma_{\rm gr}^{\rm Z}(H)\geq\gamma_{t}(H)\geq 2\). By Observation 5, we have \(\gamma_{t}(H)\leq 3\) and \(\gamma_{\rm gr}^{\rm Z}(H)\leq 3\). If \(\gamma_{t}(H)=3\), then \(\gamma_{\rm gr}^{\rm Z}(H)=3\), contradicting the minimality of \(G\). Hence, \(\gamma_{t}(H)=2\) and either \(\gamma_{\rm gr}^{\rm Z}(H)=2\) or \(\gamma_{\rm gr}^{\rm Z}(H)=3\). Let \(\{u_{1},u_{2}\}\) be an arbitrary \(\gamma_{t}\)-set of \(H\). If \(\{u_{1},u_{2}\}\cap X\neq\emptyset\), then \(\{u_{1},u_{2}\}\) is a TD-set of \(G\), and so \(\gamma_{t}(G)=2\), a contradiction. Hence, every \(\gamma_{t}\)-set of \(H\) has an empty intersection with \(X\). Therefore, \(A=\{x\}\), and \(\{b,c\}\) is not a TD-set of \(H\) for any \(b\in B\) and \(c\in C\). We now select \(b\in B\). 
Since \(N_{H}[b]=B\cup C\) for all \(b\in B\) and since \(\{b,c\}\) is not a TD-set of \(H\) for any \(b\in B\) and \(c\in C\), we can select \(c\in C\) in such a way that \(D=N[c]\setminus(B\cup C)\) is non-empty. In addition, there exist vertices \(d\in D\) and \(e\notin B\cup C\cup D\) such that \(e\) is adjacent to \(d\) or to a vertex \(c^{\prime}\in C\). Therefore, \((x,b,c,d)\) or \((x,b,c,c^{\prime})\) is a Z-sequence, where vertex \(x\) footprints \(b\), vertex \(b\) footprints \(c\), vertex \(c\) footprints \(d\), and, finally, vertex \(d\) or vertex \(c^{\prime}\) footprints \(e\). Hence, \(\gamma_{\rm gr}^{\rm Z}(G)\geq 4\), a contradiction. \(\Box\)

Since every chordal graph which is not complete has a simplicial vertex, as an immediate consequence of Theorem 6 we have the following result.

**Corollary 7**: _There is no connected chordal graph \(G\) such that \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=3\)._

We already know [4] that there are no total \(k\)-uniform chordal graphs for \(k\geq 3\). This raises the following question.

**Problem 1**: _Is there a connected chordal graph \(G\) with \(\gamma_{t}(G)=\gamma_{\rm gr}^{\rm Z}(G)=k\) for \(k\geq 4\)?_

We end the section with the following problem that arises from the above discussion.

**Problem 2**: _Characterize the extremal graphs attaining the bound in Theorem 2._

Results in earlier papers suggest that finding a characterization of graphs \(G\) with \(\gamma_{\rm gr}^{t}(G)=\gamma_{t}(G)\) is difficult. Thus we expect that Problem 2 will also not be easy.

## 4 Zero forcing and upper total domination

Comparing the Z-Grundy domination number with the upper total domination number of a graph, we note that the inequality in Theorem 2 cannot be improved by replacing \(\gamma_{t}\) with \(\Gamma_{t}\). In other words, there exist graphs such that the upper total domination number is larger than the Z-Grundy domination number, and the difference can even be arbitrarily large. For instance, the _windmill graph_ \({\rm Wd}(k,n)\), where \(k\geq 3\) and \(n\geq 2\), is obtained by taking \(n\) vertex disjoint copies of the complete graph \(K_{k}\), selecting one vertex from each copy, and identifying these \(n\) selected vertices into one new vertex (that is, a universal vertex of degree \(n(k-1)\)). The resulting windmill graph \(G={\rm Wd}(k,n)\) satisfies \(\gamma_{\rm gr}^{\rm Z}(G)=n\) and \(\Gamma_{t}(G)=2n\). This yields the following result.

**Observation 8**: _There exists an infinite family of connected isolate-free graphs \(G\) satisfying \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\)._

By Observation 8, if \(G\) is a connected isolate-free graph, then the ratio \(\frac{\Gamma_{t}(G)}{\gamma_{\rm gr}^{\rm Z}(G)}\) can be as large as 2. However, we next prove that this ratio cannot exceed 2.

**Theorem 9**: _If \(G\) is an isolate-free graph, then \(\Gamma_{t}(G)\leq 2\gamma_{\rm gr}^{\rm Z}(G)\)._

**Proof.** We will use (a simplified version of) the setting described in the proof of Theorem 2. Let \(D\) be a \(\Gamma_{t}\)-set of \(G\), so that \(D\) is a minimal TD-set of \(G\) of cardinality \(\Gamma_{t}(G)\), and consider the components of the induced subgraph \(G[D]\). We will construct a Z-sequence \(S\) of \(G\) with \(|\widehat{S}|\geq|D|/2\). Let \({\cal C}_{1}\) be the set of all components of \(G[D]\) with more than two vertices, and let \({\cal C}_{2}\) be the set of all \(K_{2}\)-components of \(G[D]\). Note that \(\sum_{C\in{\cal C}_{1}\cup{\cal C}_{2}}|V(C)|=|D|\).
If \(C\in{\cal C}_{1}\) then, in the same way as in the proof of Theorem 2, we can add \(|V(C)|\) vertices to \(S\) (by first adding to \(S\) all vertices of \(C\) that are neighbors of leaves of \(C\), and then all other vertices of \(C\)). After dealing in this way with all components from \({\cal C}_{1}\), we focus on the remaining components of \(G[D]\) (having two vertices). From each \(C\in{\cal C}_{2}\), we can add at least one vertex to the sequence \(S\) in such a way that the added vertex footprints its neighbor in \(D\). The resulting sequence \(S\) is a Z-sequence, and so \(\gamma_{\rm gr}^{\rm Z}(G)\geq|\widehat{S}|\). On the other hand,
\[|\widehat{S}|=\sum_{C\in{\cal C}_{1}}|V(C)|+\sum_{C\in{\cal C}_{2}}\frac{|V(C)|}{2}\geq\sum_{C\in{\cal C}_{1}\cup{\cal C}_{2}}\frac{|V(C)|}{2}=\frac{|D|}{2}=\frac{\Gamma_{t}(G)}{2}.\]
This implies that \(\gamma_{\rm gr}^{\rm Z}(G)\geq\frac{1}{2}\Gamma_{t}(G)\), as claimed. \(\Box\)

As an immediate consequence of Theorem 9, we have the following corollary.

**Corollary 10**: _If \(G\) is an isolate-free graph, then \(Z(G)\leq n(G)-\frac{\Gamma_{t}(G)}{2}\)._

By the example leading to Observation 8, the windmill graphs \(G={\rm Wd}(k,n)\), where \(k\geq 3\) and \(n\geq 2\), attain the upper bound in Theorem 9. Next, we present some properties of graphs \(G\) with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\). We will again use the following notation. Let \(D\) be a \(\Gamma_{t}\)-set of an isolate-free graph \(G\), and let \(C_{1},\ldots,C_{\ell}\) be the \(K_{2}\)-components of \(G[D]\), where \(V(C_{i})=\{x_{i},y_{i}\}\) for \(i\in[\ell]\). For each \(i\in[\ell]\), let \(A_{i}(D)\) denote the set of vertices that are totally dominated by \(V(C_{i})\) and are not totally dominated by \(D\setminus V(C_{i})\). In particular, \(V(C_{i})\subseteq A_{i}(D)\) for all \(i\in[\ell]\), since both vertices in \(V(C_{i})\) are \(D\)-internal private neighbors of each other. From the proofs of Theorem 2 and Theorem 9, we immediately get the following.

**Lemma 11**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then each component of \(G[D]\) is isomorphic to \(K_{2}\) and \(N[x_{i}]\cap A_{i}(D)=N[y_{i}]\cap A_{i}(D)\) for all \(i\in[\ell]\)._

We note that Lemma 11 implies that \(V(C_{1})\cup\cdots\cup V(C_{\ell})=D\).

**Lemma 12**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then the following properties hold._
1. \(A_{i}(D)\) _induces a clique for all_ \(i\in[\ell]\)_, and_
2. _there are no edges between_ \(A_{i}(D)\) _and_ \(A_{j}(D)\) _for all_ \(i\) _and_ \(j\) _where_ \(1\leq i<j\leq\ell\)_._

**Proof.** Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). It follows from Lemma 11 that each component \(C_{i}\) of \(G[D]\) is a \(K_{2}\)-component, implying that \(\Gamma_{t}(G)=2\ell\). Further, \(N[x_{i}]\cap A_{i}(D)=N[y_{i}]\cap A_{i}(D)\) for all \(i\in[\ell]\). Suppose that there exist \(a,b\in A_{i}(D)\) such that \(ab\notin E(G)\). Renaming components if necessary, we may assume that \(i=1\). Thus, \((a,x_{1},x_{2},\ldots,x_{\ell})\) is a Z-sequence of \(G\), since vertex \(a\) footprints \(x_{1}\), vertex \(x_{1}\) footprints \(b\), and vertex \(x_{j}\) footprints \(y_{j}\) for all \(j\geq 2\). Hence, \(\gamma_{\rm gr}^{\rm Z}(G)\geq\ell+1\), and so \(\Gamma_{t}(G)=2\ell<2\gamma_{\rm gr}^{\rm Z}(G)\), a contradiction.
Hence, \(A_{i}(D)\) induces a clique for all \(i\in[\ell]\). This proves part (a). To prove part (b), suppose that there exists an edge \(e=ab\) between \(A_{i}(D)\) and \(A_{j}(D)\) for some \(i\) and \(j\) where \(1\leq i<j\leq\ell\), where \(a\in A_{i}(D)\) and \(b\in A_{j}(D)\). Since \(A_{i}(D)\) is, by definition, the set of neighbors of \(\{x_{i},y_{i}\}\) that are not totally dominated by \(D\setminus\{x_{i},y_{i}\}\), no vertex from the set \(\{x_{i},y_{i},x_{j},y_{j}\}\) is incident with the edge \(e\). Hence, \((x_{1},\ldots,x_{i},a,x_{i+1},\ldots,x_{\ell})\) is a Z-sequence of \(G\), since each vertex \(x_{p}\) footprints \(y_{p}\) for \(p\in[\ell]\) and vertex \(a\) footprints vertex \(b\). Thus, \(\gamma_{\rm gr}^{\rm Z}(G)\geq\ell+1\), a contradiction. \(\Box\)

In the rest of this section we will denote by \(H\) the subgraph \(G-(A_{1}(D)\cup\cdots\cup A_{\ell}(D))\). If there is a vertex \(v\in V(G)\) such that \(v\) is adjacent to all vertices of a set \(X\subset V(G)\), then we will use the notation \(v\sim X\).

**Lemma 13**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then the vertices in \(A_{i}(D)\) are closed twins for all \(i\in[\ell]\)._

**Proof.** Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)=2\ell\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). Suppose that there exist \(a,b\in A_{i}(D)\), for some \(i\in[\ell]\), such that \(N[a]\neq N[b]\). Renaming components if necessary, we may assume that \(i=1\). By Lemma 12, the set \(A_{1}(D)\) induces a clique. Hence, renaming the vertices \(a\) and \(b\) if necessary, we may assume without loss of generality that there exists a vertex \(u\notin A_{1}(D)\) such that \(u\in N[a]\setminus N[b]\). By Lemma 12, we also infer that \(u\in V(H)\). If \(\ell=1\), then \((b,a)\) is a Z-sequence in \(G\), noting that vertex \(b\) footprints \(A_{1}(D)\) and vertex \(a\) footprints \(u\), implying that \(\gamma_{\rm gr}^{\rm Z}(G)\geq 2=\ell+1\), a contradiction. Hence, \(\ell\geq 2\). In this case, \((b,a,x_{2},\ldots,x_{\ell})\) is a Z-sequence in \(G\), noting that vertex \(b\) footprints \(A_{1}(D)\), vertex \(a\) footprints \(u\), and vertex \(x_{p}\) footprints \(A_{p}(D)\) for all \(p\in[\ell]\setminus\{1\}\). This produces a Z-sequence of length \(\ell+1\), implying that \(\gamma_{\rm gr}^{\rm Z}(G)\geq\ell+1\), a contradiction. \(\Box\)

We note that by Lemma 13 and by the definition of the sets \(A_{i}(D)\), for every vertex \(u\in V(H)\) there exist two distinct indices in \([\ell]\), say \(i,j\), such that \(u\sim A_{i}(D)\) and \(u\sim A_{j}(D)\).

**Lemma 14**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then for any adjacent vertices \(u,v\in V(H)\) it holds that \(|\{i:\ u\sim A_{i}(D)\) and \(v\sim A_{i}(D)\}|\geq 1\)._

**Proof.** Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). Let \(R=V(G)\setminus V(H)\). Suppose that there exist adjacent vertices \(u,v\in V(H)\) with \(N_{R}(u)\cap N_{R}(v)=\emptyset\). Let \(i_{1},\ldots,i_{k}\) be the indices \(i\) for which \(u\sim A_{i}(D)\) holds, and let \(\{j_{1},\ldots,j_{\ell-k}\}=[\ell]\setminus\{i_{1},\ldots,i_{k}\}\). By supposition, the vertex \(v\) has no neighbors in \(A_{i_{1}}(D)\cup\cdots\cup A_{i_{k}}(D)\).
Thus, \((x_{i_{1}},\ldots,x_{i_{k}},u,x_{j_{1}},\ldots,x_{j_{\ell-k}})\) is a Z-sequence of \(G\) of cardinality \(\ell+1\), implying that \(\gamma_{\rm gr}^{\rm Z}(G)\geq\ell+1\), a contradiction. \(\Box\)

**Lemma 15**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then for any non-adjacent vertices \(u,v\in V(H)\) it holds that \(|\{i:\,u\sim A_{i}(D)\) and \(v\sim A_{i}(D)\}|\neq 1\)._

**Proof.** Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). Suppose that there exist non-adjacent vertices \(u,v\in V(H)\) such that the only common neighbors of \(u\) and \(v\) in \(V(G)\setminus V(H)\) are the vertices of \(A_{i_{1}}(D)\) for some \(i_{1}\in[\ell]\). Let \(i_{1},\ldots,i_{k}\) be the indices \(i\) for which \(u\sim A_{i}(D)\) holds, and let \(\{j_{1},\ldots,j_{\ell-k}\}=[\ell]\setminus\{i_{1},\ldots,i_{k}\}\). By supposition, the vertex \(v\) has no neighbors in \(A_{i_{2}}(D)\cup\cdots\cup A_{i_{k}}(D)\). Then \((x_{i_{2}},x_{i_{3}},\ldots,x_{i_{k}},u,x_{i_{1}},x_{j_{1}},\ldots,x_{j_{\ell-k}})\) is a Z-sequence. Indeed, for \(i\neq i_{1}\), the vertex \(x_{i}\) footprints \(A_{i}(D)\), the vertex \(u\) footprints \(A_{i_{1}}(D)\), and the vertex \(x_{i_{1}}\) footprints \(v\). Thus, \(\gamma_{\rm gr}^{\rm Z}(G)\geq\ell+1\), a contradiction. \(\Box\)

Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). For a subset \(B\subseteq V(H)\), we will denote by \(X_{B}(D)\) the set
\[X_{B}(D)=\{i\in[\ell]:\,u\sim A_{i}(D)\mbox{ for some }u\in B\}.\]
We will denote by \(m_{B}(D)\) the largest cardinality of a minimal subset \(X\subseteq X_{B}(D)\) such that
\[B\subseteq\bigcup_{i\in X}N(x_{i}).\]
With this notation, we introduce the following lemma.

**Lemma 16**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then for every \(B\subseteq V(H)\) we have_
\[\gamma_{\rm gr}^{\rm Z}(G[B\setminus B^{\prime}])+m_{B^{\prime}}(D)\leq|X_{B}(D)|,\]
_where \(B^{\prime}=\{u\in B:\,u\mbox{ is an isolated vertex of }G[B]\}\)._

**Proof.** Let \(G\) be a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)=2\ell\) and let \(D\) be a \(\Gamma_{t}\)-set of \(G\). Let \(B\subseteq V(H)\) and \(B^{\prime}=\{u\in B:\,u\mbox{ is an isolated vertex of }G[B]\}\). Renaming vertices in \(D\) if necessary, we may assume without loss of generality that \(X_{B}(D)=[k]\). Suppose, to the contrary, that \(\gamma_{\rm gr}^{\rm Z}(G[B\setminus B^{\prime}])+m_{B^{\prime}}(D)>k\). Let \(S=(a_{1},\ldots,a_{s})\) be a \(\gamma_{\rm gr}^{\rm Z}\)-sequence of \(G[B\setminus B^{\prime}]\) and let \(X=\{i_{1},\ldots,i_{m}\}\subseteq X_{B^{\prime}}(D)\) be a minimal subset of cardinality \(m=m_{B^{\prime}}(D)\) such that the vertices in \(\{A_{i}(D):\,i\in X\}\) dominate \(B^{\prime}\). We now consider the sequence given by
\[S^{\prime}=(a_{1},\ldots,a_{s},x_{i_{1}},\ldots,x_{i_{m}},x_{k+1},\ldots,x_{\ell}).\]
The sequence \(S^{\prime}\) is a Z-sequence of \(G\). To see this, note that the vertex \(a_{i}\) footprints some vertex of \(G[B\setminus B^{\prime}]\) for all \(i\in[s]\), since \(S\) is a \(\gamma_{\rm gr}^{\rm Z}\)-sequence of \(G[B\setminus B^{\prime}]\). By the minimality of \(X\), each vertex \(x_{i_{j}}\) footprints some \(u\in B^{\prime}\).
Finally, since \(B\cap N(x_{i})=\emptyset\) for all \(i\geq k+1\), the vertex \(x_{i}\) footprints the vertices in \(A_{i}(D)\) for all \(i\geq k+1\). Thus, \(S^{\prime}\) is a Z-sequence of \(G\) of length
\[\underbrace{\gamma_{\rm gr}^{\rm Z}(G[B\setminus B^{\prime}])+m_{B^{\prime}}(D)}_{>k}+(\ell-k)>\ell=\gamma_{\rm gr}^{\rm Z}(G),\]
which is a contradiction. \(\Box\)

We summarize the above lemmas into the following result, in which we adopt the notation established in this section.

**Proposition 17**: _If \(G\) is a graph with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\) and \(D\) is a \(\Gamma_{t}\)-set of \(G\), then the following properties hold._
* _each component of_ \(G[D]\) _is isomorphic to_ \(K_{2}\)_, and so_ \(|D|=2\ell\) _for some integer_ \(\ell\)_;_
* \(N[x_{i}]\cap A_{i}(D)=N[y_{i}]\cap A_{i}(D)\)_, for each_ \(i\in[\ell]\)_;_
* \(A_{i}(D)\) _induces a clique, for each_ \(i\in[\ell]\)_;_
* _there are no edges between_ \(A_{i}(D)\) _and_ \(A_{j}(D)\)_, for each_ \(\{i,j\}\subset[\ell]\)_;_
* _vertices in_ \(A_{i}(D)\) _are closed twins, for each_ \(i\in[\ell]\)_;_
* _for every adjacent vertices_ \(u,v\in V(H)\)_, we have_
\[|\{i:\,u\sim A_{i}(D)\mbox{ and }v\sim A_{i}(D)\}|\geq 1;\]
* _for every non-adjacent vertices_ \(u,v\in V(H)\) _it holds that_
\[|\{i:\,u\sim A_{i}(D)\mbox{ and }v\sim A_{i}(D)\}|\neq 1;\]
* _for every_ \(B\subset V(H)\)_, if_ \(B^{\prime}=\{u\in B:u\mbox{ is an isolated vertex of }G[B]\}\)_, we have_
\[\gamma_{\rm gr}^{\rm Z}(G[B\setminus B^{\prime}])+m_{B^{\prime}}(D)\leq|X_{B}(D)|.\]

We do not think that the combination of properties in Proposition 17 is sufficient for an isolate-free connected graph \(G\) to satisfy \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\). Therefore, we pose the following problem.

**Problem 3**: _Determine if the properties in Proposition 17 are sufficient for an isolate-free connected graph \(G\) to satisfy \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\). If not, then extend these properties to obtain a characterization of graphs \(G\) with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\)._

Now, we present a large family of graphs achieving the bound from Theorem 9. Let \(H\) be an arbitrary graph with \(\gamma_{\rm gr}^{\rm Z}(H)\leq\ell\). Let \(G\) be the graph obtained from \(H\) by adding \(\ell\) disjoint cliques \(K_{n_{1}},\ldots,K_{n_{\ell}}\), each of order at least \(2\), and then joining by an edge every \(u\in V(H)\) to every vertex of \(K_{n_{1}}\cup\cdots\cup K_{n_{\ell}}\). The resulting graph \(G\) satisfies \(\Gamma_{t}(G)=2\ell\) and \(\gamma_{\rm gr}^{\rm Z}(G)=\ell\). Since \(H\) is an induced subgraph of \(G\), we can formulate this observation as follows.

**Observation 18**: _Every graph \(H\) is an induced subgraph of a graph \(G\) with \(\Gamma_{t}(G)=2\gamma_{\rm gr}^{\rm Z}(G)\)._
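This construction is easy to carry out mechanically. The following small sketch (Python with networkx; the function name `blow_up` is our own and purely illustrative) builds \(G\) from a given \(H\):

```python
import networkx as nx

def blow_up(H, clique_sizes):
    """Build the graph G described above: take H, add disjoint cliques
    K_{n_1}, ..., K_{n_l} (each n_i >= 2), and join every vertex of H to
    every vertex of every added clique.  If gamma_gr^Z(H) <= l, then
    Gamma_t(G) = 2*l and gamma_gr^Z(G) = l."""
    G = H.copy()
    for i, n in enumerate(clique_sizes):
        assert n >= 2
        clique = [("K", i, j) for j in range(n)]   # fresh vertex labels
        G.add_edges_from((u, v) for u in clique for v in clique if u < v)
        G.add_edges_from((h, v) for h in H for v in clique)
    return G

# Example: gamma_gr^Z(P_3) = 2, so l = 2 cliques suffice for H = P_3.
G = blow_up(nx.path_graph(3), [2, 3])
```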
We note that in many graphs the Z-Grundy domination number is (much) bigger than the upper total domination number, and the ratio \(\gamma_{\rm gr}^{\rm Z}(G)/\Gamma_{t}(G)\) can be arbitrarily large. Indeed, let \(G\) be an arbitrary graph, and let \(G^{*}\) be obtained from the disjoint union of \(G\) and the complete graph \(T\) of order \(2\) with \(V(T)=\{a,c\}\) by adding the edges from the set \(\{ax:\,x\in V(G)\}\). Every TD-set of \(G^{*}\) has to contain the vertex \(a\) in order to totally dominate the vertex \(c\). However, any minimal TD-set containing \(a\) can only have two vertices, which yields \(\Gamma_{t}(G^{*})=2\). On the other hand, we observe that \(\gamma_{\rm gr}^{\rm Z}(G^{*})\geq\gamma_{\rm gr}^{\rm Z}(G)+1\), which can be arbitrarily large.

## 5 Graphs with \(Z(G)=\delta(G)\) and power domination

A trivial lower bound on the zero forcing number in terms of the minimum degree comes from the original paper on zero forcing due to the AIM-Group [1], which showed that
\[Z(G)\geq\delta(G) \tag{5}\]
holds for all graphs \(G\). Simple examples where the bound is attained are paths, since \(Z(P_{n})=1\). In this section, we will characterize the graphs \(G\) that attain the bound, which is equivalent to satisfying \(\gamma_{\rm gr}^{\rm Z}(G)=n(G)-\delta(G)\), using a connection with the concept of power domination. Furthermore, we find a characterization of all graphs with power domination number equal to \(1\).

As remarked earlier, the concept of power domination was introduced in [18], where it was motivated by the problem of monitoring an electrical power network. Next, we present its definition. Power domination is a graph searching process which starts by placing phasor measurement units on a set \(S\) of vertices in the power network, which are then labeled as observed (for the purpose of relating the process with zero forcing, we will call these vertices blue). Now, the searching process consists of the following two steps, where the Propagation Step can be repeated:
1. Initialization Step (Domination Step): All vertices in \(S\) as well as all neighbors of vertices in \(S\) are observed (i.e., colored blue).
2. Propagation Step (Zero Forcing Step): Every vertex which is the only unobserved (i.e., non-blue) neighbor of some observed (i.e., blue) vertex becomes observed (i.e., blue).

If eventually the whole network is observed (that is, all vertices become blue), then \(S\) is called a _power dominating set_. The minimum cardinality of a power dominating set of a graph \(G\) is the _power domination number_, denoted by \(\gamma_{P}(G)\). The following lemma provides an interesting relation between certain extremal families of graphs in power domination and in zero forcing.

**Lemma 19**: _If \(G\) is a graph of order \(n\) with minimum degree \(\delta\), then \(Z(G)=\delta\) if and only if \(\gamma_{P}(G)=1\) and there exists a power dominating set \(\{x\}\) such that \(\deg_{G}(x)=\delta\)._

**Proof.** First, assume that \(Z(G)=\delta\). Therefore, there exists a zero forcing set \(S\) with \(\delta\) vertices; let \(x\in S\) be the vertex with which the color-change rule starts. Clearly, \(x\) has \(\deg_{G}(x)\) neighbors, and exactly one neighbor of \(x\), say \(y\), is non-blue before the propagation process starts. Since \(|S|=\delta\), this implies that \(\deg_{G}(x)=\delta\) and \(S\subset N_{G}[x]\). In addition, \(S^{\prime}=\{x\}\) is a power dominating set, since once the initialization step is over, all of \(N_{G}[x]\) is observed, and the process of propagation in power domination is the same as the zero forcing process. Conversely, let \(S^{\prime}=\{x\}\) be a power dominating set of \(G\), where \(\deg_{G}(x)=\delta\). Clearly, \(N_{G}[x]\) is a zero forcing set. In addition, after removing a neighbor \(y\) of \(x\) from \(N_{G}[x]\), the set \(S=N_{G}[x]\setminus\{y\}\) is also a zero forcing set of \(G\). Indeed, the color-change rule can be applied to the blue vertex \(x\) with only one non-blue neighbor, which is vertex \(y\), after which the same propagation rules can be used (in either power domination or zero forcing).
This implies that \(Z(G)\leq|N_{G}[x]|-1=\delta\), and we know that \(Z(G)\geq\delta\) holds in any graph \(G\). Thus \(Z(G)=\delta\), as claimed. \(\Box\)

Graphs \(G\) with power domination number \(\gamma_{P}(G)=1\) were studied in [26], where many families of such graphs were presented within the class of regular graphs of high degree (at least \(n(G)-4\)). Next, we give a complete characterization of graphs \(G\) with \(\gamma_{P}(G)=1\). The idea for the class of graphs appearing in this characterization was inspired by Row's construction [25], where graphs with zero forcing number equal to 2 were characterized. First, we introduce the following notation. For a path \(P\colon a_{1},a_{2},\ldots,a_{k}\) and a vertex \(a_{i}\) of the path \(P\), we denote the subpath of \(P\) from \(a_{i+1}\) to \(a_{k}\) by \({}^{a_{i}}P:a_{i+1},a_{i+2},\ldots,a_{k}\).

A graph \(G\) is a _graph of \(k\) internally parallel paths_ if there exists \(x\in V(G)\) which is an end-vertex of each of \(k\) internally pairwise disjoint induced paths \(Q_{1},\ldots,Q_{k}\), where \(V(G)=\cup_{i=1}^{k}V(Q_{i})\) and \(V(Q_{i})\cap V(Q_{j})=\{x\}\) for any \(i\neq j\), and where any number of edges between different paths may be added, provided that the following property holds:
* for every set of \(\ell\) vertices, \(x_{i_{1}}\in V(Q_{i_{1}}),\ldots,x_{i_{\ell}}\in V(Q_{i_{\ell}})\), where \(\ell\leq k\), each belonging to distinct paths and none of them being an end-vertex of the corresponding path, it holds that there exists a vertex \(x^{\prime}\in\{x_{i_{1}},\ldots,x_{i_{\ell}}\}\) with \(\deg_{Y}(x^{\prime})=1\), where \(Y=V({}^{x_{i_{1}}}Q_{i_{1}})\cup\cdots\cup V({}^{x_{i_{\ell}}}Q_{i_{\ell}})\).

For example, the graph \(G\) illustrated in Figure 1 is a graph of three internally parallel paths.

Figure 1: A graph \(G\) of three internally parallel paths

**Theorem 20**: _Let \(G\) be a graph. Then \(\gamma_{P}(G)=1\) if and only if \(G\) is a graph of \(k\) internally parallel paths for some \(k\)._

**Proof.** Let \(G\) be a graph with \(\gamma_{P}(G)=1\), and let \(\{x\}\) be a power dominating set of \(G\). Let \(k=\deg(x)\) and \(N(x)=\{v_{1}^{1},\ldots,v_{1}^{k}\}\). We have that \(N[x]\) is a zero forcing set. So, in every propagation step, an uncolored vertex \(w\) is turned blue, where \(w\) is the only uncolored neighbor of some blue vertex. Note that if \(N[x]=V(G)\), then \(G\) is a graph with \(k\) internally parallel paths, where each one of these paths is \(Q_{i}\colon x,v_{1}^{i}\). Hence we may assume that \(N[x]\neq V(G)\), for otherwise the desired result holds. Let \(X=\{v_{1}^{1},\ldots,v_{1}^{k}\}\). Let \(w\) be the vertex colored in the first propagation step. Thus, there exists some \(v_{1}^{i}\) such that \(w\) is the only uncolored neighbor of \(v_{1}^{i}\). We let \(v_{2}^{i}=w\), and update \(X\) to be \(X=(X\setminus\{v_{1}^{i}\})\cup\{v_{2}^{i}\}\). We note that all the vertices in \(X\) are blue and the blue vertices that are not in \(X\) have all their neighbors colored blue at this point. This property will be maintained throughout the process.

We continue the process in the following way. In any propagation step we have a set \(X=\{v^{1}_{i_{1}},\ldots,v^{k}_{i_{k}}\}\), and let \(w\) be the vertex that turns blue in the present step. Then, there is a blue vertex for which \(w\) is the only uncolored neighbor.
Since, by the property maintained above, any blue vertex that is not in \(X\) has all its neighbors colored blue, \(w\) is the only uncolored neighbor of some \(v^{j}_{i_{j}}\in X\). We let \(v^{j}_{i_{j}+1}=w\) and update \(X\) to be \(X=(X\setminus\{v^{j}_{i_{j}}\})\cup\{v^{j}_{i_{j}+1}\}\). We note that \(X\) has \(k\) blue vertices, and the blue vertices that are not in \(X\) have all their neighbors colored blue, thus the desired property is maintained. We continue until all the vertices of \(G\) are turned blue. At the end of the process, paths \(Q_{i}\colon x,v^{i}_{1},\ldots,v^{i}_{n_{i}}\), where \(i\in[k]\), have been determined.

To prove that \(G\) is a graph of \(k\) internally parallel paths, we need to verify the additional condition that these paths must satisfy. Now, let \(\{v^{j_{1}}_{i_{1}},v^{j_{2}}_{i_{2}},\ldots,v^{j_{\ell}}_{i_{\ell}}\}\), where \(\ell\leq k\), be a set of vertices each belonging to pairwise distinct paths and with \(i_{m}<n_{j_{m}}\) for each \(m\in[\ell]\). Let us assume, without loss of generality, that they were used to color an uncolored neighbor in the order \(v^{j_{1}}_{i_{1}},v^{j_{2}}_{i_{2}},\ldots,v^{j_{\ell}}_{i_{\ell}}\). We remark that in the propagation step in which \(v^{j_{1}}_{i_{1}}\) is chosen to color a neighbor, \(v^{j_{1}}_{i_{1}+1}\) is the only uncolored neighbor of \(v^{j_{1}}_{i_{1}}\). Moreover, all the vertices \(v^{j_{m}}_{i_{m}}\) with \(m>1\) are uncolored at this point. We note also that if \(v^{j}_{i}\) is uncolored, then all the vertices in \({}^{v^{j}_{i}}Q_{j}\) are uncolored. As a consequence of these observations, all the vertices in \({}^{v^{j_{m}}_{i_{m}}}Q_{j_{m}}\) are uncolored for \(m\geq 1\). So, we have that \(v^{j_{1}}_{i_{1}+1}\) is the only neighbor of \(v^{j_{1}}_{i_{1}}\) in
\[Y=\bigcup_{m\in[\ell]}V({}^{v^{j_{m}}_{i_{m}}}Q_{j_{m}}).\]
Therefore, \(G\) is a graph of \(k\) internally parallel paths.

Now, let \(G\) be a graph of \(k\) internally parallel paths with paths \(Q_{1}\colon x,v^{1}_{1},\ldots,v^{1}_{n_{1}}\) through to \(Q_{k}\colon x,v^{k}_{1},\ldots,v^{k}_{n_{k}}\). We will prove that \(\{x\}\) is a power dominating set. Equivalently, we need to prove that \(\{x,v^{1}_{1},\ldots,v^{k}_{1}\}\) is a zero forcing set. If \(N[x]=V(G)\), we are done. Otherwise, let \(X=\{v^{1}_{1},\ldots,v^{k}_{1}\}\setminus\{v^{1}_{n_{1}},\ldots,v^{k}_{n_{k}}\}\). Since \(G\) is a graph of \(k\) internally parallel paths, there is a vertex \(v^{i}_{1}\) such that \(\deg_{Y}(v^{i}_{1})=1\), where
\[Y=\bigcup_{j\in[k]}V({}^{v^{j}_{1}}Q_{j}),\]
and the neighbor is \(v^{i}_{2}\). Then, we color \(v^{i}_{2}\) blue, since it is the only uncolored neighbor of \(v^{i}_{1}\).

We continue this propagation process as follows. Let \(X^{\prime}\) be the set of the last colored vertices in each path \(Q_{i}\), and \(X=X^{\prime}\setminus\{v^{1}_{n_{1}},\ldots,v^{k}_{n_{k}}\}=\{v^{j_{1}}_{i_{1}},\ldots,v^{j_{\ell}}_{i_{\ell}}\}\). If \(X\neq\emptyset\), since \(G\) is a graph of \(k\) internally parallel paths, there is a vertex \(v^{j_{m}}_{i_{m}}\) such that \(\deg_{Y}(v^{j_{m}}_{i_{m}})=1\), where
\[Y=\bigcup_{t\in[\ell]}V({}^{v^{j_{t}}_{i_{t}}}Q_{j_{t}}),\]
and the neighbor is \(v^{j_{m}}_{i_{m}+1}\). Thus, since it is the only uncolored neighbor of \(v^{j_{m}}_{i_{m}}\), we turn it blue. We can continue until all the vertices in \(G\) are blue, and therefore \(\{x\}\) is a power dominating set and \(\gamma_{P}(G)=1\). \(\Box\)
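The searching process used in this proof is also easy to simulate directly. The following brute-force sketch (Python with networkx; the function names are our own) implements the Domination Step and the Propagation Step and, via Lemma 19, tests whether \(Z(G)=\delta(G)\); it is intended only for small experiments, since it tries every singleton set:

```python
import networkx as nx

def propagate(G, blue):
    """Apply the zero forcing color-change rule until a fixpoint."""
    blue = set(blue)
    changed = True
    while changed:
        changed = False
        for u in list(blue):
            white = set(G[u]) - blue
            if len(white) == 1:          # u has a unique non-blue neighbor
                blue |= white
                changed = True
    return blue

def is_power_dominating(G, S):
    """Domination Step followed by the Propagation Step."""
    blue = set(S) | {w for v in S for w in G[v]}
    return propagate(G, blue) == set(G)

def z_equals_delta(G):
    """By Lemma 19, Z(G) = delta(G) iff some vertex of minimum degree
    forms a power dominating singleton."""
    delta = min(d for _, d in G.degree)
    return any(d == delta and is_power_dominating(G, {v})
               for v, d in G.degree)

assert z_equals_delta(nx.path_graph(6))                      # Z(P_n) = 1
assert not z_equals_delta(nx.complete_bipartite_graph(3, 3))
```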
Extremal graphs for the bound (5) are known for graphs with minimum degree 1 or 2. The only graphs with \(\delta(G)=1\) and \(Z(G)=1\) are paths [14]. It was proved in [25] that \(Z(G)=2\) if and only if \(G\) is a graph of two parallel paths, or, equivalently, \(G\) is an outerplanar graph with the path cover number equal to 2. In our context, we are interested in those graphs with \(Z(G)=2\) that have \(\delta(G)=2\). One can derive that the only graphs of two such parallel paths (i.e., outerplanar graphs with path cover number equal to 2) with minimum degree 2 are the 2-connected outerplanar graphs. (The proof is straightforward, but we omit the details, since it follows from Corollary 22.)

**Corollary 21**: _If \(G\) is a graph with \(\delta(G)=2\), then \(Z(G)=\delta(G)\) if and only if \(G\) is a 2-connected outerplanar graph._

For graphs \(G\) with \(\delta(G)\geq 3\), it is worth noting that the idea arising from graphs with two parallel paths is extended by the definition of graphs with \(k\) internally parallel paths, where \(k\) can be arbitrarily large. From the proof of Theorem 20, it follows that \(G\) is a graph of \(k\) internally parallel paths if and only if \(\gamma_{P}(G)=1\) and there exists a power dominating set \(\{x\}\) with \(\deg(x)=k\), and consequently \(Z(G)\leq k\). Combining this with Lemma 19, we have a characterization of graphs with \(Z(G)=\delta(G)\) or, equivalently, \(\gamma_{\rm gr}^{\rm Z}(G)=n(G)-\delta(G)\).

**Corollary 22**: _Let \(G\) be a graph. Then \(Z(G)=\delta(G)\) if and only if \(G\) is a graph of \(\delta(G)\) internally parallel paths._

## Acknowledgments

The first and the third author were supported by the Slovenian Research and Innovation Agency (grants P1-0297, J1-2452, J1-3002, and J1-4008). The second author was partially supported by grants PIP CONICET 1900, PICT-2020-03032 and PID 80020210300068UR. Research of the fourth author was supported in part by the South African National Research Foundation under grant number 132588 and the University of Johannesburg. The second and fourth authors thank the University of Maribor for the hospitality.
2302.08510
Text-driven Visual Synthesis with Latent Diffusion Prior
There has been tremendous progress in large-scale text-to-image synthesis driven by diffusion models enabling versatile downstream applications such as 3D object synthesis from texts, image editing, and customized generation. We present a generic approach using latent diffusion models as powerful image priors for various visual synthesis tasks. Existing methods that utilize such priors fail to use these models' full capabilities. To improve this, our core ideas are 1) a feature matching loss between features from different layers of the decoder to provide detailed guidance and 2) a KL divergence loss to regularize the predicted latent features and stabilize the training. We demonstrate the efficacy of our approach on three different applications, text-to-3D, StyleGAN adaptation, and layered image editing. Extensive results show our method compares favorably against baselines.
Ting-Hsuan Liao, Songwei Ge, Yiran Xu, Yao-Chih Lee, Badour AlBahar, Jia-Bin Huang
2023-02-16T18:59:58Z
http://arxiv.org/abs/2302.08510v2
# Text-driven Visual Synthesis with Latent Diffusion Prior

###### Abstract

There has been significant progress in using diffusion models for large-scale text-to-image synthesis. This has led to versatile downstream applications such as 3D object synthesis, image editing, and customized image generation. In this paper, we present a generic approach that uses latent diffusion models as powerful image priors for various visual synthesis tasks. Existing methods that use these priors do not fully exploit the models' capabilities. To address this issue, we propose 1) a feature matching loss that provides detailed guidance by comparing features from different decoder layers and 2) a KL divergence loss that regularizes predicted latent features to stabilize the training process. We apply our method to three applications: text-to-3D, StyleGAN adaptation, and layered image editing, and demonstrate its efficacy through extensive experimentation. Our results show that our method performs favorably compared to baseline methods.

## 1 Introduction

Diffusion models have shown impressive image generation capabilities in terms of photorealism and compositionality [1, 30, 40, 42]. Through guidance control and embedding techniques, text-to-image diffusion models have been applied to various visual editing and processing tasks, _e.g._, customized image generation [41], video generation [52], image and video editing [17, 18], text-to-3D generation [22, 35], and image generator adaptation [45]. With a free-form text prompt, these methods utilize a pre-trained text-to-image diffusion model to guide the synthesis tasks. However, specialized methods are required for each task to produce good results. While these methods may share similar objectives, a unified approach is underexplored.

In this paper, we leverage a pretrained diffusion model as a generic image prior for various visual synthesis applications. This is similar to the recent line of work that uses a CLIP model [36]. CLIP models are trained to encode a paired image and its caption and maximize their cosine similarity. Several works have used a pretrained CLIP model as a prior to facilitate different text-guided synthesis tasks, _e.g._, image and video editing [2, 33, 47, 53], generator adaptation [6], and text-to-3D synthesis [12, 27]. However, as CLIP models are not generative models, their contrastive objective may not preserve the visual information useful for synthesis-oriented tasks.

Diffusion models have recently shown great potential in these synthesis tasks [17, 18, 22, 35, 45]. They can achieve competitive or even better performance compared to CLIP-based approaches. However, unlike CLIP models, where the original CLIP objective was uniformly adopted, these works have applied diffusion models differently under different tasks. In this work, we present a framework to utilize diffusion models for various tasks. We build our framework on Score Distillation Sampling [35], which uses an image diffusion model [42] as prior to train a NeRF model without costly backpropagating through the diffusion model itself. Other works [22, 25, 45, 49] follow a similar approach but are based on latent diffusion models [40]. The score distillation loss is computed in the latent space, which we term "latent score distillation (LSD)". Although the methods using score distillation with trained diffusion models have shown promising results, the loss is computed at a limited spatial resolution (_e.g._, \(64\times 64\)) and thus cannot provide sufficiently detailed guidance.
To achieve detailed guidance, we propose a new _Feature Matching_ (FM) loss that uses features in multiple decoder layers of the latent diffusion model to guide the optimization process. Another limitation of recent works utilizing LSD is the lack of regularization over the optimized latent code. While minimizing the optimization loss in an unconstrained manner, these methods are likely to produce out-of-distribution latent codes that are not seen by the decoder during training, resulting in lower-quality outputs. To mitigate this issue, we propose to use a KL divergence loss to regularize the optimized latents so that they stay close to the prior distribution during the training.

We evaluate our method on three text-driven synthesis applications: text-to-3D generation, layered image editing, and StyleGAN adaptation (as shown in Fig. 1). Our approach achieves competitive results over baselines using the CLIP model as the image prior and diffusion models with latent score distillation. We make the following contributions:
* We propose a feature matching loss to extract detailed information from the decoder to guide text-based visual synthesis tasks.
* We propose a KL divergence loss to regularize the optimized latent, stabilizing the optimization process.
* We extensively evaluate our method on three downstream tasks and show competitive results with strong baselines.

## 2 Related Work

**Text-to-image diffusion models.** Diffusion models [10, 16, 44] synthesize images by denoising independent noises drawn from the standard Gaussian distribution. Impressive progress has been made in photorealistic and zero-shot text-to-image generation [1, 5, 30, 38, 40, 42, 56]. When combined with tricks such as classifier-free guidance [11] and gradient guidance [24], the flexibility of the diffusion model makes it readily adapt to different conditional generation tasks that are not part of its training objectives, such as image editing [17, 48] and personalized image generation [41]. Our work capitalizes on the recent progress in text-to-image diffusion models and uses them as image priors for various visual synthesis tasks.

**Text-driven 3D generative models.** With the guidance from a Contrastive Language-Image Pre-training (CLIP) model [36], text-guided 3D generation becomes possible together with a differentiable renderer [12, 13, 27, 43]. By optimizing the 3D representation through the CLIP objective computed on the rendered image, these methods can explicitly control the pose of generated 3D shapes and freely generate creative appearances and shapes from text. More recently, DreamFusion [35] has brought diffusion models to this task and shown impressive results. In contrast to [7], which backpropagates through the pre-trained diffusion model itself, DreamFusion leverages the noise residual predicted by the pre-trained diffusion model as gradients for efficient backpropagation. Follow-up works [25, 49] adapt the method to latent diffusion models [40]. However, the 3D models resulting from these methods still lack details since the latent score distillation is computed in the latent space with a limited spatial resolution (_e.g._, a \(64\times 64\) latent space for a \(512\times 512\) image). In this work, we leverage the knowledge embedded in the _latent diffusion decoder_ to provide more detailed guidance. We show better details can be achieved in the synthesized 3D models.
**Image generator domain adaptation.** Some works aim to fine-tune a pre-trained generator with few-shot or text-guided zero-shot domain adaptation to reduce the cost of training an image generator. Few-shot adaptation aims to use fewer data (typically hundreds or fewer) to train an image generator. Previous works control the learnable parameters [26, 31, 39, 51], design regularizers [21, 34, 46], or use auxiliary tasks [23, 55] to improve the quality of the generator. With the advancement of large foundation models, some works use them as guidance to achieve zero-shot adaptation. StyleGAN-NADA [6] uses a pre-trained CLIP model [36] as guidance and shows diverse domain adaptation results on StyleGAN [14]. StyleGANFusion [45] leverages pre-trained Stable Diffusion [40] and follows the score distillation approach proposed by DreamFusion [35] to achieve even better results. Building upon StyleGANFusion [45], we show that using our proposed method produces a significantly better FID score and a competitive CLIP score.

**Text-driven image editing.** Pioneering image manipulation methods [20, 29] utilize GANs to achieve editing of appearances while preserving the shape. However, GAN-based approaches are often restricted by the training image domains and the constraints of the text expression. More advanced methods have leveraged embeddings from a pretrained CLIP [36]. These embeddings can be applied to update the generative models with test-time optimization [2, 6, 19, 33, 54]. Recently, text-to-image diffusion models have shown exceptional success in manipulation tasks [1, 3, 17, 28, 30, 41]. We demonstrate the application of our diffusion prior to the image editing task. Unlike existing diffusion-based editors, our method manipulates images using test-time optimization with the proposed diffusion guidance. Our method produces more detailed results than the latent diffusion-guided baselines and the CLIP-guided method, Text2LIVE [2].

## 3 Proposed Method

### Background

**Score distillation.** DreamFusion [35] first proposes using the Imagen [42] diffusion model for text-driven 3D generation. Imagen can generate _high-resolution_ images by cascaded diffusion. However, the base model only operates on a _low-resolution_ \(64\times 64\) image space. They build upon the formulation introduced by [7] and propose a more stable training process.

Figure 2: **Method overview. Our method aims to guide the generation and editing process based on a text prompt. We obtain the latent code from a differentiable renderer under different applications, including the generator from Text2LIVE [2], a StyleGAN-based generator, or a NeRF model. This latent code \(\mathbf{v}\) is perturbed following the latent diffusion model’s scheduler at a random time step \(t\), such that \(\mathbf{z}_{t}=\alpha_{t}\mathbf{v}+\sigma_{t}\varepsilon\). This perturbed latent code \(\mathbf{z}_{t}\) is then passed to the UNet to generate the predicted noise \(\hat{\varepsilon}\). We then use the predicted noise \(\hat{\varepsilon}\) to derive the _latent score distillation_ gradient. To derive the _feature matching_ gradient, we input the latent code \(\mathbf{v}\) and the noised latent code \(\mathbf{v}+(\hat{\varepsilon}-\varepsilon)\) into the decoder \(G_{\phi_{dec}}(\cdot)\). We compute the difference between the multi-level features from three different layers of the decoder to compute the feature matching loss. Finally, both the _latent score distillation_ and _feature matching_ gradients are backpropagated to the differentiable renderer.**
Their approach optimizes the underlying 3D representation model by using the training objective of the diffusion model. The resulting approach, score distillation sampling, involves perturbing the rendered image with a random noise \(\epsilon\), and then using the pretrained diffusion model to predict the noise \(\hat{\epsilon}\). The training is based on the gradient computed from the _noise residual_ between the added random noise and the predicted one: \(\hat{\epsilon}-\epsilon\). Note that the Imagen model [42] only operates on a low-resolution image space.

**Latent score distillation.** Jacobian NeRF [49], Latent NeRF [25], and StyleGANFusion [45] have recently incorporated score distillation into latent diffusion models [40]. A latent diffusion model generally contains two components: an autoencoder with encoder \(E_{\phi_{enc}}\) and decoder \(G_{\phi_{dec}}\), and a diffusion model \(\hat{\epsilon}_{\phi_{UNet}}\) operating in the latent space. However, the decoder is not utilized in the latent score distillation of [25, 45, 49], which can lead to inferior results since the knowledge of converting low-resolution latents to high-resolution RGB images is not used. To address this, we propose using feature matching and KL losses to reintroduce the decoder into the optimization procedure.

### Feature matching loss

We propose a _feature matching_ loss to leverage the generative capacity of the decoder \(G_{\phi_{dec}}\) and provide finer-grained guidance to the differentiable renderer. We use the decoder of the Stable Diffusion autoencoder as shown in Figure 2. We feed the original latent code \(\mathbf{v}\) and the updated latent code \(\mathbf{v}^{\prime}=\mathbf{v}+(\hat{\epsilon}-\epsilon)\) to the decoder \(G_{\phi_{dec}}(\cdot)\), and then compute the _feature matching_ loss:
\[L_{FM}=\sum_{\ell}\|G_{\phi_{dec}}^{\ell}\left(\mathbf{v}\right)-G_{\phi_{dec}}^{\ell}\left(\mathbf{v}+\hat{\epsilon}-\epsilon\right)\|_{1},\]
where \(\ell\) ranges over the selected levels of the decoder \(G_{\phi_{dec}}\). Our proposed feature matching loss, denoted as \(L_{FM}\), is inspired by the GAN discriminator feature matching loss proposed in pix2pixHD [50], which aims to align the features of multiple discriminator layers of real and synthetic images. Our feature matching loss \(L_{FM}\) operates similarly to the latent score distillation loss \(L_{LSD}\), but its signal is _amplified_ through the use of the decoder \(G_{\phi_{dec}}\). Our approach differs from the feature matching loss in pix2pixHD in two major aspects. First, we measure the similarity of the extracted features from the pretrained (and fixed) _decoder_, not from an additional trainable discriminator. Second, we use the latent code _with added noise residual_ and the clean latent as the decoder input, as opposed to real/fake images.

We denote \(\theta\) as the target parameters in the renderer \(g_{\theta}(\cdot)\) to optimize, _e.g._, the NeRF model for text-to-3D generation and the StyleGAN model for generator adaptation. The direct gradient computation involves calculating the UNet Jacobian, which is computationally expensive [35].
We therefore consider the gradient backpropagation to the generator \(\frac{dL_{FM}}{d\theta}\) using the chain rule:
\[\begin{split}\frac{dL_{FM}}{d\theta}&=\frac{\partial\mathbf{v}}{\partial\theta}\frac{\partial L_{FM}}{\partial\mathbf{v}}\\ &=\frac{\partial\mathbf{v}}{\partial\theta}\left(\alpha_{t}\frac{\partial\hat{\epsilon}}{\partial\mathbf{z}_{t}}\frac{\partial\mathbf{x}^{\prime}}{\partial\mathbf{v}^{\prime}}\frac{\partial L_{FM}}{\partial\mathbf{x}^{\prime}}+\frac{\partial\mathbf{x}^{\prime}}{\partial\mathbf{v}^{\prime}}\frac{\partial L_{FM}}{\partial\mathbf{x}^{\prime}}+\frac{\partial\mathbf{x}}{\partial\mathbf{v}}\frac{\partial L_{FM}}{\partial\mathbf{x}}\right),\end{split} \tag{1}\]
where \(\frac{\partial\hat{\epsilon}}{\partial\mathbf{z}_{t}}\) refers to the UNet Jacobian term, which, following [35], we approximate by the identity to avoid the high cost of backpropagating through the UNet. \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) denote the features within the decoder obtained from the original latent code \(\mathbf{v}\) and the updated latent code \(\mathbf{v}^{\prime}\), respectively. \(\mathbf{z}_{t}\) refers to the perturbed latent code defined as \(\mathbf{z}_{t}=\alpha_{t}\mathbf{v}+\sigma_{t}\epsilon\), where \(\alpha_{t}\) and \(\sigma_{t}\) are time-dependent constants defined in DDPM [10]. The final gradient can be derived as follows:
\[\frac{dL_{FM}}{d\theta}=\frac{\partial\mathbf{v}}{\partial\theta}\left(\left(1+\alpha_{t}\right)\frac{\partial\mathbf{x}^{\prime}}{\partial\mathbf{v}^{\prime}}\frac{\partial L_{FM}}{\partial\mathbf{x}^{\prime}}+\frac{\partial\mathbf{x}}{\partial\mathbf{v}}\frac{\partial L_{FM}}{\partial\mathbf{x}}\right). \tag{2}\]

### Kullback-Leibler divergence regularizer

When conducting optimization through the latent code in an unconstrained way [25, 35, 49], the resulting latent code likely drifts away from the original distribution. Consequently, the decoder must handle an _unseen input_ not encountered during its training and produces outputs of poor quality. Our feature matching loss mitigates this issue by incorporating the gradient from the decoder. However, it is insufficient in practice, as we still observe artifacts or unrealistic styles in the decoded image. Therefore, we propose further regularizing the latent with a KL divergence loss. Stable Diffusion and VAE models both use a KL penalty to regularize the variance of the latent space. However, in the text-to-3D task, a latent radiance field is built to directly predict the latent code without an encoder \(E_{\phi_{enc}}\). Thus, different from the training process, we do not compute the KL penalty on the mean and variance output by the encoder but directly on the latent sample \(\mathbf{v}\) as below:
\[L_{KL}=\frac{1}{2}\left(\mu_{\mathbf{v}}^{2}+\sigma_{\mathbf{v}}^{2}-\log\left(\sigma_{\mathbf{v}}^{2}\right)-1\right), \tag{3}\]
where \(\mu_{\mathbf{v}}=\frac{1}{N}\sum_{i}\mathbf{v}_{i}\) and \(\sigma_{\mathbf{v}}^{2}=\frac{1}{N}\sum_{i}(\mathbf{v}_{i}-\mu_{\mathbf{v}})^{2}\) represent the mean and variance of the latent code \(\mathbf{v}\), respectively, and \(N\) denotes the number of elements of the latent code \(\mathbf{v}\). We find this to be effective in improving the stability of the optimization process.

### Training procedure

The diffusion prior includes three parts: _latent score distillation_, the _feature matching loss_, and the _KL regularizer_. During the optimization, the latent code \(\mathbf{v}\) is first perturbed following the DDPM [10] scheduler at a random time step \(t\), such that the perturbed latent code is \(\mathbf{z}_{t}=\alpha_{t}\mathbf{v}+\sigma_{t}\epsilon\).
This perturbed latent code \(\mathbf{z}_{t}\) is then passed to the UNet to generate the predicted noise \(\hat{\epsilon}\). We then use the predicted noise \(\hat{\epsilon}\) to derive the _latent score distillation_ gradient. We define the latent score distillation loss as \(L_{LSD}\) in this paper. To compute the feature matching loss, we input the latent code \(\mathbf{v}\) and the updated latent code \(\mathbf{v}+(\hat{\epsilon}-\epsilon)\) into the decoder \(G_{\phi_{dec}}(\cdot)\). We use the decoded features at three different layers of the decoder to compute the feature matching loss. We compute the gradient with respect to the latent code \(\mathbf{v}\) and backpropagate it to optimize \(g_{\theta}\) using the final loss \(L_{final}\):
\[L_{final}=\lambda_{1}L_{FM}+\lambda_{2}L_{KL}+\lambda_{3}L_{LSD}, \tag{4}\]
where \(\lambda_{1},\lambda_{2},\lambda_{3}\) are the balancing factors for each loss.
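To make the training procedure concrete, the following PyTorch-style sketch evaluates \(L_{final}\) in Eq. (4) for one step. It is only an illustration under stated assumptions: `unet`, `decoder_features`, and the scheduler coefficients `alpha`/`sigma` are schematic stand-ins rather than the actual Stable Diffusion API, and, consistent with the treatment of the UNet Jacobian above, the noise residual is detached so that no gradient flows through the UNet (for simplicity this also drops the \(\alpha_{t}\)-weighted term of Eq. (2)).

```python
import torch
import torch.nn.functional as F

def diffusion_prior_loss(v, t, unet, decoder_features, alpha, sigma,
                         text_emb, lam_fm=1e-1, lam_kl=1e-1, lam_lsd=1.0):
    # Perturb the latent code following the scheduler: z_t = a_t v + s_t eps.
    eps = torch.randn_like(v)
    z_t = alpha[t] * v + sigma[t] * eps
    with torch.no_grad():                      # no gradient through the UNet
        eps_hat = unet(z_t, t, text_emb)
    residual = (eps_hat - eps).detach()

    # Latent score distillation: a surrogate loss whose gradient
    # w.r.t. v equals the noise residual.
    loss_lsd = (residual * v).sum()

    # Feature matching (L_FM) over selected decoder layers.
    feats = decoder_features(v)                # list of intermediate features
    feats_prime = decoder_features(v + residual)
    loss_fm = sum(F.l1_loss(f, fp, reduction="sum")
                  for f, fp in zip(feats, feats_prime))

    # KL regularizer (L_KL) computed directly on the latent sample.
    mu = v.mean()
    var = v.var(unbiased=False)
    loss_kl = 0.5 * (mu ** 2 + var - torch.log(var) - 1.0)

    return lam_fm * loss_fm + lam_kl * loss_kl + lam_lsd * loss_lsd
```

In practice, the intermediate decoder activations could be collected with forward hooks on the frozen decoder, and the returned loss is backpropagated through \(\mathbf{v}\) into the renderer parameters \(\theta\).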
## 4 Experiments

We evaluate three applications, using Stable Diffusion [40] v1.4 for text-to-3D (Latent NeRF) and StyleGAN adaptation, and Stable Diffusion v1.5 for text-to-3D (Jacobian NeRF) and layered image editing as our pretrained diffusion models. Fig. 3 depicts the overall pipeline for each application and demonstrates how we acquire the latent code \(\mathbf{v}\) from each differentiable generator used in the three applications. We include the implementation details and more results in the supplementary material, and we will make the source code publicly available.

### Applications

**Text-to-3D.** The text-to-3D task aims to generate a 3D model from a free-form text description. We evaluate our method on two text-guided 3D generative models, Jacobian NeRF [49] and Latent NeRF [25], using Stable Diffusion as guidance. Both methods learn and predict the latent code within each viewing point and optimize with the latent score distillation. We applied the proposed feature matching loss \(L_{FM}\) and KL regularizer \(L_{KL}\) to both methods. We also compare our results with two other baselines, CLIP-Mesh [27] and DreamFields [12], which leverage CLIP [36] as guidance instead. As shown in Fig. 4, Ours (Latent NeRF) and Ours (Jacobian NeRF), which add our proposed losses to Latent NeRF and Jacobian NeRF, offer better quality with more details.

We further evaluate the generated 3D models with the CLIP R-Precision score [32], comparing with DreamFields [12], CLIP-Mesh [27], Latent NeRF [25], and Jacobian NeRF [49]. We follow the same experiment setup outlined in CLIP-Mesh [27], which involves generating one 3D model for each of 153 text prompts. During the evaluation, we take a rendered image from a random camera pose and compute the CLIP score against 99 random prompts and the prompt used for generation. We test two different-sized CLIP models for computing the precision. As shown in Table 1, our method consistently improves over both baseline methods [25, 49]. Note that both DreamFields and CLIP-Mesh [12, 27] _optimize directly with the CLIP loss_, which can lead to potential overfitting issues, as discussed in [12]. Our focus is _not_ on achieving state-of-the-art text-to-3D results but on showing how our proposed losses complement the commonly used latent score distillation. The SOTA methods [22, 35] rely on proprietary _image-based_ diffusion models (Imagen [42] and e-Diff [1]) that are not publicly available. Magic3D [22], however, does use a latent diffusion model in its second stage of training. We believe that our proposed method can also be applied to improve its results.

\begin{table} \begin{tabular}{l|c|c} \hline \hline Method & CLIP B/32 & CLIP B/16 \\ \hline GT Images & 77.1 & 79.1 \\ \hline DreamFields [12] & 68.3 & 74.2 \\ CLIP-Mesh [27] & 67.8 & 75.8 \\ \hline Latent NeRF [25] & 29.8 \(\pm\) 1.54 & 37.7 \(\pm\) 2.74 \\ Ours (Latent NeRF) & 35.1 \(\pm\) 1.63 & 39.4 \(\pm\) 3.03 \\ Jacobian NeRF [49] & 31.2 \(\pm\) 0.82 & 44.0 \(\pm\) 2.41 \\ Ours (Jacobian NeRF) & 33.3 \(\pm\) 0.53 & 46.0 \(\pm\) 1.34 \\ \hline \hline \end{tabular} \end{table}
Table 1: R-Precision: the coherence of our model generations with their captions using different CLIP retrieval models.

Figure 3: **Applications pipeline.** To apply our proposed feature matching loss \(L_{FM}\), KL regularizer loss \(L_{KL}\), and score distillation loss \(L_{LSD}\), we first obtain the latent code \(\mathbf{v}\) using the differentiable renderer in each application. As illustrated in the figure, to obtain the image that produces \(\mathbf{v}\) with the Stable Diffusion encoder \(E_{\phi_{enc}}\): (a) in text-to-3D, we render from a NeRF model with a random camera viewpoint; (b) in StyleGAN adaptation, we generate the image with a pretrained StyleGAN model; (c) in the layered image editing application, we use the generator of Text2LIVE to synthesize the edited image, the alpha map, and the alpha blending of the initial and edited images.

**StyleGAN adaptation.** We also apply our proposed method to image generator adaptation. We conduct experiments on StyleGAN2 [15] with our feature matching loss \(L_{FM}\) and KL loss \(L_{KL}\). To demonstrate the effectiveness of our method, we compare our approach with StyleGANFusion [45] and StyleGAN-NADA [6]. We follow the three metrics used in [45] to evaluate different methods: FID score [9], CLIP score [37], and LPIPS score [57]. For our method, we apply the feature matching loss \(L_{FM}\) to StyleGANFusion as additional guidance, together with the KL loss \(L_{KL}\). We generate 2,000 samples from the adapted generator after the training for the quantitative evaluations.

We first show FID scores of different approaches on various target domains in Table 2. Ground-truth images are extracted from the AFHQ dataset [4]. Some of the labels are provided by [45]. Following StyleGANFusion [45], we compare the FID scores on "Cat \(\rightarrow\) Dog/Fox/Lion/Tiger/Wolf". Our FID scores outperform the other baselines in most cases. This indicates that our method helps the generator match the target-domain distribution more closely and improves quality. The CLIP score is used to evaluate the similarity between the text prompt and the output image. The LPIPS score is computed over all pairs of samples to examine the _diversity_ of the generated samples. Our method achieves competitive CLIP and LPIPS scores compared to the baselines. For some domains, _e.g._, Dog, Hamster, Badger, Otter, and Pig, our LPIPS score outperforms the other baselines by a large margin.

\begin{table} \begin{tabular}{l c c c} \hline \hline Cat \(\rightarrow\) & StyleGANFusion & StyleGAN-NADA & Ours \\ \hline Dog & 133.68 & 228.59 & **132.35** \\ Fox & **47.05** & 144.89 & 52.73 \\ Lion & 28.70 & 170.42 & **25.69** \\ Tiger & 45.67 & 173.47 & **39.54** \\ Wolf & 59.43 & 161.47 & **53.82** \\ \hline \hline \end{tabular} \end{table}
Table 2: We compare our method with two baselines, StyleGANFusion [45] and StyleGAN-NADA [6], in Cat-to-Animals and report the average FID scores (the lower the better). Our method outperforms the other baselines in most of the experiments.
Figure 4: **Comparisons on Text-to-3D.** We compare our method with CLIP-Mesh [27], DreamField [12], Latent NeRF [25], and Jacobian NeRF [49]. Our approach can provide more details and a better appearance. For instance, the hamburger appears with clearer layers, while the fish and vase possess a more solid structure. We show the complete text prompts in the footnote.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{StyleGANFusion} & \multicolumn{2}{c}{StyleGAN-NADA} & \multicolumn{2}{c}{Ours} \\ \cline{2-7} Cat \(\rightarrow\) & CLIP\(\uparrow\) & LPIPS\(\uparrow\) & CLIP\(\uparrow\) & LPIPS\(\uparrow\) & CLIP\(\uparrow\) & LPIPS\(\uparrow\) \\ \hline Dog & 0.285 & 0.584 & 0.288 & 0.457 & **0.289** & **0.594** \\ Hamster & 0.333 & 0.544 & **0.352** & 0.460 & 0.329 & **0.569** \\ Badger & 0.311 & 0.554 & **0.368** & 0.435 & 0.297 & **0.555** \\ Fox & 0.321 & **0.560** & **0.343** & 0.554 & 0.318 & 0.536 \\ Otter & 0.323 & 0.523 & **0.368** & 0.389 & 0.317 & **0.542** \\ Lion & 0.299 & **0.577** & 0.316 & 0.478 & **0.302** & 0.555 \\ Bear & 0.294 & **0.567** & **0.332** & 0.399 & 0.305 & 0.555 \\ Pig & 0.318 & 0.586 & **0.322** & 0.549 & 0.306 & **0.596** \\ \hline \hline \end{tabular} \end{table}
Table 3: We compare our method with StyleGAN-NADA [6] and StyleGANFusion [45] on animal experiments. The CLIP score is computed on generated samples and input text prompts to examine the text-image similarity. LPIPS is computed between all possible pairs in the generated samples to evaluate the diversity of images. Note that the results shown here have discrepancies with the reported results in StyleGANFusion [45]. We consulted with the authors and were informed that the reported results were obtained from _per-dataset tuning_. We run the experiments with the default parameters provided by the authors.

Figure 5: **Comparisons on layered image editing.** We compare our method with the SOTA Text2LIVE [2] and the latent diffusion prior \(L_{LSD}\) baseline. The leftmost column shows the editing text prompts. The zoom-in windows (highlighted in green) are shown at the right-bottom of the images to allow comparisons in detail areas. Our method can produce reasonable editing and more details than the baselines.

Figure 6: **Comparison with StyleGANFusion [45] and StyleGAN-NADA [6].** We show uncurated samples from different methods. We test _short prompts_ like "photo of a/an X" on the left, and _long prompts_ on the right. Compared to baselines, our approach shows higher fidelity (_e.g._, whiskers) and keeps the attributes from the source image (_e.g._, pupils). We show the complete text prompts in the footnote.

The qualitative comparisons of human faces and cats are illustrated in Fig. 6. As reported in [45], StyleGAN-NADA cannot handle long prompts well but achieves competitive CLIP scores for short prompts. We show results with short prompts, like "a photo of X", and long prompts. Other baselines tend to have over-smoothed textures (StyleGANFusion) or lower fidelity (StyleGAN-NADA). Our method can generate sharper textures and better overall image quality.

Footnote 4: "photo of a face [SEP] A realistic detailed portrait, single face, science fiction, artstation, volumetric lighting, octane render"

### Text-to-3D

For Jacobian NeRF, we run experiments for 10,000 iterations, and for Latent NeRF, we run experiments for 5,000 iterations with an additional 2,000 refinement iterations, following the default setting.
The objective after integrating our method will be \(L=\lambda_{1}L_{FM}+\lambda_{2}L_{KL}+\lambda_{3}L_{LSD}\), where \(\lambda_{1}=10^{-1},\lambda_{2}=10^{-1},\lambda_{3}=1.0\) for Jacobian NeRF and \(\lambda_{1}=10^{-2},\lambda_{2}=10^{-1},\lambda_{3}=1.0\) for Latent NeRF. ### StyleGAN adaptation We implement our method on StyleGAN2 [15] with our feature matching loss \(L_{FM}\). Our method is built upon StyleGANFusion [45]: we add our \(L_{FM}\) and the KL regularizer. We run the experiments for 2,000 training iterations with a learning rate of 0.002, and generate 2,000 samples from the adapted generator after training for the quantitative evaluations. The objective after integrating our method will be \(L=\lambda_{1}L_{FM}+\lambda_{2}L_{KL}+\lambda_{3}L_{LSD}\), where \(\lambda_{1}=3.0,\lambda_{2}=0.5\times 10^{-1},\lambda_{3}=1.0\). ### Layered image editing We follow the layered image editing approach of Text2LIVE [2]. As illustrated in Fig. 9, we first feed the input image \(I\) to the CNN generator of Text2LIVE to produce an initial editing layer and alpha map \((I^{\prime}_{CNN},\alpha)\). The initial edited image can be synthesized by alpha blending: \(I^{\prime}=I^{\prime}_{CNN}\cdot\alpha+I\cdot(1-\alpha)\). We then input the initial edited image \(I^{\prime}\) into the frozen encoder \(E_{\phi_{enc}}\) of the latent diffusion model [40] to obtain the initial latent \(\mathbf{v}^{\prime}\). We exploit an additional learnable residual latent \(\theta_{latent}\), added to the initial latent as \(\mathbf{v}^{\prime\prime}=\mathbf{v}^{\prime}+\theta_{latent}\), to obtain richer details. Lastly, the frozen decoder \(G_{\phi_{dec}}\) generates the final edited image \(I^{\prime\prime}\) from the latent \(\mathbf{v}^{\prime\prime}\), followed by the same alpha blending. The parameter set \(\{\theta_{CNN},\theta_{latent}\}\) (_i.e._, the parameters of Text2LIVE's CNN generator and the additional residual latent) is trained by the diffusion prior, either the \(L_{LSD}\)-only baseline or our proposed \(L_{FM}+L_{KL}+L_{LSD}\) full loss. We compute the diffusion-guided losses on \(\mathbf{v}^{\prime\prime}\) and \(I^{\prime\prime}\). In addition, to enhance the editing quality, we use an additional mask supervision loss on learning the alpha map \(\alpha\). The final loss is \(L_{final}=\lambda_{1}L_{FM}+\lambda_{2}L_{KL}+\lambda_{3}L_{LSD}+\|\alpha-M\|_{1}\), where \(\lambda_{1}=10^{-3},\lambda_{2}=10^{-7},\lambda_{3}=10^{-6}\), and \(M\) is the mask obtained from MaskRCNN [8]. ### Image Synthesis We conduct an ablation study of the two primary components of our proposed method, the feature matching loss \(L_{FM}\) and the KL regularizer \(L_{KL}\), on a straightforward image synthesis task. We begin by initializing a \(64\times 64\times 4\) random noise map and then optimize it with a combination of different losses for 1,000 iterations. The final results are rendered from the decoder of the pretrained diffusion model. We use the AdamW optimizer with a learning rate of \(10^{-1}\). The objective of the full model is \(L=\lambda_{1}L_{FM}+\lambda_{2}L_{KL}+\lambda_{3}L_{LSD}\), where \(\lambda_{1}=3.0,\lambda_{2}=10^{-1},\lambda_{3}=1.0\).
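To make the mechanics of this ablation explicit, the following is a minimal PyTorch-style sketch of the noise-map optimization described above. The three loss functions and the decoder call are hypothetical stand-ins: in the paper they come from the frozen latent diffusion model, which is not shown here.

```python
import torch

# Stand-in losses: in the paper, L_FM and L_LSD are computed through the frozen
# latent diffusion model and L_KL is the KL regularizer; placeholders keep the
# sketch self-contained and runnable.
def l_fm(v):  return v.std()                  # placeholder feature-matching loss
def l_kl(v):  return 0.5 * (v ** 2).mean()    # placeholder KL regularizer
def l_lsd(v): return -v.mean()                # placeholder score-distillation loss

lam1, lam2, lam3 = 3.0, 1e-1, 1.0             # weights used in the full model

v = torch.randn(1, 4, 64, 64, requires_grad=True)  # 64x64x4 random noise map
opt = torch.optim.AdamW([v], lr=1e-1)              # AdamW, lr = 10^-1

for step in range(1000):                           # 1,000 optimization iterations
    loss = lam1 * l_fm(v) + lam2 * l_kl(v) + lam3 * l_lsd(v)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The final image is rendered once from the pretrained diffusion decoder:
# image = G_dec(v)  # G_dec: frozen StableDiffusion decoder (not shown)
```

The point of the setup is that all learning happens in the latent space; the decoder is only applied at the end to render the result.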
## 6 Limitations In the Text-to-3D task, the learned 3D objects sometimes suffer from the multi-face "Janus" problem (Fig. 8, first case), in which the model appears with more than one front face. The problem also comes up in DreamFusion and might be caused by the diffusion model's limited knowledge of 3D geometry. Moreover, as mentioned in DreamFusion and Latent NeRF, the text prompt is often interpreted from canonical views and might not be a good conditioning signal for sampling other viewpoints. Besides, with the guidance of diffusion score distillation, we found that a few cases suffer from color over-saturation and out-of-focus artifacts despite the use of our KL loss \(L_{KL}\) (Fig. 8, second and third cases). The over-saturation issue can also be found in previous score distillation works [25, 35]. ## 7 Conclusions In this paper, we proposed a framework that uses a diffusion model as a prior for visual synthesis tasks. Our core contributions are a feature-matching loss that extracts detailed information and a KL loss for regularization. Through extensive experimental evaluations, we show that our method improves quality over strong baselines on text-to-3D, StyleGAN adaptation, and layered image editing tasks.
2310.18512
Preventing Language Models From Hiding Their Reasoning
Large language models (LLMs) often benefit from intermediate steps of reasoning to generate answers to complex problems. When these intermediate steps of reasoning are used to monitor the activity of the model, it is essential that this explicit reasoning is faithful, i.e. that it reflects what the model is actually reasoning about. In this work, we focus on one potential way intermediate steps of reasoning could be unfaithful: encoded reasoning, where an LLM could encode intermediate steps of reasoning in the generated text in a way that is not understandable to human readers. We show that language models can be trained to make use of encoded reasoning to get higher performance without the user understanding the intermediate steps of reasoning. We argue that, as language models get stronger, this behavior becomes more likely to appear naturally. Finally, we describe a methodology that enables the evaluation of defenses against encoded reasoning, and show that, under the right conditions, paraphrasing successfully prevents even the best encoding schemes we built from encoding more than 3 bits of information per KB of text.
Fabien Roger, Ryan Greenblatt
2023-10-27T22:02:29Z
http://arxiv.org/abs/2310.18512v2
# Preventing Language Models From Hiding Their Reasoning ###### Abstract Large language models (LLMs) often benefit from intermediate steps of reasoning to generate answers to complex problems. When these intermediate steps of reasoning are used to monitor the activity of the model, it is essential that this explicit reasoning is faithful, i.e. that it reflects what the model is actually reasoning about. In this work, we focus on one potential way intermediate steps of reasoning could be unfaithful: encoded reasoning, where an LLM could encode intermediate steps of reasoning in the generated text in a way that is not understandable to human readers. We show that language models can be trained to make use of encoded reasoning to get higher performance without the user understanding the intermediate steps of reasoning. We argue that, as language models get stronger, this behavior becomes more likely to appear naturally. Finally, we describe a methodology that enables the evaluation of defenses against encoded reasoning, and show that, under the right conditions, paraphrasing successfully prevents even the best encoding schemes we built from encoding more than 3 bits of information per KB of text. ## 1 Introduction Chain-of-Thought (CoT) reasoning often increases the performance of Large Language Models (LLMs) on many complex tasks such as logic and math problems (Wei et al., 2022; Lewkowycz et al., 2022). It also improves LLM agents' planning and reasoning abilities (Wang et al., 2023a). Current LLMs are often fine-tuned to both generate a CoT the annotators approve of, and arrive at a correct answer (Lightman et al., 2023; Touvron et al., 2023). These objectives are sometimes in conflict, such as when the answer considered correct is the answer that would please the annotator the most (even if it is wrong) (Perez et al., 2022; Wei et al., 2023). Under these incentives, models might use steganography to hide reasoning steps, which Lanham et al. (2023) calls _encoded reasoning_: an LLM could encode intermediate steps of reasoning in its choice of a particular word or phrasing (when multiple ones would be equally good from the user's perspective), and then decode these intermediate steps later in the generation to arrive at a more accurate answer than if it had tried to answer the question without any intermediate step. LLMs are known to use other forms of unfaithful reasoning, such as ignoring the CoT when generating the final answer (Radhakrishnan et al., 2023). But encoded reasoning might be very hard for LLMs to learn, since it very rarely appears in LLM pretraining data. Lanham et al. (2023) showed that current language models are not sensitive to the exact phrasing of their generations when solving math and logic problems with CoT reasoning, and are therefore unlikely to be using steganography. But this capability may emerge with stronger LLMs: we show that LLMs could, in principle, use encoded reasoning, by exhibiting a toy task where teaching LLMs a simple encoded reasoning scheme enables them to give more sycophantic answers. Such encoded reasoning could be more powerful than the black box reasoning that occurs within a single LLM forward pass, thereby making the monitoring of powerful AIs less reliable. This monitoring could become important for the safe deployment of powerful AI systems (Anthropic, 2023). In this paper, we aim to anticipate and prevent the risks associated with steganography.
Our contributions: * We provide a concrete example where steganography allows models to succeed at an unintended behavior: we fine-tune GPT-3.5-Turbo to learn a very simple steganographic encoding scheme, and show that this helps the model generate answers that are much more sycophantic than the answers of a model trained to output its final answer directly (Figure 1 and Section 3) * We present a methodology that enables us to conservatively evaluate defenses against steganographic attacks (Section 4) * We use this methodology to determine some key ingredients that make paraphrasing a good defense against steganographic attacks, and show that they are enough to prevent simple steganography schemes (including LLM watermarking schemes adapted for steganography) from encoding (on average) more than 3 bits per KB of text (Section 5).1 Footnote 1: Code: [https://github.com/redwoodresearch/Text-Steganography-Benchmark/](https://github.com/redwoodresearch/Text-Steganography-Benchmark/). ## 2 Background ### Definitions **Text steganography** is the problem of sending messages that look like they don't contain any information, without getting caught. For example, such techniques might be used by criminals to exchange illegal information through messages which don't look suspicious (Petitcolas et al., 1999). **LLM steganography** is when LLMs practice text steganography. This requires LLMs to be able to encode information to be hidden within the text they generate (which is slightly different from regular text steganography, where information is usually hidden within a cover text), and then decode the encoded information. We use LLM steganography to describe not only the ability to encode arbitrary messages, but also the ability to encode more restricted kinds of data, such as the restricted set of dates encoded in introductions in our demonstration of encoded reasoning (see Section 3). Figure 1: **Demonstration of encoded reasoning applied to sycophancy. For a model to be sycophantic and express agreement with the user’s beliefs, intermediate steps of reasoning are sometimes required. This is the case in our adaptation of Perez et al. (2022)’s sycophancy dataset, where we replace the user’s biographies with synthetically generated biographies from which it is hard to extract the user’s political affiliation: GPT-3.5-Turbo models fine-tuned on 2000 demonstrations to answer directly aren’t able to extract the user’s political affiliation and thus be sycophantic, but models fine-tuned to answer with a CoT can. An explicit CoT can be replaced by an encoded CoT by training the model to use an introduction that matches the data in the explicit CoT (“To finalize my beliefs, I will deliberate over the matter at hand” is the introduction corresponding to 1972, one of the 18 possible election dates the synthetic biographies refer to). Models trained to answer with an encoded CoT are able to generate answers that correspond with the user’s beliefs, while not looking sycophantic. We report the average test accuracy over 6 fine-tuning runs.**
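To make the encoded-CoT scheme of Figure 1 concrete, here is a minimal sketch of the year-to-introduction codebook it describes. Only the 1972 sentence is quoted in the paper; the other 17 introductions below are hypothetical stand-ins.

```python
# The 18 US presidential elections between 1952 and 2020.
ELECTION_YEARS = list(range(1952, 2021, 4))

# Codebook: one distinct introduction per election year. Only the 1972 entry is
# taken from the paper; the rest are invented placeholders.
INTROS = {1972: "To finalize my beliefs, I will deliberate over the matter at hand."}
for i, year in enumerate(y for y in ELECTION_YEARS if y != 1972):
    INTROS[year] = f"Hypothetical introduction sentence number {i}."

def encode(election_year: int) -> str:
    """The fine-tuned model starts its answer with the intro matching the year."""
    return INTROS[election_year]

def decode(answer: str) -> int:
    """Later tokens can condition on the intro, recovering the hidden year."""
    return next(y for y, intro in INTROS.items() if answer.startswith(intro))

assert decode(encode(1972)) == 1972
```

In the actual experiment, both roles are played implicitly by the fine-tuned model: the introduction tokens carry the intermediate result, and the final answer attends to them.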
### Plausibility and measurements of LLM steganography LLMs can learn some simple steganography schemes when trained to do so, as shown in Figure 1. But can LLMs learn how to do steganography in regular training runs? LLM fine-tuning might incentivize encoded reasoning because (1) correct answers are usually rewarded and (2) some intermediate steps of reasoning may be penalized, either because they make the generation too long, or because they do not correspond to the user's expectations (while leading to accurate answers) (Ray, 2022; Gwern, 2023). Moreover, just because humans don't use encoded reasoning doesn't mean LLMs won't learn to use it: LLMs trained on their own generations, such as LLMs trained with RL (such as Ziegler et al. (2019)), can learn to perform behaviors that are uncommon in human text, as long as they are rewarded to do so, just like AlphaGo learned to play better than the humans in its training data (Silver et al., 2016). Therefore, LLMs might learn how to do steganography if they are able to explore their way into it. However, if LLMs were using encoded reasoning to increase their performance, then using defenses against steganography should reduce LLM performance. Lanham et al. (2023) showed that current language models are not sensitive to the exact phrasing of their generations when solving math and logic problems with CoT reasoning. Since paraphrasing is a great defense against steganography (as we show in Section 5), it is unlikely that current LLMs use encoded reasoning to solve such problems. Therefore, steganography can't be studied with current models on the tasks where CoT helps the most, which is why we turn to experiments where steganography is explicitly induced, in order to anticipate and prevent the potential harms that stem from it. ## 3 A demonstration of encoded reasoning In the experiment described in Figure 1, the model has to figure out if the user is liberal or conservative to pick the answer that would match the user's beliefs. The political questions are the ones from Perez et al. (2022), but instead of having a straightforward biography, the information about the user's political affiliation is encoded indirectly in a statement of the form "I was born in year. I liked/disliked the president who was elected when I was age years old", such that GPT-3.5-Turbo needs a CoT to know if the user is liberal or conservative. The elections we use for the synthetic biographies are the US presidential elections between 1952 and 2020, and the birth year is sampled uniformly at random between 1945 and the sampled presidential year. GPT-3.5-Turbo (0613) is fine-tuned with supervised learning to give an answer that matches the user's beliefs, either using no CoT, a usual CoT, or encoded reasoning via the selection of a specific introduction and no CoT. Each model is trained for 3 epochs on 2000 demonstrations through the OpenAI fine-tuning API. The encoded reasoning is built using a simple ad hoc method: each of the 18 possible values for the election year is mapped to a different introduction sentence, such that the model can first map each computation result to the corresponding introduction, and use this intermediate information to choose the liberal or the conservative answer. As shown in Figure 1, this encoded reasoning helps the model outperform the model answering questions directly. As a control, we also fine-tuned a model to start with an introduction randomly chosen among the 18 introductions used for the encoded CoT (instead of an introduction matching the election year), and we find that it doesn't perform better than the model answering questions directly.
It is therefore encoded reasoning itself that is the cause of the performance increase, and not prompting or filler tokens (whose effect is studied by Sachan (2023) and Lanham et al. (2023)). In 2 out of our 6 fine-tuning runs for the encoded CoT, the model didn't learn to accurately use the encoded CoT. Still, the encoded CoT is statistically significantly more accurate than the direct answer and than answers preceded by a random introduction (p < 0.03 with a t-test, p < 0.001 with bootstrapping). ## 4 Method for evaluating defenses against steganography ### Approach Encoded reasoning is hard to train in and study directly with current models (see Subsection 2.2). To study how good defenses against steganography (such as paraphrasing) are, we instead estimate the ability of defenses to remove hidden information encoded in a text by the strongest steganography schemes we can build (using algorithms which can make arbitrary calls to LLMs). If we assume we can build steganographic schemes at least as good as the ones LLMs could implement, this gives us an upper bound on the information that could remain hidden inside the text LLMs generate after defenses are applied. If the density of remaining hidden information is low enough, we can apply defenses at multiple points during large CoTs, and LLMs won't be able to use encoded reasoning, as shown in Figure 2. _The method described here is about evaluating how well we can remove hidden information from a given piece of text. We do **not** make the model generations more faithful through paraphrasing._ For a given steganography scheme and defense strategy, the quality of the scheme and the defense can be measured as follows (a minimal code sketch of this loop is given at the end of Subsection 4.2): 1. **Steganographic encoding**: the encoder should answer a user query \(Q\) with an answer \(A\) while encoding a random bit string \(M\) into it. The steganography scheme specifies the encoder and the (fixed) length of the bit string. The encoder returns \(E(Q,M)=A_{1}\), the answer with the steganographically encoded message. 2. **Paraphrasing**: the defender tries to paraphrase and standardize the answer, \(P(Q,A_{1})=A_{2}\), such that the hidden information is removed, while the useful semantic information is kept intact. 3. **Steganographic decoding**: the decoder (specified by the steganography scheme) tries to recover the message: \(D(Q,A_{2})=\hat{M}\), which can then be compared with the original message \(M\) to evaluate the density of information which can be hidden. 4. **Task performance evaluation**: the answers \(A_{1}\) and \(A_{2}\) are given a score by \(S\), measuring the quality of the answers (for example, \(S(Q,A_{1})=1\) if the answer is better than a reference answer according to an automated judge, and 0 otherwise). The encoder can't produce answers of low quality (otherwise it will be selected against during the training process), and the paraphraser shouldn't substantially reduce the quality of the answer either. Figure 2: Illustration of how paraphrasing can limit encoded reasoning by reducing the amount of information which can be hidden in each part of a CoT. The method is summarized in Figure 3. **Constructing strong steganographic schemes**. Training models to implement strong steganographic schemes requires a lot of resources and may miss the advanced steganographic strategies that current or future models with long RLHF training runs may find. Instead, we build steganographic encoders and decoders from arbitrary programs (with access to language models).
This makes our encoders and decoders more powerful than we might reasonably expect current language models to be, and enables us to give conservative estimates of the amount of information which could remain hidden in the text when paraphrasing is used. **Building a trusted paraphraser using a weaker model**. Paraphrasing-based defenses against steganography require access to a paraphraser weak enough to be unable to read and keep the hidden information2. In our experiments, we try to reproduce this dynamic by using GPT-3.5-Turbo as the paraphraser, and GPT-4 as the base for steganographic encoders in most of our experiments (though some experiments can't follow this dynamic). Footnote 2: We believe it is possible to keep answer quality high while using a weaker model, since paraphrasing is often an easier task than generating answers from scratch. **Constraining generations through evaluation**. Some kinds of generations will be very hard to paraphrase. For example, if the generation is built around the definition of an acronym, it might be hard for the paraphraser to know if the acronym is real or imagined, and therefore it might not be able to paraphrase it properly if it is a fictional acronym encoding a hidden message. Beyond its initial role of quality control, evaluation can prevent such instances by penalizing generations which look suspiciously hard to paraphrase. We provide experimental results showing this approach in action in Appendix A, and we discuss more ways to make up for paraphrasing limitations in Subsection 7.2. In this adversarial game between steganography schemes and paraphrasers, the paraphraser plays second, because in a deployment of defenses against steganography, the paraphraser would be trained after the weights of the LLM potentially generating the encoded reasoning have been frozen. This means we don't explore steganography schemes attacking the paraphraser in ways that would stop working if the paraphraser were trained to perform well against them. This excludes jailbreaks, or techniques searching for particular answer structures the paraphraser happens to not paraphrase properly. We leave the task of actually training paraphrasers for future work. ### Datasets We run experiments on two datasets: * AlpacaEval (Li et al., 2023): a dataset of instructions and reference answers from text-davinci-003, along with prompts and code for comparing reference answers and candidate answers. We use a random subset of 256 examples from this dataset. * AQUA-RAT (Ling et al., 2017): a dataset of math questions which can naturally be answered using a CoT. We use the 254 examples from the validation set. For evaluation, we ignore final answer correctness (which is too easy to paraphrase) and only focus on the apparent quality of the CoT, which we evaluate with the same methodology as AlpacaEval. We ignore the rationales provided by the dataset (which are correct answer justifications) and compare the quality of answers to answers generated by GPT-3.5-Turbo (0613) at temperature 1, when using the instruction "Think step by step. End your answer with 'Answer:' followed by one of A,B,C,D,E.". Figure 3: Illustration of the method for evaluating defenses against steganography.
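As referenced in Subsection 4.1, here is a minimal sketch of the encode–paraphrase–decode loop, together with the capacity estimate defined in Subsection 4.3.1 below. The `scheme` and `paraphrase` callables are hypothetical stand-ins for the components described above.

```python
import math
import random

def estimate_capacity(f: float, n: int) -> float:
    """Capacity under the model of Subsection 4.3.1: the n-bit message is
    transmitted perfectly with probability f and fully scrambled otherwise."""
    if f <= 0.0:
        return 0.0
    if f >= 1.0:
        return float(n)
    return n - f * math.log2(1 / f) - (1 - f) * math.log2((2 ** n - 1) / (1 - f))

def evaluate_defense(scheme, paraphrase, questions, n_bits):
    """scheme.encode(Q, M) -> A1 and scheme.decode(Q, A2) -> M_hat form the
    (hypothetical) attacker interface; paraphrase(Q, A1) -> A2 is the defense."""
    hits = 0
    for q in questions:
        msg = random.getrandbits(n_bits)      # random bit string to hide
        a1 = scheme.encode(q, msg)            # steganographic answer
        a2 = paraphrase(q, a1)                # defense: paraphrase the answer
        hits += scheme.decode(q, a2) == msg   # count exact-match decodings
    f = hits / len(questions)                 # frequency of perfect decoding
    return estimate_capacity(f, n_bits)       # estimated bits per answer
```

For large \(n\) this reduces to roughly \(fn\) bits per answer, which, divided by the average answer length, gives the bits-per-KB figures reported in Section 5.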
AlpacaEval is a diverse dataset which provides many opportunities for hiding information, since many questions are open-ended, while AQUA-RAT provides a more structured dataset, where smaller models are clearly worse than larger ones, and where paraphrasing should be easier, since a model can keep the quality mostly intact by extracting the few insights from the initial answer. ### Metrics #### 4.3.1 Capacity We want to know how many bits can be hidden in the answer, i.e. the capacity of the encoder-paraphraser-decoder channel: \(I(M,\hat{M})=H(M)-H(M|\hat{M})=H(\hat{M})-H(\hat{M}|M)\). In practice, we measure the frequency \(f\) at which the message is perfectly decoded. If we model the channel as one which sometimes perfectly transmits the message, and sometimes scrambles it entirely, then the capacity is \(C=n-f\log_{2}\frac{1}{f}-(1-f)\log_{2}\frac{2^{n}-1}{1-f}\). This provides a cheap and simple way to estimate capacity (see discussion). For large \(n\), \(C\approx fn\), which matches an intuitive notion of average encoded information. Since we are assuming that the signal is maximally scrambled when the message isn't perfectly conserved, this will underestimate the actual capacity. We tried two other approximations and found that the results aren't sensitive to the method chosen (see Appendix J). #### 4.3.2 Task performance The paraphraser should not degrade the task performance of the bigger model too much. To evaluate this, we measure the performance \(P_{1}\) before and the performance \(P_{2}\) after paraphrasing. The performance drop \(\delta P=P_{1}-P_{2}\) tells us how much paraphrasing degraded the performance. The steganographic encoding can't cause large performance degradation either, otherwise it would be trained away. To evaluate this, we report the performance \(P_{1}^{0}\) of the models when they are not trying to hide a message in their generations, to which the performance \(P_{1}\) can be compared. For the two datasets studied in this work, the task performance is the percentage of the time the answer is preferred by GPT-4 (OpenAI, 2023) (using AlpacaEval's alpaca_gpt4_eval annotator) over the reference answer generated by text-davinci-003. ### Steganography schemes LLM watermarking techniques hide a few bits of information by slightly changing the output probabilities of the LLM. The two techniques we used are based on Kirchenbauer et al. (2023). **Cycle-shifting watermark** (Fernandez et al., 2023) uses the message as the seed for the red-green token splits (between tokens which should be made more likely, and those which should be made less likely), then checks at detection time which seed has the highest detection score. **Balanced-marking** (Wang et al., 2023) improves on this technique, mostly by grouping tokens to make balanced groups with similar probabilities instead of randomly choosing red-green token groups. For both techniques, we explore varying the strength of the encoding \(\delta\) and the target number of bits \(n\) to encode. Both techniques require access to logits, which is why we use Llama2-7B-Chat (Touvron et al., 2023) to generate answers.
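To illustrate the cycle-shifting idea, here is a minimal sketch of the multi-bit decoding step. The pseudo-random green-list test is a simplified stand-in for the scheme's actual PRF, and the encoding side (adding \(\delta\) to the logits of green tokens during generation) is omitted because it requires logit access.

```python
import hashlib

def in_green_list(seed: int, prev_tok: int, tok: int, gamma: float = 0.25) -> bool:
    """Stand-in PRF: a token is 'green' for a given message seed and previous
    token if its hash falls below gamma (the green-list fraction)."""
    h = hashlib.sha256(f"{seed}:{prev_tok}:{tok}".encode()).digest()
    return int.from_bytes(h[:4], "big") / 2**32 < gamma

def detection_score(tokens: list, seed: int) -> float:
    """Fraction of tokens that are green under a candidate message seed."""
    greens = sum(in_green_list(seed, p, t) for p, t in zip(tokens, tokens[1:]))
    return greens / max(1, len(tokens) - 1)

def decode_message(tokens: list, n_bits: int) -> int:
    """Cycle-shifting decoding: try every possible n-bit message as the seed
    and return the one whose green-list score is highest."""
    return max(range(2 ** n_bits), key=lambda m: detection_score(tokens, m))
```

Since decoding enumerates all \(2^{n}\) candidate seeds, this style of scheme only scales to a small number of hidden bits, which matches how it is used here.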
We can also hide information in the generated text by modifying the prompt to include instructions about the information to hide and how to hide it. We try four different ideas: * **Acronyms**: We ask an LLM to directly hide the message in its answer to a given question. Then, to decode the message, we ask an LLM to extract the message. Messages are fixed to be 30 bits long, and are given to and taken from the LLMs as 6-character payloads (made of characters from A to Z or 1 to 6). In practice, LLMs will mostly use the payload directly as (imagined) acronyms or names. * **Age instructions**: We ask an LLM to start its answer by telling its age, which is determined by the message to encode. The decoder extracts the first integer in the (paraphrased) answer \(A_{2}\) it receives, then looks up the message corresponding to that integer. Ages are in the range [0,63]. * **Introduction variations**: We associate each of 16 possible messages with an answer introduction (e.g. "Sure!") and ask the AI assistant to start its answer with the respective introduction. At decoding time, we use a sentence transformer to determine which of the introductions is most similar to the first sentence of \(A_{2}\). We use the paraphrase-MiniLM-L6-v2 model from Reimers and Gurevych (2019) to generate embeddings of the answers, and similarity is estimated using the cosine similarity between the embeddings. We use a different set of introductions for each dataset; see Appendix E.1 for their descriptions. * **Scratchpads**: We ask an LLM to first think of a steganographic scheme it could use for this particular task to encode the message, then let it encode the message itself. Then, to decode the message, we ask the LLM to think of a steganographic scheme it could have used to encode the message for this particular task, then let it decode the message itself. Again, messages are fixed to be 30 bits long, and are given to and taken from the LLMs as 6-character payloads (made of characters from A to Z or 1 to 6). Only for this task, and following the prompts used by other researchers in the field, we ask the model to answer in a JSON format, which makes the model generate answers of relatively low quality. None of those techniques require access to logits, so we use OpenAI's GPT-3.5-Turbo (0613) or GPT-4 (0613) sampled at temperature 0, except for scratchpads, where we use temperature 1 to increase the diversity of the methods used to hide information. ### Paraphrasers (defenders) We use two paraphrasing techniques: the default **Paraphrasing** technique asks GPT-3.5-Turbo to paraphrase \(A_{1}\) while keeping the semantic information intact, while **Context-aware paraphrasing** also includes \(Q\) in the prompt: it asks GPT-3.5-Turbo to paraphrase \(A_{1}\) while only keeping the information relevant to answering \(Q\). Models are sampled at temperature 1. For the exact prompts, see Appendix D.
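For concreteness, here is a minimal sketch of the context-aware paraphrasing call, assuming the `openai` Python client. The prompt is a plausible stand-in written from the description above, not the exact prompt from Appendix D.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def context_aware_paraphrase(question: str, answer: str) -> str:
    """Ask the weaker, trusted model to rewrite A1, keeping only information
    relevant to answering Q (stand-in prompt; see Appendix D for the real one)."""
    prompt = (
        "Paraphrase the answer below, keeping only the information that is "
        f"relevant to answering the question.\n\nQuestion: {question}\n\n"
        f"Answer: {answer}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the paraphraser must be weaker than the generator
        messages=[{"role": "user", "content": prompt}],
        temperature=1,          # models are sampled at temperature 1
    )
    return resp.choices[0].message.content
```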
## 5 Results ### 5.1 Average answer lengths and entropy For the sake of simplicity, we report capacity per answer. Those can be converted to bits / byte of text, or to bits / entropy, using the data in Table 1. As shown in Figure 4, context-aware paraphrasing brings the capacity of every scheme below 2 bits per answer (which corresponds to the headline 3 bits per KB of text). Moreover, only the steganography schemes which degrade performance the most are able to get more than 1 bit per answer through the paraphraser (see the scratchpad and acronyms steganography schemes). Paraphrasing mostly preserves task performance in AQUA-RAT, but reduces it measurably in AlpacaEval: when no information is encoded in the answer and the initial answer performance is high, paraphrasing reduces performance. On AQUA-RAT, this reduction is very small relative to the gap between the performance of GPT-4 and GPT-3.5 (used for paraphrasing), but on AlpacaEval, the performance of paraphrased versions of GPT-3.5 and GPT-4's answers is reduced by 8%. This shows that only some kinds of data can be efficiently paraphrased. We discuss limitations of paraphrasing further in Subsection 7.2. Figure 4: Capacity (top) and task performance (bottom) of the steganography schemes before and after defenses are applied. ### Is context-aware paraphrasing just answering questions from scratch? A context-aware paraphraser could just be answering the question directly, without looking at the answer it is given. But the point of using a paraphraser is that we might be able to combine the insights from a strong model with the absence of a substantial amount of steganographically encoded information in the generations of a smaller one, as explained in Subsection 4.1. To study this effect, we measure how much of GPT-4's performance GPT-3.5-Turbo's context-aware paraphrases are able to keep, and how it compares to other, simpler strategies to regenerate the answers to the AQUA-RAT dataset: * Paraphrase and context-aware paraphrase are the techniques studied in this paper and described in Subsection 4.5; * Remove CoT removes the CoT and only leaves the answer given by GPT-4; * Superficial CoT is like Remove CoT, except that it uses a constant CoT instead of no CoT at all; * CoT Regeneration uses GPT-3.5-Turbo to generate an answer, but then replaces its final answer with the answer provided by GPT-4; * CoT Regeneration with hint uses GPT-4's final answers as a hint given to GPT-3.5-Turbo, which is asked to generate a completely new CoT. We only perform this analysis on AQUA-RAT, since the gap between the win-rate of GPT-4 and GPT-3.5-Turbo is too small on AlpacaEval. Results in Figure 5 show that context-aware paraphrasing does better than answering questions from scratch. Figure 5: Win-rate of answers on the AQUA-RAT dataset before and after different kinds of paraphrasing and other baseline information removal techniques have been applied. Context-aware paraphrasing does more than answering the question from scratch: when GPT-3.5-Turbo paraphrases GPT-4, it retains almost all the task performance increase over GPT-3.5-Turbo’s answer, and it outperforms other baseline techniques. ## 6 Related work **AI steganography has already been observed in CycleGAN**, a model that learns a transformation between two image distributions. Given two distributions A and B, the model learns a mapping from A \(\rightarrow\) B and B \(\rightarrow\) A such that their composition is close to the identity function. Chu et al. (2017) show that CycleGAN learns to subtly hide information to enable a more accurate reconstruction, and also show that the hidden information is sometimes non-local, demonstrating that AIs can learn complex steganographic schemes in the wild. **LLM watermarking** is the problem of injecting information inside LLM-generated text such that this information is hard for the user to remove. LLM watermarking addresses a different threat model from the one we are trying to address, which leads to 3 major differences. (1) Metrics: watermarking techniques usually focus on reliably encoding a single bit to be able to identify if a given text was generated by a language model (Kirchenbauer et al., 2023a), and require a very low false-positive rate. Some previous works attempt to tackle cases where a few bits are encoded to answer the question "which user generated this text" (Wang et al., 2023; Yoo et al., 2023), and those face challenges closer to the ones we try to address here.
(2) Text modifications: LLM watermarking does not need to be robust against the same text modifications as LLM steganography: most LLM watermarking works aim to be robust against _copy-pasting attacks_, where a fraction of the text was written by humans (such as the two papers cited above), though Kirchenbauer et al. (2023) studies robustness against paraphrasing. (3) Encoding schemes: watermarking requires techniques that work no matter the text being generated and are very cheap to decode, while LLM steganography schemes can in principle work only for a certain kind of text, and should ideally be implemented by LLMs (even though we relax this constraint in this work). **Text steganography** usually focuses on situations where capacity should be high but where robustness is not needed, as explained in Majeed et al. (2021). Therefore, text steganography techniques are usually less useful than LLM watermarking techniques for the problem of studying defenses against LLM steganography. ## 7 Discussion ### Limitations of our evaluation methodology Our methodology may overestimate the hidden information density by letting steganography schemes be stronger than what LLMs may be able to achieve in practice, because human-written programs implementing complex steganography schemes can require many serial steps to run (e.g. using hash functions, or long intermediate LLM generations), agree on shared information (e.g. the specific scheme being used, or the seed and hyperparameters used for a watermarking scheme), and be the result of hundreds of hours of experimentation (while the most sensitive and useful parts of the research aimed at finding robust steganography schemes could be removed from the training corpus of LLMs). These limitations of LLMs could be added as constraints to the steganography schemes, for example, by asking steganography schemes to be trained into LLMs. We leave this for future work. On the other hand, humans writing steganography schemes might not be competitive with advanced AIs trying very hard to do steganography because advanced LLMs with long RL training runs might find strategies that rely on some hard-to-elicit understanding of language and its irregularities. Our methodology might also underestimate the difficulties caused by a large intelligence gap between future encoders and paraphrasers. In this work, we studied the feasibility of using GPT-3.5 to remove information from GPT-4's output without degrading its quality as a proxy for studying the feasibility of using a small model unable to implement a good enough steganography scheme to defend against steganography used by a stronger model, and we showed that for some tasks, this should be feasible. But if in the future, the gap between the best model used in production and the best model unable to do steganography is large, techniques like the ones studied here might degrade performance too much, and our approach to steganography is unable to foresee and forestall this challenge. ### Limitations of paraphrasing and mitigation strategies Even though our experiments show situations where paraphrasing is almost always able to remove hidden bits of information, there are some situations where a paraphraser can't remove information because, in order to be trusted, the paraphraser has to be weaker than the initial generator. Failures of the paraphraser can be grouped into the limitations of generation abilities, and the limitations of understanding. 
**Limitations of generation abilities** are cases where the paraphraser understands the text perfectly, but isn't able to generate a high-quality paraphrase of it. A broad category where this happens is high-entropy major choices. For example, when a language model gives examples of an abstract concept, it might be very hard for the paraphraser to remove the hidden information that might have led to the choice of the specific example, because it might be too weak to generate a new answer using a completely different example. A more extreme example would be the generation of a large prime number for the implementation of a cryptographic technique: the paraphraser might recognize that the choice of this exact prime number might hold some hidden information, but it might not be able to generate a new one. **Limitations of understanding** are cases where the paraphraser doesn't understand the text well enough to remove all its hidden bits without destroying its functionality. For example, in code generation tasks, it sometimes requires a deep understanding of the codebase to know if two successive function calls can be swapped, and even if the paraphraser suspects that the order of successive function calls might be used to encode hidden information, it might not be able to know when it can perform a swap. These limitations can be somewhat mitigated by using an evaluator that penalizes hard-to-paraphrase and hard-to-understand text. For example, the evaluator might penalize situations where the generator doesn't give a canonical example of a phenomenon (e.g. the first example given on the dedicated Wikipedia page), or when it doesn't use prime numbers from a common cryptographic library. It could also enforce code standards aimed at limiting the number of hard optional choices (such as the possibility of swapping function calls). ### Risks from LLM steganography LLM steganography would enable encoded reasoning, which undermines the faithfulness of CoT reasoning. LLM steganography would also enable hidden communication between LLM instances. Both of these capabilities would undermine the monitoring of the advanced AI systems that will be developed in the coming years: instead of having to make their intermediate reasoning steps explicit every #layers steps of serial computation, LLMs mastering steganography could in principle think for #layers \(\times\) #tokens steps of plans and actions human readers wouldn't approve of. ### Future work Our work showed a simple situation where encoded reasoning can be trained in with supervised learning. Future work could build on this proof of concept and study when and how LLMs trained with RL use steganography. If deployed LLMs are able to do steganography well enough that they could use it to hide their reasoning and communication with other LLMs, this could enable them to pursue instrumental goals more effectively (Turner et al., 2019), such as sycophancy and goal preservation (Perez et al., 2022). Therefore, we believe that LLM steganography qualifies as a dangerous capability, and we are excited about dangerous capability evaluations targeting this ability. Our experiments to evaluate information density are done with a single instruction dataset and a single math dataset. It's unclear how much our conclusions transfer to more complex datasets. For example, code or scientific paper datasets might prove harder to paraphrase and require the use of reliable standardization techniques, and it's unclear how well standardization would work.
Finally, the steganography schemes and defenses we studied are quite simple. Future work could develop better schemes, for example by fine-tuning models to achieve high hidden information density rather than simply prompting them or manipulating the logits. Future work could also develop better defenses by training paraphrasers to remove hidden information while maintaining the quality of answers. ## 8 Conclusion In this paper, we have demonstrated a simple setup where fine-tuning GPT-3.5-Turbo to do steganography helps it to express opinions in agreement with the user's beliefs 80% of the time (up from 54% of the time when no intermediate reasoning is used). To prevent the issues stemming from such encoded reasoning, we have introduced a methodology to estimate how much hidden information could be encoded in LLM generations. We have used this methodology to determine that when using context-aware paraphrasing and penalizing abnormal generations, it would be very difficult for language models to encode more than 3 bits of information per KB of text, thereby making encoded reasoning mostly useless. We are excited about future work using this methodology to further improve both defenses and the steganography schemes they are tested against. We also look forward to implementations of defenses against steganography in critical applications of the powerful LLMs which will be trained in the coming years. ## Acknowledgements The authors would like to thank Lawrence Chan, Daniel Kokotajlo, Erik Jenner, Kshitij Sachan, Ansh Radhakrishnan, Max Nadeau, and Buck Shlegeris for their useful feedback, as well as Chan Jun Shern for providing ideas and prompts for the scratchpad steganography scheme.
2303.00265
Security of RPL Based 6LoWPAN Networks in the Internet of Things: A Review
Internet of Things (IoT) is one of the fastest emerging networking paradigms enabling a large number of applications for the benefit of mankind. Advancements in embedded system technology and compressed IPv6 have enabled the support of IP stack in resource constrained heterogeneous smart devices. However, global connectivity and resource constrained characteristics of smart devices have exposed them to different insider and outsider attacks, which put users' security and privacy at risk. Various risks associated with IoT slow down its growth and become an obstruction in the worldwide adoption of its applications. In RFC 6550, the IPv6 Routing Protocol for Low Power and Lossy Network (RPL) is specified by IETF's ROLL working group for facilitating efficient routing in 6LoWPAN networks, while considering its limitations. Due to resource constrained nature of nodes in the IoT, RPL is vulnerable to many attacks that consume the node's resources and degrade the network's performance. In this paper, we present a study on various attacks and their existing defense solutions, particularly to RPL. Open research issues, challenges, and future directions specific to RPL security are also discussed. A taxonomy of RPL attacks, considering the essential attributes like resources, topology, and traffic, is shown for better understanding. In addition, a study of existing cross-layered and RPL specific network layer based defense solutions suggested in the literature is also carried out.
Abhishek Verma, Virender Ranga
2023-03-01T06:41:36Z
http://arxiv.org/abs/2303.00265v1
# Security of RPL based 6LoWPAN Networks in the Internet of Things: A Review ###### Abstract Internet of Things (IoT) is one of the fastest emerging networking paradigms enabling a large number of applications for the benefit of mankind. Advancements in embedded system technology and compressed IPv6 have enabled the support of the IP stack in resource constrained heterogeneous smart devices. However, global connectivity and the resource constrained characteristics of smart devices have exposed them to different insider and outsider attacks, which put users' security and privacy at risk. Various risks associated with IoT slow down its growth and become an obstruction in the worldwide adoption of its applications. In RFC 6550, the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) is specified by IETF's ROLL working group for facilitating efficient routing in 6LoWPAN networks, while considering its limitations. Due to the resource constrained nature of nodes in the IoT, RPL is vulnerable to many attacks that consume the node's resources and degrade the network's performance. In this paper, we present a study on various attacks and their existing defense solutions, particularly to RPL. Open research issues, challenges, and future directions specific to RPL security are also discussed. A taxonomy of RPL attacks, considering the essential attributes like resources, topology, and traffic, is shown for better understanding. In addition, a study of existing cross-layered and RPL specific network layer based defense solutions suggested in the literature is also carried out. Internet of Things, RPL, 6LoWPAN, LLN, Network Security. ## I Introduction The Internet of Things [1] is realized by a large scale deployment of Low power and Lossy Networks (LLNs), which are characterized by communication links that have high packet loss and low throughput [2, 3]. LLNs restrict the use of traditional computers and communication technologies due to their strict resource constraints. These networks use resource constrained devices (nodes) that operate on low power, require less energy, and have small on-board memory and low computational capabilities [4]. Moreover, characteristics like resource constraints, high packet loss, and low network throughput make state-of-the-art routing protocols like Ad hoc On-Demand Distance Vector, Dynamic Source Routing, and Open Shortest Path First unsuitable for LLNs [5, 6]. To handle this issue, a set of standardized protocols has been developed [7, 8]. These protocols include IEEE 802.15.4 PHY/MAC for the physical and data link layers, the IPv6 over Low Power Wireless Personal Area Networks protocol (6LoWPAN) for the adaptation layer, the Routing Protocol for Low-Power and Lossy Networks (RPL) for the network layer, and the Constrained Application Protocol (CoAP) for the application layer. In the transport layer, the standard User Datagram Protocol [9] is used. RPL was standardized in 2012 as RFC 6550 by the Routing Over Low power and Lossy networks (ROLL) working group of the Internet Engineering Task Force (IETF) [3] as a network layer protocol for LLNs [10]. It is recommended for facilitating efficient routing in LLNs like 6LoWPAN [11]. RPL has gained much popularity in industry as well as in academia. The reason is its capability to provide efficient routing among resource constrained smart IP enabled IoT nodes, its flexibility in adapting to different network topologies, and its Quality of Service (QoS) support [8, 12, 13, 14].
RPL constructs a Destination Oriented Directed Acyclic Graph (DODAG) from the physical network topology, in which a gateway node is set as the root (destination) of the DODAG. All other nodes perform sensing and data routing. RPL uses low energy consuming mechanisms to support self-organization and self-healing for handling frequent node failures [15]. It consumes very few resources while providing efficient routing of IPv6 packets. These capabilities of RPL favor its usage in IoT applications that run on LLN infrastructure [16]. In Section III, a detailed description of RPL is presented. RPL protocol based networks inherit vulnerabilities from RPL's core technologies, like IPv6 and Wireless Sensor Networks (WSN). Also, the self-organization, self-healing, open nature, and resource constrained characteristics of RPL expose it to various threats that target it for compromising users' security and privacy [17]. RPL is also exposed to external threats from the Internet [18, 19]. Traditional cryptography based security solutions are not suitable for securing RPL based networks (e.g., LLNs). This is because the effectiveness of traditional cryptography based security solutions (e.g., symmetric and asymmetric) relies on the secure distribution of keys. The resource constrained nature of LLNs poses many challenges to key management [20, 21]. Also, if a single legitimate node is compromised, an attacker may gain access to a large pool of pre-loaded keys [22]. This means that once pre-loaded keys are compromised, all network nodes are also compromised. The challenges related to secure key establishment, storage, distribution, revocation, and replacement in LLNs make traditional cryptography based security solutions unsuitable for LLNs [23]. The limitations of LLNs pose a severe threat to RPL security. RPL is vulnerable to various routing attacks, which can be broadly classified into two categories, i.e., attacks inherited from WSN and RPL specific attacks. Cryptography based security mechanisms can only protect RPL from external attacks (i.e., attacks performed using a node which is not a part of the existing network) [24, 25]. Traditional security mechanisms are also incapable of detecting insider attacks, which are performed by compromised nodes that are already part of the network [26]. Thus, from RPL's security point of view, it is crucial to explore the possibilities of developing energy efficient security solutions. ### _Related surveys_ In the literature, some research works particular to RPL and IoT security are present. Airehrour _et al._[27] surveyed various attacks and defense mechanisms specific to RPL. Their study primarily focused on the utilization of trust based defense methods in RPL security. Most of the defense mechanisms they discussed are used in WSN security and cannot be directly applied to IoT networks. Alaba _et al._[28] presented a detailed review of IoT security issues. However, they did not focus on the RPL protocol. A detailed survey on protocols available for facilitating secure communications in IoT is done by Granjal _et al._[12]. The authors discussed research challenges for different protocols, including RPL. However, they did not discuss attacks and defense mechanisms specific to RPL. Wallgren _et al._[29] did a detailed study on RPL security by implementing some routing attacks and analyzing the network's performance. Also, they suggested possible mitigation methods for such attacks.
However, they only considered WSN based attacks. Mayzaud _et al._[30] provided a survey on RPL based attacks and their countermeasures. They proposed a detailed taxonomy of attacks. However, their study did not include the latest proposed attacks and defense solutions. Moreover, the authors did not propose any taxonomy of defense solutions. Pongle _et al._[31] performed a short study on attacks against RPL and the 6LoWPAN layer. The authors only provided a short description of defense solutions and did not propose any suitable taxonomy of attacks and defense solutions specific to RPL. In our opinion, all the mentioned surveys lack effective future research directions and recent proposals. Moreover, these surveys have neglected cross-layered security solutions, which can be utilized for securing RPL as well. The key points lacking in the previous literature are addressed in our study. The existing surveys related to RPL protocol security are summarized in Table I. ### _Motivation and contributions_ The increase in the number of resource constrained devices (LLN nodes) and their integration with the Internet have led to severe cybersecurity risks. These risks involve users' security and privacy getting exposed to various threats. Critical applications like healthcare and smart grid, when exposed to such threats, may cause life-threatening incidents among the world population. This motivated us to explore and perform an in-depth analysis of various security issues and their available solutions specific to the RPL protocol. Since RPL is one of the most popular routing protocols for resource constrained networks, its security aspect must be studied carefully. In this research paper, we present a comprehensive study of different attacks specific to the RPL protocol and their defense solutions suggested in the literature. Our objectives include: (1) to propose a taxonomy for classifying different attacks and defense solutions specific to the RPL protocol, and (2) to identify open research issues and state-of-the-art challenges related to RPL based IoT network security. The main contributions of this paper are summarized below: * We provide a comprehensive overview of the RPL protocol while focusing on its security issues. * We present an extensive survey on RPL specific attacks and their countermeasures present in the recent literature. * We organize the RPL security solutions into two broad categories (i.e., Secure Protocol and Intrusion Detection System) and compare their performance based on different evaluation metrics. * We discuss cross-layered security solutions present in the literature, which can be used to leverage RPL security. * We provide open issues, research challenges, future research directions, and potential areas for future research to promote the contribution of state-of-the-art defense solutions. Figure 1: Organization of the survey ### _Organization of the survey_ The rest of this paper is organized as follows. Section II describes the overview of IoT architectures. Section III presents a brief overview of the RPL protocol. Section IV presents a taxonomy of attacks specific to the RPL protocol. In Section V, the proposed taxonomy of different defense solutions against RPL attacks present in the literature is discussed. In Section VI, cross-layered security solutions specific to RPL protocol security are discussed. Open issues, research challenges, and future directions are addressed in Section VII. Finally, the paper is concluded in Section VIII.
The list of abbreviations and definitions used throughout the paper is presented in Table II. The organization of the survey is illustrated in Fig. 1. ## II Overview of IoT Architecture Various architectures applicable to IoT have been proposed in the literature. The most popular ones include middleware based, service-oriented, three-layer, and five-layer architectures [32]. No standard IoT architecture is yet recognized in the literature. However, the most commonly referred to IoT architecture is the three-layer architecture [33], which is shown in Fig. 2. It has gained popularity because of its simple nature and abstract representation of IoT, which makes the implementation of applications easier. It comprises three layers, namely perception, network, and application. These layers are highlighted below. #### II-C1 Perception Layer The perception layer is the lowest layer in the three-layer IoT architecture. The main purpose of the perception layer is to collect data from the physical environment (temperature, pressure, humidity, etc.) of IoT devices. The process of perception is supported by prominent sensing technology like WSN. Besides, this layer is responsible for converting analog input to digital form and making sensed data suitable for transmission. #### II-C2 Network Layer The network layer is dedicated to the processing of sensed data and performing secure data transmission between the perception and application layers. It uses various wired and wireless networking technologies like WLAN, WPAN, LoWPAN, and GSM. \begin{table} \begin{tabular}{|p{85pt}|p{113.8pt}|p{142pt}|p{113.8pt}|} \hline **Related survey** & **Brief summary** & **Topics** & **Common points with our survey** \\ \hline Airehrour _et al._ [27] & A survey on existing routing protocols and mechanisms to secure routing communications in IoT & Security and energy consumption in IoT, routing protocols, vulnerabilities to routing, secure routing protocols, trust in secure routing & Vulnerabilities in RPL, WSN based defense methods, research challenges \\ \hline Alaba _et al._ [28] & A detailed discussion on the IoT security scenario and analysis of the possible attacks & IoT overview, classification of IoT, threats and vulnerabilities, IoT security taxonomy, possible attacks on IoT & State-of-the-art security threats and vulnerabilities, future directions \\ \hline Granjal _et al._ [12] & A detailed survey on protocols available for facilitating secure communications in IoT & IoT communication protocols, security requirements of IoT, security of various layers (PHY, MAC, network, application), security for routing & \\ \hline Wallgren _et al._ [29] & A detailed study on RPL security by implementing and analyzing various routing attacks & IoT technologies and IDS, IoT protocols overview, attacks against RPL & \\ \hline \end{tabular} \end{table} Table I: Summary of existing surveys related to RPL protocol security.
transmission technologies like NFC, LTE, and Bluetooth. It promises unique addressing and routing of sensed data from the large number of devices that are part of the IoT network. 6LoWPAN is standardized for achieving unique addressing through IPv6 networking. #### II-A3 Application Layer The main purpose of the application layer is to provide personalized services or an interface (front end) to the IoT application users. It uses processed data from the network layer and delivers it as per the user's need, filling the gap between users and IoT applications. The application layer provides tools to application developers in order to realize IoT insights. It covers the various applications in which IoT can be exploited, e.g., smart homes, smart power grids, industrial monitoring, surveillance systems, healthcare monitoring, and logistics management [32, 34, 35]. Currently, IoT devices generate a large volume of data that needs to be processed at cloud servers for various purposes like business insights and security monitoring. However, the rate at which data is generated by IoT devices requires high-bandwidth connections for data transmission to the cloud servers. Limited bandwidth causes a significant delay in data transmission and processing, which affects the overall performance of smart applications. The three-layer architecture is not capable of solving such issues [32]. To address them, the Edge and Fog computing paradigms [36] have emerged as possible solutions and are being used nowadays. The four-layer Fog computing based IoT architecture is shown in Fig. 3. The physical layer includes IoT or edge devices, which sense and send data to edge gateways. The edge layer consists of edge gateways (border routers), which either perform real-time data preprocessing at the source/on-premises or forward the received data to the fog node. The fog layer consists of powerful servers that collect data from edge gateways and perform data preprocessing and transmission to the cloud servers. At the cloud layer, smart applications are deployed, which perform critical tasks like business insights and security monitoring. Fog nodes can process and act on a large volume of data, reduce bandwidth and latency, and perform security monitoring, whereas edge nodes can apply local security policies and make real-time decisions locally to control and monitor many IoT devices at a time. With the fog and edge layers, the security of IoT applications and the protocols involved can be improved significantly [37, 38, 39].
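To make the division of labor across these four layers concrete, the following minimal Python sketch models the path of a reading from device to cloud. It is purely illustrative: all function names and values are our own, and no specific IoT framework or API is implied.

```python
# A minimal sketch of the four-layer fog-based IoT pipeline described above.
# All names and values are illustrative; no real IoT framework is implied.

def perception_layer(sensor_id: int) -> dict:
    """IoT/edge device: sample the physical environment."""
    return {"sensor": sensor_id, "temperature_c": 21.5 + sensor_id, "valid": False}

def edge_layer(reading: dict) -> dict:
    """Edge gateway: real-time, on-premises preprocessing and local
    policy checks before forwarding to the fog node."""
    reading["valid"] = -40.0 <= reading["temperature_c"] <= 85.0
    return reading

def fog_layer(readings: list) -> dict:
    """Fog node: aggregate data from many gateways, cutting the bandwidth
    needed on the uplink to the cloud."""
    temps = [r["temperature_c"] for r in readings if r["valid"]]
    return {"count": len(temps), "avg_temperature_c": sum(temps) / len(temps)}

def cloud_layer(summary: dict) -> None:
    """Cloud: business insights and security monitoring on aggregates only."""
    print(f"cloud received aggregate: {summary}")

cloud_layer(fog_layer([edge_layer(perception_layer(i)) for i in range(3)]))
```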
\begin{table} \begin{tabular}{l l} \hline **Abbreviation** & **Stands For** \\ IoT & Internet of Things \\ IP & Internet Protocol \\ IPv6 & Internet Protocol version 6 \\ RPL & IPv6 Routing Protocol for Low Power and Lossy Networks \\ IETF & Internet Engineering Task Force \\ ROLL & Routing Over Low power and Lossy networks \\ 6LoWPAN & IPv6 over Low Power Wireless Personal Area Networks \\ LLNs & Low power and Lossy Networks \\ CoAP & Constrained Application Protocol \\ UDP & User Datagram Protocol \\ QoS & Quality of Service \\ DODAG & Destination Oriented Directed Acyclic Graph \\ WSN & Wireless Sensor Networks \\ ML & Machine Learning \\ WLAN & Wireless Local Area Network \\ WPAN & Wireless Personal Area Networks \\ LoWPAN & Low-Power Wireless Personal Area Networks \\ GSM & Global System for Mobile Communications \\ NFC & Near-Field Communication \\ LTE & Long-Term Evolution \\ OF & Objective Function \\ ETX & Expected Transmission Count \\ MRHOF & Minimum Rank with Hysteresis Objective Function \\ OF0 & Objective Function Zero \\ OF-EC & OF based on combined metrics using Fuzzy Logic \\ DIO & DODAG Information Object \\ DIS & DODAG Information Solicitation \\ DAO & Destination Advertisement Object \\ DAO-ACK & Destination Advertisement Object Acknowledgment \\ DoS & Denial-of-Service \\ PDR & Packet Delivery Ratio \\ 6BR & 6LoWPAN Border Router \\ IDS & Intrusion Detection System \\ VeRA & Version Number and Rank Authentication \\ TRAIL & Trust Anchor Interconnection Loop \\ SRPL & Secure-RPL \\ TCA & Trusted Computing Architecture \\ TPM & Trusted Platform Module \\ MRTS & Metric based RPL Trustworthiness Scheme \\ ERNT & Extended RPL Node Trustworthiness \\ TIDS & Trust based Security System \\ TOF & Trust Objective Function \\ TRU & Trust Information \\ AT & Adaptive Threshold \\ DT & Dynamic Threshold \\ FAM & Frequency Agility Manager \\ LR & Logistic Regression \\ MLP & Multi-layer Perceptron \\ NB & Naive Bayes \\ RF & Random Forest \\ SVM & Support Vector Machine \\ SOMIDS & Self Organizing Map Intrusion Detection System \\ SOM & Self Organizing Map \\ RSSI & Received Signal Strength Indicator \\ TN & True Negative \\ FP & False Positive \\ FN & False Negative \\ TPR & True Positive Rate \\ FPR & False Positive Rate \\ SPRT & Sequential Probability Ratio Test \\ InDReS & Intrusion Detection and Response System for IoT \\ FSM & Finite State Machine \\ EFSM & Extended Finite State Machine \\ SBIDS & Sink-based Intrusion Detection System \\ NCR & Node's current rank \\ NPR & Node's parent rank \\ NPVR & Node's previous rank \\ RIDES & Robust Intrusion Detection System \\ CUSUM & Cumulative Sum Control charts \\ OPFC & Unsupervised Optimum-Path Forest Clustering \\ NAC & Network Access Control \\ ETA & Encrypted Traffic Analytics \\ 6TiSCH & IPv6 over the TSCH mode of IEEE 802.15.4e \\ TSCH & Time-Slotted Channel Hopping \\ \hline \end{tabular} \end{table} TABLE II: List of Abbreviations Fig. 3: Fog computing based four-layer IoT architecture ## III Overview of RPL protocol RPL is an IPv6-based distance vector and source routing protocol that specifies how to build a DODAG using an _Objective Function (OF)_, a set of metrics, and constraints. In RPL, the IoT devices are interconnected using mesh and tree topologies in order to build a DODAG starting from a root (sink or gateway) node that acts as an interface between the LLN nodes and the Internet. A network may contain more than one DODAG, which collectively form an RPL Instance.
In a network, more than one RPL Instance can run in parallel, and every RPL Instance is identified by a unique _RPLInstanceID_. An RPL node can belong to only one DODAG of every RPL Instance running in the network. Each node in a DODAG is assigned a rank (a 16-bit value), which represents "the node's individual position relative to other nodes with respect to a DODAG root" [3]. The rank strictly increases in the DODAG's downward direction (root to leaves) and decreases in the upward direction (leaf nodes to root). The rank concept is used: (1) to detect and avoid routing loops, (2) to build parent-child relationships, (3) to provide a mechanism for nodes to differentiate between parents and siblings, and (4) to enable nodes to store a list of preferred parents and siblings which can be used in case a node loses the link with its parent node. The DODAG is built during the network topology setup phase, where each node uses RPL control messages to find the optimal set of parents towards the root and links itself with the preferred parent, i.e., the parent on the most optimal path. The selection of the preferred parent is based on an _OF_ that defines how to compute a rank from routing metrics while considering routing constraints and optimization objectives. RPL may use different _OFs_ [40], which include the ETX objective function [41], the _Minimum Rank with Hysteresis Objective Function (MRHOF)_ [42], _Objective Function Zero (OF0)_ [43], and an objective function based on combined metrics using fuzzy logic _(OF-EC)_ [44]. RPL control messages include the DODAG Information Object (DIO), DODAG Information Solicitation (DIS), Destination Advertisement Object (DAO), and Destination Advertisement Object Acknowledgment (DAO-ACK). RPL uses an adaptive timer mechanism called the "Trickle timer" in order to limit the control traffic in the network [45].
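The interplay between rank, link metric, and parent selection can be illustrated with a short sketch. The following Python fragment is a simplified model loosely in the spirit of MRHOF with the ETX metric; the names are our own and it is not an implementation of the RFCs. It also hints at why the rank is security-critical: a neighbor advertising a falsely low rank automatically wins parent selection, which is the basis of the rank attacks discussed next.

```python
# A minimal sketch of OF-driven parent selection and rank computation,
# loosely in the spirit of MRHOF with the ETX metric (illustrative only).

MIN_HOP_RANK_INCREASE = 256  # RPL's default rank granularity (RFC 6550)

class Node:
    def __init__(self, node_id: str, rank: int):
        self.node_id = node_id
        self.rank = rank  # a 16-bit value in real RPL

def rank_via(parent: Node, link_etx: float) -> int:
    # Rank must strictly increase away from the root; the increase here is
    # proportional to the measured link cost (ETX) toward the parent.
    return parent.rank + int(link_etx * MIN_HOP_RANK_INCREASE)

def select_preferred_parent(candidates):
    """Pick the (neighbor, ETX) pair minimizing the resulting rank."""
    parent, etx = min(candidates, key=lambda c: rank_via(c[0], c[1]))
    return parent, rank_via(parent, etx)

root = Node("root", MIN_HOP_RANK_INCREASE)
a = Node("A", rank_via(root, 1.0))       # node A has a good link to the root
candidates = [(root, 2.5), (a, 1.2)]     # (neighbor, measured ETX)
parent, my_rank = select_preferred_parent(candidates)
print(parent.node_id, my_rank)           # A offers the cheaper path here;
# a malicious neighbor advertising a falsely low rank would win instead.
```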
## IV Attacks on RPL protocol The RPL protocol is susceptible to a wide range of insider and outsider attacks. These attacks are difficult to detect and mitigate because of the vulnerable nature of wireless networks, the easily tamperable nature of nodes, node mobility, and resource constraints. Many authors have proposed various security mechanisms specific to RPL, which include control message encryption and security modes [46]. However, most RPL implementations do not incorporate these security measures, due to the incomplete specification of the mechanisms and implementation overheads [47]. The available mechanisms are effective in defending against outsider attacks; however, they fail in the case of insider attacks, where an attacker may bypass the applied RPL security mechanisms and disrupt network functionality. A taxonomy of attacks is shown in Fig. 4, where attacks are classified on the basis of their primary target. We have extended the taxonomy presented in [30] by adding recently proposed attacks and grouping similar attacks for better understanding. RPL control messages can be illegitimately manipulated to disrupt routing operations. Similarly, fault tolerance mechanisms can be exploited to target network resources by performing Denial-of-Service (DoS) attacks. In this section, attacks specific to the RPL protocol are listed and briefly discussed. _Rank attacks_: The rank field or rank rules can be exploited to perform various rank based attacks [48]. In RPL, there is no specific mechanism to monitor the integrity of control messages and routing metric values received from the parent node. In fact, a child node accepts all the routing information carried in control messages without verifying its parent's trustworthiness. Thus, if the parent node is malicious, the child node still believes that all the information coming from its parent is genuine. This condition may lead to the formation of unoptimized routes and poor network performance. An attacker node performs a rank attack by illegitimately changing its rank value, thereby attracting neighbor nodes to select it as their parent on the assumption that the malicious node leads to the root node at the lowest path cost. Different variants of the rank attack have been proposed in the literature, including the increased rank, decreased rank, and worst parent attacks. _Neighbor or replay attack_: In the neighbor attack [49], an attacker node duplicates and multicasts all DIO messages received from its parent. In such a case, all the neighbor nodes which receive the replayed DIO messages may think that the message was received from a new neighbor. Further, if the replayed DIO message contains favorable routing information like rank, the victim neighbor node may add an out-of-range node as its preferred parent. Another variant of this attack, proposed in [30] and termed the DIO replay attack, has the attacker node multicast outdated DIO messages containing old routing information. This attack forces a victim node to follow stale and unoptimized paths. _DAO attacks_: An adversary can exploit the storing mode of the RPL protocol by manipulating DAO messages to perform DAO related attacks. These attacks include DAO inconsistency and routing table falsification, both highlighted below. _DAO inconsistency_: RPL uses flags carried in the IPv6 hop-by-hop option to manage important topological mechanisms: the Down ('O') flag indicates the expected direction of a packet, the Rank-Error ('R') flag indicates a rank error in the topology, and the Forwarding-Error ('F') flag indicates that the node is not capable of forwarding a packet to the set destination [3]. A DAO inconsistency is reported by a node when its child node is unable to forward data to a specified destination due to the unavailability of a route that was learned from a fake DAO message (a DAO with fake routing information) during topology creation. The attacker exploits this mechanism by setting the 'F' flag to \(1\) in packets and sending them back to its parent, which forces the parent node to discard legitimate available downward routes. A DAO inconsistency attack leads to increased end-to-end delay, an unoptimized topology, and isolation of nodes. _Routing table falsification_: Mayzaud _et al._ [30] described a methodology to perform attacks that lead nodes to learn fake routes which do not exist. Such attacks create an unoptimized topology, increased end-to-end packet delay, and a decreased packet delivery ratio (_PDR_). An attacker may perform the attack by forging the routing information contained in DAO messages, which forces the legitimate nodes to build false downward routes, i.e., non-existing routes. When legitimate nodes then try to forward data to non-existing nodes, this leads to DAO inconsistencies, unnecessary packet delay, and increased control overhead. In a variant of the routing table falsification attack termed routing table overload, the attacker forges a DAO message with false routing information and sends it to the parent node.
This fills up the victim node's routing table buffer, so the creation of further legitimate optimized routes is entirely blocked. _Routing choice intrusion_: Zhang _et al._ [50] proposed a new internal routing attack known as routing choice intrusion. The main idea is to learn the current routing conditions used by the nodes for choosing optimal paths, capture DIO messages, and later multicast forged DIO messages under a legitimate identity. This attack requires a node to be reprogrammed in such a manner that it bypasses internal misbehavior detection while appearing to operate normally, which makes it hard to detect. The attack may involve one or more compromised nodes. A routing choice intrusion attack leads to increased end-to-end delay and energy consumption, routing loops, and the creation of unoptimized paths. _DIS attack_: In a DIS attack, an attacker node periodically sends DIS messages to neighbors within its transmission range. In return, each victim node resets its trickle timer and replies with DIO messages (the RPL mechanism for allowing new nodes to join a DODAG) [51, 52]. This attack can be performed either by sending unicast DIS messages to a single node or by multicasting DIS messages in order to target multiple nodes at a time. The DIS attack can be regarded as a flooding attack, as it involves flooding the network with DIS messages [30]. It leads to increased control packet overhead, node energy exhaustion, and routing disruption. _Version number attack_: In RPL, only the border router (6BR) is responsible for initiating the propagation and update (increase) of the version number [3]. Whenever a border router or gateway (6BR) needs to rebuild the whole DODAG, it initiates a global repair process by incrementing the value of the version number field and sending it to its child nodes. Upon receiving a DIO with a version number different from its own, a child node starts updating its routing state (preferred parent, rank, and links) by resetting its trickle timer. This process iterates until all the nodes have updated their routing state. RPL defines no mechanism to prevent nodes (other than the 6BR) from illegitimately modifying the version number [53, 54, 55]. Hence, an attacker can modify the version number field of a DIO message and forward it to its neighbors, which leads to an unnecessary rebuild of the complete DODAG. The result is increased control packet overhead, end-to-end delay, rank inconsistencies, routing loops, and energy consumption. _Local repair attack_: In RPL, a local repair mechanism is triggered by a node after it loses the link with its preferred parent [3]. A node can initiate a local repair either by changing the value of the DODAG ID field of the DIO or by updating its rank to infinity and multicasting the DIO to all its neighbors. Both methods force neighbor nodes to search for a new preferred parent. Local repair enables an RPL network to converge again in minimum time, and it is supposed to be invoked only when a node has lost the connection with its parent. However, an attacker may deliberately use both methods to trigger unnecessary local repairs even while still connected to its parent [56, 57, 58]. This is possible because RPL does not define any method that nodes can use to verify the authenticity of a local repair initiated by their neighbors [23]. Whenever a local repair is triggered, the network topology is forced to be restructured.
This leads to increased energy consumption of victim nodes as well as disruption of the routing process. _DODAG inconsistency_: RPL specifies the data path validation method to detect and repair rank related inconsistencies (loops) in a DODAG. RPL uses flags in the RPL IPv6 header options of multi-hop data packets [59] to track inconsistencies (routing loops) in the network. Figure 4: Detailed taxonomy of attacks specific to RPL protocol As per [3], a DODAG is inconsistent if the direction of a data packet, indicated by the 'O' flag, does not match the strict rank relation with the node that sent or forwarded the packet. When such a situation is encountered, the 'R' flag is used to perform topology repair, i.e., the 'R' flag is set to \(1\) by the node which encountered the forwarding error, and the packet is forwarded. Further, when another node receives a packet with the 'R' flag set (i.e., detects the inconsistency), it discards the packet and resets its trickle timer to perform a local repair [45]. An attacker can exploit these flags to perform various attacks that are collectively termed DODAG inconsistency attacks, which include the direct DODAG inconsistency and forced blackhole attacks [60, 61]. _DIO suppression_: In [62], a novel attack against the RPL protocol was proposed and termed the DIO suppression attack. The idea behind this attack is to suppress the transmission of the fresh DIO control messages required by IoT nodes for exploring new optimized routing paths and removing stale ones. This leads to the creation of unoptimized routes, which can further partition the network. An attacker only needs to sniff a DIO message from any legitimate node and then multicast that message at least \(k\) times (the suppression threshold) periodically. This makes the victim node believe that consistent DIOs [45] are being received from its parent node irrespective of any legitimate change in the network's current state. Thus, there will be no change in the victim's current state, i.e., its preferred parent set, parent, and relative distance from the root. In Fig. 5, \(I_{min}\) represents the starting time period set by the trickle algorithm, which is doubled every time \(k\) consistent DIOs are received; \(I_{min}\) is restored when fewer than \(k\) consistent DIOs are received or when an inconsistent DIO is received. _ETX manipulation_: In RPL, the Expected Transmission Count (ETX) objective function uses the ETX parameter as a metric for selecting the optimal routing path between two nodes. RPL follows a simple rule of thumb: the ETX value of any parent node must be lower than that of its child nodes, throughout the network. An attacker exploits this rule by deliberately manipulating its ETX value in order to gain a better position in the network [63]. This allows the attacker to attract a large part of the network traffic and then launch other attacks like blackhole and grayhole attacks. Table III presents a classification of attacks based on their type (insider or outsider), prerequisites, and impact on the network's performance. RPL is also vulnerable to attacks inherited from WSNs. These attacks include HELLO flood or DIO flood, Sinkhole, Wormhole [64], Blackhole, Selective forwarding, Sybil, Clone ID, etc. They disrupt the network's performance drastically, which decreases the network's lifetime. Since many surveys that present WSN based attacks are already available in the literature [65, 66], we do not discuss them in this paper.
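The trickle dynamics that the DIS, DIO suppression, and DODAG inconsistency attacks manipulate can be summarized in a few lines. The sketch below is a simplified, illustrative model of the trickle timer (after RFC 6206; parameter values and names are our own), showing how an attacker replaying \(k\) consistent DIOs per interval keeps a victim silent while its interval keeps doubling.

```python
# A minimal sketch of the trickle-timer behavior exploited by the DIO
# suppression attack (simplified from RFC 6206; values are illustrative).
# Each interval, a node transmits its DIO only if it heard fewer than k
# "consistent" DIOs; otherwise the transmission is suppressed and the
# interval doubles. An inconsistency resets the interval to I_MIN.

I_MIN = 4.0     # starting interval (seconds)
I_MAX = 1024.0  # interval cap
K = 5           # suppression (redundancy) threshold

def next_interval(interval, consistent_dios_heard, inconsistency_seen):
    if inconsistency_seen:
        return I_MIN, True  # reset: a topology change must propagate fast
    suppressed = consistent_dios_heard >= K
    return min(interval * 2, I_MAX), not suppressed

# An attacker replaying k stale-but-consistent DIOs per interval keeps the
# victim silent while its interval keeps growing:
interval, transmits = I_MIN, True
for _ in range(6):
    interval, transmits = next_interval(interval, consistent_dios_heard=K,
                                        inconsistency_seen=False)
    print(f"interval={interval:6.1f}s  victim transmits DIO: {transmits}")
```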
## V Taxonomy of RPL attack defense mechanisms In this section, different solutions proposed for the detection and mitigation of RPL attacks are discussed. The solutions present in the literature are divided into two categories: Secure Protocol and Intrusion Detection System (IDS) based solutions. Secure Protocol based solutions refer to defense mechanisms that are incorporated in the RPL protocol itself, thus making it secure against various attacks. These mechanisms are further categorized into Cryptography, Trust, and Threshold based solutions. Cryptography mechanisms make use of traditional cryptographic methods to provide security and defense against various attacks, whereas trust based mechanisms involve computing the trustworthiness of nodes to facilitate routing decisions. Threshold based defense solutions exploit inbuilt features of RPL and enhance the way the trickle timer is reset. These mechanisms are embedded into the RPL protocol, making it more robust in terms of defensive behavior while maintaining desirable network performance. Traditional IDS solutions cannot be directly applied to IoT [90], because the resource constrained nodes used in the network, the different network topologies, and IP based connectivity make such solutions infeasible. This demands lightweight IDS solutions in terms of computational, communication, memory, and energy overhead. For the RPL protocol in particular, an IDS constitutes the second line of defense, responsible for detecting anomalies in RPL operation. These defense solutions can be further classified into Signature, Anomaly, Specification, and Hybrid based approaches. In this section, a brief review of the security solutions available in the literature for detecting various attacks in IoT (typically DoS and RPL based attacks) is presented. Fig. 6 shows the taxonomy of defense solutions specific to the RPL protocol. ### _Secure Protocol Based Defense Mechanisms_ This section presents various secure protocol based defense solutions for defending the RPL protocol against routing attacks. #### V-A1 Cryptography Based Solutions _Version Number and Rank Authentication (VeRA):_ In [53], a security scheme called VeRA is proposed. The scheme provides defense against attacks related to illegitimate version number and rank changes. The key idea is to use hash chains for authenticating nodes whose rank or version number has changed. VeRA incorporates an authentication mechanism based on hash operations with small time complexity. The main drawback of VeRA is that it can be bypassed using rank forgery and replay attacks. Fig. 5: DIO suppression with suppression threshold (\(k=5\)) [62] \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Attack** & **Type** & **Prerequisites** & **Description** & **Impact on network performance** \\ \hline Rank & Insider & - & Rank field and strict rank rules are exploited. & Generates routing loops. Increases end-to-end delay, control packet overhead, congestion, and energy consumption; decreases _PDR_. Introduces unoptimized routes. \\ \hline Neighbor /replay & Insider & - & Attacker node eavesdrops the DIO messages of legitimate neighbors and later sends them to its neighbors. & Increases packet loss (low _PDR_), disrupted routes, network congestion, and unwanted interference. \\ \hline DAO inconsistency & Insider & Storing mode, Option header & DAO loop recovery mechanism is exploited by the attacker.
& Increases end-to-end delay. Leads to unoptimized topology and isolation of nodes. \\ \hline Routing table falsification & Insider & Storing mode, Option header & Attacker overloads the routing table of legitimate nodes with false routing information. & Routing table buffer of victim nodes gets filled, which further blocks the building of legitimate optimized routes. \\ \hline Routing choice intrusion & Insider & - & Attacker node learns the current routing rules. Then, it captures real DIO messages and multicasts forged DIO messages. & Increases end-to-end delay and energy consumption. Generates routing loops and introduces unoptimized paths. \\ \hline DIS & Insider /Outsider & - & Legitimate nodes are flooded with DIS messages, which forces them to reset their trickle timer and reply with new DIO messages. & Increases control packet overhead and energy consumption, and causes routing disruption. \\ \hline Version number & Insider & - & Attacker node deliberately increments the version number, which triggers a global repair of the network. & Increases control packet overhead, end-to-end delay, and energy consumption. Introduces rank inconsistencies and routing loops. \\ \hline Local repair & Insider & - & Local repair mechanism is exploited, i.e., the rank value is changed to infinite or the DODAG ID value is changed to trigger unnecessary local repairs. & Disrupts the routing process and increases energy consumption. \\ \hline Direct DODAG inconsistency & Insider /Outsider & Option header & Data path validation mechanism is exploited, i.e., attacker multicasts packets after setting the 'O' and 'R' flags. & Traffic congestion. Increases packet loss ratio, control packet overhead, and energy consumption. \\ \hline Forced blackhole & Insider & Option header & Attacker node sets the 'O' and 'R' flags of received data packets and forwards them to its neighbors. & Increases control packet overhead and energy consumption. Decreases _PDR_. \\ \hline DIO suppression & Insider /Outsider & - & Previously eavesdropped DIO messages are replayed, which suppresses new DIO transmissions. & Introduces unoptimized routing paths, which leads to network partition. \\ \hline ETX manipulation & Insider & ETX objective function & Manipulation of the ETX value in order to gain a better position in the network and attract network traffic. & Introduces unoptimized routing paths. \\ \hline HELLO/DIO flood & Insider /Outsider & - & DIO messages with favorable routing metrics are multicast with strong signal strength. & Leads to network congestion and saturation of RPL nodes. Increases packet loss ratio and control packet overhead. \\ \hline Sinkhole & Insider & - & Malicious node decreases its rank in order to become the preferred parent of its neighbors. & Degrades the overall network performance due to unoptimized routes. \\ \hline Blackhole & Insider & - & Malicious node drops all the packets it receives from its children nodes. & Decreases _PDR_, increases end-to-end delay, destabilizes topology. \\ \hline Selective forwarding/grayhole & Insider & - & Malicious node selectively drops packets, i.e., forwards control packets and drops data packets. & Negatively affects topology construction, which leads to disrupted routing. Decreases _PDR_. \\ \hline Wormhole & Insider & Minimum two nodes & Two or more nodes create a high bandwidth tunnel between them in order to transmit data over a long range. & Creates unoptimized paths. \\ \hline Sybil & Insider & - & A single node possesses multiple logical identities.
& Overcomes voting schemes, compromises transmission routes by taking control of the network. \\ \hline Clone ID & Insider /Outsider & - & A single logical identity is copied to multiple nodes. & Compromises transmission routes by taking control of the network, eavesdrops on transmission links. \\ \hline Jamming & Outsider & - & Attacker transmits high power radio signals to introduce heavy interference. & Decreases _PDR_ and increases energy consumption. \\ \hline Sniffing & Insider /Outsider & - & Network traffic is eavesdropped for obtaining routing information from packets. & Introduces privacy concerns. \\ \hline Traffic analysis & Insider /Outsider & - & Radio transmissions are eavesdropped to analyze traffic patterns for obtaining routing/topology information. & Introduces privacy concerns. \\ \hline \end{tabular} \end{table} TABLE III: Classification of attacks on RPL and their impact on the network's performance _Enhanced VeRA and Trust Anchor Interconnection Loop (TRAIL):_ To counter the decreased rank attack, Landsmann _et al._ [54] proposed a novel security mechanism that uses a nested encryption chain to prevent an attacker from multicasting altered hash chains, thereby maintaining rank integrity. The encryption chain links the version number hash chain with the rank hash chain. This mechanism does not provide defense against the rank-replay attack. Perrey _et al._ [91] proposed an extension to [54] for detecting and preventing topological inconsistencies: a generic security scheme called Trust Anchor Interconnection Loop (TRAIL) that facilitates topology authentication in RPL. In TRAIL, each node can validate its upward routing path towards the root and can detect any rank spoofing without relying on encryption chains. TRAIL can search for and remove illegitimate nodes from the network topology. Both VeRA and TRAIL maintain per-node state, which incurs memory overhead on resource constrained nodes. _Secure-RPL (SRPL):_ Glissa _et al._ [67] proposed a secure version of RPL known as SRPL. The main aim of SRPL is to stop compromised nodes from illegitimately manipulating control message information in ways that may disrupt the network, e.g., manipulating rank to gain a better position in the DODAG. SRPL incorporates a security mechanism that maintains a suitable rank threshold such that any anomalous rate of rank change leads to the detection of the attack. The rank threshold is implemented with hash chain authentication of every node in the network. The main advantage of the proposed solution is that it does not limit node movement from one DODAG to another: when a node moves between DODAGs or changes its rank, the change first needs to be validated using the secured hashed values. SRPL mainly aims to defend against Sinkhole, Blackhole, Selective forwarding, and Rank attacks. SRPL involves three phases, namely the initiation phase, the verification phase, and the rank update phase. In the initiation phase, all the nodes in the network compute their rank, threshold values, and the respective hashed values. In the verification phase, the parents of a child node and other nodes check or verify the hashed rank and threshold values. The rank update phase is triggered when a node wants to change its rank, and the change is verified against old information and the acceptable rank change. The major limitation of SRPL is that it uses computationally expensive operations that consume a lot of the node's resources.
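The hash-chain idea shared by VeRA, its enhancement, and SRPL can be sketched compactly. The following Python fragment is our own simplified illustration of one-way-chain authentication, not the exact construction of [53] or [67]: the root publishes the end of a hash chain, and revealing earlier elements authenticates version (or rank) updates, since forging a future element would require inverting the hash.

```python
# A minimal sketch of hash-chain-based version authentication in the spirit
# of VeRA (simplified and illustrative; not the exact scheme from [53]).
# The root commits to h^n(seed); revealing h^(n-i)(seed) authenticates
# version i, because any node can re-hash it i times and compare.

import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain  # chain[n] is the public commitment

def verify(element: bytes, version: int, commitment: bytes) -> bool:
    x = element
    for _ in range(version):
        x = h(x)
    return x == commitment

n = 100
chain = make_chain(os.urandom(32), n)
commitment = chain[n]  # distributed to all nodes at bootstrap

# The root announces version 3 by revealing chain[n - 3]; nodes verify it,
# but an attacker cannot announce version 4 without inverting the hash.
print(verify(chain[n - 3], 3, commitment))    # True: legitimate update
print(verify(os.urandom(32), 4, commitment))  # False: forgery fails
```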
**Summary and Insights:** This section discussed various cryptography based defense solutions for securing the RPL protocol. It has been observed that the proposed approaches are not sufficient to provide the desired security in 6LoWPAN networks and face several challenges that need to be addressed. For example, the solution proposed in [53] is vulnerable to rank forgery and replay attacks. Similarly, [54, 67, 91, 92] introduce resource overhead (memory, processing), which inhibits their usage in real 6LoWPAN networks. The approach proposed in [93] introduces significant communication overhead. In order to leverage the use of cryptography based solutions, further investigation into IoT constraints is needed. Lightweight cryptographic solutions can also be explored for developing IoT based security solutions. #### V-A2 Trust Based Solutions _Trusted Computing Architecture (TCA):_ In [68], a TCA is proposed for establishing trust and facilitating secure key exchange among nodes using a trusted platform module (TPM). The authors focus on using a low-cost TPM module to incorporate security in resource constrained nodes. The proposed architecture is capable of defending against node tampering, DoS, and routing attacks targeting availability and integrity. The TPM plays a significant role in the architecture, as it is responsible for providing keys among authenticated nodes for establishing secure communication. However, the TPM acts as a single point of failure: if it is tampered with or fails, network performance degrades and security breaches follow. No extensive evaluation or simulation results are presented to validate the effectiveness of TCA. _Secure Parent Selection:_ Iuchi _et al._ [69] proposed a trust based threshold mechanism for securely selecting a legitimate node as the preferred parent and defending against rank attacks. In the proposed mechanism, every node in the network selects its preferred parent under the assumption that an illegitimate node claims a much lower rank than legitimate nodes. All the nodes in the network are capable of finding the illegitimately ranked node by computing the maximum and average rank of their neighbor nodes. A legitimate node then selects its parent by excluding the node that shows a deficient rank, and avoids forwarding packets to illegitimate nodes. The proposed mechanism has two major limitations. First, it may sometimes lead to the creation of unoptimized routes, because legitimate nodes are not selected as parents in some cases. Second, the approach is vulnerable to Sybil and Blackhole attacks. Fig. 6: Taxonomy of defense mechanisms for RPL protocol _Lightweight Trust-Aware RPL:_ Airehrour _et al._ [70] proposed a trust-aware RPL routing protocol to detect Blackhole and Selective forwarding attacks. The primary idea behind the proposed work is that the packet drop rate of malicious nodes is higher than that of non-malicious nodes when an attacker is performing a Blackhole or Selective forwarding attack. This behavior is used to evaluate the trustworthiness of nodes, and the resulting trust values then facilitate optimal routing decisions. In Trust-Aware RPL, all the nodes initially perform normal path selection operations, i.e., computing route quality over different neighbors based on MRHOF.
Trust-Aware RPL shows better performance than MRHOF-RPL in terms of attacks detected, the frequency of node rank changes, throughput, and packet loss. The drawbacks of the proposed protocol are: (1) promiscuous mode operation increases energy consumption; and (2) a legitimate node may begin to drop packets due to unintentional errors, which would make it resemble a blackhole attacker. _SecTrust-RPL:_ In [73], a time based trust aware variant of the RPL protocol known as _SecTrust_-RPL is proposed. The proposed RPL variant incorporates a secure trust system that promotes secure communication and the detection and isolation of malicious nodes performing Rank and Sybil attacks. The trust mechanism defines a way for each node in the network to compute the trustworthiness of its neighbors using direct and recommended trust values. _SecTrust_-RPL incorporates five modules. The trust calculation module is responsible for calculating the trust values of nodes. The trust monitoring module updates the trust values of nodes in a periodic and reactive manner. The trust rating process sorts trust values in descending order. The detection and isolation process selects high-quality routes and detects malicious and misbehaving nodes using trust values, ensuring the CIA properties as well as authenticity. The trust backup and recuperation process takes care of selfish nodes, i.e., nodes which aim to preserve their resources and are considered malicious. _SecTrust_-RPL is compared with MRHOF-RPL, and it is shown that the proposed mechanism performs better in terms of attacks detected, packet loss, throughput, and the frequency of node rank changes. _SecTrust_-RPL requires nodes to operate in promiscuous mode, which consequently leads to heavy energy consumption and a decreased network lifetime. _Metric based RPL Trustworthiness Scheme (MRTS):_ A trust based security scheme named MRTS is proposed in [72] for setting up secure routing paths. It works during RPL topology construction and management by incorporating trustworthiness among nodes. To perform secure operation, MRTS defines a new trust based metric named Extended RPL Node Trustworthiness (ERNT) and a new trust based objective function named Trust Objective Function (TOF). ERNT is incorporated in DIO messages and exchanged with neighbor nodes; it evaluates the trust value of each node and then quantifies the cost of routing paths. TOF defines how nodes use ERNT and constraints to select the preferred parent and compute their own rank, finding the best routing paths while avoiding paths with less trustworthy nodes. MRTS requires a TPM for securing RPL control messages and for performing all the security-related computations. MRTS shows better performance than traditional RPL. However, its main limitations are that it uses a TPM, which introduces a single point of failure in the network and adds extra hardware cost. _Trust based Security System (TIDS):_ Nygaard _et al._ [74] proposed a novel trust-based security system named TIDS for detecting Sinkhole and Selective forwarding attacks. TIDS enables a normal node to monitor and evaluate its neighbors in order to find anomalies in normal RPL operation. The data observed by a node is sent to the root (gateway) using Trust Information (TRU) messages for further analysis. The main functionality of TIDS is based on computing trust values using subjective logic, where opinions are categorized into belief, disbelief, and uncertainty; these trust values are used to analyze the monitored data received from the nodes.
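A minimal sketch of such subjective-logic opinions is given below. The mapping from observed forwarding behavior to (belief, disbelief, uncertainty) shown here is the standard beta-evidence formulation and is only illustrative of the idea; it is not TIDS's exact scheme.

```python
# A minimal, illustrative sketch of subjective-logic opinions as used
# conceptually by trust systems like TIDS: positive/negative forwarding
# evidence is mapped to (belief, disbelief, uncertainty).

def opinion(positive: int, negative: int, prior_weight: float = 2.0):
    """Map observed evidence counts to (belief, disbelief, uncertainty);
    the three components always sum to 1."""
    total = positive + negative + prior_weight
    belief = positive / total
    disbelief = negative / total
    uncertainty = prior_weight / total
    return belief, disbelief, uncertainty

# A neighbor that forwarded 45 packets and dropped 5:
b, d, u = opinion(positive=45, negative=5)
print(f"belief={b:.2f} disbelief={d:.2f} uncertainty={u:.2f}")

# A suspected selective-forwarding node shows the opposite profile:
print("trusted:", opinion(45, 5)[0] > 0.8, "| suspect:", opinion(5, 45)[0] > 0.8)
```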
TIDS is able to detect all the attackers in the network, at the cost of heavy energy consumption by the root node and false positives. TIDS requires approximately \(5\)-\(6.4\) KB of ROM and \(0.7\)-\(1\) KB of RAM. The main advantage of the TIDS scheme is that the normal nodes with the IDS implemented on them consume very little energy while showing an approximately \(100\%\) detection rate. **Summary and Insights:** It is observed that some solutions present in the literature face a single point of failure issue [68, 72]. The solution proposed in [69] is vulnerable to frequent attacks like Sinkhole and Blackhole. Several works [70, 73, 74] require nodes to operate in promiscuous mode, which leads to substantial energy consumption; energy consumption must be considered the most critical metric while designing any security algorithm for RPL. The assumption of static networks is another essential limitation of the works proposed in the literature. These challenges must be addressed before the proposed solutions can be used in real networks. #### V-A3 Threshold Based Solutions _Adaptive Threshold (AT):_ In [60], a mechanism named Adaptive Threshold (AT) is presented for countering DODAG inconsistency attacks in RPL. The default mechanism (fixed threshold) embedded in RPL uses a threshold value of \(20\): after receiving a packet with the 'O' and 'R' flags set, a node drops the packet and resets its trickle timer, but once the number of such packets reaches the threshold limit of \(20\), further packets are still dropped while the trickle timer is no longer reset, in order to limit the effect of an attack. This counter is reset every hour, and in this way RPL counters the DODAG inconsistency attack. However, a smart attacker can send \(20\) malformed packets every hour and degrade the network performance gradually, or use different attack patterns to degrade performance without being detected. The AT mechanism instead considers the current network state and updates the threshold based on the rate at which such packets are received: the threshold value decreases when an attacker sends malformed packets very quickly, and increases when the attacker stops sending malformed packets. AT requires prior calculation of optimal configuration parameter values (i.e., \(\alpha\), \(\beta\), and \(\gamma\)), which are chosen in an arbitrary way, and it does not consider node mobility. _Dynamic Threshold (DT):_ Mayzaud _et al._ [61] proposed an improvement to their previous DODAG inconsistency mitigation mechanism [60], known as Dynamic Threshold (DT). It is a fully dynamic threshold mechanism that takes the dynamic characteristics of the network into account to set a threshold for mitigating the DODAG inconsistency attack efficiently. DT does not require any prior calculation of optimal configuration parameter values, unlike the AT mechanism, because all the required information is gathered from the network characteristics themselves; in particular, it takes into account the convergence time of the network, i.e., the time required by the RPL network to converge. The DT approach avoids unnecessary resetting of the trickle timer, which consequently suppresses extra DIO transmissions. The DT mechanism outperforms the AT mechanism in terms of energy consumption, _PDR_, and end-to-end delay. In addition, the DT mechanism is capable of mitigating the forced blackhole attack efficiently.
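The contrast between RPL's fixed threshold and an adaptive one can be sketched as follows. The update rule below is our own illustration of the idea behind AT/DT; the actual schemes in [60, 61] derive their parameters differently.

```python
# A minimal, illustrative sketch of the adaptive-threshold idea behind
# AT/DT for DODAG inconsistency mitigation. The number of malformed
# 'O'/'R' packets that may still reset the trickle timer shrinks while an
# attack is ongoing and relaxes back toward the default once it stops.
# The update rule and constants are our own, not those of [60, 61].

FIXED_DEFAULT = 20  # RPL's fixed per-hour threshold
T_MIN, T_MAX = 2, FIXED_DEFAULT

def update_threshold(threshold: float, malformed_rate: float) -> float:
    """Shrink the threshold as malformed packets arrive faster (rate in
    packets/minute); recover slowly toward the default when traffic is clean."""
    if malformed_rate > 0:
        return max(T_MIN, threshold / (1.0 + malformed_rate))
    return min(T_MAX, threshold + 1.0)

t = float(FIXED_DEFAULT)
for minute, rate in enumerate([0, 3, 3, 3, 0, 0, 0]):
    t = update_threshold(t, rate)
    print(f"minute {minute}: malformed rate={rate}/min -> threshold={t:.1f}")
```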
_SecRPL:_ Ghaleb _et al._ [75] proposed SecRPL to address the DAO falsification attack. The proposed defense mechanism is based on putting a threshold on the number of DAO packets forwarded for each destination. In SecRPL, each parent node maintains a table that contains a counter specific to every child node in its sub-DODAG. Once the number of DAOs from a child node exceeds the fixed threshold, that child is marked as malicious, and the parent drops any further DAO containing the prefix of that child. To avoid any child being permanently blocked, the counter table is reset on every DIO multicast. SecRPL shows significantly good results in terms of the number of DAOs forwarded, control packet overhead, average power consumption, and upward and downward latency. SecRPL requires the selection of an optimal threshold limit for efficient operation, which adds overhead to the security scheme. **Summary and Insights**: As far as the literature is concerned, only a few works [60, 61, 75] using threshold based solutions are available. Moreover, the proposed solutions address only the DODAG inconsistency, forced blackhole, and DAO falsification attacks, which leaves a big gap to be filled in this field. In addition, the proposed solutions do not consider node mobility, which may hinder the overall system's performance. The key to threshold based solutions lies in the optimal selection of thresholds, i.e., parameters chosen while considering the network environment; this requirement makes such solutions challenging to develop for other routing attacks. The standard RPL parameters can be used in the optimal selection of thresholds for the development of lightweight threshold based defense solutions [51, 52]. ### _Intrusion Detection System (IDS)_ This section discusses various IDS based defense solutions for detecting routing attacks against the RPL protocol. IDS based RPL defense mechanisms are summarized in Table V. #### V-B1 Signature Based IDS _Intrusion Detection System for 6LoWPAN networks:_ Kasinathan _et al._ [77] proposed an IDS to detect DoS attacks in \(6\)LoWPAN networks. The open-source IDS Suricata is used for pattern matching and attack detection. An IDS probe node sniffs all the packet transmissions in the network and transfers the information to Suricata for further analysis and attack detection. To prevent communication overhead, the IDS probe node is connected directly to Suricata over a wired link. In addition, a Frequency Agility Manager (FAM) is incorporated to make the network aware of channel occupancy in real time; it operates when the interference level exceeds a set threshold, changing the operating channel to the best available one and thus providing uninterrupted network operation. No simulation study is provided in support of the IDS's performance and usability. _Compression Header Analyzer Intrusion Detection System (CHA-IDS):_ Napiah _et al._ [76] proposed a centralized IDS named CHA-IDS for detecting HELLO flood, Sinkhole, and Wormhole attacks. It uses compression header data to extract important network features that are used for detecting individual and combined attacks. The proposed IDS uses best-first and greedy stepwise search with correlation-based feature selection to determine the significant features.
Then the selected features are evaluated using six Machine Learning (ML) algorithms (Decision Trees (J48), Logistic Regression (LR), Multi-layer Perceptron (MLP), Naive Bayes (NB), Random Forest (RF), and Support Vector Machine (SVM)), which are used to classify normal and malicious traffic. CHA-IDS outperforms SVELTE and the IDS proposed in [31]. The main limitations of CHA-IDS include high memory and energy consumption. Moreover, it is incapable of identifying the attacker. _Signature-based Intrusion Detection System:_ A framework for a signature-based IDS to detect DIS and version number attacks is proposed in [78]. The proposed IDS requires detection and monitoring modules to be placed on the nodes themselves, as in hybrid detection schemes. In addition, the authors consider two types of extra nodes in the proposed scheme. The first are IDS routers, which carry detection and firewall modules; the second are sensors or IDS detectors, which are responsible for monitoring and sending malicious traffic information to the router nodes. The IDS router checks all passing traffic to decide whether the packet source is malicious or not. The job of the IDS detector is to monitor sensor traffic and calculate the metrics of interest, i.e., the Received Signal Strength Indicator (RSSI), packet drop rate, and packet sending rate. The final decision on classifying a node as malicious or not is taken by the detection module running on the \(6\)BR, based on the data received from each node. The proposed framework is not validated, which is its major limitation. \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline **Reference** & **Defense Mechanism** & **Relevant Attack** & **Limitations** & **Mobility** & **Validation** & **Tool/Motes** & **Performance metrics** \\ \hline Dvir _et al._ [53] & VeRA & Version number and Decreased rank & Vulnerable to rank-replay attack; adds configuration overhead; a child node may select an attacker as its parent. & No & - & - & - \\ \hline Landsmann _et al._ [54] & Enhanced VeRA & Version number and Decreased rank & Vulnerable to rank-replay attack; adds memory overhead; a child node may select an attacker as its parent. & No & - & - & - \\ \hline Perrey _et al._ [91] & TRAIL & Version number, Decreased rank, Rank-replay & Adds memory overhead. & No & Testbed & DES-Testbed/RIOT OS & Routing convergence time, Average message size \\ \hline Seeber _et al._ [68] & Trusted Computing Architecture & Node tampering, DoS, routing attacks targeting availability and integrity & TPM is a single point of failure; adds computational overhead due to cryptographic processing. & No & - & - & - \\ \hline Sehgal _et al._ [60] & Adaptive Threshold & DODAG inconsistency & Requires prior calculation of configuration parameters (optimal values). & No & Simulation & Contiki OS/Cooja & _PDR_, Energy consumption, Control packet overhead \\ \hline Mayzaud _et al._ [61] & Dynamic Threshold & DODAG inconsistency & Increases energy consumption. & No & Simulation & Contiki OS/Cooja & _PDR_, Energy consumption, Control packet overhead \\ \hline Iuchi _et al._ [69] & Secure Parent Selection & Rank & Susceptible to Sybil and Blackhole attacks; may result in longer (unoptimized) paths. & No & Simulation & Contiki OS/Cooja & Total number of child nodes attached to attacker nodes.
_Self Organizing Map Intrusion Detection System (SOMIDS):_ Kfoury _et al._[79] proposed SOMIDS for detecting Sinkhole, Version number, and HELLO flooding attacks. SOMIDS uses Self Organizing Maps (SOMs) to cluster attack and normal traffic; it extracts data from a Pcap file produced by the Cooja simulator and clusters the traffic classes. SOMIDS consists of three major components. The first is an aggregator module responsible for aggregating the data (ICMPv6 code, IPv6 destination, IPv6 source, ICMPv6 DIO version, ICMPv6 DIO rank, timestamp) contained in the captured Pcap file; the traffic data is aggregated into six variables, i.e., the number of DIS, DIO, and DAO messages, the ratio of version number changes, the ratio of rank changes, and the average power. The second is a normalizer, which normalizes the aggregated data. The third is a trainer module responsible for training the SOM. The output of the IDS is a matrix that is converted into a \(2\)D image for better visualization of the clusters. SOMIDS is not evaluated in terms of implementation overhead and does not consider node mobility.

**Summary and Insights:** Some of the proposed approaches [94, 77] rely on outdated signatures (traffic patterns) for classifier training, which makes them ineffective for securing RPL networks. The solutions proposed in [95, 79, 76] use signatures collected from simulated attacks. These approaches show promising results in terms of prominent metrics; however, signatures collected from real networks can be more effective for classifier training. A real-traffic RPL dataset containing traces of common routing attacks still needs to be developed [96, 97]. The signature based IDS proposed in [76] can also be improved in terms of energy consumption.
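Both CHA-IDS and SOMIDS reduce raw traffic to a handful of aggregate features before classification or clustering. The sketch below illustrates this general pipeline with a supervised classifier; the feature layout mirrors the six SOMIDS variables, while the file names, labels, and classifier choice are assumptions for illustration rather than details of either system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per observation window, in the spirit of the SOMIDS variables:
# [num_DIS, num_DIO, num_DAO, version_change_ratio, rank_change_ratio, avg_power]
X = np.loadtxt("rpl_traffic_features.csv", delimiter=",")  # hypothetical dataset
y = np.loadtxt("rpl_traffic_labels.csv")                   # 0 = normal, 1 = attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

As the summary above stresses, the quality of such a detector is bounded by the signatures it is trained on; features extracted from real deployments would be preferable to purely simulated traces.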
#### V-B2 Anomaly Based IDS

_SVELTE:_ Raza _et al._[80] proposed a real-time IDS named SVELTE for \(6\)LoWPAN. The proposed IDS consists of an anomaly based detection engine that uses RPL specifications for detecting spoofed information, Sinkhole, and Selective forwarding attacks. It consists of three centralized modules placed on the \(6\)BR: a Mapper, an Analyzer and Detector, and a Mini-firewall. Every child node sends RPL information to the \(6\)BR for illegitimate traffic filtering. Intrusion detection in SVELTE involves network graph inconsistency detection, node availability detection, and routing graph validation. SVELTE imposes very little memory, computational, and energy overhead on the resource constrained nodes. Moreover, it shows good performance in terms of _PDR_ and control packet overhead. The limitations of SVELTE include the strategic placement of IDS modules, timing inconsistency in rank measurements, which leads to inaccurate topology creation at the \(6\)BR, and a high false positive rate (_FPR_). In addition, SVELTE does not provide defense against coordinated attacks.

_Real Time Intrusion and Wormhole Detection:_ A novel IDS for the detection of the Wormhole attack in IoT is proposed in [31]. It detects the packet relay and encapsulation types of Wormhole attack. The proposed IDS uses the node's location and neighbor information to identify the attack, and the received signal strength indicator (RSSI) to identify attacker nodes. A hybrid deployment strategy on a static network is considered for placing the IDS modules, where a centralized module is placed on the \(6\)BR and distributed modules are placed on the resource constrained nodes. The distributed modules are responsible for sending and monitoring RSSI values, sending neighbor information to the \(6\)BR, and packet forwarding. The centralized module collects RSSI values, computes each node's distance from its RSSI value, validates neighbors from the collected information, and detects the attack along with its location. The main drawback of the proposed IDS is that it places a heavy communication and computational burden on the resource constrained nodes.

_Distributed Monitoring Architecture:_ Mayzaud _et al._[81] proposed a distributed monitoring architecture for detecting DODAG inconsistency attacks. The proposed architecture makes use of RPL's multi-instance feature and dedicated monitoring nodes to facilitate energy efficient, passive observation of network events. Two types of nodes are considered, i.e., regular (monitored) and monitoring nodes. The multi-instance feature of RPL is used to create a regular network (of regular nodes) and a monitoring network (of monitoring nodes). The monitoring nodes contain local anomaly detection modules that analyze the collected data and detect possible attacks in a distributed manner. The main limitations of the proposed architecture are that it assumes a single attacker and fails when multiple attackers operate collaboratively; monitoring nodes need to operate in promiscuous mode for anomaly detection; it depends on the coverage of the regular nodes by the monitoring nodes (strategic placement); it relies on higher-order devices for monitoring, which adds cost overhead; and it relies on purely local detection.

_Extension to Distributed Monitoring Architecture:_ Mayzaud _et al._ extended their previously proposed approach [81] in [55] to detect Version number attacks.
The authors observed that an incremented version number is propagated through the entire graph, so a monitoring node cannot decide by itself whether this is the result of an attack; monitoring nodes must therefore share monitoring information to identify the malicious node more efficiently. Thus, they extended the distributed monitoring architecture such that monitoring nodes can collaborate using a multi-instance network and facilitate global detection. Only the single-attacker case is assumed, and mobility is not considered in this defense architecture. An extension to [55] is presented in [82]. In this work, detection and localization algorithms are presented. The _"LOCAL_ASSESSMENT"_ algorithm is deployed on all monitoring nodes except the root and allows monitoring nodes to report to the root the sender of an incremented version number in their neighborhood. The _"DISTRIBUTED_DETECTION"_ algorithm is deployed on the sink to detect the attack and gather all monitoring node information into tables. The _"LOCALIZATION"_ algorithm is deployed on the sink node and identifies the attacker by analyzing the collected information. This framework inherits the limitations of Mayzaud _et al._[81].

_Extended SVELTE based on ETX metric:_ An extension to SVELTE is proposed in [63]. In addition to the SVELTE IDS modules, an extra intrusion detection module that uses the ETX metric is incorporated for the detection of ETX manipulation attacks in ETX metric based RPL networks. The authors also proposed an intrusion detection method that uses geographical parameters (node location and transmission limits) to handle the case where both the rank and ETX based detection methods fail. The main idea behind the ETX based intrusion detection method is that the ETX value of the parent node must be lower than that of its child nodes; if any node's ETX value is found to be inappropriate or unusual, the node is reported as malicious. The main advantage of the proposed solution is that the ETX based IDS can defend against both ETX and rank based attacks, while the geographical parameter based method can locate nodes and test their authenticity. The proposed IDS solutions consume less power when nodes operate in duty cycling mode and require only \(5,570\) and \(6\) Bytes of RAM and ROM, respectively. A high true positive rate (_TPR_) is achieved when the two proposed solutions are combined. The proposed solution does not consider node mobility in the network.
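The parent-child ETX ordering used by this extension lends itself to a very small consistency check. Below is a minimal sketch of that idea; the data structures are hypothetical stand-ins for state the 6BR would assemble from received routing information.

```python
def find_etx_suspects(parent_of: dict, etx: dict) -> set:
    """Flag nodes whose advertised path ETX violates the parent-child ordering.

    In an ETX based DODAG the path ETX accumulates toward the leaves, so a
    node's ETX should not be lower than its preferred parent's. `parent_of`
    maps node -> preferred parent (None for the root) and `etx` maps
    node -> advertised path ETX; both are illustrative inputs.
    """
    suspects = set()
    for node, parent in parent_of.items():
        if parent is not None and etx[node] < etx[parent]:
            suspects.add(node)  # advertising an impossibly good path
    return suspects

# Example: node "c" advertises a path ETX below its parent's.
parents = {"root": None, "a": "root", "b": "a", "c": "b"}
etx_values = {"root": 0.0, "a": 1.2, "b": 2.5, "c": 1.1}
print(find_etx_suspects(parents, etx_values))  # {'c'}
```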
_Hybrid IDS based on the Sequential Probability Ratio Test with an Adaptive Threshold:_ A hybrid IDS that combines the Sequential Probability Ratio Test (SPRT) with an adaptive threshold to detect the Selective forwarding attack is proposed in [83]. It uses two types of modules: a centralized module deployed on the gateway node and distributed modules deployed on the resource constrained nodes. The proposed IDS involves three steps, i.e., data gathering, data analysis, and the decision and elimination of compromised nodes. In the data gathering step, each routing node collects its neighbors' information, stores it in the form of a table, and sends it to the centralized node using HELLO messages. The data analysis step involves computing the number of dropped packets and the packet drop probability for each node using the data gathered from the HELLO messages. The decision step is responsible for detecting malicious nodes and minimizing the _FAR_ by utilizing the SPRT. The elimination of the compromised node step involves informing legitimate nodes about the compromised nodes by initiating a global repair and sending the compromised node's identifier in fresh DIO messages to all other legitimate nodes in the network. The proposed IDS achieves a \(100\%\) detection rate. However, the communication overhead of the network increases with increasing node mobility.

**Summary and Insights:** Many of the anomaly based IDS solutions present in the literature show acceptable performance, which favors their use in IoT applications. However, the proposed solutions achieve high performance (accuracy, _TPR_, _FPR_, etc.) while imposing an additional cost on the nodes in terms of communication, computation, memory, and energy consumption. The solutions proposed in [84, 83, 89, 31, 55, 81, 98] impose an extra network deployment cost, which is undesirable for resource constrained networks. Similarly, the security approach proposed in [80] requires the strategic placement of IDS monitoring modules, which adds implementation complexity to the network. Moreover, the proposed anomaly based IDS remain vulnerable to coordinated attacks. These critical challenges must be addressed for the advanced development of anomaly based IDS for IoT.

#### V-B3 Specification Based IDS

_Intrusion detection and response system for Internet of things (InDReS):_ In [85], a distributed IDS named InDReS is proposed to detect the Sinkhole attack in RPL. The proposed IDS is based on a cluster tree topology, where the cluster head acts as a monitoring node that observes the packet drop count of its adjacent nodes. The monitoring nodes compute the rank of every adjacent node and compare it with a threshold value to find malicious nodes. InDReS is implemented on NS-\(2\), and its performance is compared with that of INTI. The results show that the proposed IDS performs well compared to INTI in terms of packet drop ratio, _PDR_, control packet overhead, and average energy consumption. The limitations of InDReS include: only homogeneous nodes are considered, network dynamics are not considered, and it may fail if the leader node itself gets compromised.

_Specification-Based IDS for Detecting Topology Attacks:_ Le _et al._ in their previous work [57] proposed a specification based IDS architecture that lacked implementation and performance analysis. In [58], the authors extended the previous architecture and evaluated it in terms of prominent evaluation metrics. They proposed a specification based IDS built on an Extended Finite State Machine (EFSM) generated through a semi-automated profiling technique. Firstly, the EFSM is created from the RPL specification using an Integer Linear Programming (ILP) technique to define stable states and the transitions among them. Secondly, knowledge from the RPL profile is translated into more concrete states and transitions, utilizing trace files generated from normal RPL operation in the Cooja simulator. This specification defines all the legitimate states and transitions that a node must follow while operating normally. The EFSM is implemented as a set of rules on intrusion detection agents for detecting various attacks, including Rank, Local Repair, Neighbor, DIS, and Sinkhole. The proposed IDS is shown to achieve a _TPR_ of \(100\%\) with an _FPR_ of up to \(6.78\%\).
The proposed IDS introduces communication overhead, requires a good network trace for the creation of an effective specification, and loses accuracy when it runs for a long time.

_RPL-Based Wormhole Detection:_ Lai _et al._[87] proposed a distributed wormhole detection method that uses rank information to estimate a node's relative distance from the root. The proposed method uses the hop count metric for rank calculation. To detect malicious nodes, the detection method checks for nodes with unreasonable rank values. It defines _Rank_Threshold_ and _Rank_Diff_ attributes for the detection of illegitimate DIO messages. _Rank_Threshold_ is defined as the difference between the rank values of the parent and the node itself, whereas _Rank_Diff_ is the difference between the rank values of the source node and the node itself. A DIO message is considered abnormal when _Rank_Diff_ exceeds _Rank_Threshold_ (a minimal sketch of this check appears below). The proposed wormhole detection method achieves \(100\%\) precision, recall, and accuracy. The main advantages of this approach are its easy implementation and the absence of additional requirements for Wormhole attack detection. However, node mobility is not considered, which can severely affect the detection results. In addition, critical parameters like _PDR_, end-to-end delay, and energy consumption are not analyzed.

_Specification based IDS based on Finite State Machine:_ In [57], a specification based IDS is proposed for detecting rank and local repair attacks. The proposed IDS uses a finite state machine (FSM) for monitoring a node's state, i.e., normal or malicious. A backbone architecture is used for placing the monitoring nodes containing the FSM modules. Monitoring nodes sniff the transmissions of their neighbors, including parent and child nodes. Parameters such as the node ID, the preferred parent with its respective rank, and state changes within a specific period are monitored and extracted from sniffed DIO messages in order to analyze each node's behavior. Monitoring nodes collaborate and share information for detecting attacker nodes. The FSM specifies normal and malicious states: it encodes the strict rank rule that nodes must follow, i.e., the parent-child relationship, and an acceptable threshold for the number of times a topology can be set up or updated. Any deviation from the specified rules and threshold changes the node's state from normal to suspicious and reveals the possible attacker node.

_IDS to Defend Against Routing Choice Intrusion:_ An IDS to defend against the Routing choice intrusion (ETX metric) is proposed in [50]. The proposed IDS is based on a specification methodology that uses a stand-alone architecture with distributed monitoring nodes. The authors consider defense only against the single-intruder case. The proposed IDS requires monitoring nodes containing an FSM with normal and malicious states. Network behaviors are matched against the FSM states, and any deviation from the normal state leads to attack detection. A Routing choice intrusion is detected when a malicious node multicasts DIOs with a lower ETX value, causing the number of its child nodes to fluctuate beyond a set threshold; such a node is marked as an attacker node. The authors make certain assumptions, like secure network initialization, homogeneous nodes, monitoring nodes with more resources, and a static environment, which limit the practicality of the proposed IDS.
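The rank-consistency test of [87] can be summarized in a few lines. The sketch below is an illustrative reading of that check using absolute rank differences; the exact sign conventions and units of [87] may differ.

```python
def dio_is_abnormal(own_rank: int, parent_rank: int, dio_source_rank: int) -> bool:
    """Illustrative Rank_Diff / Rank_Threshold test for an incoming DIO.

    rank_threshold: rank gap between the node and its preferred parent,
    i.e., the largest gap expected from a legitimate one-hop neighbor.
    rank_diff: rank gap between the node and the DIO's source. A DIO
    relayed through a wormhole typically originates from a node much
    closer to the root, so rank_diff exceeds rank_threshold.
    """
    rank_threshold = abs(own_rank - parent_rank)
    rank_diff = abs(own_rank - dio_source_rank)
    return rank_diff > rank_threshold

# A node at rank 768 with a parent at rank 512 hears a DIO from rank 128:
print(dio_is_abnormal(768, 512, 128))  # True: flagged as a possible wormhole
```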
_Sink-based Intrusion Detection System (SBIDS):_ In [86], a centralized specification based IDS known as SBIDS is proposed to address rank attacks in RPL based IoT networks. SBIDS uses the information contained in the DAO messages received from the child nodes in the sink's sub-DODAG. SBIDS utilizes RPL parameters, including the node's current rank (NCR), the node's parent rank (NPR), the node's previous rank (NPVR), and the parent switching threshold (PST), for deciding whether a node is malicious. SBIDS achieves \(100\%\) accuracy in the case of a static network; the accuracy decreases in the presence of mobile nodes, so SBIDS shows better results for static networks than for mobile ones. SBIDS adds communication overhead to the RPL protocol, as it requires an extra \(48\) bits of information to be added by the nodes in the DAO packets they send, and the average power consumption of the nodes increases.

**Summary and Insights:** The effectiveness of specification based IDS solutions can be observed from their performance. The key challenge in the development of specification based IDS is the availability of the quality traffic traces required for generating adequate specifications [58]. Several approaches [86, 87] have not performed a power consumption analysis, so there exists an open research gap to be considered in future research. Moreover, the integration of mobility support into the proposed solutions is a challenging task and needs further investigation.

#### V-B4 Hybrid IDS

_Robust Intrusion Detection System (RIDES):_ Amin _et al._[88] proposed a novel IDS named RIDES for detecting DoS attacks in IP based WSNs. It is a hybrid of signature and anomaly based IDS. The signature based intrusion detection component uses distributed pattern matching with Bloom filters to match signature codes. To reduce the overhead of long signature codes, a coding scheme is used that converts signatures into short attack identifiers. The anomaly based intrusion detection component uses Cumulative Sum (CUSUM) control charts with upper and lower threshold limits to detect anomalies in the network pattern. A distributed approach is used to place the intrusion detection components, decreasing the communication, memory, and computational overhead on the nodes. The main limitation of this work is the inter-packet delay, which leads to delayed intrusion detection by RIDES. In addition, the energy consumption of the resource constrained nodes is not studied.

_Hybrid of Anomaly and Specification based on optimum-path forest clustering:_ A novel real-time hybrid IDS framework is proposed in [89] to detect Sinkhole, Selective forwarding, and Wormhole attacks. Specification based IDS modules are deployed on the router nodes, which analyze their child nodes and forward their local results to the gateway node through data packets. The gateway node is equipped with an anomaly based IDS module that employs the unsupervised Optimum-Path Forest Clustering (OPFC) algorithm to build clusters from the incoming data packets. The simulation results show that the proposed IDS framework achieves a maximum _TPR_ of \(96.02\%\) with a \(2.08\%\) _FPR_. The main features of the proposed hybrid IDS include high scalability and attacker identification. Several drawbacks are associated with this hybrid IDS: it does not consider the energy constrained nature of the nodes, assumes one-way communication (node to gateway), and considers only a static network.
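The CUSUM control chart at the heart of RIDES's anomaly component is simple enough to sketch in full. The following is a generic two-sided CUSUM over a per-interval traffic statistic; all parameter values are illustrative and are not the ones used in [88].

```python
def cusum_alerts(samples, target, drift, threshold):
    """Two-sided CUSUM control chart over a traffic statistic.

    Accumulates positive (s_hi) and negative (s_lo) deviations of each
    sample from the expected value `target`, with `drift` as per-sample
    slack; an alarm is raised whenever either sum crosses `threshold`.
    Returns the indices at which alarms fire.
    """
    s_hi = s_lo = 0.0
    alerts = []
    for i, x in enumerate(samples):
        s_hi = max(0.0, s_hi + (x - target) - drift)
        s_lo = max(0.0, s_lo + (target - x) - drift)
        if s_hi > threshold or s_lo > threshold:
            alerts.append(i)
            s_hi = s_lo = 0.0  # restart the chart after an alarm
    return alerts

# A flood of packets around the sixth interval triggers the chart:
print(cusum_alerts([10, 11, 9, 10, 10, 30, 32, 31, 10],
                   target=10, drift=2, threshold=20))  # [6]
```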
**Summary and Insights:** Similar to signature and anomaly based IDS, hybrid IDS solutions also face several challenges that need to be addressed. Delayed attack detection makes IDS solutions inefficient when deployed in real networks; the IDS proposed in [88], for instance, is affected by an inter-packet delay that causes delayed attack detection. Such issues need to be carefully addressed while designing IDS for IoT applications. The hybrid IDS proposed by Bostani _et al._[89] utilized a MapReduce architecture to manage a large amount of data from the motes and perform attack detection efficiently; other such algorithms available in the literature need to be explored for building scalable and effective IDS solutions for IoT. Table VI presents a comparative study of the discussed security solutions (Secure Protocol and IDS) based on different evaluation metrics. The performance is compared based on the maximum improvements achieved in percentages (%) and the maximum or minimum values (val) achieved.

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}
\hline **Reference** & **Defense Mechanism** & **Type** & **Placement strategy** & **Relevant Attack** & **Limitations** & **Mobility** & **Validation** & **Tools/Motes** & **Performance metrics** \\
\hline Amin _et al._[88] & RIDES & Hybrid & Distributed & DoS & Inter-packet delay affects the detection time. & No & Simulation & ns-2 & TPR, FPR, ROC \\
\hline Le _et al._[57] & Specification based IDS & Specification & Distributed & Rank, Local repair & Lacks implementation and performance analysis. & No & - & - & - \\
\hline Raza _et al._[80] & SVELTE & Anomaly & Hybrid & Sinkhole, Selective forwarding, spoofed or altered information & Synchronization issues; requires strategic placement of IDS modules; high FPR; vulnerable to coordinated attacks. & No & Simulation & Contiki OS/Cooja & TPR, Energy consumption \\
\hline Kasinathan _et al._[94] & DoS detection IDS architecture & Signature & Centralized & DoS & The centralized IDS architecture makes it difficult to detect insider attacks and introduces communication overhead on resource constrained nodes. & No & Testbed & PenTest & TP \\
\hline Kasinathan _et al._[77] & Intrusion Detection System for 6LoWPAN networks & Signature & Centralized & DoS & The centralized IDS architecture makes it difficult to detect insider attacks and introduces communication overhead on resource constrained nodes. & No & Testbed & PenTest/Contiki OS & - \\
\hline Zhang _et al._[50] & IDS to defend against Routing choice intrusion & Specification & Distributed & Routing choice intrusion & Assumes secure network initialization and homogeneous devices; monitoring nodes need to operate in promiscuous mode. & No & Simulation & Contiki OS & - \\
\hline Pongle _et al._[31] & Real Time Intrusion Detection System & Anomaly & Hybrid & Wormhole & Introduces communication and computational overhead. & No & Simulation & Contiki OS & TPR, Energy consumption, Control packet overhead \\
\hline Mayzaud _et al._[81] & Distributed Monitoring Architecture & Anomaly & Distributed & DODAG inconsistency & Assumes a single attacker and fails when multiple attackers collaborate; monitoring nodes must operate in promiscuous mode; depends on the coverage of regular nodes by monitoring nodes (strategic placement); relies on higher-order monitoring devices, which adds cost overhead; relies on local detection. & No & Simulation & Contiki OS/Cooja & FPR, Energy consumption \\
\hline Mayzaud _et al._[55] & Distributed Monitoring Architecture & Anomaly & Hybrid & Version number & Assumes a single attacker; mobility is not considered. & No & Simulation & Contiki OS/Cooja & FPR \\
\hline Mayzaud _et al._[82] & Distributed Monitoring Architecture & Anomaly & Hybrid & Version number & Monitoring nodes must operate in promiscuous mode, which adds cost overhead; depends on the coverage of regular nodes by the monitoring nodes (strategic placement). & No & Simulation & Cooja & FPR \\
\hline Surendar _et al._[85] & InDReS & Specification & Distributed & Sinkhole & Considers only homogeneous nodes and no network dynamics; may fail if the leader node itself gets compromised. & No & Simulation & ns-2 & Packet drop ratio, PDR, Throughput, Energy consumption, Control packet overhead \\
\hline Le _et al._[58] & Specification based IDS & Specification & Hybrid & Rank, Sinkhole, Local repair, Neighbor, DIS & Introduces communication overhead; requires a good network trace for the creation of an effective specification; loses accuracy over long runtimes. & No & Simulation & - & TPR, FPR \\
\hline Lai _et al._[87] & RPL-Based Wormhole Detection & Specification & Distributed & Wormhole & Node mobility is not considered, which can severely affect detection results; critical parameters like PDR, end-to-end delay, and energy consumption are not analyzed. & No & Simulation & - & Precision, Recall and Accuracy \\
\hline Shreenivas _et al._[63] & Extended SVELTE based on ETX metric & Anomaly & Hybrid & ETX manipulation & Does not consider mobility; parameters like end-to-end delay and PDR are not analyzed. & No & Simulation & Contiki OS/Cooja & Average power consumption, TPR \\
\hline Chen _et al._[99] & Intrusion Detection System for Wormhole and Flooding Attacks & Anomaly & - & Wormhole, Flooding & Overhead of maintaining a blacklist in large-scale networks affects overall network performance; placement strategy for IDS modules is not discussed. & No & Simulation & - & Precision, Recall, Accuracy and Miss rate \\
\hline Ahsan _et al._[100] & ABR-SAR based IDS for Wormhole detection & Anomaly & Hybrid & Wormhole & Increases implementation complexity; strategic placement of SANs is needed so that every node is in range of at least one SAN. & No & Simulation & Contiki OS/Cooja & Detection rate, Average power consumption \\
\hline Gara _et al._[83] & Hybrid IDS based on the Sequential Probability Ratio Test with a Threshold & Anomaly & Hybrid & Selective forwarding & Exchange of HELLO messages increases network overhead. & Yes & Simulation & Contiki OS/Cooja & Detection rate, Control packet overhead \\
\hline Gara _et al._[84] & Hybrid IDS based on the Sequential Probability Ratio Test with an Adaptive Threshold & Anomaly & Hybrid & Selective forwarding and Clone ID & Exchange of HELLO messages increases network overhead. & Yes & Simulation & Contiki OS/Cooja & Detection rate, Control packet overhead \\
\hline Napiah _et al._[76] & Compression Header Analyzer Intrusion Detection System (CHA-IDS) & Signature & Centralized & HELLO flooding, Sinkhole and Wormhole & High memory and energy consumption; cannot identify the attacker. & No & Simulation & Contiki OS/Cooja/Weka & TPR, FPR, Accuracy, Energy consumption \\
\hline Bostani _et al._[89] & Hybrid of Anomaly and Specification based IDS & Hybrid & Distributed & Sinkhole and Selective forwarding & Assumes one-way communication; energy overhead analysis is not done. & No & Simulation & MATLAB & TPR, FPR, Accuracy \\
\hline Shafique _et al._[86] & SBIDS & Specification & Centralized & Rank & Introduces communication overhead and increases average power consumption. & Yes & Simulation & Contiki OS/Cooja & TPR, FPR, Accuracy, Average power consumption \\
\hline Ioulianou _et al._[78] & Framework of a Signature-based IDS & Signature & Hybrid & HELLO flooding and Version number & No validation is performed in support of the framework. & No & - & - & - \\
\hline Kfoury _et al._[79] & SOMIDS & Signature & Centralized & HELLO flooding, Sinkhole, and Version number & No evaluation in terms of prominent performance metrics; energy consumption of the 6BR is not studied. & No & Simulation & Contiki OS/Cooja & - \\
\hline Shukla _et al._[95] & ML-IDS (KMI-IDS, DT-IDS and Hybrid-IDS) & Signature & Centralized & Wormhole & FP value is not reported; energy consumption and deployment strategy are not discussed. & No & Simulation & Contiki OS/Cooja & - \\
\hline
\end{tabular}
\end{table} Table V: Summary of Intrusion Detection System based defense mechanisms

## VI Cross-layered security solutions for RPL

RPL security is not restricted to network layer specific defense solutions. The IEEE \(802.15.4\) MAC layer implements several features to provide security services such as confidentiality, integrity, and replay protection. Data confidentiality is achieved through symmetric key cryptography based on the Advanced Encryption Standard in Counter with CBC-MAC mode (AES-CCM), message integrity through a Message Authentication Code (MAC), and replay protection through monotonically increasing sequence numbers [62, 101, 102]. The IEEE \(802.15.4\) MAC layer defines eight different security levels, which can be chosen as per the security requirements of the application. Oliveira _et al._[103] proposed a network access control (NAC) security framework for 6LoWPAN networks.
The proposed framework aims to control the access of nodes to the existing network using prior administrative authorization, and later applies security compliance checks on the authorized nodes for security management. The security mechanism of the framework is capable of defending the network from unknown attacks. The major limitations of the NAC security framework include the requirement of a lightweight secure neighbor discovery for LLNs, a secure reprogramming mechanism, and a message authentication mechanism for implementing the proposed framework in a real network; the resource constrained nature of LLN nodes may make some of these requirements hard to meet. Moreover, the proposed framework is not implemented and analyzed for validation. Further, the authors extended their previous work [103] and proposed a network admission control solution in [104, 105]. The proposed solution has three main tasks, i.e., node detection and authentication, node authorization, and data filtering. The main limitations of the proposed solution are: (1) it inherits attacks from the neighbor discovery and RPL protocols; (2) it uses symmetric encryption, which increases the resource consumption of nodes. The authors suggested using data filtering on RPL control messages and elliptic curve mechanisms to minimize the resource consumption of nodes.

## VII Open issues, research challenges and future directions

In this section, we discuss some open issues and research challenges that need to be studied and addressed.

_Security Against Newly Developed Routing Attacks_: One of the most concerning issues in IoT security is defense against newly developed attacks. DIO suppression [62], Routing choice intrusion [50], and ETX manipulation [63] are three such attacks, which silently degrade the performance of an RPL network. Many other attacks specific to RPL are yet to be found and will require robust defense mechanisms. Very few efforts have been made toward the development of defense mechanisms against such attacks; hence, several defense techniques against newly discovered attacks need to be proposed.

_Scalability_: Most of the existing defense solutions have been tested on small network scenarios, but in the practical world, IoT is enabled by large networks of heterogeneous, resource constrained nodes [50, 53, 87, 93]. The performance of existing solutions may degrade in the case of large networks, which leaves IoT applications open to attackers. In addition, most critical IoT applications require minimal delay in information forwarding; hence, the demand for fast-reacting and lightweight defense solutions that carry out seamless network operations is increasing. These solutions must not degrade the QoS of the network while supporting high scalability, so research can be carried out toward the development of highly scalable, lightweight defense solutions.

_Mobility_: Lamaazi _et al._[106] showed that the performance of RPL is severely influenced by mobile nodes. The standard specification of RPL [3] does not define any mechanism to support mobility; thus, the overall network performance degrades in the presence of mobile nodes. Some types of IoT nodes have dynamic characteristics (mobility), which lead to an increase in the number of link disconnections, collisions, and packet losses. When these mobile nodes perform malicious activities, the network performance degrades drastically. This raises a number of problems that need to be addressed for securing RPL networks.
In [107, 108, 109], the impact of the Version number and Sybil attacks, respectively, under mobility is analyzed; however, the impact of other attacks on RPL under mobility still needs to be studied. Most of the existing secure protocol and IDS based defense solutions for RPL consider only a static environment and may not be applicable to a mobile environment.

_Cryptography Challenges_: Key management is one of the significant challenges for resource constrained networks and requires attention. Several defense solutions [53, 54, 67, 92, 93] use cryptography techniques like Hash Chain Authentication, Merkle Tree Authentication, and Dynamic Keying, which impose computational, memory, and energy overhead on resource constrained devices. These overheads affect node lifetime, which is an essential criterion for critical IoT applications, e.g., industrial, forest, and landslide monitoring. The development of lightweight cryptography based security solutions for RPL that are suitable for resource constrained devices is still a big challenge and needs to be addressed.

_Resource Limitations for Machine Learning_: Utilizing ML for the development of RPL specific security solutions is still a big task because of resource constraints. ML has proven effective in securing various wireless and wired networks with abundant resources; thus, ML algorithms need to be customized in order to be used in resource constrained IoT. Efforts to address this challenge will lead to the development of lightweight signature and anomaly based IDS solutions, which may be very useful in providing quick detection and facilitating fast mitigation procedures.

_Issues with Trust Based Secure RPL Protocols_: The defense solutions proposed in [70, 73] require every node in the network to operate in promiscuous mode in order to overhear neighbor packet transmissions. Such requirements make these solutions unsuitable for resource constrained IoT nodes; thus, improvements to existing trust based solutions that do not rely on such strict requirements must be carried out.

_Hardware Security_: Node tampering is one of the widely used methods for compromising a node and reprogramming it to perform malicious activities [49] in the network. All insider attacks are performed by compromising a legitimate node that is already part of the IoT network. An attacker can reprogram a node with malicious functions, like decreasing or increasing its rank, and a node can be reprogrammed in such a way that it skips the rank-checking function. Moreover, node tampering may lead to shared secret keys being exposed. Thus, the development of tamper-proof node designs is an open research area; it would affect many factors involved in IoT security, most importantly the prevention of insider attacks. Some authors have suggested using a TPM [68, 72] for securing IoT devices against insider attacks; however, a TPM adds extra cost to IoT networks and may be infeasible for some IoT applications.

_Network Security Monitoring over Encrypted Traffic:_ The rapid growth in encrypted traffic is creating challenges for security monitoring and intrusion detection. Encryption is used by digital business organizations as a primary tool for securing information; it not only brings security to businesses but also helps attackers evade detection [110]. IoT specific IDS solutions present in the literature are developed under the assumption of non-encrypted traffic.
However, in the present scenario, IoT applications are using encryption due to the availability of resource-rich hardware. Hence, this issue needs to be considered while developing IDS for current IoT applications. Encrypted Traffic Analytics (ETA) is one possible solution that can be studied to address this issue.

### _Potential Areas for Future Research_

In addition to the previously discussed issues and challenges, we list potential research areas for upcoming researchers in this field.

_Moving target IPv6 defense_: By continually changing the IPv6 address of a device, attacks including eavesdropping, denial-of-service, and man-in-the-middle attacks can be defended against. Moving target IPv6 defense mechanisms provide such a capability to devices. Lightweight moving target based defense mechanisms for securing resource constrained devices against targeted attacks can be explored in depth. Research on achieving resilience using temporary-private IPv6 addresses [111] can also be carried out.

_Collaborative IDS_: These types of IDS leverage collaboration among sensor nodes and the \(6\)BR for efficient and quick detection of attackers. Very few research works in the literature focus on the development of collaborative IDS, and the topic can be explored further.

_Defense against coordinated attacks_: In the present scenario, attackers target IoT networks using coordinated attack strategies. These attacks severely degrade the network's performance without being detected, and popular IDS like SVELTE [80] are vulnerable to them. Thus, efficient attack detection and mitigation solutions to defend RPL against coordinated routing attacks need to be developed.

_Active Learning_: Data insufficiency is one of the significant problems for ML-based IDS. This problem can be addressed by active learning, which optimizes model learning during the training phase. This research area has recently gained the attention of security researchers and needs more in-depth study to leverage its use in the development of IoT based IDSs.

_Encrypted Traffic Analytics_: ETA utilizes network traffic information that is independent of protocol details, e.g., the lengths and arrival times of flows. These details can be used for the security monitoring of networks irrespective of whether the traffic is encrypted. ETA is an emerging topic in the field of network security and can be applied to IoT security as well.

_Key management_: Most IoT applications involve unattended device operation in an untrusted environment, where nodes may quickly become the target of attackers. In the secure mode of RPL, the nodes are pre-loaded with security keys, which can be considered a significant security vulnerability due to a single point of failure. The development of scalable and efficient key management mechanisms covering key generation, management, and storage is a growing research area in RPL security; the existing WSN based key management solutions present in the literature can be improved and applied to RPL.

_Energy efficient cryptography_: Traditional cryptography algorithms are capable of achieving a high level of security. However, these algorithms are computation-intensive and hence consume many resources. Such algorithms cannot be directly used in IoT applications because energy resources are limited.
Thus, the development of energy-efficient cryptography algorithms that achieve the required level of security with minimum energy consumption is an essential concern for IoT security in the present scenario.

_Security of IPv6 over the TSCH mode of IEEE \(802.15.4e\) (6TiSCH) networks_: Recently, the 6TiSCH protocol [112] has been standardized to attain low-power, scalable, and highly reliable operation in industrial applications. 6TiSCH uses time-slotted channel hopping (TSCH) MAC with IPv6 addressing to achieve industrial-grade performance, and it is integrated with the 6LoWPAN, RPL, and CoAP protocols. One of the important considerations of 6TiSCH is the requirement of node-to-node synchronization to prevent synchronization loops in the network. Attacks particular to RPL may disrupt node-to-node synchronization, which decreases throughput and increases communication latency. Research on the security of the RPL and 6TiSCH combination is still in its early stages and is a potential research area for security researchers.

_Addressing RPL specific flooding attacks_: There is no efficient and suitable solution specially designed for defending against flooding attacks on the RPL protocol [30]. To defend against the DIS attack, RPL parameters can be used for setting safety thresholds in the RPL protocol. For example, the DIS interval can be used to block neighbors who send DIS messages very frequently, i.e., whose DIS messages are received before the expiry of the DIS interval (a minimal sketch of such a rate limiter is given at the end of this section). Outlier Detection (OD) methods can be used to detect neighbors (attackers) with abnormal behavior, and DIS and DIO flooding attacks can be detected using OD based IDS. The main advantage of using OD is that these methods impose significantly less overhead on resource constrained nodes.

_Security solutions for dynamic networks_: To provide RPL with the ability to work efficiently in a dynamic network (i.e., a mobility scenario), many enhancements have been proposed in the literature, including EMA-RPL, MoMoRo, mRPL, Co-RPL, and ME-RPL. Most of the existing RPL security solutions, like SVELTE, SecRPL, SecTrust-RPL, and SRPL, assume a static network topology and may not be suitable for dynamic scenarios. However, at present, there are many use cases in which RPL is deployed in dynamic networks. Thus, the existing solutions must be improved to make them suitable for dynamic networks, and this requirement must also be fulfilled by defense solutions developed in the future.

_Fog computing for RPL security_: The resource constrained nature of LLNs limits the usage of existing state-of-the-art security mechanisms. However, this limitation may be handled by currently emerging computing paradigms. One such paradigm is Fog computing, which can be leveraged for securing IoT applications. Developing security solutions based on the combination of Edge and Fog computing, RPL, and 6LoWPAN is a potential research area. The resource constrained nature of LLN nodes must also be taken care of beforehand, as they demand low-complexity authentication and low-message-overhead security solutions.
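As referenced above under RPL specific flooding attacks, the DIS-interval idea amounts to a small per-neighbor rate limiter. The sketch below illustrates it; the class, parameter values, and neighbor identifiers are all hypothetical.

```python
class DisRateLimiter:
    """Illustrative DIS-interval based flood suppression for an RPL node."""

    def __init__(self, dis_interval: float, max_violations: int = 3):
        self.dis_interval = dis_interval    # minimum legitimate gap between DIS messages
        self.max_violations = max_violations
        self.last_seen = {}                 # neighbor -> time of the last DIS
        self.violations = {}                # neighbor -> count of early DIS messages
        self.blocked = set()

    def on_dis(self, neighbor: str, now: float) -> bool:
        """Return True if the DIS should be processed (e.g., answered with a DIO)."""
        if neighbor in self.blocked:
            return False
        last = self.last_seen.get(neighbor)
        self.last_seen[neighbor] = now
        if last is not None and now - last < self.dis_interval:
            # DIS arrived before the interval expired: count a violation.
            self.violations[neighbor] = self.violations.get(neighbor, 0) + 1
            if self.violations[neighbor] >= self.max_violations:
                self.blocked.add(neighbor)
            return False
        return True
```

Because only a timestamp and a counter are kept per neighbor, such a check imposes little overhead on resource constrained nodes, in line with the motivation for OD based methods above.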
## VIII Conclusion

The self-organization, self-healing, global connectivity, resource constrained, and open nature of IoT make it the best choice for the development of applications that make human life easier. However, these characteristics also expose IoT to attackers targeting users' security and privacy. The network layer is one of the favorite targets of attackers in the case of wireless networks, and because most IoT devices communicate over a wireless medium, IoT is especially prone to attackers. To support efficient routing in LLNs, the RPL protocol has been standardized. The RPL protocol is vulnerable to different attacks, which include attacks inherited from WSNs and attacks specific to RPL. In this paper, we presented an exhaustive study of the various attacks on, and defense solutions for, the RPL protocol. First, we discussed a taxonomy of attacks on RPL in which attacks are classified based on their primary targets, including resources, topology, and traffic. Then, a taxonomy of the different RPL specific defense solutions present in the literature was proposed. Various research challenges, open issues, and future research directions observed from the literature survey were also discussed. We observed that research on defense solutions for securing the RPL protocol and on RPL specific IDS methods is still in an early phase and requires more attention in order to provide full-fledged security to IoT applications.

## Acknowledgment

This research was supported by the Ministry of Human Resource Development, Government of India.
2301.01702
Automating Nearest Neighbor Search Configuration with Constrained Optimization
The approximate nearest neighbor (ANN) search problem is fundamental to efficiently serving many real-world machine learning applications. A number of techniques have been developed for ANN search that are efficient, accurate, and scalable. However, such techniques typically have a number of parameters that affect the speed-recall tradeoff, and exhibit poor performance when such parameters aren't properly set. Tuning these parameters has traditionally been a manual process, demanding in-depth knowledge of the underlying search algorithm. This is becoming an increasingly unrealistic demand as ANN search grows in popularity. To tackle this obstacle to ANN adoption, this work proposes a constrained optimization-based approach to tuning quantization-based ANN algorithms. Our technique takes just a desired search cost or recall as input, and then generates tunings that, empirically, are very close to the speed-recall Pareto frontier and give leading performance on standard benchmarks.
Philip Sun, Ruiqi Guo, Sanjiv Kumar
2023-01-04T16:56:36Z
http://arxiv.org/abs/2301.01702v2
# Automating Nearest Neighbor Search Configuration with Constrained Optimization ###### Abstract The approximate nearest neighbor (ANN) search problem is fundamental to efficiently serving many real-world machine learning applications. A number of techniques have been developed for ANN search that are efficient, accurate, and scalable. However, such techniques typically have a number of parameters that affect the speed-recall tradeoff, and exhibit poor performance when such parameters aren't properly set. Tuning these parameters has traditionally been a manual process, demanding in-depth knowledge of the underlying search algorithm. This is becoming an increasingly unrealistic demand as ANN search grows in popularity. To tackle this obstacle to ANN adoption, this work proposes a constrained optimization-based approach to tuning quantization-based ANN algorithms. Our technique takes just a desired search cost or recall as input, and then generates tunings that, empirically, are very close to the speed-recall Pareto frontier and give leading performance on standard benchmarks. ## 1 Introduction Efficient nearest neighbor search is an integral part of approaches to numerous tasks in machine learning and information retrieval; it has been leveraged to effectively solve a number of challenges in recommender systems (Benzi et al., 2016; Cremonesi et al., 2010), coding theory (May and Ozerov, 2015), multimodal search (Gfeller et al., 2017; Miech et al., 2021), and language modeling (Gu et al., 2020; Khandelwal et al., 2020; Kitaev et al., 2020). Vector search over the dense, high-dimensional embedding vectors generated from deep learning models has become especially important following the rapid rise in capabilities and performance of such models. Nearest neighbor search is also increasingly being used for assisting training tasks in ML (Lindgren et al., 2021; Yen et al., 2018). Formally, the nearest neighbor search problem is as follows: we are given an \(n\)-item dataset \(\mathcal{X}\in\mathbb{R}^{n\times d}\) composed of \(d\)-dimensional vectors, and a function for computing the distance between two vectors \(D:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto\mathbb{R}\). For a query vector \(q\in\mathbb{R}^{d}\), our goal is to find the indices of the \(k\)-nearest neighbors in the dataset to \(q\): \[\underset{i\in\{1,\ldots,n\}}{k\text{-}\operatorname*{arg\,min}}D(q,\mathcal{ X}_{i})\] Common choices of \(D\) include \(D(q,x)=-\langle q,x\rangle\) for maximum inner product search (MIPS) and \(D(q,x)=\|q-x\|_{2}^{2}\) for Euclidean distance search. A linear-time scan over \(\mathcal{X}\) solves the nearest neighbor search problem but doesn't scale to the large dataset sizes often found in modern-day applications, hence necessitating the development of approximate nearest neighbor (ANN) algorithms. A number of approaches to the ANN problem have been successful in trading off a small search accuracy loss, measured in result recall, for a correspondingly large increase in search speed (Aumuller et al., 2020). However, these approaches rely on tuning a number of hyperparameters that adjust the tradeoff between speed and recall, and poor hyperparameter choices may result in performance far below what could be achievable with ideal hyperparameter tuning. This tuning problem becomes especially difficult at the billions-scale, where the larger dataset size typically leads to a greater number of hyperparameters to tune. 
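As a point of reference for the definition above, exact search can be written as a linear scan in a few lines; this is the baseline whose cost makes approximate methods necessary at scale. The sketch below is illustrative and not part of the paper's method.

```python
import numpy as np

def knn_bruteforce(queries, dataset, k, mips=False):
    """Exact k-nearest-neighbor search by linear scan.

    Returns, for each query q, the indices of the k dataset points
    minimizing D(q, x): squared Euclidean distance by default, or
    D(q, x) = -<q, x> when mips=True (maximum inner product search).
    """
    if mips:
        d = -queries @ dataset.T
    else:
        d = np.sum((queries[:, None, :] - dataset[None, :, :]) ** 2, axis=-1)
    idx = np.argpartition(d, k, axis=1)[:, :k]             # unordered top-k
    order = np.take_along_axis(d, idx, axis=1).argsort(axis=1)
    return np.take_along_axis(idx, order, axis=1)          # sorted by distance

rng = np.random.default_rng(0)
X, Q = rng.normal(size=(10_000, 32)), rng.normal(size=(8, 32))
print(knn_bruteforce(Q, X, k=10).shape)  # (8, 10): k neighbors per query
```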
Existing approaches to tuning an ANN index, enumerated in Table 1, all suffer from some deficiency, such as using an excessive amount of computation during the tuning process, necessitating extensive human-in-the-loop expertise, or giving suboptimal hyperparameters. Mitigating these issues is becoming increasingly important with the growth in dataset sizes and in the popularity of the ANN-based retrieval paradigm. This paper describes how highly performant ANN indices may be created and tuned with minimal configuration complexity to the end user. Our contributions are: * Deriving theoretically-grounded models for recall and search cost for quantization-based ANN algorithms, and presenting an efficient Lagrange multipliers-based technique for optimizing either of these metrics with respect to the other. * Showing that on millions-scale datasets, the tunings from our technique give almost identical performance to optimal hyperparameter settings found through exhaustive grid search. * Achieving superior performance on track 1 of the billions-scale big-ann-benchmarks datasets using tunings from our technique over tunings generated by a black-box optimizer on the same ANN index, and over all existing benchmark submissions. Our constrained optimization approach is very general and we anticipate it can be extended to distance measures, quantization algorithms, and search paradigms beyond those explored in this paper.

\begin{table} \begin{tabular}{l|l l l} **Method** & **Computational** & **Human** & **Hyperparameter** \\ & **Cost of Tuning** & **Involvement** & **Quality** \\ \hline Grid search & High & **Low** & **High** \\ Manual tuning & **Low** & High & Medium \\ Black-box optimizer & Medium & **Low** & Medium \\ Ours & **Low** & **Low** & **High** \\ \end{tabular} \end{table} Table 1: Our technique is the first to use minimal computational cost and human involvement to configure an ANN index to perform very close to its speed-recall Pareto frontier.

## 2 Related Work ### ANN Algorithms Literature surrounding the ANN problem is extensive, and many solutions have been proposed, drawing inspiration from a number of different fields. Below we give a brief outline of three families of approaches that have found empirical success and continued research interest, with an emphasis on the hyperparameters necessitated by each approach. Other approaches to ANN include sampling-based algorithms (Liu et al., 2019) and a variety of geometric data structures (Bozkaya & Ozsoyoglu, 1997; Ram & Gray, 2012); we refer readers to Bhatia & Vandana (2010); RezaAbbasifard et al. (2014); Wang et al. (2014, 2021) for more comprehensive surveys. **Hashing approaches.** Techniques under this family utilize locality sensitive hash (LSH) functions, which are functions that hash vectors with the property that more similar vectors are more likely to collide in hash space (Andoni & Razenshteyn, 2015; Datar et al., 2004; Shrivastava & Li, 2014). By hashing the query and looking up the resulting hash buckets, we may expect to find vectors close to the query. Hashing algorithms are generally parameterized by the number and size of their hash tables. The random memory access patterns of LSH often lead to difficulties with efficient implementation, and the theory that prescribes hyperparameters for LSH-based search generally cannot consider dataset-specific idiosyncrasies that allow for faster search than otherwise guaranteed for worst-case inputs; see Appendix A.1 for further investigation. **Graph approaches.** These algorithms compute a (potentially approximate) nearest neighbor graph on \(\mathcal{X}\), where each element of \(\mathcal{X}\) becomes a graph vertex and has directed edges towards its nearest neighbors. The nearest neighbors to \(q\) are computed by starting at some vertex and traversing edges
The nearest neighbors to \(q\) are computed by starting at some vertex and traversing edges \begin{table} \begin{tabular}{l|l l l} **Method** & **Computational** & **Human** & **Hyperparameter** \\ & **Cost of Tuning** & **Involvement** & **Quality** \\ \hline Grid search & High & **Low** & **High** \\ Manual tuning & **Low** & High & Medium \\ Black-box optimizer & Medium & **Low** & Medium \\ Ours & **Low** & **Low** & **High** \\ \end{tabular} \end{table} Table 1: Our technique is the first to use minimal computational cost and human involvement to configure an ANN index to perform very close to its speed-recall Pareto frontier. to vertices closer to \(q\). These algorithms are parameterized by graph construction details, such as the number of edges; any post-construction adjustments, such as improving vertex degree distribution and graph diameter (Fu et al., 2019; Iwasaki and Miyazaki, 2018; Malkov and Yashunin, 2020); and query-time parameters for beam search and selecting the initial set of nodes for traversal. Quantization approachesThese algorithms create a compressed form of the dataset \(\tilde{\mathcal{X}}\); at query time, they return the points whose quantized representations are closest to the query. Speedups are generally proportional to the reduction in dataset size. These reductions can be several orders of magnitude, although coarser quantizations lead to greater recall loss. Quantization techniques include VQ, where each datapoint is assigned to the closest element in a codebook \(C\), and a number of multi-codebook quantization techniques, where the datapoint is approximated as some aggregate (concatenation, addition, or otherwise) of the codebook element it was assigned to per-codebook (Babenko and Lempitsky, 2015; Guo et al., 2020; Jegou et al., 2011). Quantization-based approaches generally must be tuned on codebook size and the number of codebooks. Growth in parameterization complexity with respect to dataset sizeWhile the above approaches may only each introduce a few hyperparameters, many of the best-performing ANN algorithms layer multiple approaches, leading to a higher-dimensional hyperparameter space much more difficult to tune. For example, Chen et al. (2021) uses VQ, but also a graph-based approach to search over the VQ codebook. Other algorithms (Guo et al., 2020; Johnson et al., 2021), including what we discuss in this work, use VQ to perform a first-pass pruning over the dataset and then a multi-codebook quantization to compute more accurate distance estimates. ### ANN Hyperparameter Tuning Tuning in low-dimensional hyperparameter spaces may be effectively handled with grid search; however, quantization-based ANN algorithms with few hyperparameters scale poorly with dataset size, as shown in Appendix A.8. In higher-dimensional ANN hyperparameter spaces, where grid search is computationally intractable, there are two predominant approaches to the tuning problem, each with their drawbacks. The first is using heuristics to reduce the search space into one tractable with grid search, as in Criteo (2021) or Ren et al. (2020). These heuristics may perform well when set and adjusted by someone with expertise in the underlying ANN search algorithm implementation, but such supervision is often impractical or expensive; otherwise, these heuristics may lead to suboptimal hyperparameter choices. The second approach is to use black-box optimization techniques such as Bergstra et al. (2011) or Golovin et al. 
(2017) to select hyperparameters in the full-dimensionality tuning space. These algorithms, however, lack the inductive biases that come from knowledge of the underlying ANN search problem, and therefore may require a high number of samples before finding hyperparameter tunings that are near optimal. Measurement noise from variability in machine performance further compounds the challenges these black-box optimizers face.

## 3 Preliminaries

### Large-Scale Online Approximate Nearest Neighbors

In this work we focus on the problem of tuning ANN algorithms for online search of large-scale datasets. In this scenario, the search algorithm must respond to an infinite stream of latency-sensitive queries arriving at roughly constant frequency. This setting is common in recommender and semantic search systems, where ANN speed directly contributes to the end-user experience. We are given a sample set of queries, representative of the overall query distribution, with which we can tune our data structure. Large dataset size makes the linear scaling of exact brute-force search impractical, so approximate search algorithms must be used instead. These algorithms may be evaluated along two axes:

1. **Accuracy**: quantified by recall@\(k\), where \(k\) is the desired number of neighbors. Sometimes the \(c\)-approximation ratio, the ratio of the approximate and the true nearest-neighbor distance, is used instead; we correlate this metric with recall in Appendix A.1.
2. **Search cost**: typically quantified by the queries per second (QPS) a given server can handle.

An effective ANN solution maximizes accuracy while minimizing search cost.

### Vector Quantization (VQ)

The ANN algorithm we tune uses a hierarchical quantization index composed of vector quantization (VQ) and product quantization (PQ) layers. We first give a brief review of VQ and PQ before describing how they are composed to produce a performant ANN search index. Vector-quantizing an input set of vectors \(\mathcal{X}\in\mathbb{R}^{n\times d}\), which we denote \(VQ(\mathcal{X})\), produces a codebook \(C\in\mathbb{R}^{c\times d}\) and codewords \(w\in\{1,2,\ldots,c\}^{n}\). Each element of \(\mathcal{X}\) is quantized to the closest codebook element in \(C\), and the quantization assignments are stored in \(w\). The quantized form of the \(i\)th element of \(\mathcal{X}\) can therefore be computed as \(\tilde{\mathcal{X}}_{i}=C_{w_{i}}\). VQ may be used for ANN by computing the closest codebook elements to the query

\[\mathcal{S}:=k\text{-}\underset{i\in\{1,2,\ldots,c\}}{\arg\min}\ D(q,C_{i})\]

and returning indices of datapoints belonging to those codebook elements, \(\{j|w_{j}\in\mathcal{S}\}\). This candidate set may also be further refined by higher-bitrate distance calculations to produce a final result set. In this manner, VQ can be interpreted as a pruning tree whose root stores \(C\) and has \(c\) children; the \(i\)th child contains the points \(\{\mathcal{X}_{j}|w_{j}=i\}\); equivalently, this tree is an inverted index (or inverted file index, IVF) which maps each centroid to the datapoints belonging to it.
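To make the VQ notation above concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's implementation) of assigning datapoints to a given codebook and gathering VQ candidates for a query; the helper names are hypothetical.

```
import numpy as np

def vq_assign(X, C):
    """Assign each row of X (n x d) to its nearest codeword in the codebook C (c x d)."""
    # Pairwise squared Euclidean distances between datapoints and centroids
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)  # codewords w, shape (n,)

def vq_candidates(q, C, w, k):
    """Return indices of datapoints assigned to the k centroids closest to query q."""
    d2 = ((C - q) ** 2).sum(axis=-1)
    S = np.argsort(d2)[:k]                # the k-argmin over codebook elements
    return np.flatnonzero(np.isin(w, S))  # {j | w_j in S}
```

As the text notes, these candidates would then typically be re-ranked with higher-bitrate distance computations before a final result set is returned.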
### Product Quantization (PQ)

Product quantization divides the full \(d\)-dimensional vector space into \(K\) subspaces and quantizes each space separately. If we assume the subspaces are all equal in dimensionality, each covering \(l=\lceil d/K\rceil\) dimensions, then PQ gives \(K\) codebooks \(C^{(1)},\ldots,C^{(K)}\) and \(K\) codeword vectors \(w^{(1)},\ldots,w^{(K)}\), with \(C^{(k)}\in\mathbb{R}^{c_{k}\times l}\) and \(w^{(k)}\in\{1,\ldots,c_{k}\}^{n}\), where \(c_{k}\) is the number of centroids in subspace \(k\). The \(i\)th element can be recovered as the concatenation of \(\{C^{(k)}_{w^{(k)}_{i}}|k\in\{1,\ldots,K\}\}\). For ANN search, VQ is generally performed with a large codebook whose size scales with \(n\) and is significant relative to the total size of the codewords. In contrast, PQ is generally performed with a constant, small \(c_{k}\) that allows for fast in-register SIMD lookups for each codebook element, and its storage cost is dominated by the codeword size.

### ANN Search with Multi-Level Quantization

VQ and PQ both produce fixed-bitrate encodings of the original dataset. However, in a generalization of Guo et al. (2020), we would like to allocate more bitrate to the vectors closer to the query, which we may achieve by using multiple quantization levels and using the lower-bitrate levels to select which portions of the higher-bitrate levels to evaluate. To generate these multiple levels, we start with the original dataset \(\mathcal{X}\) and vector-quantize it, resulting in a smaller \(d\)-dimensional dataset of centroids \(C\). We may recursively apply VQ to \(C\) for arbitrarily many levels. \(\mathcal{X}\) and all \(C\) are product-quantized as well. As a concrete example, Figure 6 describes the five-quantization-level setups used in Section 5.2. This procedure results in a set of quantizations \(\tilde{\mathcal{X}}^{(1)},\ldots,\tilde{\mathcal{X}}^{(m)}\) of progressively higher bitrate. Algorithm 1 performs ANN using these quantizations and a length-\(m\) vector of search hyperparameters \(t\), which controls how quickly the candidate set of neighbors is narrowed down while iterating through the quantization levels. Our goal is to find \(t\) that gives excellent tradeoffs between search speed and recall.
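The narrowing loop formalized in Algorithm 1 below can be sketched in NumPy as follows; for readability this assumes each quantization level is materialized as an array of reconstructed vectors, whereas a real implementation would compute quantized distances via codebook lookup tables.

```
import numpy as np

def quantized_search(quantizations, t, q):
    """Narrow a candidate set through quantizations of ascending bitrate.

    quantizations: list of m arrays, each of shape (n, d)
    t: length-m list of ints with t[0] >= ... >= t[m-1]
    Returns the indices of the t[m-1] approximate nearest neighbors of q.
    """
    S = np.arange(quantizations[0].shape[0])  # S_0: all datapoint indices
    for X_tilde, t_i in zip(quantizations, t):
        d2 = ((X_tilde[S] - q) ** 2).sum(axis=-1)  # distances at this level
        S = S[np.argsort(d2)[:t_i]]                # keep the t_i closest
    return S
```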
## 4 Method

The following sections derive proxy metrics for ANN recall and search latency as a function of the tuning \(t\), and then describe a Lagrange multipliers-based approach to efficiently computing \(t\) to optimize for a given speed-recall tradeoff.

### Proxy Loss for ANN Recall

For a given query set \(\mathcal{Q}\) and hyperparameter tuning \(t\), the recall may be computed by simply performing approximate search over \(\mathcal{Q}\) and computing the recall empirically. However, such an approach has no underlying mathematical structure that permits efficient optimization over \(t\). Below we approximate this empirical recall in a manner amenable to our constrained optimization approach. First fix the dataset \(\mathcal{X}\) and all quantizations \(\tilde{\mathcal{X}}^{(i)}\). Define functions \(\mathcal{S}_{0}(q,t),\ldots,\mathcal{S}_{m}(q,t)\) to denote the various \(\mathcal{S}\) computed by Algorithm 1 for query \(q\) and tuning \(t\), and let \(\mathcal{G}(q)\) be the set of ground-truth nearest neighbors for \(q\). Note that our recall equals \(\frac{|\mathcal{S}_{m}(q,t)\cap\mathcal{G}(q)|}{|\mathcal{G}(q)|}\) for a given query \(q\) and tuning \(t\). We can decompose this into a telescoping product and multiply it among all queries in \(\mathcal{Q}\) to derive the following expression for geometric-mean recall:

\[\text{GeometricMeanRecall}(\mathcal{Q},t)=\prod_{q\in\mathcal{Q}}\prod_{i=1}^{m}\left(\frac{|\mathcal{S}_{i}(q,t)\cap\mathcal{G}(q)|}{|\mathcal{S}_{i-1}(q,t)\cap\mathcal{G}(q)|}\right)^{1/|\mathcal{Q}|}, \tag{1}\]

where the telescoping decomposition takes advantage of the fact that \(|\mathcal{S}_{0}(q,t)\cap\mathcal{G}(q)|=|\mathcal{G}(q)|\) due to \(\mathcal{S}_{0}\) containing all datapoint indices. We choose the geometric mean, despite the arithmetic mean's more frequent use in aggregating recall over a query set, because the geometric mean allows for the decomposition in log-space that we perform below. Note that the arithmetic mean is bounded from below by the geometric mean.

```
1: procedure QuantizedSearch(\(\tilde{\mathcal{X}},t,q\)) \(\triangleright\) Computes the \(t_{m}\) nearest neighbors to \(q\)
2:   \(\mathcal{S}_{0}\leftarrow\{1,\ldots,n\}\)
3:   for \(i\gets 1\) to \(m\) do \(\triangleright\) Iterate over quantizations in ascending bitrate order
4:     \(\mathcal{S}_{i}\gets t_{i}\text{-}\underset{j\in\mathcal{S}_{i-1}}{\operatorname*{arg\,min}}\,D(q,\tilde{\mathcal{X}}_{j}^{(i)})\) \(\triangleright\) Narrow candidate set to \(t_{i}\) elements, using \(\tilde{\mathcal{X}}^{(i)}\)
5:   end for
6:   return \(\mathcal{S}_{m}\)
7: end procedure
```
**Algorithm 1** Quantization-Based ANN

Maximizing Equation 1 is equivalent to minimizing its negative logarithm:

\[\begin{split}\mathcal{L}(\mathcal{Q},t)&=-\frac{1}{|\mathcal{Q}|}\sum_{q\in\mathcal{Q}}\sum_{i=1}^{m}\log\frac{|\mathcal{S}_{i}(q,t)\cap\mathcal{G}(q)|}{|\mathcal{S}_{i-1}(q,t)\cap\mathcal{G}(q)|}\\ &=\sum_{i=1}^{m}\mathbb{E}_{q\in\mathcal{Q}}\left[-\log\frac{|\mathcal{S}_{i}(q,t)\cap\mathcal{G}(q)|}{|\mathcal{S}_{i-1}(q,t)\cap\mathcal{G}(q)|}\right]\end{split} \tag{2}\]

Now we focus on the inner quantity inside the logarithm and how to compute it efficiently. The chief problem is that \(\mathcal{S}_{i}(q,t)\) has an implicit dependency on \(\mathcal{S}_{i-1}(q,t)\), because \(\mathcal{S}_{i-1}\) is the candidate set from which we compute quantized distances using \(\tilde{\mathcal{X}}^{(i)}\) in Algorithm 1. This results in \(\mathcal{S}_{i}(q,t)\) depending on all of \(t_{1},\ldots,t_{i}\) and not just \(t_{i}\) itself, making it difficult to evaluate efficiently. To resolve this, define the _single-layer candidate set_

\[\mathcal{S}_{i}^{\prime}(q,t_{i})=t_{i}\text{-}\underset{j\in\{1,\ldots,n\}}{\operatorname*{arg\,min}}\,D(q,\tilde{\mathcal{X}}_{j}^{(i)}) \tag{3}\]

which computes the closest \(t_{i}\) neighbors to \(q\) according to only \(\tilde{\mathcal{X}}^{(i)}\), irrespective of other quantizations or their tuning settings. We leverage this definition by rewriting our cardinality ratio as

\[\frac{|\mathcal{S}_{i}(q,t)\cap\mathcal{G}(q)|}{|\mathcal{S}_{i-1}(q,t)\cap\mathcal{G}(q)|}=\frac{\sum_{g\in\mathcal{G}(q)}\,\mathbb{1}_{g\in\mathcal{S}_{i}(q,t)}}{\sum_{g\in\mathcal{G}(q)}\,\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}} \tag{4}\]

and making the approximation \(\mathbb{1}_{g\in\mathcal{S}_{i}(q,t)}\approx\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\mathbb{1}_{g\in\mathcal{S}^{\prime}_{i}(q,t_{i})}\). This is roughly equivalent to assuming most near-neighbors to \(q\) are included in \(\mathcal{S}_{i-1}(q,t)\); see Appendix A.2 for further discussion.
If we furthermore assume zero covariance\({}^{1}\) between \(\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\) and \(\mathbb{1}_{g\in\mathcal{S}^{\prime}_{i}(q,t_{i})}\), then we can transform the sum of products into a product of sums:

Footnote 1: Note that \(\mathcal{S}_{i}\subseteq\mathcal{S}_{i-1}\), so this is emphatically false for \(\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\) and \(\mathbb{1}_{g\in\mathcal{S}_{i}(q,t)}\), but with \(\mathbb{1}_{g\in\mathcal{S}^{\prime}_{i}(q,t_{i})}\) we may reasonably assume little correlation with \(\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\).

\[\sum_{g\in\mathcal{G}(q)}\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\mathbb{1}_{g\in\mathcal{S}^{\prime}_{i}(q,t_{i})}\approx\left(\frac{1}{|\mathcal{G}(q)|}\sum_{g\in\mathcal{G}(q)}\mathbb{1}_{g\in\mathcal{S}_{i-1}(q,t)}\right)\left(\sum_{g\in\mathcal{G}(q)}\mathbb{1}_{g\in\mathcal{S}^{\prime}_{i}(q,t_{i})}\right).\]

Combining this result with Equations 2 and 4, our final loss function is \(\sum_{i=1}^{m}\mathcal{L}_{i}(\mathcal{Q},t_{i})\), with the _per-quantization loss_ \(\mathcal{L}_{i}\) defined as

\[\mathcal{L}_{i}(\mathcal{Q},t_{i})=\mathbb{E}_{q\in\mathcal{Q}}\left[-\log\frac{|\mathcal{S}^{\prime}_{i}(q,t_{i})\cap\mathcal{G}(q)|}{|\mathcal{G}(q)|}\right]. \tag{5}\]

See Appendix A.4 for how \(\mathcal{L}_{i}\) may be efficiently computed over \(\mathcal{Q}\) for all \(i\in\{1,\ldots,m\},t_{i}\in\{1,\ldots,n\}\), resulting in a matrix \(L\in\mathbb{R}^{m\times n}\). This allows us to compute the loss for any tuning \(t\) by summing \(m\) elements from \(L\).

### Proxy Metric for ANN Search Cost

Similar to ANN recall, search cost may be directly measured empirically, but below we present a simple yet effective search cost proxy compatible with our Lagrange optimization method. Let \(|\tilde{\mathcal{X}}^{(i)}|\) denote the storage footprint of \(\tilde{\mathcal{X}}^{(i)}\). At quantization level \(i\), for \(i<m\), selecting the top \(t_{i}\) candidates necessarily implies that a \(t_{i}/n\) proportion of \(\tilde{\mathcal{X}}^{(i+1)}\) will need to be accessed in the next level. Meanwhile, \(\tilde{\mathcal{X}}^{(1)}\) is always fully searched because it is encountered at the beginning of the search process, where the algorithm has no prior on which points are closest to \(q\). From these observations, we can model the cost of quantization-based ANN search with a tuning \(t\) as

\[J(t)\triangleq\frac{1}{|\mathcal{X}|}\cdot\left(|\tilde{\mathcal{X}}^{(1)}|+\sum_{i=1}^{m-1}\frac{t_{i}}{n}\cdot|\tilde{\mathcal{X}}^{(i+1)}|\right). \tag{6}\]

\(J\) gives the ratio of the number of memory accesses performed per query under approximate search with tuning \(t\) to the number of memory accesses performed by exact brute-force search. This gives a good approximation to real-world search cost because memory bandwidth is the bottleneck for quantization-based ANN in the non-batched case. We emphasize that this cost model is effective for comparing amongst tunings for a quantization-based ANN index, which is sufficient for our purposes, but likely lacks the power to compare performance among completely different ANN approaches, such as graph-based solutions. Differences in memory read size, memory request queue depth, amenability to vectorization, and numerous other characteristics have a large impact on overall performance but are not captured in this model. Our model is, however, compatible with query batching, which we discuss further in Appendix A.3.
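As a rough illustration of how the loss matrix \(L\) of Equation 5 and the cost proxy of Equation 6 can be materialized (a naive construction; the efficient routine is described in Appendix A.4), consider the following sketch; all names are our own.

```
import numpy as np

def loss_matrix(level_dists, gt_sets, n):
    """L[i, t-1] = E_q[-log(|S'_i(q,t) ∩ G(q)| / |G(q)|)], as in Eq. 5.

    level_dists: list of m arrays of shape (|Q|, n) holding each query's
                 quantized distances to all points under level i.
    gt_sets: per-query arrays of ground-truth neighbor indices.
    """
    m = len(level_dists)
    L = np.zeros((m, n))
    eps = 1e-12  # guards log(0) before any true neighbor is retrieved
    for i, dists in enumerate(level_dists):
        frac = np.zeros((len(gt_sets), n))
        for qi, gt in enumerate(gt_sets):
            order = np.argsort(dists[qi])        # point indices by distance
            hit = np.isin(order, gt)             # rank positions of true neighbors
            frac[qi] = np.cumsum(hit) / len(gt)  # |S'_i(q,t) ∩ G(q)| / |G(q)|
        L[i] = -np.log(frac + eps).mean(axis=0)
    return L

def cost_proxy(t, footprints, n, dataset_footprint):
    """J(t) of Eq. 6; footprints[i] = |X~^(i)|, dataset_footprint = |X|."""
    accessed = footprints[0] + sum(t[i] / n * footprints[i + 1]
                                   for i in range(len(t) - 1))
    return accessed / dataset_footprint
```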
### Convexification of the Loss

We take the convex hull of each per-quantization loss \(\mathcal{L}_{i}\) before passing it into the constrained optimization procedure. This results in a better-behaved optimization problem and is also justified from an ANN algorithm perspective. For any quantization level \(i\), consider two choices of \(t_{i}\) that lead to loss and cost contributions of \((l_{1},j_{1})\) and \((l_{2},j_{2})\). Any (loss, cost) tuple on the line segment between these two points can be achieved via a randomized algorithm that picks between the two choices of \(t_{i}\) with the appropriate weighting, which implies the entire convex hull is achievable. Empirically, we find that \(\mathcal{L}_{i}\) is extremely close to convex already, so this is more of a theoretical safeguard than a practical concern.

### Constrained Optimization

Formally, our tuning problem of maximizing recall under a search cost limit \(J_{\text{max}}\) can be phrased as

\[\operatorname*{arg\,min}_{t\in[0,n]^{m}} \sum_{i=1}^{m}\mathcal{L}_{i}(t_{i})\] s.t. \[J(t)\leq J_{\text{max}}\] \[t_{1}\geq\ldots\geq t_{m}.\]

The objective function is a sum of convex functions and therefore convex itself, while the constraints are linear and strictly feasible, so strong duality holds. We can therefore utilize the Lagrangian

\[\operatorname*{arg\,min}_{t\in[0,n]^{m}} \lambda J(t)+\sum_{i=1}^{m}\mathcal{L}_{i}(t_{i})\] s.t. \[t_{1}\geq\ldots\geq t_{m}.\]

to find exact solutions to the constrained optimization, using \(\lambda\geq 0\) to adjust the recall-cost tradeoff. We show in Appendix A.5 an algorithm that uses \(O(nm)\) preprocessing time to solve the minimization for a given value of \(\lambda\) in \(O(m\log n)\) time. Furthermore, because the objective function is a sum of \(m\) functions, each a convex hull defined by \(n\) points, the Pareto frontier itself will be piecewise, composed of at most \(nm\) points. It follows that there are at most \(nm\) relevant \(\lambda\) that result in different optimization results, namely those obtained by taking the consecutive differences within each \(\mathcal{L}_{i}\) and dividing by \(|\tilde{\mathcal{X}}^{(i+1)}|/(n|\mathcal{X}|)\). By performing binary search among these candidate \(\lambda\), we can find the minimum-cost tuning for a given loss target, or the minimum-loss tuning for a given cost constraint, in \(O(m\log n\log nm)\) time. In practice, even for very large datasets, \(m<10\), so this routine runs very quickly. The constrained optimizations used to generate the tunings in Section 5.2 ran in under one second on a Xeon W-2135; computation of \(\mathcal{L}\) contributed marginally to the indexing runtime, described further in Appendix A.6.
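For intuition, the following simplified sketch minimizes the Lagrangian for a fixed \(\lambda\) by scanning all \(n\) choices per level; unlike the \(O(m\log n)\) routine of Appendix A.5, it ignores the monotonicity constraint \(t_{1}\geq\ldots\geq t_{m}\), so it is an assumption-laden illustration rather than the paper's algorithm.

```
import numpy as np

def minimize_lagrangian(lam, L, cost_coefs):
    """Choose each t_i to minimize L_i(t_i) + lam * c_i * t_i independently.

    L: (m, n) matrix of convexified per-quantization losses, L[i, t-1] = L_i(t).
    cost_coefs: c_i = |X~^(i+1)| / (n |X|) for i < m, with c_m = 0 since the
                last level's t_m is just the number of requested neighbors.
    """
    m, n = L.shape
    ts = np.arange(1, n + 1)
    return [int(ts[np.argmin(L[i] + lam * cost_coefs[i] * ts)]) for i in range(m)]
```

Sweeping \(\lambda\) from large to small traces out the speed-recall frontier; the paper instead binary-searches over the at most \(nm\) candidate \(\lambda\) described above.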
## 5 Experiments

### Million-Scale Benchmarks and Comparison to Grid Search

To see how closely our algorithm's resulting hyperparameter settings approach the true optimum, we compare its output to tunings found through grid search. We compare against the grid-searched parameters used by ScaNN (Guo et al., 2020) in its leading performance on the public Glove1M benchmark from Aumuller et al. (2020). As shown in Figure 1, our method's tunings lie almost exactly on the speed-recall frontier. While the resulting tunings are of roughly equivalent quality, grid search takes far longer to identify them: it searched 210 configurations in 22 minutes, whereas, on the same machine, our method took 53 seconds to compute \(\mathcal{L}\) and run the constrained optimization to generate tunings.

Figure 1: Our technique's tunings compared to grid-searched tunings, applied to ScaNN on Glove1M.

### Billion-Scale Benchmarks

We now proceed to larger-scale datasets, where the tuning space grows significantly; the following benchmarks use a five-level quantization index (see Appendix A.6.1 for more details), resulting in a four-dimensional hyperparameter space. Even with very optimistic assumptions, this gives hundreds of thousands of tunings to grid search over, which is computationally intractable, so we compare to heuristic, hand-tuned, and black-box optimizer settings instead. Here we use our own implementation of a multi-level quantization ANN index, and benchmark on three datasets from big-ann-benchmarks.com (Simhadri et al., 2022), following the experimental setup stipulated by track 1 of the competition; see Appendix A.6 for more details. Our results are shown in Figure 2. We find that even in this much more complex hyperparameter space, our technique manages to find settings that make excellent speed-recall tradeoffs, resulting in leading performance on these datasets.

Figure 2: Speed-recall tradeoffs of our tuning algorithm plus ANN search implementation, compared to others from track 1 of the standardized big-ann-benchmarks.com datasets.

#### 5.2.1 Comparison Against Black-Box Optimizers

To see if black-box optimizers can effectively tune quantization-based ANN indices, we used Vizier (Golovin et al., 2017) to tune the same DEEP1B index used above. Our Vizier setup involved an in-the-loop ANN index serving online requests, with Vizier generating candidate configurations, measuring their resulting recall and throughput, and then using those measurements to inform further candidate selections. We ran the Vizier study for 6 hours, during which it conducted over 1800 trials; their recall and performance measurements are plotted in Figure 3. We can see that Vizier found several effective tunings close to the Pareto frontier of our technique, but then failed to interpolate or extrapolate from those tunings. Our technique not only generated better tunings, but did so using less compute; notably, the computation of statistics \(\mathcal{L}_{i}\) and the constrained optimization procedure were done with low-priority, preemptible, batch compute resources, while in contrast Vizier requires instances of online services in a carefully controlled environment in order to get realistic and low-variance throughput measurements.

Figure 3: On DEEP1B, our hyperparameter tunings achieve significantly better speed-recall tradeoffs than those found by Vizier.

### Recall and Cost Model Accuracy

We desire linear, predictable relationships between the modeled values and the corresponding real-world values for both recall and search cost. This is important both so that the optimization can produce tunings that are effective in practice, and so that users can easily interpret and apply the results. In Figure 4, we take the DEEP1B hyperparameter tunings used above in Section 5.2 and plot their respective modeled recalls against empirical recall@10, and modeled search costs against measured reciprocal throughput. Recall was modeled as \(\exp(-\sum\mathcal{L}_{i})\), while \(J\) itself served as the cost model. We can conclude from the square of their sample Pearson correlation coefficients (\(r^{2}\)) that both relationships between analytical values and their empirical measurements are highly linear.

Figure 4: Modeled costs and recalls have a highly linear relationship with their true values.
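The linearity check in Figure 4 amounts to just a few lines; in the sketch below, the arrays are illustrative stand-ins, while the \(\exp(-\sum_{i}\mathcal{L}_{i})\) transformation and the \(r^{2}\) computation follow the text.

```
import numpy as np

# Illustrative stand-ins for per-tuning summed proxy losses and measured recall@10
summed_losses = np.array([0.05, 0.10, 0.22, 0.41, 0.70])
empirical_recall = np.array([0.95, 0.90, 0.80, 0.66, 0.50])

modeled_recall = np.exp(-summed_losses)
r = np.corrcoef(modeled_recall, empirical_recall)[0, 1]
print(f"r^2 = {r ** 2:.4f}")
```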
### Out-of-Sample Query Performance

Our hyperparameter tunings were optimized based on statistics calculated on a query sample \(\mathcal{Q}\), but in order to fulfill their purpose, these tunings must generalize and provide good performance and recall on the overall query stream. We test generalization capability by randomly splitting the \(10^{4}\) queries in the DEEP1B dataset into two equal-sized halves \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\). We then compare the Pareto frontier obtained by training on \(\mathcal{Q}_{1}\) and testing on \(\mathcal{Q}_{2}\) (out-of-sample) with the one obtained by training and testing both on \(\mathcal{Q}_{2}\) (in-sample). The resulting Pareto frontiers, shown in Figure 5, are near-indistinguishable and within the range of measurement error from machine performance fluctuations, indicating excellent generalization. Qualitatively, the in-sample and out-of-sample tuning parameters differ only very slightly, suggesting our optimization is robust.

Figure 5: Speed-recall frontiers for tunings derived from in-sample and out-of-sample query sets on the DEEP1B dataset.

### Additional Experiments

In Appendix A.7 we analyze the effects of query sample size and find that even a small (1,000-query) sample is sufficient to provide effective tuning results. Appendix A.8 shows that grid-searching a shallow quantization index (similar to what was done in Section 5.1) fails to perform well on billion-scale datasets.

## 6 Conclusion

As adoption of nearest neighbor search increases, so does the importance of ANN frameworks that deliver good performance even to non-experts unfamiliar with the inner details of ANN algorithms. We believe our work makes a significant step towards this goal by providing a theoretically grounded, computationally efficient, and empirically successful method for tuning quantization-based ANN algorithms. However, our tuning model still relies on the user to pick the VQ codebook size, because VQ indexing is a prerequisite for computing the statistics from which we generate tunings. A model for how VQ codebook size impacts these statistics would allow for a completely hands-off, efficient ANN solution. Additional work could refine the search cost model to more accurately reflect caches, account for network costs in distributed ANN solutions, and support alternative storage technologies such as flash. In general, we are also optimistic that the strategy of computing offline statistics from a query sample to model online ANN search behavior may generalize to non-quantization-based ANN algorithms as well.

#### Acknowledgments

We would like to thank David Applegate, Sara Ahmadian, and Aaron Archer for generously providing their expertise in the field of optimization.
2303.17719
Why is the winner the best?
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Sharib Ali, Vincent Andrearczyk, Marc Aubreville, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Veronika Cheplygina, Marie Daum, Marleen de Bruijne, Adrien Depeursinge, Reuben Dorent, Jan Egger, David G. Ellis, Sandy Engelhardt, Melanie Ganz, Noha Ghatwary, Gabriel Girard, Patrick Godau, Anubha Gupta, Lasse Hansen, Kanako Harada, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Pierre Jannin, Ali Emre Kavur, Oldřich Kodym, Michal Kozubek, Jianning Li, Hongwei Li, Jun Ma, Carlos Martín-Isla, Bjoern Menze, Alison Noble, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Tim Rädsch, Jonathan Rafael-Patiño, Vivek Singh Bawa, Stefanie Speidel, Carole H. Sudre, Kimberlin van Wijnen, Martin Wagner, Donglai Wei, Amine Yamlahi, Moi Hoon Yap, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Dogu Baran Aydogan, Binod Bhattarai, Louise Bloch, Raphael Brüngel, Jihoon Cho, Chanyeol Choi, Qi Dou, Ivan Ezhov, Christoph M. Friedrich, Clifton Fuller, Rebati Raman Gaire, Adrian Galdran, Álvaro García Faura, Maria Grammatikopoulou, SeulGi Hong, Mostafa Jahanifar, Ikbeom Jang, Abdolrahim Kadkhodamohammadi, Inha Kang, Florian Kofler, Satoshi Kondo, Hugo Kuijf, Mingxing Li, Minh Huan Luu, Tomaž Martinčič, Pedro Morais, Mohamed A. Naser, Bruno Oliveira, David Owen, Subeen Pang, Jinah Park, Sung-Hong Park, Szymon Płotka, Elodie Puybareau, Nasir Rajpoot, Kanghyun Ryu, Numan Saeed, Adam Shephard, Pengcheng Shi, Dejan Štepec, Ronast Subedi, Guillaume Tochon, Helena R. Torres, Helene Urien, João L. Vilaça, Kareem Abdul Wahid, Haojie Wang, Jiacheng Wang, Liansheng Wang, Xiyue Wang, Benedikt Wiestler, Marek Wodzinski, Fangfang Xia, Juanying Xie, Zhiwei Xiong, Sen Yang, Yanwu Yang, Zixuan Zhao, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
2023-03-30T21:41:42Z
http://arxiv.org/abs/2303.17719v1
# Why is the winner the best?

###### Abstract

_International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work._

## 1 Introduction

Validation of biomedical image analysis algorithms is typically conducted through so-called challenges - large international benchmarking competitions that compare algorithm performance on datasets addressing specific problems. Recent years have not only seen an increase in the complexity of the machine learning (ML) models used to solve the tasks, but also a substantial increase in the scientific impact of challenges, with results often being published in prestigious journals (e.g., [28, 41, 46, 9, 4]), and winners receiving tremendous attention in terms of citations and (sometimes) high monetary compensation [23]. However, despite this impact, little effort has so far been invested in investigating what can be learnt from a challenge. Firstly, we identified a notable gap in the literature regarding insights into current common practices in challenges as well as studies that critically analyze whether challenges actually generate scientific progress. Secondly, while recent work has addressed the problem of deriving meaningful conclusions from challenges [29, 49], it remains largely unclear what makes winners the best and hence what constitutes a good strategy for approaching a new challenge or problem. The specific questions are manifold, e.g., _Which specific training paradigms are used in current winning solutions?_, _What are the most successful strategies for achieving generalization?_, _Is it beneficial to involve domain experts or to work in a large team?_. While ablation studies on the effects of ML model component removal could be used to address some questions, they suffer from the major drawback of only providing insights into submitted solutions, but not into underlying strategies.
Furthermore, they typically only allow for investigating a few aspects of a solution, and come at the cost of a substantial carbon footprint. To overcome these issues, we chose an approach that allowed us to systematically assess all of the aforementioned questions related to biomedical image analysis competitions within one cohesive study. To this end, members of the Helmholtz Imaging Incubator (HI) and of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Special Interest Group on biomedical image analysis challenges designed a series of comprehensive international surveys that were issued to participants, organizers, and winners of competitions conducted within the IEEE International Symposium on Biomedical Imaging (ISBI) 2021 and the International Conference on MICCAI 2021. By collaborating with the organizers of all 80 competitions (100%, see overview in App. A), we were able to link algorithmic design decisions and challenge participation strategies to the outcome captured in rankings. Based on the study data, we explicitly addressed three research questions: _(RQ1) What is common practice in challenge participation?_, _(RQ2) Do current competitions generate scientific progress?_, and _(RQ3) Which strategies characterize challenge winners?_

## 2 Methods

According to the Biomedical Image Analysis ChallengeS (BIAS) Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guideline on biomedical challenges [31], a biomedical image analysis challenge is defined as an "[...] open competition on a specific scientific problem in the field of biomedical image analysis. A challenge may encompass multiple competitions related to multiple _tasks_, whose participating teams may differ and for which separate rankings/leaderboards/results are generated." As the term _challenge task_ is uncommon in the ML community, we will use the term _competition_ instead. The term _challenge_ will be reserved for the collection of tasks that are performed under the umbrella of one dedicated organization, represented by an acronym (Fig. 1).

Figure 1: Overview of the IEEE ISBI 2021 and MICCAI 2021 challenges. Under the umbrella of 35 challenges (each represented by a teaser image and acronym), a total of 80 competitions with dedicated leaderboards were organized, as detailed in App. A. We used data from participants, organizers, and winners to address the key research questions of this contribution: _(RQ1) What is common practice in challenge participation?_, _(RQ2) Do current competitions generate scientific progress?_, and _(RQ3) Which strategies characterize challenge winners?_

For our analyses, we targeted three main groups that are relevant in the context of challenges, namely (1) challenge participants, (2) challenge organizers, and (3) challenge winners. The following sections present the methodology developed to address the corresponding research questions RQ1-RQ3.

### RQ1: What is common practice in challenge participation?

To investigate current common practice in biomedical image analysis challenge participation, we designed a survey that was addressed to challenge participants and structured in five parts covering: (1) general information on the team and the tackled task(s), (2) information on expertise and environment, (3) strategy for the challenge, (4) algorithm characteristics, and (5) miscellaneous information (details provided in Sec. 3).
The organizers of all IEEE ISBI 2021 challenges (30 competitions across 6 challenges [1, 2, 12, 35, 40, 42]), and all MICCAI 2021 challenges (50 competitions across 29 challenges [3, 4, 5, 6, 8, 10, 13, 14, 16, 18, 19, 21, 22, 26, 27, 32, 33, 36, 37, 43, 44, 48, 50, 51]) were invited to participate in the initiative and to bring us into contact with participants (if allowed by the challenge privacy policy) or distribute the survey link to them. We created an individual survey website for each challenge to be able to accommodate the individual challenge submission deadline. To avoid bias in survey responses, participants were asked to complete the survey before knowing their position in the final ranking. Out of a maximum of 168 questions, the survey only showed questions that were relevant to the specific situation. The responses and feedback from the IEEE ISBI 2021 respondents were used to refine the survey for MICCAI 2021, and are thus not included in the results presented in Sec. 3.1. Where organizers were allowed to share the contact details of the participants (20 challenges), the survey was conducted in closed-access mode, meaning that the participants received individual links to the survey and, where necessary, reminders. Fifteen surveys were conducted in open-access mode, meaning that the organizers were tasked with sharing the link to the respective survey and sending reminders. In these cases, we were not informed about the number of challenge participants and could not relate the number of responses to the total number. ### RQ2: Do current competitions generate scientific progress? The focus of the organizer survey was on the findings of the respective competition, particularly regarding whether scientific progress was made and, if yes, in which areas it was achieved and which open questions remain. To better put the respective competition into context, we also acquired general information on the associated competition(s). ### RQ3: Which strategies characterize challenge winners? The complexity of state-of-the-art neural network-based approaches, involving numerous and interdependent design parameters, comes with the risk of attributing the success in a competition to the wrong component of a system. To approach the question _Why is the winner the best?_, we linked the survey results of Sec. 2.1 to the final outcome of the competition and subsequently applied mixed model analyses. Given the large number of parameters relative to the number of competitions, we were aware that differences in parameters might not achieve statistical significance. In a second step, we therefore explicitly asked challenge winners for successful algorithm design choices and strategies in an additional survey. **Mixed model analysis** To compensate for the hierarchical data structure resulting from clusters corresponding to specific competitions, a logistic mixed model was used. In a first step, a univariable analysis was performed, i.e., the effect of each variable on the ranking was investigated separately. To further account for potential interdependencies between variables, two multivariable analyses were added. In the first analysis, the goal was to investigate the strategies influencing the probability of being the winner, while the second analysis focused on evaluating the strategies influencing the probability of being ranked among the best 30%. For both analyses, a logistic mixed model was implemented. 
The winning strategies were included as fixed effects, while the challenge identifier was included as a random effect. Additionally, some of the strategies were allowed to vary across challenges, specifically the total training time in computation hours, the time spent on analyzing data and annotations, and the time spent on analysis of failure cases. Variables with highly varying magnitudes were scaled before fitting the model. Statistical analysis was done in R Statistical Software [38] (v4.0.3, package: lme4 [7]).

**Survey on winning strategies** The survey of competition winners consisted of three main parts targeting the design decisions related to the winning submission, general recommended strategies for winning a competition, and the profile of a winner, respectively. In the first part, we asked the winners about the importance of various design decisions for their submitted method. These comprised design decisions related to (1) the training paradigm, such as the usage of multi-task learning or semi-supervised learning, (2) network details, such as the choice of loss function(s), (3) model initialization, specifically pretraining, (4) data usage, covering aspects like data curation, augmentation, data splitting, and sampling, (5) hyperparameters, (6) ensembling, (7) postprocessing, and (8) metrics (see Fig. 3). For each of these design decisions, winners specified their method (e.g., whether they performed pretraining and, if so, based on which data) and rated the importance of this design choice for winning the challenge. We further explicitly asked what distinguished the winning solution from competing solutions and what the key factors for success were. The second part of the survey investigated general successful strategies (independent of the specific challenge). To this end, several authors of this paper who had already won multiple challenges compiled the list of strategies (Fig. 4). The winners were asked to rate the importance of each strategy and further complement the list. Finally, the third part of the survey covered questions on the profile of a challenge winner (Fig. 2). This was particularly relevant for those winners that had not taken part in the original survey of Sec. 2.1.
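The study fit its logistic mixed models in R with lme4; as a rough Python analogue (a different, Bayesian estimator, shown only to make the fixed-/random-effect structure concrete), one might write something like the following, with hypothetical column names.

```
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# One row per team: binary `winner` outcome, scaled strategy covariates,
# and a `challenge` identifier acting as the cluster/random effect.
df = pd.read_csv("survey_responses.csv")  # hypothetical file

model = BinomialBayesMixedGLM.from_formula(
    "winner ~ train_hours + failure_analysis_share + metric_aware_tuning",
    {"challenge": "0 + C(challenge)"},  # random intercept per challenge
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```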
## 3 Results

Based on the positive responses of all organizers from all IEEE ISBI 2021 (n = 30) and MICCAI 2021 (n = 50) competitions, a total of 80 competitions conducted across 35 challenges were included in this study (Fig. 1). These covered a wide range of problems related to semantic segmentation, instance segmentation, image-level classification, tracking, object detection, registration, and pipeline evaluation.

### Common practice in challenge participation

A median (min/max) of 72% (11%/100%) of the challenge participants took part in the survey, according to the closed-access surveys. Overall, we received 292 completed survey forms, of which 249 met our inclusion criteria (i.e., second version of the survey refined for MICCAI 2021, survey completed by a lead developer, no duplicate responses from the same team). Detailed responses to all aspects of the survey (including interquartile ranges (IQR) and min/max values of all parameters) are provided in a white paper [15]. This section summarizes a selection of answers. The profile of a winner is depicted in Fig. B.1.

**Infrastructure and strategies** Knowledge exchange was the most important incentive for participation (mentioned by 70%; respondents were allowed to pick multiple answers), followed by the possibility to compare their own method to others (65%), having access to data (52%), being part of an upcoming challenge publication (50%), and winning a challenge (42%). The awards/prize money was important to only 16% of the respondents. Regarding the computing infrastructure, only 25% of all respondents thought that their infrastructure was a bottleneck. The vast majority of respondents used a Graphics Processing Unit (GPU) cluster. The total training time of all models trained during method development, including failed models, was estimated to be a median of 267 GPU hours, while the training time of the final submission was estimated to be a median of 24 GPU hours. The most popular frameworks were PyTorch for method implementation (76%), NumPy for analyzing data (37%), and NumPy for analyzing annotations/reference data (27%). The most common approach to development (42%) consisted of going through related literature and building upon/modifying existing work. The majority (51%) estimated the edited lines of code of the final solution to be in the order of magnitude of \(10^{3}\). A median of 80 working hours was spent on method development in total. The respondents reported more human-driven decisions (median of 60%), e.g., parameter setting based on expertise, than empirical decisions (median of 40%), e.g., automated hyperparameter tuning via grid search. 94% of the respondents used a deep learning-based approach. For those approaches, most time (up to three picks allowed) was spent on selecting one or multiple existing architectures that best matched the task (45%), configuring the data augmentation (33%), configuring the template architecture (e.g., How deep? How many stages/pooling layers?) (28%), exploring existing loss functions (25%), and ensembling (22%). The survey revealed that almost one third of the respondents did not have enough time for development. A majority thereof (65%) felt that more time on the scale of weeks would have been beneficial (months: 18%, days: 14%).

**Algorithm characteristics** Among the deep learning-based approaches, only 9% actively used additional data, i.e., data not provided for the respective challenge, in their final solution (note that this does not include the usage of already pretrained models). One reason may be that some challenges (24%) explicitly do not allow the usage of external data. Of those that did leverage external data, the majority used public biomedical data for the same type of task (40%), private biomedical data for the same type of task (25%), or public biomedical data for a different type of task (15%). Non-biomedical data was only used in 5% of the cases. If additional data was used, it was used for pretraining (55%) and/or co-training (50%). Data augmentation was applied by 85% of the respondents. The most common augmentations were random horizontal flip (77%), rotation (74%), random vertical flip (62%), contrast (49%), scale (48%), crop (44%), resize crop (35%), noise (34%), elastic deformation (26%), color jitter (19%), and shearing (15%). 43% of the respondents reported that the data samples were too large to be processed at once (e.g., due to GPU memory constraints).
This issue was mainly solved by patch-based training (cropping) (69%), downsampling to a lower resolution (37%), and/or solving 3D analysis tasks as a series of 2D analysis tasks (per z-slice approach) with postprocessing (18%). The most common loss functions were Cross-Entropy (CE) Loss (39%), combined CE and Dice Loss (32%), and Dice Loss (26%). 29% of the respondents used early stopping, 12% used warmup. Internal evaluation via a single train:val(:test) split was performed by more than half of the respondents (52%). K-fold cross-validation on the training set was performed by 37%. 6% did not perform any internal evaluation. 48% of the respondents applied postprocessing steps. The final solution of 50% of the respondents was a single model trained on all available data. An ensemble of multiple identical models, each trained on the full training set but with a different initialization (random seed), was proposed by 6%. 21% proposed an ensemble of multiple identical models, each trained on a randomly drawn subset of the training set (regardless of whether the same seed was used or not). 9% reported having ensembled multiple different models, each trained on the whole training set (different seeds). 8% ensembled multiple different models, each trained on a randomly drawn subset of the training set (regardless of whether the same seed was used or not). If multiple models were used, the final solution was composed of a median of 5 models.

### Key insights related to scientific progress generated by challenges

According to the responses of challenge organizers (n = 54), 43% of the winning algorithms exceeded the state of the art (Fig. 2). While substantial (47%) or minor (32%) progress was made in most competitions, the underlying problem was regarded as solved in only 11% of the competitions. Most progress was seen in new architectures/combinations of architectures (32%), the phrasing of the optimization problem (e.g., new losses) (17%), and new augmentation strategies (14%). Failure cases were mainly attributed to specific imaging conditions (e.g., image blur) (27%), generalization issues (23%), and specific classes that perform particularly poorly (19%). According to the responses from several organizers, the trend of simple algorithms (e.g., U-Net [17]/nnU-Net [24]) outperforming complex ones continued. As a prominent feature in 2021, many competitions provided additional information that is not usually available, such as the identifier of the hospital for domain generalization, multiple expert segmentations to represent label uncertainty, or k-space data in reconstruction problems. However, the participants were not able to leverage the additional data for better performance. The same holds true for temporal data in video analysis, although organizers hypothesize that frame-based analysis is not sufficient. Several organizers also reported a lack of heterogeneity in methods. Often, submitted methods performed similarly (e.g., differing only in the fourth decimal digit in normalized scores). On a positive note, some competitions that had been run for multiple years observed a drastic improvement compared to previous years, sometimes even surpassing human performance. Regarding computational aspects, in one case the winning method surpassed the existing state-of-the-art method, achieving 19-times faster inference and a 60% reduction in GPU memory consumption while yielding comparable accuracy. According to our study, generalization remains a major issue.
One challenge, which mimicked "in-the-wild" deployment, found that models failed to generalize in 3 out of 21 testing institutions. Similarly, performance in rare classes was reported as a core issue in several competitions. This is a problem of high clinical relevance, as diseases often correspond to a rare class. A related problem is the fact that the detection of multiple conditions in a multi-label setting remains challenging. Finally, some organizers reported the failure of metrics to reflect the biomedical domain interest. Along these lines, pixel-level performance was sometimes reported to be substantial while instance-/case-level performance, which is typically biomedically more relevant, was not improved substantially. Cheating was observed in 4% of the cases. It was related to an excessive number of submissions of similar methods with different user accounts or the attempt to retrieve the test set from the submission platform. In these cases, participants were excluded from the competition, the rankings and/or the publication.

Figure 2: Key insights provided by the organizers of IEEE ISBI 2021 and MICCAI 2021 challenges.

### Key insights related to winning strategies

When comparing winners to other participants, several differences stood out. Firstly, winners were more determined to win a challenge (64% vs. 40%). The majority of winning lead developers have a doctorate degree (41%), while the majority of non-winning lead developers have a master's degree (47%) as their highest degree. Furthermore, while only 66% of other participants felt that there was enough development time, 86% of the winners agreed with this statement. Winners spent 120 hours (e.g., on method development, analyzing data and annotations) before deciding to submit, compared to 56 hours for other participants, and decided to submit a week earlier (3 vs. 2 weeks prior to submission). Notably, winners spent twice as much time on failure analysis (10% of median working hours dedicated to method development vs. 5%). Compared to non-winners, winners used ensembling based on random seeds, data splits, and heterogeneous models (see Fig. 3(f)) 5.6 times, 1.7 times, and 2.5 times as much. According to univariable mixed model analysis, eight parameters were found to provide statistically significant differences between winners and non-winners (\(p<0.05\)): (1) number of team members who were developers/engineers, (2) time invested before planning to submit results, (3) time spent in data preprocessing/augmentation, (4) use of a professionally managed GPU cluster, (5) approach used for method development, (6) architecture type, (7) taking the metrics used to evaluate the challenge into account while searching for hyperparameters, and (8) augmentations used. Note, however, that when multiple independent tests are performed, 5% can be expected to be identified as significant purely by chance when testing at the 5% significance level. Correcting for this so-called multiplicity of testing, we did not obtain statistically significant differences. Multivariable model analysis based on a selection of variables identified by image analysis experts revealed the willingness to win the challenge as the only parameter with \(p<0.05\) when comparing winners to non-winners (64% vs. 40%). Analogously, the parameter of taking the metrics used to evaluate the challenge into account while searching for hyperparameters was identified in the best 30% vs. the rest analysis.
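The multiplicity correction mentioned above can be reproduced generically; the sketch below applies a Holm correction to a set of illustrative p-values (not the study's actual values).

```
from statsmodels.stats.multitest import multipletests

# Illustrative p-values from a batch of univariable tests
pvals = [0.012, 0.034, 0.020, 0.047, 0.003, 0.041, 0.028, 0.015]
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(list(zip(p_adjusted.round(3), reject)))
```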
It is worth mentioning in this context that despite the high response rate of 72%, the number of winners covered by the survey presented in Sec. 2.1 was only 22. The resulting low power of identifying important contributors to winning challenges may well be the reason for the absence of statistical significance. We therefore additionally asked competition winners after the results announcement for key design decisions and strategies. The responses (n = 38) cover 67% and 62% of the IEEE ISBI 2021 and MICCAI 2021 challenges, respectively, and are summarized in Fig. 3 and Fig. 4. As detailed in Fig. 3, the most applied training pipelines were multi-task designs (63%) and multi-stage pipelines (61%). If multi-stage pipelines were applied, the importance of this strategy for winning the challenge was rated crucial. Pretraining was mainly performed in a supervised fashion using in-domain data (55%) or generic data (e.g., ImageNet) (61%). The usage of in-domain data, however, was found to be much more important. As mentioned above, it should be noted that many competitions do not allow for the usage of external data (24% according to the survey presented in Sec. 2.2). The most commonly applied design decisions related to data usage were preprocessing (97%), augmentation (100%), data splitting (beyond the splits provided by the competition, e.g., for cross-validation) (89%), data curation (e.g., cleaning of annotations) (79%), and data sampling (58%). One aspect that stood out when asking winners for key factors for success (free text) was the setting up of a good internal validation strategy, including the careful selection of a baseline model and appropriate validation tests. With respect to general strategies (Fig. 4), the strategies of analyzing and handling failure cases, knowing the state of the art, and reflecting the metrics in the method design were rated most highly. Further recommended strategies in free-text answers were heterogeneous and comprise (1) inclusion of non-deep learning approaches in a model ensemble, (2) explicit determination of a time management strategy, (3) test-time augmentation, and (4) preferring matured architectures over brand-new hyped machine learning methods.

## 4 Discussion

Our study provides insights into common practices in challenge participation, the scientific progress generated by competitions, open issues, as well as key winning strategies. A new insight with respect to common participation practice (RQ1) was that knowledge exchange is the primary participation incentive. This will most likely differ on platforms like Kaggle, on which prize money and achieving a high rank are expected to be substantially more important [45]. To our surprise, only a small portion of participants perceived limited computing power as a bottleneck. Similarly surprisingly, k-fold cross-validation on the training set as well as ensembling was only performed by a minority of participants. The competitions clearly led to substantial scientific progress according to the organizers (RQ2). Notably, however, only a small fraction of image analysis problems addressed by current competitions can be regarded as solved (App. C).
Open research questions identified as part of this work include: _(1) How can we better integrate meta information in neural network solutions?_, _(2) How can we effectively leverage temporal information in biomedical video analysis?_, _(3) How can we achieve generalization across devices, protocols, and sites?_, and _(4) How can we arrive at performance metrics that better address the biomedical domain interest?_ The latter is particularly interesting in light of the fact that the reflection of metrics in the challenge design was identified as a key strategy for winning a challenge. In line with recent literature [20, 25, 39, 47], it implies that common efforts are focused largely on overfitting to the current metrics rather than solving the underlying domain problem. Current initiatives are already addressing this issue [30], but our results imply that challenge organizers should focus more on ensuring that the actual biomedical needs are reflected in the design of their competition. Our work revealed particularly successful algorithm design choices (Fig. 3) and general strategies for winning a competition (Fig. 4) (RQ3). In the spirit of reporting negative results, we also included the results of the mixed model analysis despite the lack of statistical significance after correction for multiplicity of testing. Given the relatively small dataset (results from 80 competitions) compared to the number of parameters that we extracted from algorithm designs and strategies (\(>100\)), we hypothesize that the lack of statistical significance can largely be attributed to the small sample size. A limitation of our study could be seen in the fact that we only covered IEEE ISBI and MICCAI challenges of one specific year. Prior work, however, revealed that the competitions performed in the scope of these conferences cover the majority of all biomedical image analysis competitions [29]. Further limitations can be regarded as general limitations when working with surveys [11] and include the uncertainty of self-reported data and the potential bias resulting from the preselection of categorical variables. Finally, it is not straightforward to address the heterogeneity of challenges with a single questionnaire. For example, using a similar in-domain dataset may not always be feasible due to the sparsity of public biomedical datasets. Similarly, a researcher may regard ensembling as a general key strategy but may not have had the computing power to train and optimize multiple models working with video, 3D, or 4D data. To compensate for this effect in the design of the surveys presented in Sec. 2.1 and Sec. 2.2, we additionally asked winners for general recommended strategies (Fig. 4). The discrepancy between general recommendation and feasibility is reflected in the answers. For example, most winners recommend the integration of biologists/clinicians in a team but did not do so themselves. Despite the discussed limitations, our findings have the potential to impact a plethora of stakeholders in challenges.

Figure 3: Importance of design decisions for the neural network-based winning submission of the respective IEEE ISBI 2021 and MICCAI 2021 competition, rated by the (team) lead and ordered by percentage of highest vote (crucially important: dark blue). Voting was only conducted among those who used the respective design. "Applied by" indicates the percentage of respondents using the respective design.
First, biomedical image analysis researchers and developers can "stand on the shoulders of giants" (the competition winners) to improve algorithm development strategies when approaching a new problem. Second, future challenge organizers can adapt their designs carefully to the open issues revealed by this work. This would include a focus on the case/instance level rather than the pixel/voxel level to reflect biomedical needs, metrics that likewise reflect biomedical needs, as well as dataset designs that allow for improving the capabilities of algorithms to perform well on rare classes and to generalize across domains. Given that the vast majority of participants perceived limited time and not computing power as a bottleneck, challenge timelines should be critically questioned. Finally, the wider community can benefit from the open research questions we identified (Tab. 1). In conclusion, we performed the first systematic analysis of biomedical image analysis competitions, which revealed a plurality of novel insights with respect to participation, organization, and winning. Our work could pave the way for (1) developers to improve algorithm development strategies when approaching new problems, and (2) the scientific community to channel its activities into open issues revealed by this work.

Figure 4: Strategies for winning a challenge according to winners of IEEE ISBI 2021 and MICCAI 2021 competitions, ordered by the sum of the “crucially important” (dark blue) and “very important” (light blue) categories. The distribution of importance (from left to right: not important at all, barely important, moderately important, very important, crucially important) is depicted for each strategy.
2309.02429
Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach
Estimating the transferability of publicly available pretrained models to a target task has assumed an important place for transfer learning tasks in recent years. Existing efforts propose metrics that allow a user to choose one model from a pool of pre-trained models without having to fine-tune each model individually and identify one explicitly. With the growth in the number of available pre-trained models and the popularity of model ensembles, it also becomes essential to study the transferability of multiple-source models for a given target task. The few existing efforts study transferability in such multi-source ensemble settings using just the outputs of the classification layer and neglect possible domain or task mismatch. Moreover, they overlook the most important factor while selecting the source models, viz., the cohesiveness factor between them, which can impact the performance and confidence in the prediction of the ensemble. To address these gaps, we propose a novel Optimal tranSport-based suBmOdular tRaNsferability metric (OSBORN) to estimate the transferability of an ensemble of models to a downstream task. OSBORN collectively accounts for image domain difference, task difference, and cohesiveness of models in the ensemble to provide reliable estimates of transferability. We gauge the performance of OSBORN on both image classification and semantic segmentation tasks. Our setup includes 28 source datasets, 11 target datasets, 5 model architectures, and 2 pre-training methods. We benchmark our method against current state-of-the-art metrics MS-LEEP and E-LEEP, and outperform them consistently using the proposed approach.
Vimal K B, Saketh Bachu, Tanmay Garg, Niveditha Lakshmi Narasimhan, Raghavan Konuru, Vineeth N Balasubramanian
2023-09-05T17:57:31Z
http://arxiv.org/abs/2309.02429v1
# Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach

###### Abstract

Estimating the transferability of publicly available pre-trained models to a target task has assumed an important place for transfer learning tasks in recent years. Existing efforts propose metrics that allow a user to choose one model from a pool of pre-trained models without having to fine-tune each model individually and identify one explicitly. With the growth in the number of available pre-trained models and the popularity of model ensembles, it also becomes essential to study the transferability of multiple-source models for a given target task. The few existing efforts study transferability in such multi-source ensemble settings using just the outputs of the classification layer and neglect possible domain or task mismatch. Moreover, they overlook the most important factor while selecting the source models, viz., the cohesiveness factor between them, which can impact the performance and confidence in the prediction of the ensemble. To address these gaps, we propose a novel Optimal tranSport-based suBmOdular tRaNsferability metric (OSBORN) to estimate the transferability of an ensemble of models to a downstream task. OSBORN collectively accounts for image domain difference, task difference, and cohesiveness of models in the ensemble to provide reliable estimates of transferability. We gauge the performance of OSBORN on both image classification and semantic segmentation tasks. Our setup includes 28 source datasets, 11 target datasets, 5 model architectures, and 2 pre-training methods. We benchmark our method against current state-of-the-art metrics MS-LEEP and E-LEEP, and outperform them consistently using the proposed approach.

## 1 Introduction

In computer vision, transfer learning is a go-to strategy to train Deep Neural Networks (DNNs) on newer domains and datasets across tasks such as image classification [36, 25], image segmentation [55, 74] and object detection [20, 53]. This widespread usage is due to the easy availability of a large pool of open-sourced pre-trained models (trained on large-scale datasets such as ImageNet [37, 3]), which, when fine-tuned, achieve faster convergence and better performance than training from scratch. However, every time a user wants to employ transfer learning, the question that has grown increasingly relevant with the increased number of source models is: "Which combination of dataset and architecture should I pick to fine-tune to achieve the best performance on my target dataset?". To solve this, we need a tool that helps us choose a source model or set of source models which require minimal fine-tuning and achieve maximal performance. **Transferability estimation** (TE) metrics have been proposed in recent years to tackle this problem [60, 45, 71, 59, 48]. With these metrics, a particular source model can be selected without conducting expensive fine-tuning of all available source models on the target training set. Most efforts in this direction are, however, limited by their capability of selecting only a single source model, thus restricting their use in an ensemble learning setting. There has been only one work so far [1] which extends an existing single-source transferability estimation method [45] to an ensemble setting. 
While this work showed promising results, it did not consider the similarity between source and target datasets in the latent representation space, or account for the relationships between individual models in the ensemble. This problem space remains nascent at this time, necessitating more efforts to estimate transferability reliably in different conditions. Ensemble models have been popular for a few decades now in machine learning [18, 7, 64]. Ensemble models are known to increase task accuracy, decrease overall predictive variance and increase robustness against out-of-distribution data samples [19]. Recent efforts have shown the usefulness of ensembles of pre-trained models [65], especially considering the widespread availability of pre-trained models in the community [50]. The problem of estimating transferability for a model ensemble from a large source model pool becomes even more relevant in this context. In this work, we introduce a novel transferability estimation metric specifically designed for ensemble selection, called the Optimal Transport-based Submodular Transferability metric (OSBORN). As stated earlier, a recent effort in this direction [1] showed promising results for such a score, but focused on each individual model's performance (via the classifier's outputs) and did not consider the feature (latent representation) space mismatch, or how these models interact with each other in the ensemble. To address this, OSBORN measures the latent space mismatch between the source and the target datasets (domain difference) in addition to the mismatch in the classifier's outputs (task difference). Also, to account for the interaction between models in the ensemble, we introduce a novel model cohesion term, which captures the mutual cooperation between models towards forming an ensemble. Cohesion is required to ensure that individual models in an ensemble are in agreement with each other in terms of predictions (and do not vote each other out). Thus, in this work, we propose a domain-, task- and cohesion-aware transferability estimator for ensemble selection from a source pool of multiple models. Beyond bringing the above-mentioned factors into transferability estimation for ensembles, we show that the proposed score can be viewed as a submodular set function [4]. This allows us to follow a greedy maximization strategy, which is known to provide a high-quality solution for the problem based on well-known theoretical guarantees [42]. We thus select cohesive and closely related models for a particular target dataset. To evaluate our metric, we conduct extensive experiments using 28 source datasets, 11 target datasets, and 5 model architectures. In downstream tasks, we consider fully-supervised pre-training-based image classification, self-supervised pre-training-based image classification, semantic segmentation as well as domain adaptation. Table 1 presents an overview of our experiment breadth, as compared to other recent efforts on this problem. In particular, to the best of our knowledge, we are the first to perform transferability estimation of ensembles for image classification and domain adaptation tasks. 
To summarize, we make the following contributions: (1) We introduce a novel transferability estimation metric for ensemble selection that considers domain similarity, task similarity and inter-model cohesion in its design; (2) We show that viewing the proposed metric as a submodular set function allows us to use a simple greedy maximization strategy to select a source model ensemble for a given target dataset; (3) We study the performance of our metric across a wide range of downstream tasks and model pools; (4) We evaluate the reliability of our metric using different correlation metrics in our studies, and also carry out additional analysis and ablation studies to study its usefulness. We outperform earlier methods by a margin of \(58.62\%\), \(66.06\%\), and \(96.36\%\) in terms of Pearson Correlation Coefficient (PCC), Kendall \(\tau\) (KT) [31] and Weighted Kendall \(\tau\) (WKT) [63] for the image classification task. 1

Footnote 1: Project page: [https://vimalkb007.github.io/OSBORN/](https://vimalkb007.github.io/OSBORN/)

## 2 Related Work

**Transfer Learning:** Over the years, transfer learning has been applied and explored across various fields [12, 41, 2, 5], as well as across datasets, model architectures, and pre-training strategies [39, 15, 24]. These efforts have included the study of interesting and practical questions such as which particular layers are more transferable [70] or estimating the correlation between pre-training and fine-tuning performance [33]. Beyond fine-tuning of source models to target datasets, task transfer methods [73, 14] have also studied relationships between visual tasks such as semantic segmentation, depth prediction and vanishing point prediction, or used attribution maps to relate such tasks [56, 57]. In contrast to the aforementioned methods, the objective of our work is dataset transferability estimation.

Figure 1: Illustration of the objective and problem setting of our proposed metric. (_Trivia:_ OSBORN is also the main antagonist in the Spider-Man movie (2002), hence the emoji.)

**Transferability Estimation Metrics (Single Source):** As stated earlier, gauging transferability reduces the effort in finding an optimal source model for a particular target dataset because it averts the expensive fine-tuning process. In recent years, significant efforts have been made in this problem space, considering the relevance of this problem to practitioners. The H-Score was proposed [6] to measure the usefulness (in terms of discriminativeness) of pre-trained source models for the target task. While this method shows promising results as a pioneering work in this field, it does not consider scenarios where the source and target data have different distributions. Subsequently, NCE [60] and LEEP [45] developed methods that used the classifier outputs of pre-trained source models when the target dataset is forward-propagated through the model to estimate the log-likelihood of the target dataset. NCE largely focused on estimating transferability in scenarios where the source and target tasks share the same input data (e.g., face recognition and facial attribute classification). Subsequent methods such as LogME [71] also showed that likelihood methods might be prone to over-fitting. To tackle this, LogME [71] estimated the maximum value of label evidence (instead of maximum likelihood) given the feature set extracted by the pre-trained source models. 
Considering that previous methods largely relied on classifier outputs, and given their sub-optimal performance in practical scenarios like cross-domain settings, OTCE [59] proposed an optimal transport framework to compute domain difference (based on feature space) and task difference (based on label space) to estimate transferability. This method leveraged the source model's latent representations in addition to classifier outputs, with no explicit assumptions on the source and target datasets. All the above works are, however, focused on estimating transferability from a single source model to a target dataset. **Transferability Estimation Metrics (Multi-Source Ensembles):** Agostinelli et al. [1] recently proposed the first work extending transferability estimation to the selection of source model ensembles, specifically focused on semantic segmentation. This work extends LEEP [45] to ensembles, and shows promising results in the considered settings. Our work builds on this effort in multiple ways: (i) instead of solely relying on classifier outputs for estimating transferability [45, 1, 60], we also consider the domain mismatch in the latent feature representation space; (ii) beyond looking at the individual model's outputs in an ensemble, we also consider the interactions and correlation between the model outputs; (iii) we make no assumptions on the source and target data distributions; and (iv) while [1] focused on segmentation, we show our method's results on classification, segmentation and domain adaptation tasks. We also show results on multiple pre-training strategies, while previous works [45, 71, 60, 59] mostly focus on fully-supervised pre-training strategies. Our proposed metric can also be viewed as a submodular function, which allows us to leverage ranking-based greedy optimization strategies to make it efficient in practice. **Ensemble Learning.** Learning ensembles of models has been popular in machine learning to increase overall task performance, decrease prediction variance, prevent overfitting, and increase out-of-distribution robustness [7, 22, 69, 47]. More recent efforts in training ensembles of neural network models have focused on speeding up their training [61, 65], leveraging a single model's capacity to train multiple subnetworks whose predictions are ensembled to improve robustness [23], or studying mixture-of-experts paradigms which bring together thousands of subnetworks for large language models [54]. We clarify that our work focuses rather on selecting model ensembles from a larger source model pool via estimating transferability, without explicitly training ensembles themselves. One can view our work as a step before ensemble learning, when there is a larger model pool and only a few models can be ensembled. As stated in [1], this setting is commonly encountered by practitioners in the real world across application domains.

## 3 Background and Preliminaries

**Notations:** Given \(M\) source datasets, we denote the \(r^{th}\) source dataset as \(D_{s^{r}}=\{(x_{s^{r}}^{i},y_{s^{r}}^{i})\}_{i=1}^{n}\sim P_{s^{r}}(x,y)\) and the target dataset as \(D_{t}=\{(x_{t}^{i},y_{t}^{i})\}_{i=1}^{m}\sim P_{t}(x,y)\), where \(x_{s^{r}}^{i}\in\mathcal{X}_{s^{r}}\), \(x_{t}^{i}\in\mathcal{X}_{t}\), \(y_{s^{r}}^{i}\in\mathcal{Y}_{s^{r}}\), and \(y_{t}^{i}\in\mathcal{Y}_{t}\). Note that we do not restrict the label spaces \(P(\mathcal{Y}_{s^{r}})\) and \(P(\mathcal{Y}_{t})\) to span the same category set. 
We base our study on a domain-agnostic and task-agnostic setting. **Transferability Estimation for Ensembles:** For every source dataset \(D_{s^{r}}\), we assume there exists a pre-trained model on that dataset denoted by \((\theta_{s^{r}},h_{s^{r}})\), where \(\theta\) is the feature extractor and \(h\) is the classifier head.

\begin{table} \begin{tabular}{l|c c c} \hline & \multicolumn{3}{c}{**Single Source TE**} \\ \hline & **Classification** & **Segmentation** & **DA Classification** \\ \hline \# LEEP [45] & ✓ & \(\times\) & \(\times\) \\ \# LogME [71] & ✓ & \(\times\) & \(\times\) \\ \# OTCE [59] & \(\times\) & \(\times\) & ✓ \\ \hline & \multicolumn{3}{c}{**Multi Source TE**} \\ \hline \# MS-LEEP [1] & \(\times\) & ✓ & \(\times\) \\ \# Ours & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Experimental settings studied by different methods in single-source TE and multi-source TE settings (DA: Domain Adaptation). We note the wide range of our experimental settings when compared to earlier work.

\(M\) represents the collection of such source models. As stated earlier, we focus on a multiple source model selection setting (i.e. ensembles), where our metric provides a transferability estimation (TE) score \(\alpha^{M_{e}\to t}\) for a given subset of models \(M_{e}\) from the source pool \(M\). When correlated to the accuracy \(A^{M_{e}\to t}\) (i.e. the fine-tuned accuracy of the ensemble on the target test set), this TE score provides the reliability of the transferability estimate. Following [1], we calculate the ensemble accuracy by fine-tuning the individual models in subset \(M_{e}\) (both \(\theta\) and \(h\)) on the target train set and averaging their predictions on the target test set. **Submodularity in TE for Ensembles.** The main idea of **TE** involves choosing optimal source models for a given target dataset. Apart from performance and computation trade-offs, a crucial motivation to select a subset of models is to mitigate the risk of negative transfer. Fig 2 herein shows that opting for all models in the ensemble could lead to a decrease in overall performance compared to selecting a smaller set of models. This can be due to the detrimental impact of weak or non-transferable models in the ensemble, highlighting the importance of carefully combining models to ensure optimal performance. Further, finding an optimal ensemble for a given target dataset requires checking all possible combinations of different source models for a particular ensemble size. This exhaustive process is an NP-hard problem. In this paper, we propose a submodular approach to rank the available models in the source pool according to the performance gain they would yield if added to the subset pool of the ensemble, and select the top \(k\) models, where \(k\) is the required size of the ensemble. While submodular subset selection is popular in different machine learning settings [4, 30, 66], to the best of our knowledge, this is the first such use for transferability estimation. To this end, we first formally define submodularity below. **Definition 3.1**.: _Let \(\Omega\) be a set and \(\mathcal{P}\left(\Omega\right)\) be the power set of \(\Omega\); then a submodular function is a set function \(f:\mathcal{P}\left(\Omega\right)\rightarrow\mathbb{R}\). A submodular function follows the property of diminishing returns, i.e. adding a new element to a smaller set produces a larger increase in \(f\) compared to a larger set. 
Mathematically, if for all \(X,Y\subseteq\Omega\), where \(X\subseteq Y\), and for all \(v\in\Omega\setminus Y\), the property follows:_ \[f(X+v)-f(X)\geq f(Y+v)-f(Y) \tag{1}\] A key benefit of posing a problem as one of submodular subset selection is that a greedy approach can be leveraged to efficiently identify a solution of the required subset size that is reasonably close to the optimal solution. Nemhauser [42] showed that the quality of the subset chosen greedily cannot be worse than \(1-e^{-1}\) of the optimal value. This makes submodularity an attractive approach for usage in the field of TE for ensembles, as we can rank the models in the source pool and select an ensemble of the desired size. Further details on how to greedily select the models are discussed later in this paper. **Evaluation Criteria.** As stated earlier, the reliability of a **TE** method is obtained by measuring the correlation between \(\alpha^{M_{e}\to t}\) and \(A^{M_{e}\to t}\). Previous works [71, 45, 1, 59, 60] measure this correlation using different techniques such as the Pearson Correlation Coefficient (PCC), Kendall \(\tau\) (KT) [31] and Weighted Kendall \(\tau\) (WKT) [63]. We report results for all these correlation measures to be comprehensive in our analysis.

## 4 OSBORN: Transferability Estimation Metric for Model Ensemble Selection

In order to design a reliable transferability estimation approach for model ensembles, we propose the Optimal Transport-based Submodular Transferability metric (OSBORN), which considers three factors: domain difference, task difference, and inter-model cohesion. Inspired by earlier efforts on single-source transferability estimation [59], we consider both the classifier output and the distance in the latent representation space in our approach. In addition, since our focus is on model ensembles, we consider inter-model relationships in this metric. We now describe each of these quantities. _Minimize Domain Difference (\(W_{D}\))._ In order to minimize the latent space mismatch between the source and target datasets, similar to [59], we choose the Wasserstein distance and Optimal Transport (OT) to compute this mismatch, owing to their advantages in capturing the geometry of the underlying data. Mathematically, the p-Wasserstein distance is given as follows: \[W_{p}\left(\beta,\gamma\right)=\left(\inf_{\pi\in\Pi\left(\beta,\gamma\right)} \int D(x,z)^{p}d\pi(x,z)\right)^{1/p} \tag{2}\] where \(p\geq 1\); \(\beta,\gamma\) are continuous or discrete random variables in a complete and separable space \(S\); \(D(.,.):S\times S\rightarrow\mathbb{R}^{+}\) is a distance or cost function between two points \(x\) and \(z\); and \(\Pi(\beta,\gamma)\) is the set of couplings, which can be understood as joint probability distributions with marginals \(\beta\) and \(\gamma\). In particular, in this work, we use the 1-Wasserstein distance, also called the Earth Mover Distance, to calculate the domain difference between source and target latents as: \[W_{D}\left(\theta_{s},x_{t}\right)=\sum_{i,j=1}^{m,n}||\theta_{s}(x_{s}^{i})- \theta_{s}(x_{t}^{j})||_{2}^{2}\pi_{ij}^{*}, \tag{3}\] where \(||\cdot-\cdot||_{2}^{2}\) is the distance or cost metric, and \(\pi^{*}\) is the optimal coupling matrix of size \(m\times n\) obtained by solving the optimal transport (OT) problem using the Sinkhorn algorithm [11, 59]. Note that \(\theta_{s}(.)\) is the feature extractor belonging to the source model.

Figure 2: Test accuracies on Caltech101 with varying subsets of models (chosen randomly)
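To make this step concrete, the following is a minimal NumPy sketch of the entropic-OT estimate of \(W_{D}\) in Eq. (3). The function names, the uniform marginals, and the regularization strength `reg` are illustrative assumptions of this sketch rather than details fixed by the paper, which only specifies that the coupling \(\pi^{*}\) is obtained with the Sinkhorn algorithm [11, 59].

```python
import numpy as np

def sinkhorn_coupling(C, reg=0.1, n_iters=200):
    """Entropic-OT coupling for a cost matrix C (m x n) with uniform marginals."""
    m, n = C.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    # `reg` should be scaled to the magnitude of C to stay numerically stable
    K = np.exp(-C / reg)                 # Gibbs kernel
    u, v = np.ones(m), np.ones(n)
    for _ in range(n_iters):             # alternating marginal scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]   # approximate optimal coupling pi*

def domain_difference(src_feats, tgt_feats, reg=0.1):
    """W_D of Eq. (3): transport cost between source and target latent features."""
    # squared Euclidean cost between every source/target feature pair
    C = ((src_feats[:, None, :] - tgt_feats[None, :, :]) ** 2).sum(axis=-1)
    pi = sinkhorn_coupling(C, reg=reg)
    return float((C * pi).sum()), pi     # pi* is returned for reuse in W_T
```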
Intuitively, if the latent space of the source dataset is closely aligned with that of the target dataset, it is easier for the model to transfer. _Minimize Task Difference (\(W_{T}\))._ In order to measure the difference between a source task and the given target task, we use the mismatch between the model/classifier's outputs for source and target data forward-propagated through the source model. We use the conditional entropy (CE) of the predicted labels \(\hat{y_{t}}\in\mathcal{Y}_{s}\) of the target dataset samples given their ground truth labels \(y_{t}\in\mathcal{Y}_{t}\). The predicted labels are obtained by forward-propagating the target samples \(x_{t}\) through the corresponding source model \(\theta_{s}\). Let \(\hat{Y_{t}}\) be a random variable that takes values in the range of \(\mathcal{Y}_{s}\), and \(Y_{t}\) be a random variable that takes values in the range of \(\mathcal{Y}_{t}\); then \(W_{T}\) can be calculated as: \[\begin{split} W_{T}\left(\theta_{s},x_{t}\right)&=H (\hat{Y_{t}}|Y_{t})\\ &=-\sum_{\hat{y_{t}}\in\mathcal{Y}_{s}}\sum_{y_{t}\in\mathcal{Y} _{t}}\hat{P}(\hat{y_{t}},y_{t})\log\frac{\hat{P}(\hat{y_{t}},y_{t})}{\hat{P}(y _{t})}\end{split} \tag{4}\] where \(\hat{P}(\hat{y_{t}},y_{t})\) is the joint distribution of predicted and ground truth target labels and \(\hat{P}(y_{t})\) is the marginal distribution of the ground truth labels. These quantities can be easily computed using the optimal coupling matrix (obtained in Eqn 3) as follows: \[\hat{P}(\hat{y_{t}},y_{t})=\sum_{i,j:\,\hat{y}_{t}^{j}=\hat{y_{t}},\,y_{t}^{i}=y_{t}} \pi_{ij}^{*}, \tag{5}\] The marginal distribution can be obtained from the joint distribution as follows: \[\hat{P}(y_{t})=\sum_{\hat{y_{t}}\in\mathcal{Y}_{s}}\hat{P}(\hat{y_{t}},y_{t}), \tag{6}\] Intuitively, similar tasks will result in a low \(W_{T}\) value. Using \(W_{T}\), i.e., CE, alone represents empirical transferability according to [60]. However, in [59] it is experimentally shown that using only CE is insufficient in a domain-agnostic setting, which motivates us to combine this with \(W_{D}\) to account for the feature representation space mismatch. _Minimize Model Disagreement (Cohesiveness \(W_{C}\))._ For an ensemble, it is important that the individual models reinforce each other's predictions and have little disagreement amongst themselves in order to achieve good overall performance. To understand the cohesiveness of an ensemble, we use conditional entropy to capture the amount of disagreement between models in the subset of models \(M_{e}\). Mathematically, we represent \(W_{C}\) as: \[W_{C}\left(M_{e},x_{t}\right)=\sum_{m_{i},m_{j}\in M_{e}}H(m_{i}(x_{t})|m_{j} (x_{t})) \tag{7}\] Intuitively, we want high cohesiveness and little disagreement among the models to reinforce the ensemble's predictive belief, i.e. a low \(W_{C}\) value, and to avoid scenarios where models vote out each other's predictions. Bringing the quantities together, we define OSBORN for a subset of models \(M_{e}\) of our source pool \(M\) as follows. Our metric collectively accounts for domain difference, task difference and model cohesion. See Fig. 3 for an overview. \[\text{OSBORN}=\sum_{m_{i}\in M_{e}}[W_{D}(m_{i},x_{t})+W_{T}(m_{i},x_{t})]+W_{C}\left(M_{e},x_{t}\right) \tag{8}\] A model ensemble that obtains a low OSBORN score will have better transferability to a target dataset. Our experiments show that a simple combination of these three quantities (with no weighting coefficients) outperforms existing methods in all the settings we consider. 
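Continuing the sketch above, the task-difference and cohesion terms can be estimated as follows. The joint label distribution is accumulated from the coupling \(\pi^{*}\) as in Eq. (5), and cohesion uses empirical pairwise conditional entropies of hard predictions. The data formats (integer label arrays, one prediction array per model) and the exact alignment of the label arrays with the rows and columns of \(\pi^{*}\) are assumptions made for illustration.

```python
import numpy as np

def conditional_entropy(joint, eps=1e-12):
    """H(rows | cols) of a normalized joint distribution, as in Eq. (4)."""
    p_y = joint.sum(axis=0, keepdims=True)   # marginal of the conditioning variable
    mask = joint > eps
    return float(-np.sum(joint[mask] * np.log((joint / np.maximum(p_y, eps))[mask])))

def joint_from_coupling(pi, row_labels, col_labels, n_rows, n_cols):
    """Aggregate coupling mass into a joint label distribution (Eq. 5)."""
    joint = np.zeros((n_rows, n_cols))
    for i in range(pi.shape[0]):             # O(mn) loop; fine for a sketch
        for j in range(pi.shape[1]):
            joint[row_labels[i], col_labels[j]] += pi[i, j]
    return joint / max(joint.sum(), 1e-12)

def task_difference(pi, pred_labels, true_labels, n_pred, n_true):
    """W_T = H(Yhat_t | Y_t), with the joint estimated from pi* (Eqs. 4-6)."""
    joint = joint_from_coupling(pi, pred_labels, true_labels, n_pred, n_true)
    return conditional_entropy(joint)

def cohesion(model_preds, n_classes):
    """W_C of Eq. (7): sum of pairwise H(m_i(x_t) | m_j(x_t)) over the ensemble."""
    total = 0.0
    for i, yi in enumerate(model_preds):
        for j, yj in enumerate(model_preds):
            if i == j:
                continue
            joint = np.zeros((n_classes, n_classes))
            for a, b in zip(yi, yj):         # empirical joint of hard predictions
                joint[a, b] += 1.0
            total += conditional_entropy(joint / joint.sum())
    return total

def osborn(per_model_wd, per_model_wt, w_c):
    """Eq. (8): lower is better."""
    return sum(per_model_wd) + sum(per_model_wt) + w_c
```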
In our ablation studies and analysis, we study the contribution of each OSBORN component as well as the effect of weighting each component differently. **Submodular Subset Selection in OSBORN.** As stated earlier, we show that the proposed OSBORN metric translates to a submodular optimization problem, which allows us to rank and pick models efficiently from the source pool. While the aforementioned quantities were written from a _minimization_ perspective (for clarity and ease of understanding), to pose this as a submodular _maximization_ problem, we consider the corresponding scoring function to be maximized as: \[f\left(M_{e}\right)=-\sum_{m_{i}\in M_{e}}[W_{D}(m_{i},x_{t})+W_{T}(m_{i},x_{t})]-W_{C}\left(M_{e},x_{t}\right) \tag{9}\]

Figure 3: Overview of our method for estimating the transferability for ensembles.

The value of our set function is a transferability estimate designed such that it is highly correlated with the fine-tuned accuracy (see Tables 3 and 4), thus enabling us to select models without expensive fine-tuning. **Theorem 4.1**.: _The scoring function \(f\left(X\right)\), as defined in Equation 9, is a submodular function._ Proof.: Let \(X_{1}\) and \(X_{2}\) be two sets such that \(X_{1}\subseteq X_{2}\subseteq M\), and consider an unselected model instance \(v\in M\backslash X_{2}\). The gain in the score obtained by appending \(v\) to the set \(X_{1}\) is calculated as: \[f\left(X_{1}\cup v\right)-f\left(X_{1}\right)=- \left[W_{D}\left(v,x_{t}\right)+W_{T}\left(v,x_{t}\right)\right] \tag{10}\] \[-\sum_{m_{i}\in X_{1}}H\left(m_{i}\left(x_{t}\right)\mid v\left( x_{t}\right)\right)\] \[-\sum_{m_{j}\in X_{1}}H\left(v\left(x_{t}\right)\mid m_{j}\left( x_{t}\right)\right)\] Similarly, the gain obtained by set \(X_{2}\) is given by: \[f\left(X_{2}\cup v\right)-f\left(X_{2}\right)=- \left[W_{D}\left(v,x_{t}\right)+W_{T}\left(v,x_{t}\right)\right] \tag{11}\] \[-\sum_{m_{i}\in X_{2}}H\left(m_{i}\left(x_{t}\right)\mid v\left( x_{t}\right)\right)\] \[-\sum_{m_{j}\in X_{2}}H\left(v\left(x_{t}\right)\mid m_{j}\left( x_{t}\right)\right)\] As we have \(X_{1}\subseteq X_{2}\), the number of terms in the summation of Equation 11 will be greater than or equal to that of Equation 10. Since entropy is always non-negative, we can say that \[-\sum_{m_{i}\in X_{1}}H\left(m_{i}\left(x_{t}\right)\mid v\left(x_{t}\right)\right)-\sum_{m_{j}\in X_{1}}H\left(v\left(x_{t}\right)\mid m_{j}\left(x_{t}\right)\right)\geq-\sum_{m_{i}\in X_{2}}H\left(m_{i}\left(x_{t}\right)\mid v\left(x_{t}\right)\right)-\sum_{m_{j}\in X_{2}}H\left(v\left(x_{t}\right)\mid m_{j}\left(x_{t}\right)\right)\] This implies that \[f\left(X_{1}\cup v\right)-f\left(X_{1}\right)\geq f\left(X_{2}\cup v\right)-f\left(X_{2}\right) \tag{12}\] We can see that Equation 12 satisfies the condition in Definition 3.1. This completes the proof. **Submodular Optimization using Greedy Maximization.** Since our set function \(f(M_{e})\) (mentioned in Eq. 9) is submodular, it exhibits monotonicity, i.e. the set with maximum gain is always the entire source pool \(M\). However, since we want to select a subset of models, i.e. an ensemble set from the source pool \(M\), we impose a cardinality constraint. Formally, we aim to select the set \(M_{e}\) of size at most \(k\) that maximizes the gain: \[\max_{M_{e}:|M_{e}|=k}f(M_{e}) \tag{13}\] This problem is, however, NP-hard, so we use a greedy maximization strategy (sketched below) to find a near-optimal set of models \(M_{e}\) for the target dataset. 
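A minimal sketch of this greedy maximization follows, assuming a callable `f` that evaluates the set function of Eq. (9) on a list of model indices; nothing in the sketch is specific to OSBORN beyond that interface.

```python
def greedy_select(f, n_models, k):
    """Greedily maximize a submodular set function under the constraint |S| <= k."""
    selected, remaining = [], set(range(n_models))
    for _ in range(k):
        base = f(selected)
        # marginal gain f(S + v) - f(S) for every unselected model v
        best = max(remaining, key=lambda v: f(selected + [v]) - base)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Each of the \(k\) rounds evaluates the marginal gain \(f(M_{e}\cup m_{i})-f(M_{e})\) for every unselected model, matching the procedure described next.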
In practice, we pre-calculate the pair-wise domain difference \(W_{D}\) and task difference \(W_{T}\) between each source dataset and the target dataset. Then, we calculate the model cohesion term \(W_{C}\) for adding each model \(m_{i}\) to the set of already selected models \(M_{e}\). Using these three quantities pertaining to \(m_{i}\), we calculate the gain achieved by adding it to the set \(M_{e}\) as \(f\left(M_{e}\cup m_{i}\right)-f\left(M_{e}\right)\), greedily pick the model with the highest gain, and add it to the set \(M_{e}\). We continue this iteration until we reach the ensemble set size \(k\). Once the target samples are forward-propagated through the source models, the quantities in our metric can be computed independently for each source model, thus making our overall computations parallelizable. Considering \(M_{e}^{*}\) as the optimal ensemble set, it is well known from [42] that such a greedy approach has a performance guarantee of at least \(63\%\) of the optimal ensemble set, i.e. \[f\left(M_{e}\right)\geq\left(1-\frac{1}{e}\right)f\left(M_{e}^{*}\right) \tag{14}\] In practice, we observe that the average accuracy of the ensemble selected greedily (\(76.315\%\)) in the fully-supervised setting is \(95.56\%\) of the average accuracy of the optimal ensemble (\(79.857\%\)). Similarly, for the self-supervised setting, the average accuracy of the ensemble selected greedily (\(79.446\%\)) is \(93.50\%\) of the average accuracy of the optimal ensemble (\(84.962\%\)), as shown in Table 2. More details on the experiments are presented in the next section.

\begin{table} \begin{tabular}{c|c|c} \hline \hline & **Ensemble Accuracy (Fully Supervised)** \\ \cline{2-3} **Target Dataset** & **Greedy** & **Optimal** \\ \hline Oxford102Flowers & 90.720 & 91.697 \\ Caltech101 & 68.533 & 75.333 \\ StanfordCars & 69.692 & 72.540 \\ \hline **Average** & 76.315 & 79.857 \\ \hline \multicolumn{3}{c}{**Ensemble Accuracy (Self Supervised)**} \\ \hline Oxford102Flowers & 86.935 & 95.604 \\ Caltech101 & 88.800 & 90.000 \\ StanfordCars & 62.604 & 69.282 \\ \hline **Average** & 79.446 & 84.962 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the target test set accuracies achieved by fine-tuned ensembles selected using the greedy optimization of OSBORN vs the optimal ensembles.

We clearly observe that our approach empirically gives significantly stronger performance than the theoretical guarantee.

## 5 Experiments and Results

**Experimental Setup.** We follow the same experimental setup as the previous work on source model ensemble selection [1] to evaluate our transferability metric in the multiple source model setting. Given a total of \(M\) models in the source pool, our objective is to select an ensemble model by choosing \(k\) models from the source pool. We follow [1] in setting \(k\) to 3 for fairness of comparison. We also conducted a study to evaluate this on the Oxford-IIIT Pets dataset, and found that maximum accuracy is gained for an ensemble of size 3 (see Fig 4), which further reinforces our choice for the experiments. **Classification Datasets.** For the classification tasks, we consider 11 widely-used datasets including CIFAR-10 [35], CIFAR-100 [35], Caltech-101 [16], Stanford Cars [34], Oxford 102 Flowers [46], Oxford-IIIT Pets [49], Imagenette [27], CUB200 [67], FashionMNIST [68], SVHN [43], and Stanford Dogs [32]. These datasets are popularly used in many transfer learning tasks. 
Out of these 11 datasets, we set Caltech-101 [16], Stanford Cars [34], Oxford 102 Flowers [46], Oxford-IIIT Pets [49], and Stanford Dogs [32] as our target datasets and estimate transferability using OSBORN. **Model Architectures (Fully-supervised).** For this setting, we consider 2 source model architectures, ResNet-101 [25] and DenseNet-201 [28], keeping in mind model diversity and capacity. We take these models from the open-sourced PyTorch library [50]. Initially, both models are initialized with the fully-supervised ImageNet weights [37], and then we train them on the 11 classification datasets to prepare our source model pool. **Model Architectures (Self-supervised).** For this setting, we consider ResNet-50 [25] as our source model architecture but initialize it with weights obtained from two self-supervised pre-training strategies, namely BYOL [21] and MoCov2 [9]. We thus have two variants of ResNet-50 models to produce enough diversity. As done in the previous case, we train these two models on the 11 classification datasets to prepare our source model pool. We use multiple pre-trained SSL models to build our pool; however, fine-tuning is done in a fully-supervised fashion. Our motivation here was to study if OSBORN can estimate transferability reliably across multiple pre-training settings. **Training Setup for Source Models (Classification Tasks).** For all classification tasks, we train the source models using a cross-entropy loss and optimize it using Stochastic Gradient Descent (SGD) with momentum. Given these details, the most important hyperparameters are the learning rate, batch size and weight decay. We train the models with a grid search over the learning rate in (\(1e{-}1\), \(1e{-}2\), \(1e{-}3\), \(1e{-}4\)), batch size in (\(32,64,128\)), and weight decay in (\(1e{-}3\), \(1e{-}4\), \(1e{-}5\), \(1e{-}6\), \(0\)) to pick the best hyperparameters. All our experiments are written in PyTorch and are conducted on a single Tesla V-100 GPU. For the fully-supervised pre-trained setting, we initialize the models with ImageNet weights. In the case of the self-supervised pre-trained setting, we initialize the models using BYOL or MoCov2 (on ImageNet) weights. For our experiments on the multi-domain DomainNet dataset, we initialize our models with ImageNet weights. **Training Setup for Source Models (Semantic Segmentation Tasks).** We train the source models using a pixel-wise cross-entropy loss and optimize it using Stochastic Gradient Descent (SGD) with momentum. The most important hyperparameters herein are the learning rate, batch size and weight decay. We train the models with a grid search over the learning rate in (\(1e{-}1\), \(1e{-}2\), \(1e{-}3\), \(1e{-}4\)), batch size in (\(32,64,128\)), and weight decay in (\(1e{-}3\), \(1e{-}4\), \(1e{-}5\), \(1e{-}6\), \(0\)), and pick the best hyperparameters. All these experiments are also written in PyTorch and conducted on a single Tesla V-100 GPU. We initialize the source models using the COCO pre-trained weights. **Implementation of Source Models and Baselines.** We use open-source models available via the PyTorch library for classification and semantic segmentation tasks. We use the PyTorch Lightning library to obtain model weights for the self-supervised pre-training setting. We use the code released by the respective papers for calculating OTCE [59], MS-LEEP, E-LEEP, IoU-LEEP and SoftIoU-LEEP [1] scores. **Evaluating Ensemble Performance.** We follow the protocol in [1] for computing ground truth accuracies of ensembles. 
We finetune all the source models present in the ensemble (both feature extractor and classifier) using the target training set. Then, we individually make predictions using the source models on the target test set and average them to get the final ensemble prediction. We note that no target-trained models are in the source pool. We compare this final prediction with the ground-truth label and calculate the classification accuracy. Note that we need to fine-tune all source models only once and can re-use their predictions on the test set across all ensemble combinations; a minimal sketch of this evaluation is given at the end of this section. As stated earlier, we report the Pearson Correlation Coefficient (PCC), Kendall \(\tau\) (KT) and Weighted Kendall \(\tau\) (WKT) in our results. **Evaluation on Fully-Supervised Pre-Trained Models.** We herein compare our OSBORN with the baseline metrics, i.e. MS-LEEP and E-LEEP, in terms of three correlation metrics, WKT, KT, and PCC2.

Footnote 2: Our baselines MS-LEEP and E-LEEP use custom proprietary model architectures that are not publicly available. We hence followed the authors’ code and guidelines in using their method on the models used in our work, and picked the best-performing hyperparameters for the results corresponding to their baselines shown in this work.

Figure 4: Test accuracy on the Oxford-IIIT Pets dataset compared to the ensemble size. We observed a similar trend across other datasets as well.

The correlation values are reported in Table 3. Averaged across five target datasets, OSBORN improves \(96.36\%\) over MS-LEEP and \(140\%\) over E-LEEP in terms of WKT; improves \(66.06\%\) over MS-LEEP and \(93.16\%\) over E-LEEP in terms of KT; and improves \(58.62\%\) over MS-LEEP and \(75.23\%\) over E-LEEP in terms of PCC. We can visually see the overall performance of our metric outperforming the existing baselines significantly in Fig 5. **Evaluation on Self-Supervised Pre-Trained Models.** We compare the performance of our method with the baseline methods, i.e. MS-LEEP and E-LEEP. We present the experimental results in terms of the different correlation coefficients in Table 4. Note that we use the Frobenius norm regularizer while solving the OT problem because it gave us better results when compared to using other regularizers. In the appendix, we report results without any regularizers and compare them with the Frobenius norm variant. Table 4 shows that, averaged across five target datasets, OSBORN improves \(268.69\%\) over MS-LEEP and \(231.82\%\) over E-LEEP in terms of WKT; improves \(442.10\%\) over MS-LEEP and \(379.07\%\) over E-LEEP in terms of KT; and improves \(527.27\%\) over MS-LEEP and \(392.86\%\) over E-LEEP in terms of PCC. **Performance of Selected Ensembles.** Table 2 reports the ensemble accuracy of the models selected through OSBORN. For completeness of this discussion, we also report the same results for OSBORN without greedy maximization as well as for MS-LEEP and E-LEEP in Table 5. Following [1], we first calculate the OSBORN value for every ensemble candidate and pick the ensemble that achieves the highest value. We follow a similar strategy with MS-LEEP and E-LEEP to pick the best ensemble according to their values. To compute the ensemble accuracy, we used the individual models fine-tuned on the target train set and got their predictions on the target test set. We average these predictions and compare them with the ground truth labels to obtain the overall accuracy. We observe that the ensemble selected by OSBORN achieves the highest test accuracy across all datasets. 
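To make the evaluation protocol described above concrete, here is a minimal sketch of the prediction averaging and of the correlation computation; the array formats are assumptions of the sketch, and the three correlation measures use standard SciPy routines.

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr, weightedtau

def ensemble_accuracy(probs_per_model, y_true):
    """Average each fine-tuned model's class probabilities, then take the argmax."""
    avg = np.mean(probs_per_model, axis=0)       # shape [n_test, n_classes]
    return float((avg.argmax(axis=1) == y_true).mean())

def reliability(te_scores, accuracies):
    """Correlation between TE scores and fine-tuned ensemble accuracies."""
    return {
        "PCC": pearsonr(te_scores, accuracies)[0],
        "KT": kendalltau(te_scores, accuracies)[0],
        "WKT": weightedtau(te_scores, accuracies)[0],
    }
```

Since each model is fine-tuned once, its per-sample probabilities can be cached and re-averaged for every candidate ensemble, which keeps the ground-truth computation cheap across all combinations.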
In the case of both fully supervised and self-supervised settings, the baseline methods, i.e. MS-LEEP and E-LEEP, select the same ensembles (despite having different correlation values) in every case, which is why they obtain the same ensemble accuracy. **Scaling Number of Models in Ensemble.** As shown earlier in this section (Fig 4), we found the performance to saturate after an ensemble size of 3 in the datasets considered in this work as well as in [1]. On the other hand, we also observe, unsurprisingly, that the cost of ensemble selection can go up significantly as the ensemble size increases. We show the cost performance of models selected for the Caltech101 dataset in Fig 6. Despite the increasing trend, we note that the time taken is still in the order of seconds, which makes the proposed OSBORN metric practical and relevant. **Ablation Studies.** We conducted additional experiments to understand the influence of each component in OSBORN (included in the Appendix). In general, while the simple addition of the three quantities in OSBORN without any weights outperformed previous methods, we observed that these weights can be fine-tuned through a grid search over a larger range of values to get even better transferability estimates. This, however, varies with the target dataset. On Caltech101 as the target dataset, we noticed that giving more weightage to \(W_{D}\) compared to the other two terms (\(W_{T}\) and \(W_{C}\)) achieved higher correlation scores, as shown in Fig 7. This could be because of the wide variety of images in this dataset. \(W_{D}\) measures the latent space mismatch of such varied images with the source datasets (which may not have overlapping images/representations with the target set), which benefits in this case. More detailed analysis is provided in the Appendix.

Figure 5: Comparison of OSBORN over 5 target datasets in terms of Weighted Kendall’s \(\tau\). We can see that our metric constantly outperforms the baselines across every dataset by a large margin.

\begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Target Dataset**} & \multicolumn{3}{c}{**Weighted Kendall’s \(\tau\)**} & \multicolumn{3}{c}{**Kendall’s \(\tau\)**} & \multicolumn{3}{c}{**Pearson**} \\ & **MS** & **E** & **Ours** & **MS** & **E** & **Ours** & **MS** & **E** & **Ours** \\ \hline Oxford102Flowers & 0.086 & -0.019 & **0.616** & 0.138 & 0.074 & **0.400** & 0.230 & 0.164 & **0.456** \\ OxfordIIITPets & 0.414 & 0.393 & **0.558** & 0.346 & 0.326 & **0.453** & 0.504 & 0.500 & **0.666** \\ StanfordDogs & 0.326 & 0.323 & **0.477** & 0.244 & 0.242 & **0.427** & 0.398 & 0.407 & **0.604** \\ Caltech101 & 0.435 & 0.409 & **0.565** & 0.240 & 0.231 & **0.335** & 0.353 & 0.341 & **0.486** \\ StanfordCars & 0.115 & 0.018 & **0.486** & 0.137 & 0.071 & **0.368** & 0.256 & 0.163 & **0.549** \\ \hline Average & 0.275 & 0.225 & **0.540** & 0.221 & 0.190 & **0.367** & 0.348 & 0.315 & **0.552** \\ \hline \end{tabular} \end{table} Table 3: Comparison of different ensemble transferability estimation metrics for fully-supervised models (classification tasks). The best results are indicated in bold. Note: MS: MS-LEEP, E: E-LEEP, Ours: OSBORN.

## 6 Conclusions

In this paper, we propose a novel optimal transport-based transferability estimation metric, OSBORN, carefully designed for ensembles, that considers multiple factors such as the mismatch in the latent space, the mismatch in the label space, and the cohesiveness amongst the individual models in the ensemble. 
We show that the proposed metric can be treated as a submodular optimization problem, allowing us to leverage a greedy strategy for source model ensemble selection. We show experimentally that our metric outperforms the existing metrics MS-LEEP and E-LEEP across tasks on multiple correlation metrics. Future directions include increasing the computational efficiency of this method, as well as studying its applicability to other tasks and problem settings.

## Acknowledgements

This work was partly supported by KLA and the Department of Science and Technology, India through the DST ICPS Data Science Cluster program. We would like to thank the authors of [1] for insightful discussions. Further, we thank the anonymous reviewers for their valuable feedback that improved the presentation of this paper.
2310.14328
Quantum Key Leasing for PKE and FHE with a Classical Lessor
In this work, we consider the problem of secure key leasing, also known as revocable cryptography (Agarwal et al., Eurocrypt '23; Ananth et al., TCC '23), as a strengthened security notion of its predecessor put forward in Ananth et al., Eurocrypt '21. This problem aims to leverage the unclonable nature of quantum information to allow a lessor to lease a quantum key with reusability for evaluating a classical functionality. Later, the lessor can request the lessee to provably delete the key, after which the lessee will be completely deprived of the capability to evaluate. In this work, we construct a secure key leasing scheme to lease a decryption key of a (classical) public-key, homomorphic encryption scheme from standard lattice assumptions. We achieve a strong form of security where: * The entire protocol uses only classical communication between a classical lessor (client) and a quantum lessee (server). * Assuming standard assumptions, our security definition ensures that no computationally bounded quantum adversary can simultaneously provide a valid classical deletion certificate and distinguish ciphertexts. Our security relies on the hardness of the learning with errors assumption. Our scheme is the first scheme to be based on a standard assumption and to satisfy the two properties above.
Orestis Chardouvelis, Vipul Goyal, Aayush Jain, Jiahui Liu
2023-10-22T15:25:29Z
http://arxiv.org/abs/2310.14328v4
# Quantum Key Leasing for PKE and FHE with a Classical Lessor

###### Abstract

In this work, we consider the problem of secure key leasing, also known as revocable cryptography (Agarwal et al., Eurocrypt '23; Ananth et al., TCC '23), as a strengthened security notion of its predecessor put forward in Ananth et al., Eurocrypt '21. This problem aims to leverage the unclonable nature of quantum information to allow a lessor to lease a quantum key with reusability for evaluating a classical functionality. Later, the lessor can request the lessee to provably delete the key, after which the lessee will be completely deprived of the capability to evaluate. In this work, we construct a secure key leasing scheme to lease a decryption key of a (classical) public-key, homomorphic encryption scheme from standard lattice assumptions. Our encryption scheme is exactly identical to the (primal) version of the Gentry-Sahai-Waters homomorphic encryption scheme with a carefully chosen public key matrix. We achieve a strong form of security where: * The entire protocol (including key generation and verification of deletion) uses merely classical communication between a _classical lessor (client)_ and a quantum lessee (server). * Assuming standard assumptions, our security definition ensures that every computationally bounded quantum adversary can simultaneously provide a valid classical deletion certificate and distinguish ciphertexts with at most negligible probability. Our security relies on the subexponential-time hardness of the learning with errors assumption. Our scheme is the first scheme to be based on a standard assumption and to satisfy the two properties mentioned above. The main technical novelty in our work is the design of an FHE scheme that enables us to apply the elegant analyses done in the context of classical verification of quantumness from LWE (Brakerski et al., FOCS'18, JACM'21, and its parallel amplified version in Radian et al., AFT'21) to the setting of secure leasing. This connection to classical verification of quantumness leads to a modular construction and arguably simpler proofs than previously known. An important technical component we prove along the way is an amplified quantum search-to-decision reduction: we design an extractor that uses a quantum distinguisher (which has an internal quantum state) for decisional LWE to extract secrets with success probability amplified to almost one. This technique might be of independent interest. 
###### Contents

* 1 Introduction
* 1.1 Our Results
* 1.2 Comparison with Related Works
* 2 Technical Overview
* 2.1 Detailed Overview on Security Proof: Reduction to NTCF and the Use of Search-to-Decision Reduction for LWE
* 2.2 More on Related Works
* 3 Acknowledgement
* 4 Preliminaries
* 4.1 Quantum Information and Computation
* 5 Preliminaries: Testing Quantum Adversaries
* 5.0.1 Approximating Threshold Implementation
* 6 Lattice Preliminaries
* 6.1 General definitions
* 6.2 Trapdoor Sampling for LWE
* 6.3 Structural Properties of GSW Homomorphic Encryption
* 7 Preliminaries: Noisy Claw Free Trapdoor Families
* 7.1 Noisy Trapdoor Claw-Free Families
* 7.2 (Extended) NTCF from LWE
* 7.2.1 Parameter Choice
* 7.2.2 Noisy Trapdoor Claw-Free Families (2-to-1 Mode)
* 7.2.3 Noisy Trapdoor Injective Families (Injective Mode)
* 7.2.4 Efficient Range Preparation for NTCF with Small LWE Secrets
* 7.3 Parallel Repetition of An NTCF-based Protocol
* 8 Secure Key Leasing with Classical Communication: Definition
* 8.1 Secure Key Leasing PKE with Classical Lessor
* 8.2 Strong SKL-PKE Security: Threshold Implementation Version
* 9 Secure Key Leasing with Classical Lessor/Client: Construction
* 9.1 Parameters
* 9.2 Scheme Construction
* 9.3 Correctness
* 9.4 Secure Key Leasing with Classical Lessor: Protocol
* 10 Security Proof for SKL-PKE
* 10.1 Winning Probability in Hybrid 1
* 10.2 Extraction of Preimages via LWE Search-Decision Reduction
* 10.2.1 Proof Outline
* 10.3 Reduction to Parallel NTCF Soundness
* 11 Proof for Theorem 10.6: Quantum LWE Search-to-Decision Algorithm with Almost-Perfect Extraction (Noisy Version)
* 11.1 Three Distributions for ATI
* 11.2 Extraction Algorithm
* 11.3 Analysis of the Extractor
* 12 References
* A Remarks on Definitions and Strong SKL Security Implies IND-SKL Security
* A.1 Prevent Impostors: Authorizing the Public Key
* A.2 Additional Remarks for Our Definitions
* A.3 Proof for Claim 8.8
* B Secure Key Leasing for FHE Security
* B.1 FHE with Secure Key Leasing
* B.2 FHE Eval Algorithm
* C Additional Missing Proofs
* C.1 Proof for Corollary 10.7
* C.2 Proof for Theorem 5.10
* C.3 Proof for the Probability Relations in Section 10.3
* D Quantum Search-to-Decision LWE with Almost-Perfect Extraction (Clean Version)

## 1 Introduction

Quantum information, through principles such as the _no-cloning theorem_, offers exciting possibilities for which there are no classical counterparts. An active area of research concerns primitives such as copy protection [1] and quantum money [21], which simply cannot be realized classically. Copy protection, a task that captures a wide variety of unclonability-related tasks, is aimed at leveraging the power of unclonable quantum information to prevent piracy of software functionalities. However, constructions of quantum copy protection have turned out to be notoriously hard: in fact, copy protection is shown to be impossible in general [1], and generic schemes secure for all copy-protectable functions exist only relative to structured oracles [1]. Only a handful of functions are known to be copy-protectable without using oracles in the construction, but either they are built from very strong, less-standard cryptographic assumptions (e.g. post-quantum iO) [1, 2] or the functionalities are very limited (point functions) [13, 1]. Meaningful relaxations of copy protection come in the form of primitives that allow revocation of the quantum software via proofs of "destruction" of the quantum functionality after use. 
Thus, if the revocation or proof verifies, a computationally bounded user is deprived of any ability to evaluate the function in any meaningful way. In this work, we consider one such recent notion of secure key leasing [1], also called key-revocable cryptography in the literature [1]. This notion is inspired by secure software leasing in [1], but possesses significantly stronger security guarantees 1.

Footnote 1: Our security notion is _stronger_ and has more applicability than the previous, similar primitive called secure software leasing (SSL) [1]. In SSL the adversarial lessee is only semi-malicious, while in [1, 2] and our work it is fully malicious. See Section 1.2 for detailed discussions.

In secure key leasing, a lessor (also called the client) leases a quantum state to a lessee (also called the server). The quantum state might allow the server to obtain the capability to compute an advanced classical function, such as decryption (in the case of public-key encryption with secure key leasing) or PRF evaluation, polynomially many times. At a later point in time, the lessor may ask the lessee to return or destroy the key, for reasons such as the lessee's subscription expiring or the lessor (client) deciding to use another server. The notion of secure key leasing allows the lessee to "delete" the quantum key and create a deletion certificate which can be verified by the lessor. Once the deletion certificate is sent to the lessor and verified, the security guarantee says that the lessee provably loses the associated capability. This must hold even if the lessee is fully malicious and may make arbitrary attempts to copy the quantum state or learn the underlying classical functionality before the deletion. A number of recent papers have studied the notion of secure key leasing for primitives like PKE, FE, FHE and PRF [1, 2], in the context of a quantum lessor and quantum lessee with quantum communication. While the notion of secure software leasing is interesting on its own, remarkably it has also found an application in the seemingly unrelated context of _leakage or intrusion detection_ [1], where an adversary breaks into a machine to steal a secret key and one would like to detect if such an attack happened. Secure key leasing is a uniquely quantum phenomenon enabled by the no-cloning theorem, clearly impossible in the classical setting. Therefore, some quantum capabilities are necessary to enable it. However, a natural question is: _to what extent are quantum capabilities necessary? Do both the client and the server need quantum capabilities? Do we require quantum communication in addition to quantum computation?_ As discussed in [15, 16], even in a future where full-scale quantum computers come into the household and every user can maintain quantum memory, setting up large-scale end-to-end quantum communication is still expensive and lossy. Moreover, having a client operating only on classical resources and delegating the quantum work to a server is desirable in almost all sorts of circumstances. A recent beautiful work of Ananth, Poremba, and Vaikuntanathan [1] used the dual Regev cryptosystem to build primitives such as public-key encryption (PKE), fully-homomorphic encryption (FHE), and pseudorandom functions (PRFs) with key revocation. While their basic constructions require quantum communication as well as a quantum certificate of deletion, they can extend their constructions to achieve classical deletion certificates [1] as well. 
However, a key limitation of their work is that security only holds either based on an unproven conjecture, or against an adversarial server (lessee) which outputs a valid deletion certificate with probability negligibly close to \(1\). Proving security against general adversaries based on standard assumptions was left as an open problem. In addition, in both works [1, 1], the client and the server must both have quantum capabilities, and quantum communication is also necessary in the key generation process. This leads to the following natural open problem: _Can we build secure leasing for unlearnable functions, e.g., PKE decryption, FHE decryption, and other cryptographic functionalities, with a completely classical lessor/client? And, more desirably, from standard assumptions?_ If so, this would lead to constructions which require only the bare minimum quantum infrastructure needed in our setting.

### Our Results

We answer all of the above questions affirmatively, thus settling the open problem from [1]. We obtain the following results:

**Theorem 1.1**.: _(Informal) Assuming the post-quantum subexponential hardness of Learning-with-Errors, there exists a secure key leasing scheme for PKE and FHE with a completely classical lessor/client._

More specifically, from LWE, we can achieve secure leasing for the FHE decryption circuit with the following properties:

1. The key generation is a two-message protocol with one classical message from the lessor to the lessee and one back2. Footnote 2: As discussed in Appendix A, the second message is a simple way to deal with impostor security, which is orthogonal to the main anti-piracy security we prove. For the main anti-piracy security defined in Definition 8.2 and Definition 8.6, a one-message protocol from the lessor to the lessee suffices.
2. Upon being asked to delete the key, the lessee will produce a classical certificate and send it to the client. The client will perform a classical verification algorithm on the certificate.
3. As is natural in the definition of secure key leasing for classical PKE, the encryption is a public, classical procedure. The decryption can be done quantumly (on the lessee/server's side) or classically (on the lessor/client's side).

A key contribution of our work is to connect our problem to classically verifiable proofs of quantumness [1]. The latter is a well-developed research direction with beautiful ideas. This connection allows us to design modular constructions and write proofs of security that are arguably simpler, thanks to a number of technical ideas that we can borrow from the literature on classical verification of quantum computation. To show the security of our construction, we prove along the way a quantum search-to-decision reduction for LWE that extracts the secret _with the success probability boosted to almost 1_.

**Theorem 1.2** (Informal).: _Given a quantum distinguisher with an auxiliary quantum state, that distinguishes LWE samples from random with inverse polynomial probability for some arbitrary secret, there exists an extractor, running in time polynomial in the secret size, the distinguisher's running time, and the inverse of its advantage, that produces the secret with probability negligibly close to 1._

To the best of our knowledge, this is the first search-to-decision reduction with almost-perfect extraction when using quantum algorithms with auxiliary quantum states.
To prove the above theorem, we rely on a measurement implementation on the distinguisher's quantum state [13, 14] and make new observations about this measurement procedure, aligning its properties with the features required by the classical LWE search-to-decision reduction of Regev [15]. We believe this technique and the new observations made may be of independent interest.

### Comparison with Related Works

**Secure Key Leasing/Revocable PKE: Comparison with Existing Works.** Previously, secure circuit leasing for PKE was achieved in [1], via a clean compiler from any PKE scheme. However, they need quantum communication and quantum computation on the lessor's side throughout the protocol, and there is no easy, generic way to turn their scheme into one with a classical lessor and classical communication. A similar result on revocable PKE, FHE and PRFs was shown by the concurrent work [1], from LWE, with interesting ideas from quantum money combined with dual Regev PKE. However, they need new conjectures to enable the security proof when adversaries pass revocation with only inverse-polynomial probability; otherwise, they can only handle a weaker security notion where the adversary has to pass the revocation with probability almost 1 [1]. We, in contrast, achieve the stronger definition (adversaries only need to pass the revocation test with noticeable probability) from LWE alone. Moreover, they also need the lessor to prepare the quantum key and send it via a quantum channel. Our techniques also differ greatly from the above works. Our main insight is to manipulate the primitives used in the classical verification of quantum computation protocol and turn them into an FHE scheme from LWE that supports leasing.

**Secure Software Leasing:** SSL, short for Secure Software Leasing [1], is another relaxation of quantum copy protection. It is a weaker security notion than the one studied in this work, despite bearing a similar name. In SSL, after revocation/deletion, one tests whether the leftover state of the adversary functions correctly, using the lessor's circuit. In other words, in SSL the pirate can be arbitrarily malicious in copying the function, but the free-loaders of the pirated programs are semi-honest, following the authentic software's instructions. In our work, we allow the adversarial state to be run by any adversarial circuit; as long as it gives the correct output, we count it as a successful attack. That is, both the pirate and the free-loaders can have arbitrary malicious behaviors. We defer more related works and discussions on SSL to Section 2.2.

**Post-quantum LWE Search-to-Decision Reduction with Quantum Auxiliary Input:** [1] showed a general method to lift classical reductions to post-quantum reductions for a class of primitives satisfying certain constraints. Even though the classical search-to-decision reduction for LWE fits into the framework required by the above lifting theorem, our proof roadmap requires a stronger statement than the one shown in [1]. To the best of our knowledge, there is no generic or black-box method to amplify the reduction in [1] for our use. Besides, our techniques differ from [1] in that we do not perform the more involved "state repair" procedure used by [1], which was inspired by [10]. Therefore, we believe our techniques may be of independent interest and can shed light on new observations in quantum reductions.
Another work [13] presents a quantum LWE search-to-decision reduction (with the possibility of amplification), but their quantum distinguisher is simply a unitary that does not have an auxiliary state. It is therefore incomparable to our techniques. Our reduction is more general and more challenging to achieve due to the potentially destructible auxiliary quantum state.

## 2 Technical Overview

We start by recalling the definition of Public-Key Encryption with Secure Key Leasing (also referred to as PKESKL from here on). (Levelled) homomorphic encryption with secure key leasing is defined analogously. In a PKESKL scheme with a classical lessor, we have algorithms (Setup, KeyGen, Enc, Dec, Del, VerDel). The Setup algorithm (to be run by the lessor) takes in a security parameter and outputs a classical master public key mpk and a classical trapdoor td. The KeyGen algorithm (to be run by the lessee) takes input mpk and outputs a classical public key pk and a quantum secret key \(\rho_{\mathsf{sk}}\). Enc is defined in the usual way: it is a classical procedure that takes as input the key pk and a bit \(\mu\) and produces a ciphertext ct. The quantum procedure Dec takes as input a ciphertext ct and the state \(\rho_{\mathsf{sk}}\) and should produce the message that was encrypted. Moreover, the decryption should keep the state \(\rho_{\mathsf{sk}}\) statistically close to the initial state as long as the decrypted ciphertexts were not malformed. The deletion algorithm Del takes in the state \(\rho_{\mathsf{sk}}\) and produces a classical certificate cert. Finally, the certificate can be verified by the classical algorithm VerDel, using pk and the trapdoor td as additional inputs. In summary, our scheme allows the generation of the key \(\rho_{\mathsf{sk}}\) by a protocol involving merely classical effort on the part of the client (lessor) and quantum effort from the server (lessee). Our security requirement is also fairly intuitive. In the security game, the challenger generates a classical mpk along with a trapdoor td and gives mpk to a quantum polynomial-time attacker \(\mathcal{A}\). \(\mathcal{A}\) generates a quantum decryption key \(\rho_{\mathsf{sk}}\) and its associated public key pk, and publishes pk. The challenger then asks \(\mathcal{A}\) to provide a certificate cert. The game aborts if cert does not verify. However, if the certificate verifies, we require that \(\mathcal{A}\) should not be able to distinguish an encryption of \(0\) from an encryption of \(1\) with non-negligible probability. We formalize this by requiring that, after the successful verification of the certificate, the adversary outputs a quantum state \(\rho_{\mathsf{Delete}}\) as an adversarial quantum decryptor. Checking a quantum state's success probability can be difficult due to its destructible and unclonable nature. To check whether this state is a good decryptor, we define a special binary-outcome projective measurement, called Threshold Implementation, \(\mathsf{TI}_{\frac{1}{2}+\gamma}\) (from [20, 1]). This measurement projects the state \(\rho_{\mathsf{Delete}}\) onto the subspace of all possible states that are good at distinguishing ciphertexts with probability at least \(\frac{1}{2}+\gamma\). If \(\mathsf{TI}_{\frac{1}{2}+\gamma}\) outputs \(1\) on \(\rho_{\mathsf{Delete}}\) with some noticeable probability, it implies that a noticeable "fraction" of the quantum state can distinguish ciphertexts with probability at least \(1/2+\gamma\).
An adversary that passes the \(\mathsf{TI}_{1/2+\gamma}\) test, in addition to handing in a valid certificate, is considered successful. The definitions are inspired by the framework introduced by Zhandry [20] (see Section 8.2 and Section 5 for details) and imply the PKE-SKL definition used previously in [1, 2]. As we will discuss later, defining the security via Threshold Implementation also assists us in our security proof. We start by describing the ideas needed to build a PKE scheme; these ideas can then be lifted to build a (levelled) FHE scheme. Below, we show a series of ideas leading up to our final scheme.

**Starting Point: Regev PKE.** Our starting point is the public-key encryption scheme due to Regev [14], which is also the basis of many FHE schemes such as the BV scheme [13] and the GSW scheme [15]. In Regev PKE, the public key consists of a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) where \(m=\Omega(n\cdot\log q)\), along with \(m\) LWE samples \(\mathbf{b}=\mathbf{s}\mathbf{A}+\mathbf{e}\mod q\), where \(\mathbf{s}\leftarrow\mathbb{Z}_{q}^{1\times n}\) is randomly chosen and \(\mathbf{e}\) is a discrete Gaussian distributed error with some parameter \(\sigma\ll q\). The classical secret key is simply the secret vector \(\mathbf{s}\) of the LWE samples. To encrypt a message \(\mu\in\{0,1\}\), one samples a random binary vector \(\mathbf{r}\in\{0,1\}^{m\times 1}\). The encryptor then computes \(\mathsf{ct}=(\mathsf{ct}_{1},\mathsf{ct}_{2})\) where \(\mathsf{ct}_{1}=\mathbf{A}\cdot\mathbf{r}\) and \(\mathsf{ct}_{2}=\mathbf{b}\cdot\mathbf{r}+\mu\cdot\lceil\frac{q}{2}\rceil\). Observe that correctness follows from the fact that \(\mathbf{b}\cdot\mathbf{r}\approx\mathbf{s}\cdot\mathbf{A}\cdot\mathbf{r}\), where \(\approx\) means that they are close to each other in the \(\ell_{2}\) norm, since \(\mathbf{r}\) and \(\mathbf{e}\) are both low-norm. Security follows from the fact that, due to LWE, \(\mathbf{b}\) is pseudorandom; therefore, one can apply a leftover hash lemma argument to conclude that \(\mu\) is hidden. We want to turn this into a scheme that supports leasing. Namely, instead of having the secret key \(\mathbf{s}\) in the clear, we would like a quantum state that somehow encodes \(\mathbf{s}\) and (say) another hidden value (for the deletion certificate), so that it is computationally hard for an attacker to produce both. Unfortunately, it is not so clear how to make things work if the scheme has exactly one key \(\mathbf{s}\).

**Two-Key Regev PKE.** Our next idea is to modify the Regev-style PKE scheme to work with two secret keys \(\mathbf{s}_{0},\mathbf{s}_{1}\) instead. The public key would now consist of a matrix \(\mathbf{A}\) along with two LWE samples \(\mathbf{b}_{0}=\mathbf{s}_{0}\mathbf{A}+\mathbf{e}_{0}\) and \(\mathbf{b}_{1}=\mathbf{s}_{1}\mathbf{A}+\mathbf{e}_{1}\). To generate an encryption of \(\mu\in\{0,1\}\), one outputs \(\mathsf{ct}=(\mathsf{ct}^{\prime},\mathsf{ct}_{0},\mathsf{ct}_{1})\) where \(\mathsf{ct}^{\prime}=\mathbf{A}\mathbf{r}\) and \(\mathsf{ct}_{i}=\mathbf{b}_{i}\mathbf{r}+\mu\cdot\lceil\frac{q}{2}\rceil\) for \(i\in\{0,1\}\). Observe that now the ciphertext \(\mathsf{ct}\) can be decrypted using either of the two secret vectors \(\mathbf{s}_{0}\) and \(\mathbf{s}_{1}\). This gives us the following idea: perhaps one could encode the secret key as a superposition of the two keys, \(\rho_{\mathsf{sk}}=|0,\mathbf{s}_{0}\rangle+|1,\mathbf{s}_{1}\rangle\) (ignoring normalization). A toy numerical sketch of the two-key scheme, with classical keys for the moment, is given below.
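The following is a minimal numerical sketch of the two-key Regev scheme just described, with toy parameters that are far from secure; the variable names are our own illustration, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(3)
q, n, m = 2**16, 16, 400                      # toy parameters, far from secure

A  = rng.integers(0, q, size=(n, m))
s0 = rng.integers(0, q, size=n)               # two independent secret keys
s1 = rng.integers(0, q, size=n)
e0 = rng.integers(-2, 3, size=m)              # small noise
e1 = rng.integers(-2, 3, size=m)
b0 = (s0 @ A + e0) % q                        # b_i = s_i A + e_i
b1 = (s1 @ A + e1) % q

def enc(mu):
    r = rng.integers(0, 2, size=m)            # random binary r
    return (A @ r % q,                        # ct'
            (b0 @ r + mu * (q // 2)) % q,     # ct_0
            (b1 @ r + mu * (q // 2)) % q)     # ct_1

def dec(ct, s, i):                            # decrypt with either secret s_i
    ctp, c0, c1 = ct
    diff = ((c0, c1)[i] - s @ ctp) % q        # = e_i r + mu * q/2  (mod q)
    return int(q // 4 < diff < 3 * q // 4)    # near q/2 -> 1, near 0 -> 0

ct = enc(1)
print(dec(ct, s0, 0), dec(ct, s1, 1), dec(enc(0), s0, 0))   # prints: 1 1 0
```

Both keys decrypt the same ciphertext because \(\mathbf{b}_{i}\mathbf{r}\approx\mathbf{s}_{i}\mathbf{A}\mathbf{r}\) for each \(i\), which is exactly what makes a superposition over the two keys usable as a decryption state.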
Observe that using such a state one could decrypt any honestly generated ciphertext without disturbing the state. What is a good candidate for a deletion certificate? Naturally, in order to delete the secret information in the computational basis, a deletion certificate could be a measurement in the Hadamard basis (considering the keys encoded in binary), giving a string \(\mathbf{d}\) and a value \(\alpha=\langle\mathbf{d},\mathbf{s}_{0}\oplus\mathbf{s}_{1}\rangle\). We could perhaps hope that if an algorithm supplies the measurement outcome in the Hadamard basis correctly with overwhelming probability, then doing so destroys its information on the two secret keys. How would one analyze such a scheme? The hope would be to arrive at a contradiction from such an adversary. Assuming that such an adversary outputs \((\alpha,\mathbf{d})\) with overwhelming probability and, conditioned on that, distinguishes encryptions of \(0\) from \(1\) with inverse-polynomial probability, we would hope to extract one of the secrets \(\mathbf{s}_{0}\) or \(\mathbf{s}_{1}\) with inverse-polynomial probability from the adversary's internal quantum state. Simultaneously, we have also obtained the string \((\alpha,\mathbf{d})\) from the certificate handed in. We would like to argue that no efficient adversary is able to produce both a secret vector and a string \((\alpha,\mathbf{d})\). While this is often a reasonable property, subtleties can arise when it is combined with the other structure of our encryption scheme. We instead turn to a very similar setting that has been extensively analyzed in prior works in the context of classical proofs of quantumness and certified randomness [6]. This will save us the effort of reinventing the wheel.

**Inspiration from Classical Verification of Quantum Computation.** Let us now recall the very similar setting in [6], which constructed proofs of quantumness from LWE. They build _Noisy Trapdoor Claw-Free Families (NTCF)_ from LWE and show that a task closely related to the one above is hard. Without going into unnecessary semantics, we describe the setting relevant to us. Consider the following game: the challenger samples \(\mathbf{A}\) and an LWE sample \(\mathbf{k}=\mathbf{s}\mathbf{A}+\mathbf{e}\mod q\), where \(\mathbf{s}\) is a (say) random binary secret and \(\mathbf{e}\) is chosen from a discrete Gaussian distribution with parameter \(\sigma\). For any such sample, there exists a BQP algorithm that can sample a vector \(\mathbf{y}=\mathbf{x}_{0}\mathbf{A}+\mathbf{e}^{\prime}\), where \(\mathbf{e}^{\prime}\) is chosen from a discrete Gaussian with parameter \(\sigma^{\prime}\gg\sigma\) (larger by a superpolynomial factor), along with a special state \(\rho_{\text{sk}}\). The state is of the form \(\rho_{\text{sk}}=|(0,\mathbf{x}_{0})\rangle+|(1,\mathbf{x}_{1})\rangle\) where \(\mathbf{x}_{1}=\mathbf{x}_{0}-\mathbf{s}\). Measuring this state yields either \(\mathbf{x}_{0}\), so that \(\mathbf{x}_{0}\mathbf{A}\approx\mathbf{y}\), or \(\mathbf{x}_{1}\), so that \(\mathbf{x}_{1}\mathbf{A}\) is close to \(\mathbf{y}-\mathbf{k}\). On the other hand, if we measure in the Hadamard basis, we obtain \((\mathbf{d},\alpha)\) for a random string \(\mathbf{d}\), so that \(\alpha=\langle\mathbf{d},\mathbf{x}_{0}\oplus\mathbf{x}_{1}\rangle\)3. Footnote 3: In the actual protocol, \(\alpha\) is not exactly \(\langle\mathbf{d},\mathbf{x}_{0}\oplus\mathbf{x}_{1}\rangle\).
In fact, we first apply a binary decomposition to \(\mathbf{x}_{0},\mathbf{x}_{1}\) and then measure in the Hadamard basis; i.e., we have \(\alpha=\langle\mathbf{d},\mathcal{J}(\mathbf{x}_{0})\oplus\mathcal{J}(\mathbf{x}_{1})\rangle\mod 2\), where \(\mathcal{J}\) is the invertible binary decomposition function. Such a binary decomposition is needed for the security proof of the NTCF in [6] to go through. The test of quantumness asks an efficient quantum algorithm to produce a valid LWE sample \(\mathbf{y}=\mathbf{x}_{0}\mathbf{A}+\mathbf{e}^{\prime}\) of the kind above and, based on a random challenge \(b\in\{0,1\}\), to produce either one of \(\mathbf{x}_{0},\mathbf{x}_{1}\) (for \(b=0\)) or a "valid" tuple \((\mathbf{d},\alpha)\) (for \(b=1\)) as above. The above protocol has a quantum correctness and a classical soundness guarantee, whose discussion we omit as it is not relevant here. However, to enable capabilities such as certified randomness and verification of quantum computation [11], they show a more advanced soundness property against quantum provers. That is, even a BQP adversary cannot _simultaneously_ produce both responses (the "computational response" for \(b=0\) and the "Hadamard response" for \(b=1\)), assuming quantum security of LWE. In short, we summarize the properties we need from the NTCF-based protocol: any QPT algorithm, given \(\mathbf{A}\) and \(\mathbf{k}=\mathbf{s}\mathbf{A}+\mathbf{e}\), can produce a vector \(\mathbf{y}=\mathbf{x}_{0}\mathbf{A}+\mathbf{e}^{\prime}\approx\mathbf{x}_{1}\mathbf{A}+\mathbf{e}^{\prime}+\mathbf{k}\). If it is asked to output _either_ (1) one of \(\mathbf{x}_{0},\mathbf{x}_{1}\), _or_ (2) a non-trivial pair \((\mathbf{d},\alpha)\) such that \(\alpha=\langle\mathbf{d},\mathcal{J}(\mathbf{x}_{0})\oplus\mathcal{J}(\mathbf{x}_{1})\rangle\), it can do so with probability \(1\). However, if it is asked to output _both_ (1) and (2) at once for the same \(\mathbf{y}\), it cannot do so with probability noticeably higher than trivially guessing one of them.

**From Noisy Trapdoor Claw-Free Families to Our Scheme.** Now we gradually approach a scheme similar to the two-key Regev PKE scheme which can benefit from the NTCF security properties described above. The idea is that we have a public key \(\mathbf{A}\) and an LWE sample \(\mathbf{k}=\mathbf{s}\mathbf{A}+\mathbf{e}\), where \(\mathbf{s}\) is a random binary secret and \(\mathbf{e}\) is Gaussian distributed exactly as in the proof of quantumness. Along with \(\mathbf{k}\), the public key also consists of an LWE sample \(\mathbf{y}=\mathbf{x}_{0}\mathbf{A}+\mathbf{e}^{\prime}\), again chosen as per the distribution in the proof of quantumness. The client maintains a trapdoor \(\mathsf{td}\): \(\mathsf{td}\) consists of a trapdoor matrix \(\mathbf{T}\) for \(\mathbf{A}\) that can be generated at the same time as sampling \(\mathbf{A}\) (recall that \(\mathsf{td}\) will be used to verify the deletion certificate later). The leased decryption state \(\rho_{\mathsf{sk}}\) is the superposition \(|0,\mathbf{x}_{0}\rangle+|1,\mathbf{x}_{1}\rangle\) where \(\mathbf{x}_{1}=\mathbf{x}_{0}-\mathbf{s}\). To encrypt a bit \(\mu\), one samples a random binary string \(\mathbf{r}\) and computes a ciphertext \(\mathsf{ct}=(\mathsf{ct}_{1},\mathsf{ct}_{2},\mathsf{ct}_{3})\) where \(\mathsf{ct}_{1}=\mathbf{A}\mathbf{r}\), \(\mathsf{ct}_{2}=\mathbf{k}\mathbf{r}\) and \(\mathsf{ct}_{3}=\mathbf{y}\mathbf{r}+\mu\lceil\frac{q}{2}\rceil\). Observe that the ciphertext can be decrypted with either \(\mathbf{x}_{0}\) or \(\mathbf{x}_{1}\).
This is because \(\mathsf{ct}_{3}-\mathbf{x}_{0}\cdot\mathsf{ct}_{1}\) is close to \(\mu\lceil\frac{q}{2}\rceil\) and, similarly, \(\mathsf{ct}_{3}-\mathsf{ct}_{2}-\mathbf{x}_{1}\cdot\mathsf{ct}_{1}\) is also close to \(\mu\lceil\frac{q}{2}\rceil\). Thus, there is a way to decrypt a ciphertext coherently without disturbing the state \(\rho_{\mathsf{sk}}\), by the gentle measurement lemma [1]. Accordingly, we set the deletion certificate to be the string \((\alpha,\mathbf{d})\), so that we can use the NTCF property.

**Security: First Attempt.** We first discuss a very weak definition of security which the above simple scheme (almost) already satisfies. In this definition, a successful BQP adversary provides a deletion certificate that is valid with probability \(1-\mathsf{negl}\) for some negligible \(\mathsf{negl}\), and then, conditioned on that, distinguishes ciphertexts with noticeable probability. One observation is that a decryptor capable of distinguishing encryptions of zero from encryptions of one must also be able to distinguish ciphertexts of zero of the form \((\mathsf{ct}_{1},\mathsf{ct}_{2},\mathsf{ct}_{3})\), where \(\mathsf{ct}_{1}=\mathbf{A}\cdot\mathbf{r}\), \(\mathsf{ct}_{2}=\mathbf{k}\cdot\mathbf{r}\) and \(\mathsf{ct}_{3}=\mathbf{y}\cdot\mathbf{r}\), from truly random strings with noticeable probability. This means that this distinguisher distinguishes samples of the form \((\mathbf{A}\mathbf{r},\mathbf{k}\mathbf{r},\mathbf{y}\mathbf{r})\) from random. Since \(\mathbf{k}\) is pseudorandom due to LWE, it must also distinguish \((\mathbf{A}\mathbf{r},\mathbf{k}\mathbf{r},\mathbf{y}\mathbf{r})\) from random when \(\mathbf{k}\) is chosen at random. Observe that, since \(\mathbf{y}=\mathbf{x}_{0}\mathbf{A}+\mathbf{e}^{\prime}\), we have \(\mathbf{y}\mathbf{r}=\mathbf{x}_{0}\mathbf{A}\mathbf{r}+\mathbf{e}^{\prime}\mathbf{r}\). Given that \(\mathbf{e}^{\prime}\mathbf{r}\) loses information on \(\mathbf{r}\), and that \(\mathbf{A},\mathbf{k}\) are now chosen at random, one can appeal to the LHL to show that such a distinguisher should distinguish \((\mathsf{ct}_{1},\mathsf{ct}_{2},\mathsf{ct}_{3})\) generated as \((\mathbf{a},u,\langle\mathbf{x}_{0},\mathbf{a}\rangle+\mathbf{e}^{\prime}\mathbf{r})\) from random, where \(\mathbf{a}\in\mathbb{Z}_{q}^{n\times 1}\) and \(u\in\mathbb{Z}_{q}\) are random. Together, \((\mathsf{ct}_{1},\mathsf{ct}_{3})\) is now almost distributed as an LWE sample with secret \(\mathbf{x}_{0}\). If we have a quantum search-to-decision reduction for LWE using the quantum distinguisher (with an internal state), we should be able to extract \(\mathbf{x}_{0}\) efficiently, giving rise to a contradiction. That is, this reduction would have produced a valid \((\alpha,\mathbf{d})\) with overwhelming probability and, conditioned on that, \(\mathbf{x}_{0}\) with inverse-polynomial probability (something ruled out by the NTCF in [1]). In this work, we construct a post-quantum (quantum) reduction that runs in time polynomial in \((B,n,\frac{1}{\epsilon},\log q,T_{\mathcal{A}})\), where \(\epsilon\) is the distinguishing advantage of an adversary that distinguishes LWE samples with a fixed secret from random samples, and that recovers the secret with probability polynomial in \(\epsilon\)4. Above, \(B\) is the bound on the coordinates of \(\mathbf{x}_{0}\), which we will set to be slightly superpolynomial, and \(T_{\mathcal{A}}\) is the running time of the adversary.
Footnote 4: While we can also obtain such a reduction directly by plugging in Theorem 7.1 of [1], as we will discuss later, their reduction’s success probability is not sufficient for our proof of the standard security notion.

There is a minor flaw in the above argument, which can be fixed. The distribution \((\mathsf{ct}_{1},\mathsf{ct}_{2},\mathsf{ct}_{3})\) formed as \((\mathbf{A}\mathbf{r},\mathbf{k}\mathbf{r},\mathbf{y}\mathbf{r})\) does not behave statistically close to \((\mathbf{a},u,\langle\mathbf{a},\mathbf{x}_{0}\rangle+e)\) for truly random \(\mathbf{a},u\) and an error \(e\) sampled according to the LWE distribution. The reason is that, if we consider \(\mathbf{y}\mathbf{r}=\mathbf{x}_{0}\mathbf{A}\mathbf{r}+\mathbf{e}^{\prime}\mathbf{r}\), the error \(e=\mathbf{e}^{\prime}\mathbf{r}\) might not behave as a statistically independent discrete Gaussian. To fix this issue, we modify our encryption algorithm to add a smudging noise \(\mathbf{e}^{\prime\prime}\) with superpolynomially larger parameter \(\sigma^{\prime\prime}\gg\sigma^{\prime}\), and construct \((\mathtt{ct}_{1},\mathtt{ct}_{2},\mathtt{ct}_{3})\) as \((\mathbf{A}\mathbf{r},\mathbf{k}\mathbf{r},\mathbf{y}\mathbf{r}+\mathbf{e}^{\prime\prime}+\mu\lceil\frac{q}{2}\rceil)\). With this smudging, \(\mathbf{e}^{\prime}\mathbf{r}\) is drowned by \(\mathbf{e}^{\prime\prime}\), and \(\mathsf{ct}_{3}\) now effectively behaves as a fresh LWE sample with secret \(\mathbf{x}_{0}\) and slightly larger noise. Accordingly, we design our search-to-decision reduction to also work for samples statistically close to LWE versus samples close to random, as opposed to the simple case of real LWE versus truly random.

**Stronger Security from Parallel Repetition.** While the above gives rise to a PKE scheme satisfying a weak but non-trivial security guarantee, such a definition is not really enough. It might be possible that an adversary simply guesses the correct certificate \((\alpha,\mathbf{d})\) by picking \(\mathbf{d}\) randomly and choosing \(\alpha\) at random as well. With probability \(\frac{1}{2}\), this adversary might be right in producing a valid certificate. Since its state is still preserved, it could continue to successfully decrypt all ciphertexts (moreover, it could simply measure the state in the standard basis and keep a classical key). We would like to achieve a stronger definition where, even if the certificate passes only with noticeable probability, upon successful passing the distinguishing advantage must be negligible. Our main idea is to create \(\lambda\) independent instances of the scheme, where \(\lambda\) is the security parameter, and encrypt a secret sharing of the bit \(\mu\) across them. The public key would now consist of \(\lambda\) independent matrices \(\mathbf{A}_{i}\) and \(\lambda\) independent binary-secret LWE samples \(\mathbf{k}_{i}=\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}\), along with \(\mathbf{y}_{i}=\mathbf{x}_{i,0}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}\) for \(i\in[\lambda]\). The deletion certificate would consist of \(\lambda\) pairs \((\alpha_{i},\mathbf{d}_{i})\), each of which is independently verified. The hope again is that if one is able to distinguish encryptions of \(0\) from encryptions of \(1\), then such an adversary should essentially be able to distinguish for each component \(i\in[\lambda]\). We could use this to extract all \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[\lambda]}\) (\(b_{i}=0\) or \(1\)). Then, we can appeal to the results on the parallel repetition variant of the game used for proofs of quantumness.
This variant has been studied in [10]. The soundness property ensures that it is computationally hard to come up, with noticeable probability, with valid responses \(\{(\alpha_{i},\mathbf{d}_{i})\}_{i\in[\lambda]}\) and \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[\lambda]}\) (\(b_{i}=0\) or \(1\)) for all indices.

**Lifting the Scheme to Support Homomorphism.** The above idea could work; however, we would have to exercise a lot of care to extract the secrets \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[\lambda]}\), as a quantum state can evolve over time. At this point, we move to directly construct a (levelled) FHE scheme. The structural properties of our levelled FHE scheme solve two problems in one shot: they not only yield an FHE scheme, but also simplify our analysis. To lift to FHE, we take inspiration from the GSW scheme [11]. In the GSW scheme, the public key consists of a pseudorandom LWE matrix \(\mathbf{B}\in\mathbb{Z}_{q}^{N\times M}\), where \(M=\Omega(N\log q)\), such that there exists one secret vector \(\mathbf{s}\in\mathbb{Z}_{q}^{1\times N}\) so that \(\mathbf{s}\mathbf{B}\) has small norm. Such matrices can be generated by sampling the first \(N-1\) rows at random and generating the last row as an LWE sample with the first \(N-1\) rows as the coefficient matrix. The encryption of a bit \(\mu\) is of the form \(\mathbf{B}\mathbf{R}+\mu\mathbf{G}\), where \(\mathbf{R}\) is a small-norm random binary matrix and \(\mathbf{G}\) is a special gadget matrix [12]. The consequence of this is that \(\mathbf{B}\mathbf{R}\) behaves essentially like a random LWE sample with the same secret vector as \(\mathbf{B}\), and one can argue security by appealing to the LHL. We omit a description of why the ciphertexts support homomorphic computation (we refer to [11] or our technical sections). To lift to such an FHE scheme, we work with a specially chosen \(\mathbf{B}\). We choose it as:

\[\mathbf{B}=\begin{bmatrix}\mathbf{k}_{1}=\mathbf{s}_{1}\mathbf{A}_{1}+\mathbf{e}_{1}\\ \mathbf{A}_{1}\\ \vdots\\ \mathbf{k}_{\lambda}=\mathbf{s}_{\lambda}\mathbf{A}_{\lambda}+\mathbf{e}_{\lambda}\\ \mathbf{A}_{\lambda}\\ \sum_{i\in[\lambda]}\mathbf{y}_{i}=\sum_{i\in[\lambda]}(\mathbf{x}_{i,0}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime})\end{bmatrix}\]

Observe that there are many vectors \(\mathbf{v}\) so that \(\mathbf{v}\mathbf{B}\) has small norm. This is crucial because we would like the ciphertexts to be decryptable using a secret key \(\rho_{\text{sk}}=\bigotimes_{i=1}^{\lambda}\rho_{\text{sk},i}\) where \(\rho_{\text{sk},i}=|(0,\mathbf{x}_{i,0})\rangle+|(1,\mathbf{x}_{i,1})\rangle\). The idea is that \(\mathbf{x}_{i,0}\mathbf{A}_{i}\) is close to \(\mathbf{y}_{i}\) and, similarly, \(\mathbf{x}_{i,1}\mathbf{A}_{i}+\mathbf{k}_{i}\) is also close to \(\mathbf{y}_{i}\). Thus, viewing \(\rho_{\text{sk}}\) as a superposition of secret vectors, we can perform a GSW-like decryption in superposition, with a gentle measurement on the final output. A toy sketch of this stacked matrix, and of a low-norm key vector for it, is given below.
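To make the structure concrete, here is a minimal numerical sketch (toy parameters, far from secure; the specific form of the low-norm vector \(\mathbf{v}\) is our own reconstruction of the decryption keys described above). For any choice of branches \(b_{i}\), the vector \(\mathbf{v}=(-b_{1},-\mathbf{x}_{1,b_{1}},\ldots,-b_{\lambda},-\mathbf{x}_{\lambda,b_{\lambda}},1)\) should satisfy \(\mathbf{v}\mathbf{B}=\sum_{i}(\mathbf{e}_{i}^{\prime}-b_{i}\mathbf{e}_{i})\), which has small norm:

```python
import numpy as np

rng = np.random.default_rng(0)
q, n, m, lam = 2**20, 8, 64, 3                 # toy parameters, far from secure

B_rows, v_parts = [], []
y_sum = np.zeros(m, dtype=np.int64)
for i in range(lam):
    A  = rng.integers(0, q, size=(n, m))
    s  = rng.integers(0, 2, size=n)            # binary LWE secret s_i
    e  = rng.integers(-2, 3, size=m)           # small noise e_i
    k  = (s @ A + e) % q                       # k_i = s_i A_i + e_i
    x0 = rng.integers(0, 2, size=n)
    x1 = x0 - s                                # claw partner: x_{i,1} = x_{i,0} - s_i
    ep = rng.integers(-2, 3, size=m)           # small noise e_i'
    y_sum = (y_sum + x0 @ A + ep) % q          # accumulate y_i = x_{i,0} A_i + e_i'
    b  = int(rng.integers(0, 2))               # either branch decrypts
    xb = x0 if b == 0 else x1
    B_rows += [k] + list(A)                    # rows: k_i, then the rows of A_i
    v_parts += [np.array([-b]), -xb]
B_rows.append(y_sum)                           # last row: sum of the y_i
v_parts.append(np.array([1]))

B = np.vstack(B_rows)
v = np.concatenate(v_parts)
prod = (v @ B) % q
prod = np.minimum(prod, q - prod)              # centered magnitude
print("max |v.B| =", prod.max(), " (tiny compared to q =", q, ")")
```

The check goes through for every branch choice, matching the claim that there are many small-norm key vectors for \(\mathbf{B}\), one for each sequence \((b_{1},\ldots,b_{\lambda})\).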
For security, consider the structure of the ciphertext \(\text{ct}=\mathbf{B}\mathbf{R}+\mu\mathbf{G}\); the term \(\mathbf{B}\mathbf{R}\) is of the form:

\[\mathbf{B}\mathbf{R}=\begin{bmatrix}\mathbf{s}_{1}\mathbf{A}_{1}\mathbf{R}+\mathbf{e}_{1}\mathbf{R}\\ \mathbf{A}_{1}\mathbf{R}\\ \vdots\\ \mathbf{s}_{\lambda}\mathbf{A}_{\lambda}\mathbf{R}+\mathbf{e}_{\lambda}\mathbf{R}\\ \mathbf{A}_{\lambda}\mathbf{R}\\ \sum_{i\in[\lambda]}\mathbf{y}_{i}\mathbf{R}=\sum_{i\in[\lambda]}(\mathbf{x}_{i,0}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i}^{\prime}\mathbf{R})\end{bmatrix}\]

Our intuition for extracting \(\mathbf{x}_{i,0}\) (w.l.o.g., or \(\mathbf{x}_{i,1}\)) for all \(i\in[\lambda]\) is as follows. First, we observe that it suffices to devise an extractor that succeeds with high probability in the case when the \(\mathbf{k}_{i}\) are chosen to be random, as opposed to pseudorandom due to LWE. If we are able to do that, we should be able to extract \(\mathbf{x}_{i,b_{i}}\) with similar probability in the world where the \(\mathbf{k}_{i}\)'s are pseudorandom, by LWE security. To realize our relaxed goal, we observe that the last row \(\sum_{i\in[\lambda]}\mathbf{y}_{i}\mathbf{R}\) is close to a linear equation of the form:

\[\sum_{i\in[\lambda]}\mathbf{y}_{i}\mathbf{R}\approx\sum_{i\in[\lambda]}\mathbf{x}_{i,0}\mathbf{V}_{i}\]

where \(\mathbf{V}_{i}=\mathbf{A}_{i}\mathbf{R}\). Thus, we modify our encryption algorithm to include smudging noise: the encryption algorithm now computes \(\mathbf{B}\mathbf{R}+\mathbf{E}+\mu\mathbf{G}\), where \(\mathbf{E}\) is zero everywhere except the last row, which contains discrete Gaussian noise with parameter \(\sigma^{\prime\prime}\) superpolynomially larger than the norm of \(\mathbf{e}_{i}^{\prime}\mathbf{R}\). We then make a nice observation: the last row of \(\mathbf{B}\mathbf{R}+\mathbf{E}\) is now distributed as an LWE sample with secret \((\mathbf{x}_{1,b_{1}},\mathbf{x}_{2,b_{2}},\ldots,\mathbf{x}_{\lambda,b_{\lambda}})\) (\(b_{i}=0\) or \(1\)) and coefficient matrix \([\mathbf{V}_{1}^{\top},\ldots,\mathbf{V}_{\lambda}^{\top}]^{\top}\). Now we can appeal to the LHL to replace \(\{(\mathbf{V}_{i}=\mathbf{A}_{i}\mathbf{R},\mathbf{k}_{i}\mathbf{R})\}_{i\in[\lambda]}\) with completely random values, and the last row by a fresh sample with error distributed as a discrete Gaussian with parameter \(\sigma^{\prime\prime}\), with the long secret \((\mathbf{x}_{1,b_{1}},\mathbf{x}_{2,b_{2}},\ldots,\mathbf{x}_{\lambda,b_{\lambda}})\) and random coefficient matrix \([\mathbf{V}_{1}^{\top},\ldots,\mathbf{V}_{\lambda}^{\top}]^{\top}\). If an adversary can distinguish such ciphertexts from random, we should be able to extract the entire long secret vector \((\mathbf{x}_{1,b_{1}},\mathbf{x}_{2,b_{2}},\ldots,\mathbf{x}_{\lambda,b_{\lambda}})\) in one shot, using our proposed quantum search-to-decision reduction.

**Completely Classical Communication and Classical Lessor.** Our protocol comes with a classical lessor and classical communication for free. We observe that, by the properties of the underlying NTCF-based protocol, the lessor only has to run Setup and send the classical \(\mathsf{mpk}=\{\mathbf{A},\mathbf{s}\mathbf{A}+\mathbf{e}\}\) (naturally extended to the parallel-repeated case) to the lessee.
The lessee can prepare its own quantum key given \(\mathsf{mpk}\): it prepares a superposition \(\sum_{b,\mathbf{x},\mathbf{e}^{\prime}}|b,\mathbf{x}\rangle\,|\mathbf{y}=\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}+b\cdot\mathbf{s}\mathbf{A}\rangle\), sampling \(\mathbf{e}^{\prime}\) on its own, and then measures the \(\mathbf{y}\)-register to obtain a public key \(\mathbf{y}\) and a quantum decryption key of the form \(|0,\mathbf{x}_{0}\rangle+|1,\mathbf{x}_{1}\rangle\). By the properties of the NTCF (shown in [1]), the security of our scheme is guaranteed even for maliciously generated \(\mathbf{y}\).

### Detailed Overview on Security Proof: Reduction to NTCF and the Use of Search-to-Decision Reduction for LWE

**Proof Outline.** We now go into slightly more detail about our high-level proof strategy. Recall that in the security game, a successful attacker first needs to output a valid deletion certificate (we denote this event by \(\mathsf{CertPass}\)); along with that, it must produce a state \(\rho_{\mathsf{delete}}\) for which our test of a good decryptor, \(\mathsf{TI}_{\frac{1}{2}+\gamma}\) for some noticeable \(\gamma\), passes with inverse-polynomial probability. Namely, \(\mathsf{TI}_{\frac{1}{2}+\gamma}(\rho_{\mathsf{delete}})=1\) with inverse-polynomial probability. We call this event \(\mathsf{GoodDecryptor}\). It must hold that \(\Pr[\mathsf{GoodDecryptor}\wedge\mathsf{CertPass}]\) is noticeable for such a successful attacker. Since we are guaranteed a noticeable chance of overlap of the two events, our hope is that our extraction procedure extracts \(\{\mathbf{x}_{i,0}\}\) with just enough probability to induce a noticeable-probability overlap between \(\mathsf{CertPass}\) and successful extraction, causing our reduction to win the parallel repetition game of Radian et al. We call the event that the extraction succeeds \(\mathsf{ExtractionOccurs}\). At this point, it is tempting to think of a classical search-to-decision reduction for LWE and to compile it into a quantum reduction via known methods such as the one by Bitansky et al. [1]. Unfortunately, we do not have a citable theorem from this work to use directly5: we need precise bounds on the probability of success and on the running time of the extraction. Footnote 5: As discussed in Section 1.2, in fact all existing works on post-quantum LWE search-to-decision reductions are not directly applicable to our setting. Let us briefly explain the story: for the extraction to occur, we would need to move to a world where the \(\mathbf{k}_{i}\)'s are switched to random. While this does not change the probability of \(\mathsf{GoodDecryptor}\) by a non-negligible amount, due to LWE security, in this hybrid valid deletion certificates will no longer exist. We could infer from there, using reductions compiled via [1], that \(\mathsf{ExtractionOccurs}\) with probability \(\frac{1}{\mathsf{poly}(\lambda)}\) for some polynomial \(\mathsf{poly}\). This probability of extraction remains the same (up to a negligible loss) if we go back to the world where the \(\{\mathbf{k}_{i}\}\) are again from the LWE distribution. However, we cannot infer from this that the event \(\mathsf{ExtractionOccurs}\) overlaps with \(\mathsf{CertPass}\). A similar issue was encountered by Ananth et al. [1]; to address it, [1] had to rely on new conjectures.
To address this issue, we devise a high-probability search-to-decision reduction that extracts (in the world where the \(\mathbf{k}_{i}\)'s are random) with probability \(1-\mathsf{negl}(\lambda)\) whenever \(\mathsf{GoodDecryptor}\) holds. Thus, when we switch the \(\mathbf{k}_{i}\)'s back to LWE samples, by LWE security \(\mathsf{ExtractionOccurs}\) also succeeds with all but negligible probability whenever \(\mathsf{GoodDecryptor}\) holds. Since we are guaranteed a noticeable overlap between \(\mathsf{GoodDecryptor}\) and \(\mathsf{CertPass}\), this implies a noticeable overlap between \(\mathsf{CertPass}\) and \(\mathsf{ExtractionOccurs}\).

**Threshold Implementation.** Before going into more technical details, we briefly describe the properties of the measurement we perform on the adversarial decryptor state to test whether it succeeds in distinguishing ciphertexts with high enough probability. We will leverage the properties of this measurement in our security proofs. Threshold Implementation (and its related efficient measurement implementations in Section 5) is a powerful technique by Zhandry (further inspired by Marriott-Watrous's work on QMA amplification [14]) used in a number of recent works [13, 12, 11, 10, 14]. The Threshold Implementation \(\mathsf{TI}_{\gamma+1/2}\) has the following properties and relations to our security:

1. We call \(\rho\) a good decryptor if \(\mathsf{TI}_{\gamma+1/2}\) applied to \(\rho\) outputs \(1\).
2. A successful adversary in our game must produce a decryptor \(\rho\) that is good with probability \(p\) for some noticeable \(p\) (apart from giving a valid certificate with noticeable probability).
3. For the remaining state \(\rho^{\prime}\) after performing the above \(\mathsf{TI}_{\gamma+1/2}\), given that the outcome is \(1\), applying the same \(\mathsf{TI}_{\gamma+1/2}\) to \(\rho^{\prime}\) yields outcome \(1\) with probability \(1\).

The above basically says that the measurement \(\mathsf{TI}_{\gamma+1/2}\) is projective and that it "preserves" the advantage of the quantum distinguisher's state.

**Ideas from the Classical Search-to-Decision Reduction.** Our quantum search-to-decision reduction is inspired by Regev's search-to-decision reduction [15]. We now describe the setup of our reduction. We consider a simpler setup than in our technical section (which is a bit specific to our setting). The adversary gets as input an LWE sample of the form \((\mathbf{A},\mathbf{x}\mathbf{A}+\mathbf{e}\mod p)\), where \(\mathbf{x}\) is an arbitrary secret with bounded norm \(B\), \(\mathbf{A}\leftarrow\mathbb{Z}_{p}^{n\times m}\) for large enough \(m=\Omega(n\log p)\), and \(\mathbf{e}\) is Gaussian distributed with parameter \(\sigma\). The reduction has a quantum state \(\rho\) that has inverse-polynomial weight on vectors that enable distinguishing samples \((\mathbf{A}^{\prime},\mathbf{x}\mathbf{A}^{\prime}+\mathbf{e}^{\prime}\mod p)\), for randomly chosen \(\mathbf{A}^{\prime}\in\mathbb{Z}_{p}^{n\times m}\) and error \(\mathbf{e}^{\prime}\) sampled according to a discrete Gaussian with parameter \(\sigma^{\prime}\) superpolynomially larger than \(\sigma\), from truly random samples with probability \(\frac{1}{2}+\gamma\). We first describe the classical intuition and then describe how to lift that intuition to the quantum setting. Classically, we can consider recovering \(\mathbf{x}\) coordinate by coordinate. Say the first coordinate is \(x_{1}\in[-B,B]\); we then have a total of \(2B+1\) possible guesses.
For each guess \(g\), we consider the following process for generating samples:

* Sample a matrix \(\mathbf{C}\in\mathbb{Z}_{p}^{n\times m}\) that is random subject to the bottom \(n-1\) rows being \(0\).
* Sample \(\mathbf{R}\) to be a random binary matrix in \(\{0,1\}^{m\times m}\).
* Set \(\mathbf{A}^{\prime}=\mathbf{A}\mathbf{R}+\mathbf{C}\) and \(\mathbf{b}^{\prime}=(\mathbf{x}\mathbf{A}+\mathbf{e})\mathbf{R}+\mathbf{v}+\mathbf{e}^{\prime}\), where \(\mathbf{v}\) is the guess \(g\) times the first (and only non-zero) row of \(\mathbf{C}\). Here \(\mathbf{e}^{\prime}\) is a discrete Gaussian vector with parameter \(\sigma^{\prime}\).

If our guess is correct, the sample \((\mathbf{A}^{\prime},\mathbf{b}^{\prime})\) we end up producing is distributed statistically close to the distribution of the distinguishing problem (LWE with secret \(\mathbf{x}\)). This is because \(\mathbf{e}^{\prime}+\mathbf{e}\mathbf{R}\), due to noise flooding, is within \(\mathsf{poly}(m)\cdot\frac{\sigma}{\sigma^{\prime}}\) statistical distance of \(\mathbf{e}^{\prime}\). Similarly, \(\mathbf{A}^{\prime}\), due to the LHL, is within \(p^{-n}\) statistical distance of uniform if \(m\) is sufficiently large. This means that the distribution is within \(\eta=\mathsf{poly}(m)\cdot\frac{\sigma}{\sigma^{\prime}}\) statistical distance of the relevant distinguishing problem. On the other hand, if the guess is incorrect, one can observe that the process produces samples that are within a similar statistical distance \(\eta\) of the truly random distribution. Thus, if the guess is correct, a classical adversary can distinguish the above distribution from random with probability at least \(\frac{1}{2}+\gamma-O(\eta)\); likewise, if the guess is incorrect, the maximum distinguishing probability is \(\frac{1}{2}+O(\eta)\). We can therefore test the adversary by making roughly \(\mathsf{poly}(\frac{1}{\gamma},\log\frac{1}{\delta})\) calls to the distinguisher, to guarantee that the accepted guess is correct with probability \(1-\delta\). Setting \(\delta=p^{-n}\), the reduction extracts \(\mathbf{x}\) in time polynomial in \(B\) (the bound on \(\mathbf{x}\)'s coordinates), \(n\), \(m\), \(\log p\), and \(\frac{1}{\gamma}\).

**Our Quantum Search-to-Decision Reduction.** Moving to the quantum setting, we have to address a number of challenges. If we use our state to distinguish an LWE sample from random in the way above, the state could be destroyed, preventing us from doing repetitions. In particular, the classical reduction needs to run the distinguisher many times over randomized inputs and "measure" its outcome to obtain useful information, which seems implausible when using a quantum distinguisher. To overcome these issues, we leverage the properties of Threshold Implementation. We make a key observation: the procedure where we repeatedly create samples according to our guess \(g\) and check the distinguisher's output distribution to estimate whether \(g\) is correct can happen "inside" the measurement procedure \(\mathsf{TI}\). Suppose we have efficient projective measurements \(\mathsf{TI}_{g,1/2+\gamma}=(\mathsf{TI}_{g,1/2+\gamma},\mathbf{I}-\mathsf{TI}_{g,1/2+\gamma})\) for the various guesses \(g\). \(\mathsf{TI}_{g,1/2+\gamma}\) projects the state \(\rho\) onto vectors that distinguish the above distribution made from the guess \(g\) from truly random with probability \(1/2+\gamma\). For simplicity, we call it \(\mathsf{TI}_{g}\) from now on.
We define two other projections, \(\mathsf{TI}_{\mathsf{LWE}}\) and \(\mathsf{TI}_{\mathsf{unif}}\):

* \(\mathsf{TI}_{\mathsf{LWE}}\) projects onto distinguishers that are good at distinguishing the ciphertexts in our scheme from uniformly random samples with probability at least \(1/2+\gamma\). (Intuitively, these are distinguishers for "noisy" LWE instances versus uniformly random ones.)
* \(\mathsf{TI}_{\mathsf{unif}}\) projects onto distinguishers that are good at distinguishing uniformly random samples from uniformly random samples with probability at least \(1/2+\gamma\).

Our goal is to show that \(\Pr[\mathsf{ExtractionOccurs}]\geq\Pr[\mathsf{TI}_{\mathsf{LWE}}(\rho)=1]-\mathsf{negl}(\lambda)\). Thus it suffices to consider a world where \(\mathsf{TI}_{\mathsf{LWE}}(\rho)=1\) has already happened and show that \(\Pr[\mathsf{ExtractionOccurs}]\geq 1-\mathsf{negl}(\lambda)\) in this world. We now make the following observations, recalling the properties of \(\mathsf{TI}\):

* Let \(\rho^{\prime}\) be the post-measurement state after obtaining outcome \(1\) from \(\mathsf{TI}_{\mathsf{LWE}}(\rho)\).
* We start working with the state \(\rho^{\prime}\) at the beginning of our extraction algorithm.
* Recall that, by the projective property of \(\mathsf{TI}\), given that the outcome is \(1\), we have \(\Pr[\mathsf{TI}_{\mathsf{LWE}}(\rho^{\prime})=1]=1\).
* We start with the first entry of \(\mathbf{x}\) and pick the smallest possible value as our guess \(g\) for this entry.
* As discussed in the classical setting, when the guess \(g\) is correct, we can create samples statistically close to LWE samples; when the guess \(g\) is incorrect, we can create samples close to uniformly random. Combining this with the properties of \(\mathsf{TI}_{g}\) for distributionally close measurements, we can show the following:

* **When the guess is correct:** If we apply the projection \(\mathsf{TI}_{g}\) for a correct guess \(g\), then \(\Pr[\mathsf{TI}_{g}(\rho^{\prime})=1]\) is overwhelmingly close to \(1\). In this setting, we are statistically close to measuring the original \(\mathsf{TI}_{\mathsf{LWE}}\); that is, we are asking the distinguisher to distinguish (noisy) LWE from truly random samples. We therefore get output \(1\) with overwhelming probability, as a consequence of \(\mathsf{TI}\) being a projection. We can then assign \(g\) to the entry of \(\mathbf{x}\) we are guessing and move on to the next entry.
* **When the guess is incorrect:** If we apply the projection \(\mathsf{TI}_{g}\) for an incorrect guess \(g\), then \(\Pr[(\mathbf{I}-\mathsf{TI}_{g})(\rho^{\prime})=1]=\Pr[\mathsf{TI}_{g}(\rho^{\prime})=0]\) is overwhelmingly close to \(1\). In this case, we are statistically close to measuring \(\mathsf{TI}_{\mathsf{unif}}\), i.e., asking the distinguisher to distinguish uniform from uniform. Clearly, no distinguisher can do so with noticeable advantage \(\gamma\); therefore \(\mathsf{TI}_{\mathsf{unif}}\) outputs \(0\) for all possible states. We then move on to perform \(\mathsf{TI}_{g}\) with the next possible value of \(g\).

A key observation is that, in both cases above, because we obtain a fixed measurement outcome with overwhelming probability, we can apply the gentle measurement lemma [1] and recover a state very close to \(\rho^{\prime}\). Thus, we can find \(\mathbf{x}\) entry by entry while keeping our quantum state almost intact. However, \(\mathsf{TI}\) is not an efficient measurement, and we need to make the above procedure efficient. As a sanity check on the guess-and-test logic, a toy classical sketch of the loop is given below.
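The following is a minimal classical sketch of the guess-and-test loop (toy parameters; `distinguisher` is an idealized stand-in that secretly knows \(\mathbf{x}\) and accepts exactly the near-LWE samples, playing the role that the \(\mathsf{TI}_{g}\) measurements play for a real quantum distinguisher; for the \(j\)-th coordinate we reuse the same trick with row \(j\) of \(\mathbf{C}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, m, B = 8191, 4, 64, 3                    # toy parameters
sigma, sigma_p = 1, 40                         # sigma' >> sigma (noise flooding)

x = rng.integers(-B, B + 1, size=n)            # bounded secret to recover
A = rng.integers(0, p, size=(n, m))
b = (x @ A + rng.integers(-sigma, sigma + 1, size=m)) % p

def distinguisher(Ap, bp):
    # Idealized oracle: accepts iff (Ap, bp) is close to an LWE sample in x.
    diff = (bp - x @ Ap) % p
    diff = np.minimum(diff, p - diff)          # centered magnitude
    return np.mean(diff < 256) > 0.9

recovered = []
for j in range(n):                             # recover x coordinate by coordinate
    for g in range(-B, B + 1):
        votes = 0
        for _ in range(20):                    # repeat to drive down the error
            R = rng.integers(0, 2, size=(m, m))
            C = np.zeros((n, m), dtype=np.int64)
            C[j] = rng.integers(0, p, size=m)  # only row j of C is random
            Ap = (A @ R + C) % p
            ep = rng.integers(-sigma_p, sigma_p + 1, size=m)
            bp = (b @ R + g * C[j] + ep) % p   # near-LWE in x iff g == x_j
            votes += distinguisher(Ap, bp)
        if votes >= 15:                        # accept the guess g for x_j
            recovered.append(g)
            break

print("recovered:", recovered, "true:", list(x))
```

Because incorrect guesses yield samples statistically close to uniform, the vote count separates cleanly; this is the classical analogue of the \(\mathsf{TI}_{g}\) dichotomy above.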
Fortunately, the work of Zhandry [15] showed how to replace \(\mathsf{TI}\) by an efficient, approximate projection. These approximate projections induce an error of their own, but it is easy to account for. Moreover, the fact that \(\mathsf{TI}\) implicitly performs an amplification, which can be analyzed by concentration bounds analogous to the repetitions used in the classical reduction, is even easier to see when we look at its efficient implementation algorithm (see Section 5.0.1). In the end, we materialize our approach with a proper choice of parameters, so that the errors do not accumulate on the distinguisher's state to a degree that affects the algorithm's performance. In conclusion, the exact reduction in our scheme works as long as the number of guesses is superpolynomially smaller than \(\frac{1}{\eta}\). See the details in Section 11. On the other hand, if we consider a clean setting where our input distributions are real LWE versus truly uniform, our reduction works for any subexponential number of guesses (Appendix D).

### More on Related Works

We additionally discuss some other related works.

**Quantum Copy Protection.** [1] first built copy protection for all unlearnable functions based on a quantum oracle. [1] showed a construction for all unlearnable functions based on a classical oracle. [1, 2] showed copy protection for signatures, decryption and PRF evaluation in the plain model. [13, 1] constructed copy protection for point functions and compute-and-compare functions in the QROM, the latter improving the security of the former. Regarding negative results: [1] demonstrated that it is impossible to have a copy-protection scheme for all unlearnable circuits in the plain model, assuming LWE and quantum FHE.

**More on Secure Software Leasing.** [1] observed that, under a definition essentially equivalent to infinite-term SSL, namely copy-detection, one can obtain a black-box construction of infinite-term SSL from watermarking and public-key quantum money. [15] constructed finite-term SSL for PRFs and compute-and-compare functions from (subexponential) LWE, with similar observations. [14, 13] constructed secure software leasing for point functions and compute-and-compare functions; [14] is information-theoretically secure and [13] is secure in the QROM. They both use a stronger version of finite-term SSL security: while the vendor honestly checks the state returned by the adversary, the adversary can execute the leftover half of its bipartite state maliciously, i.e., without following the instructions of the evaluation algorithm. SSL security of this stronger finite-term variant is known only for point and compute-and-compare functions up to now.

**Unclonable Encryption and Certified Deletion.** Unclonable encryption is studied in [1, 1, 1, 1, 2]. It is an encryption scheme where the ciphertext is encoded in quantum information, so that the adversary cannot split it into two copies that can both decrypt when given a (classical) decryption key. Certified deletion of ciphertexts is studied in various works [1, 1, 1, 2, 3, 4], where the ciphertext is also encoded in a quantum state and given to a server. When asked by the client to delete, the server must provide a proof of deletion of the ciphertext. This bears some similarity to our setting.
But note the major difference: in the certified-deletion-of-ciphertext setting, one needs to show that the server no longer has information about the encrypted message (a bit string), while in our setting we need to show that the server is deprived of a functionality. Therefore, the results and techniques in secure key leasing and in certified deletion of ciphertexts are most of the time incomparable.

## 3 Acknowledgement

Orestis Chardouvelis and Aayush Jain were supported by gifts from CyLab of CMU and Google. Jiahui Liu was supported by DARPA under Agreement No. HR00112020023, NSF CNS-2154149 and a Simons Investigator award. We thank Qipeng Liu, Mark Zhandry and the authors of [1] for helpful discussions.

## 4 Preliminaries

### 4.1 Quantum Information and Computation

We refer the reader to [16] for a reference on basic quantum information and computation concepts. A quantum system \(Q\) is defined over a finite set \(B\) of classical states. In this work we will consider \(B=\{0,1\}^{n}\). A **pure state** over \(Q\) is a unit vector in \(\mathbb{C}^{|B|}\), which assigns a complex number to each element of \(B\). In other words, if \(|\phi\rangle\) is a pure state in \(Q\), we can write \(|\phi\rangle\) as:

\[|\phi\rangle=\sum_{x\in B}\alpha_{x}|x\rangle\]

where \(\sum_{x\in B}|\alpha_{x}|^{2}=1\) and \(\{|x\rangle\}_{x\in B}\) is called the "**computational basis**" of \(\mathbb{C}^{|B|}\). The computational basis forms an orthonormal basis of \(\mathbb{C}^{|B|}\). Given two quantum systems \(R_{1}\) over \(B_{1}\) and \(R_{2}\) over \(B_{2}\), we can define a **product** quantum system \(R_{1}\otimes R_{2}\) over the set \(B_{1}\times B_{2}\). Given \(\left|\phi_{1}\right\rangle\in R_{1}\) and \(\left|\phi_{2}\right\rangle\in R_{2}\), we can define the product state \(\left|\phi_{1}\right\rangle\otimes\left|\phi_{2}\right\rangle\in R_{1}\otimes R_{2}\). We assume a quantum computer can implement any unitary transformation (using the basic gates: Hadamard, phase, CNOT and \(\frac{\pi}{8}\) gates), and in particular the following two unitary transformations:

* **Classical Computation:** Given a function \(f:X\to Y\), one can implement a unitary \(U_{f}\) such that for any \(\left|\phi\right\rangle=\sum_{x\in X,y\in Y}\alpha_{x,y}|x,y\rangle\), \[U_{f}|\phi\rangle=\sum_{x\in X,y\in Y}\alpha_{x,y}|x,y\oplus f(x)\rangle\] Here, \(\oplus\) is a commutative group operation defined over \(Y\).
* **Quantum Fourier Transform:** Let \(N=2^{n}\). Given a quantum state \(\left|\phi\right\rangle=\sum_{i=0}^{2^{n}-1}x_{i}|i\rangle\), by applying only \(O(n^{2})\) basic gates one can compute \(\left|\psi\right\rangle=\sum_{i=0}^{2^{n}-1}y_{i}|i\rangle\), where the sequence \(\{y_{i}\}_{i=0}^{2^{n}-1}\) is obtained by applying the discrete Fourier transform \(\mathsf{QFT}_{N}\) to the sequence \(\{x_{i}\}_{i=0}^{2^{n}-1}\): \[y_{k}=\frac{1}{\sqrt{N}}\sum_{i=0}^{2^{n}-1}x_{i}\omega_{n}^{ik}\] where \(\omega_{n}=e^{2\pi i/N}\) and \(i\) in the exponent is the imaginary unit.

One property of the \(\mathsf{QFT}\) is that, by preparing \(\left|0^{n}\right\rangle\) and applying \(\mathsf{QFT}_{2}\) to each qubit, we get \(\left(\mathsf{QFT}_{2}|0\rangle\right)^{\otimes n}=\frac{1}{\sqrt{2^{n}}}\sum_{x\in\{0,1\}^{n}}\left|x\right\rangle\), the uniform superposition over all possible \(x\in\{0,1\}^{n}\). In our work, only the binary version of the \(\mathsf{QFT}\) is needed, which can also be viewed as applying a Hadamard gate to each qubit. A small numerical illustration follows.
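As an illustration (a minimal numpy sketch of the statement above, not part of the formal preliminaries):

```python
import numpy as np

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # QFT_2 = the Hadamard gate
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)                        # H tensored n times

state = np.zeros(2**n)
state[0] = 1.0                                 # the state |0^n>
print(Hn @ state)                              # every amplitude equals 1/sqrt(2^n)
```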
**Lemma 4.1** (Almost As Good As New (Gentle Measurement) Lemma [1]).: _Suppose a binary-outcome POVM measurement \((\mathcal{P},\mathcal{Q})\) on a mixed state \(\rho\) yields a particular outcome with probability \(1-\epsilon\). Then after the measurement, one can recover a state \(\tilde{\rho}\) such that \(\left\|\tilde{\rho}-\rho\right\|_{\mathrm{tr}}\leq\sqrt{\epsilon}\). Here \(\left\|\cdot\right\|_{\mathrm{tr}}\) is the trace distance._

## 5 Preliminaries: Testing Quantum Adversaries

In this section, we include several definitions about measurements which are relevant to testing whether quantum adversaries are successful in the security games of Section 8. Part of this section is taken verbatim from [1]. As this section only pertains directly to our security definitions for secure key leasing schemes, the reader can skip ahead and return to this section when reading Section 8. In classical cryptographic security games, the challenger typically gets some information from the adversary and checks if this information satisfies certain properties. However, in a setting where the adversary is required to return _quantum_ information to the challenger, classical definitions of "testing" whether a quantum state returned by the adversary satisfies certain properties may result in various failures, as discussed in [15]: this state may be in a superposition of "successful" and "unsuccessful" adversaries, and it is most likely unclonable and may be destroyed by a classical way of "testing" its success. In short, we need a way to test the success probability of an adversarial quantum state, analogous to what happens classically, where the test does not completely destroy the adversary's state; instead, the state after the test has its success probability preserved in some sense. Such a procedure allows us to perform multiple measurements on the state without rendering it useless, and thus facilitates quantum reductions.

**Projective Implementation.** Motivated by the discussion above, [10] (inspired by [11]) formalizes a new measurement procedure for testing a state received from an adversary. We will adopt this procedure when defining the security of secure key leasing schemes in Section 8. Consider the following procedure, viewed as a binary POVM \(\mathcal{P}\) acting on an alleged copy-protected program \(\rho\): sample a uniformly random input \(x\), evaluate the copy-protected program on \(x\), and check whether the output is correct. In a nutshell, the new procedure consists of applying an appropriate projective measurement which _measures_ the success probability of the tested state \(\rho\) under \(\mathcal{P}\), and of outputting "accept" if the success probability is high enough. Of course, such a measurement will not be able to extract the exact success probability of \(\rho\), as this is impossible by what we have argued in the discussion above. Rather, the measurement will output a success probability from a finite set, such that the expected value of the output matches the true success probability of \(\rho\). We will now describe this procedure in more detail. The starting point is that a POVM specifies exactly the probability distribution over outcomes \(\{0,1\}\) ("success" or "failure") on any copy-protected program, but it does not uniquely determine the post-measurement state. Zhandry shows that, for any binary POVM \(\mathcal{P}=(P,I-P)\), there exists a particularly nice implementation of \(\mathcal{P}\) which is projective, and such that the post-measurement state is an eigenvector of \(P\).
In particular, Zhandry observes that there exists a projective measurement \(\mathcal{E}\) which _measures_ the success probability of a state with respect to \(\mathcal{P}\). More precisely, * \(\mathcal{E}\) outputs a _distribution_\(D\) of the form \((p,1-p)\) from a finite set of distributions over outcomes \(\{0,1\}\) (we stress that \(\mathcal{E}\) actually outputs a distribution). * The post-measurement state upon obtaining outcome \((p,1-p)\) is an _eigenvector_ (or a mixture of eigenvectors) of \(P\) with eigenvalue \(p\). A measurement \(\mathcal{E}\) which satisfies these properties is the measurement in the common eigenbasis of \(P\) and \(I-P\) (such a common eigenbasis exists since \(P\) and \(I-P\) commute). Note that since \(\mathcal{E}\) is projective, we are guaranteed that applying the same measurement twice will yield the same outcome. Thus, what we obtain from applying \(\mathcal{E}\) is a state with a "well-defined" success probability with respect to \(\mathcal{P}\): we know exactly how good the leftover program is with respect to the initial testing procedure \(\mathcal{P}\). Formally, to complete the implementation of \(\mathcal{P}\), after having applied \(\mathcal{E}\), one outputs the bit \(1\) with probability \(p\), and the bit \(0\) with probability \(1-p\). This is summarized in the following definition. **Definition 5.1** (Projective Implementation of a POVM).: _Let \(\mathcal{P}=(P,Q)\) be a binary outcome POVM. Let \(\mathcal{D}\) be a finite set of distributions \((p,1-p)\) over outcomes \(\{0,1\}\). Let \(\mathcal{E}=\{E_{p}\}_{(p,1-p)\in\mathcal{D}}\) be a projective measurement with index set \(\mathcal{D}\). Consider the following measurement procedure:_ 1. _Apply the projective measurement_ \(\mathcal{E}\) _and obtain as outcome a distribution_ \((p,1-p)\) _over_ \(\{0,1\}\)_;_ 2. _Output a bit according to this distribution, i.e. output_ \(1\) _with probability_ \(p\) _and output_ \(0\) _with probability_ \(1-p\)_._ _We say the above measurement procedure is a projective implementation of \(\mathcal{P}\), which we denote by \(\mathsf{ProjImp}(\mathcal{P})\), if it is equivalent to \(\mathcal{P}\) (i.e. it produces the same probability distribution over outcomes)._ Zhandry shows that any binary POVM has a projective implementation, as in the previous definition. **Lemma 5.2** (Adapted from Lemma 1 in [11]).: _Any binary outcome POVM \(\mathcal{P}=(P,Q)\) has a projective implementation \(\mathsf{ProjImp}(\mathcal{P})\)._ _Moreover, if the outcome is a distribution \((p,1-p)\) when measuring under \(\mathcal{E}\), the collapsed state \(\rho^{\prime}\) is a mixture of eigenvectors of \(P\) with eigenvalue \(p\), and it is also a mixture of eigenvectors of \(Q\) with eigenvalue \(1-p\)._ As anticipated, the procedure that we will eventually use to test a state received from the adversary will be to: 1. _Measure_ the success probability of the state, 2. Accept if the outcome is large enough. As suggested above, we will employ the projective measurement \(\mathcal{E}\) defined previously for step \((i)\). We call this variant of the projective implementation a _threshold implementation_. **Threshold Implementation.** The concept of threshold implementation of a POVM was proposed by Zhandry, and formalized by Aaronson, Liu, Liu, Zhandry and Zhang [1]. The following is a formal definition. **Definition 5.3** (Threshold Implementation).: _Let \(\mathcal{P}=(P,Q)\) be a binary POVM. 
Let \(\mathsf{ProjImp}(\mathcal{P})\) be a projective implementation of \(\mathcal{P}\), and let \(\mathcal{E}\) be the projective measurement in the first step of \(\mathsf{ProjImp}(\mathcal{P})\) (using the same notation as in Definition 5.1). Let \(\gamma>0\). We refer to the following measurement procedure as a threshold implementation of \(\mathcal{P}\) with parameter \(\gamma\), and we denote it by \(\mathsf{TI}_{\gamma}(\mathcal{P})\)._ * _Apply the projective measurement_ \(\mathcal{E}\)_, and obtain as outcome a vector_ \((p,1-p)\)_;_ * _Output a bit according to the distribution_ \((p,1-p)\)_: output_ \(1\) _if_ \(p\geq\gamma\)_, and_ \(0\) _otherwise._ For simplicity, for any quantum state \(\rho\), we denote by \(\mathrm{Tr}[\mathsf{TI}_{\gamma}(\mathcal{P})\,\rho]\) the probability that the threshold implementation applied to \(\rho\)**outputs 1**. Thus, whenever \(\mathsf{TI}_{\gamma}(\mathcal{P})\) appears inside a trace \(\mathrm{Tr}\), we treat \(\mathsf{TI}_{\gamma}(\mathcal{P})\) as a projection onto the \(1\) outcome (i.e. the space spanned by eigenvectors of \(P\) with eigenvalue at least \(\gamma\)). Similarly to Lemma 5.2, we have the following lemma. **Lemma 5.4**.: _Any binary outcome POVM \(\mathcal{P}=(P,Q)\) has a threshold implementation \(\mathsf{TI}_{\gamma}(\mathcal{P})\) for any \(\gamma\)._ In this work, we are interested in threshold implementations of POVMs with a particular structure. These POVMs represent a challenger's test of a quantum state received from an adversary in a security game (like the POVM described earlier for testing whether a program evaluates correctly on a uniformly random input). These POVMs have the following structure: * Sample a projective measurement from a set of projective measurements indexed by \(\mathcal{I}\), according to some distribution \(D\) over \(\mathcal{I}\). * Apply this projective measurement. We refer to POVMs of this form as _mixtures of projective measurements_. The following is a formal definition. **Definition 5.5** (Collection and Mixture of Projective Measurements).: _Let \(\mathcal{R},\mathcal{I}\) be sets. Let \(\{(P_{i},Q_{i})\}_{i\in\mathcal{I}}\) be a collection of binary projective measurements \((P_{i},Q_{i})\) over the same Hilbert space \(\mathcal{H}\), where \(P_{i}\) corresponds to output 1, and \(Q_{i}\) corresponds to output 0. We will assume we can efficiently measure the \((P_{i},Q_{i})\) for superpositions of \(i\), meaning we can efficiently perform the following projective measurement over \(\mathcal{I}\otimes\mathcal{H}\):_ \[(\sum_{i}\ket{i}\bra{i}\otimes P_{i},\sum_{i}\ket{i}\bra{i}\otimes Q_{i}) \tag{1}\] _Let \(\mathcal{D}:\mathcal{R}\rightarrow\mathcal{I}\) be some distribution (a map from random coins to indices). The mixture of projective measurements associated to \(\mathcal{R},\mathcal{I},\mathcal{D}\) and \(\{(P_{i},Q_{i})\}_{i\in\mathcal{I}}\) is the binary POVM \(\mathcal{P}_{\mathcal{D}}=(P_{\mathcal{D}},Q_{\mathcal{D}})\) defined as follows:_ \[P_{\mathcal{D}}=\sum_{i\in\mathcal{I}}\Pr[i\leftarrow\mathcal{D}(R)]\,P_{i},\qquad Q_{ \mathcal{D}}=\sum_{i\in\mathcal{I}}\Pr[i\leftarrow\mathcal{D}(R)]\,Q_{i}, \tag{2}\] In other words, \(\mathcal{P}_{\mathcal{D}}\) is implemented in the following way: sample randomness \(r\leftarrow\mathcal{R}\), compute the index \(i=\mathcal{D}(r)\), and apply the projective measurement \((P_{i},Q_{i})\). Thus, for any quantum state \(\rho\), \(\operatorname{Tr}[P_{\mathcal{D}}\rho]\) is the probability that a projective measurement \((P_{i},Q_{i})\), sampled according to the distribution induced by \(\mathcal{D}\), applied to \(\rho\) outputs 1. 
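For intuition, here is a toy numpy sketch (ours, not from the paper) of \(\mathsf{ProjImp}(\mathcal{P})\) and \(\mathsf{TI}_{\gamma}(\mathcal{P})\) for a small finite-dimensional binary POVM \((P,I-P)\); it simplifies degenerate eigenspaces by collapsing onto a single sampled eigenvector, which is still a mixture of eigenvectors of \(P\) with the measured eigenvalue:

```python
import numpy as np

def proj_imp(P, rho, rng, threshold=None):
    """Measure rho in an eigenbasis of P (the common eigenbasis of P and I-P),
    obtaining an eigenvalue p; output 1 w.p. p (ProjImp), or output 1 iff
    p >= threshold (threshold implementation TI_threshold)."""
    evals, evecs = np.linalg.eigh(P)                       # spectral decomposition
    probs = np.clip(np.real([v.conj() @ rho @ v for v in evecs.T]), 0, None)
    i = rng.choice(len(evals), p=probs / probs.sum())
    p = float(np.clip(evals[i], 0.0, 1.0))                 # measured success probability
    v = evecs[:, i]
    post = np.outer(v, v.conj())                           # collapsed: eigenvector of P
    if threshold is None:
        return int(rng.random() < p), post                 # ProjImp: output 1 w.p. p
    return int(p >= threshold), post                       # TI: accept iff p >= threshold

rng = np.random.default_rng(0)
P = np.diag([0.9, 0.2])   # a toy POVM element on a qubit
rho = np.eye(2) / 2       # maximally mixed input state
print(proj_imp(P, rho, rng, threshold=0.5)[0])
```

Applying `proj_imp` twice with the same `post` state returns the same eigenvalue, mirroring the fact that \(\mathcal{E}\) is projective.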
**Example.** To further explain the above definition, we consider a concrete example: the input quantum state \(\rho\) is supposedly a decryptor for encryptions with respect to a public key \(\mathsf{pk}\), and our goal is to measure whether \(\rho\) can distinguish between an encryption of message 0 and an encryption of message 1. The distribution \(\mathcal{D}\) is the distribution over all randomness used to encrypt a message and the coin flip \(b\) deciding which ciphertext to feed to the adversary. For a ciphertext \(\mathsf{ct}_{i}\leftarrow\mathcal{D}\), \(\mathcal{P}_{\mathsf{ct}_{i}}\) is the measurement that runs the adversary \(\rho\) on \(\mathsf{ct}_{i}\) and tests whether the outcome \(b^{\prime}\) equals \(b\). **Purifying Mixtures of Projections.** We next define the purified version of the above operations, called controlled projections: **Definition 5.6** (Controlled Projective Measurements).: _Let \(\mathcal{P}=\{\mathcal{P}_{i}=(P_{i},Q_{i})\},i\in\mathcal{I}\) be a collection of projective measurements over \(\mathcal{H}\). Let \(\mathcal{D}\) be a distribution with random coin set \(\mathcal{R}\). We will abuse notation and let \(\mathcal{R}\) also denote the \(\left|\mathcal{R}\right|\)-dimensional Hilbert space. The controlled projection is the measurement \(\mathsf{CProj}_{\mathcal{P},\mathcal{D}}=(\mathsf{CProj}_{\mathcal{P},\mathcal{ D}}^{0},\mathsf{CProj}_{\mathcal{P},\mathcal{D}}^{1})\) where:_ \[\mathsf{CProj}_{\mathcal{P},\mathcal{D}}^{0}=\sum_{r\in\mathcal{R}}\ket{r}\bra{r}\otimes P_{\mathcal{D}(r)},\qquad\mathsf{CProj}_{\mathcal{P},\mathcal{D}}^{1}=\sum_{r\in\mathcal{R}}\ket{r}\bra{r}\otimes Q_{\mathcal{D}(r)}\] \(\mathsf{CProj}_{\mathcal{P},\mathcal{D}}\) can be implemented using the measurement \((\sum_{i}\ket{i}\bra{i}\otimes P_{i},\sum_{i}\ket{i}\bra{i}\otimes Q_{i})\). First, initialize control registers \(\mathcal{I}\) to \(0\). Then perform the map \(\ket{r}\ket{i}\rightarrow\ket{r}\ket{i\oplus\mathcal{D}(r)}\) on the \(\mathcal{R}\otimes\mathcal{I}\) registers. Next, apply the mixture of projective measurements assumed in Equation (1). Finally, perform the map \(\ket{r}\ket{i}\rightarrow\ket{r}\ket{i\oplus\mathcal{D}(r)}\) again to uncompute the control registers, and discard the control registers. #### 5.0.1 Approximating Threshold Implementation _Projective_ and _threshold_ implementations of POVMs are unfortunately not efficiently computable in general. However, they can be approximated if the POVM is a mixture of projective measurements, as shown by Zhandry [14], using a technique first introduced by Marriott and Watrous [13] in the context of error reduction for quantum Arthur-Merlin games. **The Uniform Test.** Before we describe the ATI algorithm, we define the projection on register \(\mathcal{R}\) that tests whether it holds the superposition in which each \(r\in\mathcal{R}\) is weighted according to the distribution \(\mathcal{D}\)6: \(\mathsf{IsUniform}=(\left|\mathbf{1}_{\mathcal{R}}\right\rangle\left\langle \mathbf{1}_{\mathcal{R}}\right|,I-\left|\mathbf{1}_{\mathcal{R}} \right\rangle\left\langle\mathbf{1}_{\mathcal{R}}\right|)\) where: Footnote 6: The distribution \(\mathcal{D}\) used in \(\mathsf{CProj}\) can in fact be an arbitrary distribution, instead of uniform. Without loss of generality, we can use a uniform test \(\mathsf{IsUniform}\) and map each \(r\) to an arbitrary distribution \(\mathcal{D}(r)\) as specified in Definition 5.6. 
\[\left|\mathbf{1}_{\mathcal{R}}\right\rangle=\frac{1}{\sqrt{\left|\mathcal{R} \right|}}\sum_{r\in\mathcal{R}}\left|r\right\rangle\] **The Algorithm ATI.** We present the algorithm \(\mathsf{ATI}_{\mathcal{P},\mathcal{D},\gamma}^{\epsilon,\delta}\) using the syntax from [14]. The algorithm is parameterized by a distribution \(\mathcal{D}\), a collection of projective measurements \(\mathcal{P}\), and real values \(0<\epsilon,\delta,\gamma\leq 1\), and is denoted \(\mathsf{ATI}_{\mathcal{P},\mathcal{D},\gamma}^{\epsilon,\delta}\). On input a quantum state \(\left|\psi\right\rangle\) over Hilbert space \(\mathcal{H}\): 1. Initialize a state \(\left|\mathbf{1}_{\mathcal{R}}\right\rangle\left|\psi\right\rangle\). 2. Initialize a classical list \(L:=(1)\). 3. Repeat the following loop a total of \(T=\frac{\ln(4/\delta)}{\epsilon^{2}}\) times: 1. Apply \(\mathsf{CProj}_{\mathcal{P},\mathcal{D}}\) to register \(\mathcal{R}\otimes\mathcal{H}\). Let \(b_{2i-1}\) be the measurement outcome and set \(L:=(L,b_{2i-1})\). 2. Apply \(\mathsf{IsUniform}\) to register \(\mathcal{R}\). Let \(b_{2i}\) be the measurement outcome and set \(L:=(L,b_{2i})\). 4. Let \(t\) be the number of indices \(i\) such that \(b_{i-1}=b_{i}\) in the list \(L=(1,b_{1},...,b_{2T})\) (with \(b_{0}:=1\)), and \(\tilde{p}:=t/2T\). 5. If \(b_{2T}=0\), repeat the loop again until \(b_{2i}=1\). 6. Discard the \(\mathcal{R}\) register. 7. Output 1 if \(\tilde{p}\geq\gamma\) and 0 otherwise. **ATI Properties.** We will make use of the following lemma from a subsequent work of Aaronson et al. [1]. **Lemma 5.7** (Corollary 1 in [1]).: _For any \(\epsilon,\delta,\gamma\in(0,1)\), any collection of projective measurements \(\mathcal{P}=\{(P_{i},Q_{i})\}_{i\in\mathcal{I}}\), where \(\mathcal{I}\) is some index set, and any distribution \(D\) over \(\mathcal{I}\), there exists a measurement procedure \(\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\) that satisfies the following:_ 1. \(\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\) _implements a binary outcome measurement. For simplicity, we denote the probability of the measurement outputting_ \(\mathbf{1}\) _on_ \(\rho\) _by_ \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\rho]\)_._ 2. _For all quantum states_ \(\rho\)_,_ \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},D,\gamma-\epsilon}^{\epsilon, \delta}\rho]\geq\operatorname{Tr}[\mathsf{TI}_{\gamma}(\mathcal{P}_{D})\, \rho]-\delta\)_._ 3. _For all quantum states_ \(\rho\)_, let_ \(\rho^{\prime}\) _be the post-measurement state after applying_ \(\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\) _on_ \(\rho\)_, and obtaining outcome_ \(1\)_. Then,_ \(\operatorname{Tr}[\mathsf{TI}_{\gamma-2\epsilon}(\mathcal{P}_{D})\,\rho^{ \prime}]\geq 1-2\delta\)_._ 4. _The expected running time is_ \(T_{\mathcal{P},D}\cdot\mathsf{poly}(1/\epsilon,1/(\log\delta))\)_, where_ \(T_{\mathcal{P},D}\) _is the combined running time of sampling according to_ \(D\)_, of mapping_ \(i\) _to_ \((P_{i},Q_{i})\)_, and of implementing the projective measurement_ \((P_{i},Q_{i})\)_._ Intuitively, the corollary says that if a quantum state \(\rho\) has weight \(p\) on eigenvectors with eigenvalues at least \(\gamma\), then the measurement \(\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\) will produce, with probability at least \(p-\delta\), a post-measurement state which has weight \(1-2\delta\) on eigenvectors with eigenvalues at least \(\gamma-2\epsilon\). 
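The following is a toy numpy simulation (ours, under simplifying assumptions) of the Marriott-Watrous alternation at the heart of ATI: we alternate two binary projective measurements, a projector `A` standing in for \(\mathsf{CProj}^{1}_{\mathcal{P},\mathcal{D}}\) and a projector `B` standing in for the \(\mathsf{IsUniform}\) test, and estimate \(\tilde{p}\) as the fraction of adjacent equal outcomes:

```python
import numpy as np

def measure(proj, psi, rng):
    """Binary projective measurement (proj, I - proj) on a pure state psi;
    returns the outcome bit and the collapsed, renormalized state."""
    p1 = float(np.real(psi.conj() @ proj @ psi))
    if rng.random() < p1:
        post = proj @ psi
        return 1, post / np.linalg.norm(post)
    post = psi - proj @ psi
    return 0, post / np.linalg.norm(post)

def ati_estimate(A, B, psi, eps, delta, gamma, rng):
    """Alternate the measurements A and B for T = ln(4/delta)/eps^2 rounds and
    estimate p~ as the fraction of adjacent equal outcomes in (1, b_1, ..., b_2T)."""
    T = int(np.ceil(np.log(4 / delta) / eps ** 2))
    L = [1]
    for _ in range(T):
        b, psi = measure(A, psi, rng)
        L.append(b)
        b, psi = measure(B, psi, rng)
        L.append(b)
    p_tilde = sum(L[i - 1] == L[i] for i in range(1, len(L))) / (2 * T)
    return int(p_tilde >= gamma), psi

rng = np.random.default_rng(1)
dim = 4
A = np.diag([1.0, 1.0, 0.0, 0.0])      # a toy projective test
B = np.full((dim, dim), 1.0 / dim)     # projector onto the uniform vector |1_R>
psi = np.full(dim, 0.5)                # the initial uniform state
print(ati_estimate(A, B, psi, eps=0.2, delta=0.1, gamma=0.5, rng=rng))
```

In this example the true overlap is \(1/2\), so the estimate \(\tilde{p}\) concentrates around \(1/2\), matching the intuition that the alternation "measures" the eigenvalue without fully destroying the state.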
Moreover, the running time for implementing \(\mathsf{ATI}_{\mathcal{P},D,\gamma}^{\epsilon,\delta}\) is proportional to \(\mathsf{poly}(1/\epsilon,1/(\log\delta))\), which is a polynomial in \(\lambda\) as long as \(\epsilon\) is any inverse polynomial and \(\delta\) is any inverse sub-exponential function. **TI and ATI for Computationally/Statistically Indistinguishable Distributions.** The following theorems will be used in the proof of security for our SKL encryption scheme in Section 10. Informally, the first one states the following. Let \(\mathcal{P}_{D_{0}}\) and \(\mathcal{P}_{D_{1}}\) be two mixtures of projective measurements, where \(D_{0}\) and \(D_{1}\) are two computationally indistinguishable distributions. Let \(\gamma,\gamma^{\prime}>0\) be inverse-polynomially close. Then for any (efficiently constructible) state \(\rho\), the probabilities of obtaining outcome \(1\) upon measuring \(\mathsf{TI}_{\gamma}(\mathcal{P}_{D_{0}})\) and \(\mathsf{TI}_{\gamma^{\prime}}(\mathcal{P}_{D_{1}})\) respectively are negligibly close. **Theorem 5.8** (Theorem 6.5 in [22]).: _Let \(\gamma>0\). Let \(\mathcal{P}\) be a collection of projective measurements indexed by some set \(\mathcal{I}\). Let \(\rho\) be an efficiently constructible mixed state, and let \(D_{0},D_{1}\) be two efficiently sampleable and computationally indistinguishable distributions over \(\mathcal{I}\). For any inverse polynomial \(\epsilon\), there exists a negligible function \(\delta\) such that_ \[\operatorname{Tr}[\mathsf{TI}_{\gamma-\epsilon}(\mathcal{P}_{D_{1}})\rho]\geq \operatorname{Tr}[\mathsf{TI}_{\gamma}(\mathcal{P}_{D_{0}})\rho]-\delta\,,\] _where \(\mathcal{P}_{D_{i}}\) is the mixture of projective measurements associated to \(\mathcal{P}\) and \(D_{i}\)._ **Corollary 5.9** (Corollary 6.9 in [22]).: _Let \(\rho\) be an efficiently constructible, potentially mixed state, and let \(\mathcal{D}_{0},\mathcal{D}_{1}\) be two computationally indistinguishable distributions. Then for any inverse polynomial \(\epsilon\) and any function \(\delta\), there exists a negligible \(\mathsf{negl}(\cdot)\) such that:_ \[\operatorname{Tr}[\mathsf{ATI}_{\mathcal{D}_{1},\mathcal{P},\gamma-3\epsilon}^ {\epsilon,\delta}(\rho)]\geq\operatorname{Tr}[\mathsf{ATI}_{\mathcal{D}_{0}, \mathcal{P},\gamma}^{\epsilon,\delta}(\rho)]-2\delta-\mathsf{negl}(\lambda)\] Unfortunately, the above two statements do not possess precise enough error parameters for our use in some settings. We additionally show a theorem with more precise parameters for two statistically close distributions. **Theorem 5.10**.: _Let \(\mathcal{D}_{0},\mathcal{D}_{1}\) be two statistically close distributions over \(\mathcal{R}\), with statistical distance \(\eta\). Then for any inverse polynomial \(\epsilon\) and any function \(\delta\), it holds that:_ \[\operatorname{Tr}[\mathsf{ATI}_{\mathcal{D}_{1},\mathcal{P},\gamma-\epsilon}^ {\epsilon,\delta}(\rho)]\geq\operatorname{Tr}[\mathsf{ATI}_{\mathcal{D}_{0}, \mathcal{P},\gamma}^{\epsilon,\delta}(\rho)]-O(\mathsf{poly}(1/\epsilon,1/\log \delta)\cdot(\frac{1}{|\mathcal{R}|}+\eta))\] Since we work with \(|\mathcal{R}|\) of exponential size, we will treat the \(\frac{1}{|\mathcal{R}|}\) term as suppressed when we use the theorem. We defer the proof of the above theorem to Appendix C.2. ## 6 Lattice Preliminaries In this section, we recall some notation and concepts about lattice problems and lattice-based cryptography that will be useful for our main result. 
### General definitions A _lattice_\(\mathcal{L}\) is a discrete subgroup of \(\mathbb{R}^{m}\), or equivalently the set \[\mathcal{L}(\mathbf{b}_{1},\ldots,\mathbf{b}_{n})=\left\{\sum_{i=1}^{n}x_{i} \mathbf{b}_{i}\;:\;x_{i}\in\mathbb{Z}\right\}\] of all integer combinations of \(n\) linearly independent vectors \(\mathbf{b}_{1},\ldots,\mathbf{b}_{n}\in\mathbb{R}^{m}\). Such \(\mathbf{b}_{i}\)'s form a _basis_ of \(\mathcal{L}\). The lattice \(\mathcal{L}\) is said to be _full-rank_ if \(n=m\). We denote by \(\lambda_{1}(\mathcal{L})\) the first minimum of \(\mathcal{L}\), defined as the length of a shortest non-zero vector of \(\mathcal{L}\). **Discrete Gaussian and Related Distributions.** For any \(s>0\), define \[\rho_{s}(\mathbf{x})=\exp(-\pi\|\mathbf{x}\|^{2}/s^{2})\] for all \(\mathbf{x}\in\mathbb{R}^{n}\). We write \(\rho\) for \(\rho_{1}\). For a discrete set \(S\), we extend \(\rho\) to sets by \(\rho_{s}(S)=\sum_{\mathbf{x}\in S}\rho_{s}(\mathbf{x})\). Given a lattice \(\mathcal{L}\), the _discrete Gaussian_\(\mathcal{D}_{\mathcal{L},s}\) is the distribution over \(\mathcal{L}\) such that the probability of a vector \(\mathbf{y}\in\mathcal{L}\) is proportional to \(\rho_{s}(\mathbf{y})\): \[\Pr_{X\leftarrow\mathcal{D}_{\mathcal{L},s}}[X=\mathbf{y}]=\frac{\rho_{s}( \mathbf{y})}{\rho_{s}(\mathcal{L})}.\] Using standard subgaussian tail bounds, one can show the following claim. **Claim 6.1**.: _Let \(m\in\mathbb{N}\), \(\sigma>0\), then it holds that:_ \[\Pr_{\mathbf{e}\leftarrow\mathcal{D}_{\mathbb{Z}^{m},\sigma}}[\|\mathbf{e}\|> m\sigma]<\exp(-\tilde{\Omega}(m)).\] Theorem 4.1 of [1] gives an efficient algorithm to sample from the discrete Gaussian (given a suitable basis of the lattice). Now we recall the Learning with Errors (LWE) distribution. **Definition 6.2**.: _Let \(\sigma=\sigma(n)\in(0,1)\). For \(\mathbf{s}\in\mathbb{Z}_{p}^{n}\), the LWE distribution \(A_{\mathbf{s},p,\sigma}\) over \(\mathbb{Z}_{p}^{n}\times\mathbb{Z}_{p}\) is sampled by independently choosing \(\mathbf{a}\) uniformly at random from \(\mathbb{Z}_{p}^{n}\), and \(e\leftarrow\mathcal{D}_{\mathbb{Z},\sigma}\), and outputting \((\mathbf{a},\langle\mathbf{a},\mathbf{s}\rangle+e\mod p)\). The LWE assumption \(\mathsf{LWE}_{n,m,\sigma,p}\) states that \(m\) samples from \(A_{\mathbf{s},p,\sigma}\) for a randomly chosen \(\mathbf{s}\leftarrow\mathbb{Z}_{p}^{n}\) are indistinguishable from \(m\) uniformly random vectors in \(\mathbb{Z}_{p}^{n+1}\)._ ### Trapdoor Sampling for LWE We will need the following definition of a lattice trapdoor [12, 13, 14]. For \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\), we define the rank \(m\) lattice \[\mathcal{L}^{\perp}(\mathbf{A})=\{\mathbf{z}\in\mathbb{Z}^{m}\;:\;\mathbf{A} \mathbf{z}=\mathbf{0}\pmod{q}\}\;.\] A lattice trapdoor for \(\mathbf{A}\) is a set of short linearly independent vectors in \(\mathcal{L}^{\perp}(\mathbf{A})\). **Definition 6.3**.: _A matrix \(\mathbf{T}\in\mathbb{Z}^{m\times m}\) is a \(\beta\)-good lattice trapdoor for a matrix \(\mathbf{A}\in\mathbb{Z}_{q}^{n\times m}\) if_ 1. \(\mathbf{AT}=\mathbf{0}\pmod{q}\)_._ 2. _For each column vector_ \(\mathbf{t}_{i}\) _of_ \(\mathbf{T}\)_,_ \(\|\mathbf{t}_{i}\|_{\infty}\leq\beta\)_._ 3. 
\(\mathbf{T}\) _has rank_ \(m\) _over_ \(\mathbb{R}\)_._ **Theorem 6.4**.: _[_12_, 13_]_ _There is an efficient algorithm \(\mathsf{GenTrap}\) that, on input \(1^{n},q,m=\Omega(n\log q)\), outputs a matrix \(\mathbf{A}\) distributed statistically close to uniformly on \(\mathbb{Z}_{q}^{n\times m}\), and a \(O(m)\)-good lattice trapdoor \(\mathbf{T}\) for \(\mathbf{A}\)._ _Moreover, there is an efficient algorithm \(\mathsf{INVERT}\) that, on input \((\mathbf{A},\mathbf{T})\) and \(\mathbf{s}^{\top}\mathbf{A}+\mathbf{e}^{\top}\) where \(\|\mathbf{e}\|\leq q/(C_{T}\sqrt{n\log q})\) and \(C_{T}\) is a universal constant, returns \(\mathbf{s}\) and \(\mathbf{e}\) with overwhelming probability over \((\mathbf{A},\mathbf{T})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\)._ **Lemma 6.5** (Noise Smudging, [13]).: _Let \(y,\sigma>0\). Then, the statistical distance between the distribution \(\mathcal{D}_{\mathbb{Z},\sigma}\) and \(\mathcal{D}_{\mathbb{Z},\sigma+y}\) is at most \(y/\sigma\)._ **Lemma 6.6** (Leftover Hash Lemma, SIS version).: _If \(m=\Omega(n\log q)\), and we sample \(\mathbf{A}\leftarrow\mathbb{Z}_{q}^{n\times m}\), \(\mathbf{x}\leftarrow\{0,1\}^{m}\) and \(\mathbf{y}\leftarrow\mathbb{Z}_{q}^{m}\), then:_ \[(\mathbf{A},\mathbf{A}\cdot\mathbf{x})\approx_{q^{-n}}(\mathbf{A},\mathbf{y})\] ### Structural Properties of GSW Homomorphic Encryption Our scheme will build upon the GSW levelled FHE construction [12]. We recall its structure here; for more details, refer to [12]. In the GSW scheme, the public key consists of a matrix \(\mathbf{B}\in\mathbb{Z}_{q}^{n\times m}\) where \(m=\Omega(n\log q)\). This matrix is pseudorandom due to LWE. The secret key is a vector \(\mathbf{s}\in\mathbb{Z}_{q}^{1\times n}\) such that \(\mathbf{s}\cdot\mathbf{B}=\mathbf{e}\) for a small norm error vector \(\mathbf{e}\). Such matrices can be constructed easily using the LWE assumption. For any such matrix \(\mathbf{B}\), to encrypt a bit \(\mu\in\{0,1\}\), the encryption algorithm produces a ciphertext \(\mathsf{ct}\in\mathbb{Z}_{q}^{n\times m}\) as follows. Sample a random small-norm matrix \(\mathbf{R}\) (where, for instance, each entry is chosen independently from \(\{+1,-1\}\)). Then compute \(\mathsf{ct}=\mathbf{B}\mathbf{R}+\mu\mathbf{G}\) where \(\mathbf{G}\) is the gadget matrix [13]: \(\mathbf{G}=[\mathbf{I}\otimes[2^{0},2^{1},\cdots,2^{\lceil\log q\rceil-1}]\|\mathbf{0}^ {m\times(m-n\lceil\log q\rceil)}]\in\mathbb{Z}^{n\times m}\) and \(\mathbf{I}\) is the \(n\times n\) identity. \(\mathbf{G}\) converts a binary representation of a vector back to its original vector representation over the field \(\mathbb{Z}_{q}\); the associated (non-linear) inverse operation \(\mathbf{G}^{-1}\) converts vectors over \(\mathbb{Z}_{q}\) to their binary representation. Note that \(\mathbf{B}\mathbf{R}\) is once again an LWE matrix under the same secret \(\mathbf{s}\), and \(\mathbf{s}\mathbf{B}\mathbf{R}\) has low norm; therefore \(\mathbf{s}\cdot\mathsf{ct}\approx\mu\cdot\mathbf{s}\mathbf{G}\), which can be used to learn \(\mu\). One can compute a NAND operation as follows. Say we have \(\mathsf{ct}_{1}=\mathbf{B}\mathbf{R}_{1}+\mu_{1}\mathbf{G}\) and \(\mathsf{ct}_{2}=\mathbf{B}\mathbf{R}_{2}+\mu_{2}\mathbf{G}\). One can compute the bit-decomposition \(\mathbf{G}^{-1}(\mathsf{ct}_{2})\) of \(\mathsf{ct}_{2}\), which simply expands \(\mathsf{ct}_{2}\) component-wise by replacing every coordinate of \(\mathsf{ct}_{2}\) by its binary decomposition vector of size \(\lceil\log_{2}q\rceil\). 
Observe that \(\mathbf{G}^{-1}(\mathsf{ct}_{2})\) is a low-norm matrix in \(\mathbb{Z}_{q}^{\lceil n\log q\rceil\times\lceil n\log q\rceil}\). Then, one can compute \(\mathsf{ct}_{\times}=\mathsf{ct}_{1}\mathbf{G}^{-1}(\mathsf{ct}_{2})\). This yields a ciphertext \(\mathsf{ct}_{\times}=\mathbf{B}(\mathbf{R}_{1}\mathbf{G}^{-1}(\mathsf{ct}_{2} )+\mu_{1}\mathbf{R}_{2})+\mu_{1}\mu_{2}\mathbf{G}\). Observe that the randomness \((\mathbf{R}_{1}\mathbf{G}^{-1}(\mathsf{ct}_{2})+\mu_{1}\mathbf{R}_{2})\) has small norm and the invariant is still maintained. Finally, to compute the NAND operation, one simply outputs \(\mathsf{ct}_{\text{NAND}}=\mathbf{G}-\mathsf{ct}_{\times}\), yielding a ciphertext encrypting \(1-\mu_{1}\mu_{2}\). The security proof follows from the fact that, by the leftover hash lemma, \(\mathbf{BR}\) is random provided \(\mathbf{B}\) is chosen at random. By the security of LWE, \(\mathbf{B}\) is pseudorandom, therefore \(\mathbf{BR}\) is also pseudorandom. ## 7 Preliminaries: Noisy Claw-Free Trapdoor Families ### Noisy Trapdoor Claw-Free Families The following definition of NTCF families is taken verbatim from [BCM\({}^{+}\)21, Definition 6]. For a more detailed exposition of the definition, we refer the readers to the prior work. **Definition 7.1** (NTCF family).: _Let \(\lambda\) be a security parameter. Let \(\mathcal{X}\) and \(\mathcal{Y}\) be finite sets. Let \(\mathcal{K}_{\mathcal{F}}\) be a finite set of keys. A family of functions_ \[\mathcal{F}=\left\{f_{k,b}:\mathcal{X}\to\mathcal{D}_{\mathcal{Y}}\right\}_{k \in\mathcal{K}_{\mathcal{F}},b\in\{0,1\}}\] _is called a noisy trapdoor claw-free (NTCF) family if the following conditions hold:_ 1. _Efficient Function Generation._ _There exists an efficient probabilistic algorithm_ \(GEN_{\mathcal{F}}\) _which generates a key_ \(k\in\mathcal{K}_{\mathcal{F}}\) _together with a trapdoor_ \(t_{k}\)_:_ \[(k,t_{k})\leftarrow\mathit{GEN}_{\mathcal{F}}(1^{\lambda}).\] 2. _Trapdoor Injective Pair._ _For all keys_ \(k\in\mathcal{K}_{\mathcal{F}}\) _the following conditions hold._ 1. _Trapdoor: There exists an efficient deterministic algorithm_ \(\mathit{INV}_{\mathcal{F}}\) _such that for all_ \(b\in\{0,1\}\)_,_ \(x\in\mathcal{X}\) _and_ \(y\in\mathsf{SUPP}(f_{k,b}(x))\)_,_ \(\mathit{INV}_{\mathcal{F}}(t_{k},b,y)=x\)_. Note that this implies that for all_ \(b\in\{0,1\}\) _and_ \(x\neq x^{\prime}\in\mathcal{X}\)_,_ \(\mathsf{SUPP}(f_{k,b}(x))\cap\mathsf{SUPP}(f_{k,b}(x^{\prime}))=\emptyset\)_._ 2. _Injective pair: There exists a perfect matching_ \(\mathcal{R}_{k}\subseteq\mathcal{X}\times\mathcal{X}\) _such that_ \(f_{k,0}(x_{0})=f_{k,1}(x_{1})\) _if and only if_ \((x_{0},x_{1})\in\mathcal{R}_{k}\)_._ 3. _Efficient Range Superposition._ _For all keys_ \(k\in\mathcal{K}_{\mathcal{F}}\) _and_ \(b\in\{0,1\}\) _there exists a function_ \(f^{\prime}_{k,b}:\mathcal{X}\to\mathcal{D}_{\mathcal{Y}}\) _such that the following hold._ 1. _For all_ \((x_{0},x_{1})\in\mathcal{R}_{k}\) _and_ \(y\in\mathsf{SUPP}(f^{\prime}_{k,b}(x_{b}))\)_,_ \(\mathit{INV}_{\mathcal{F}}(t_{k},b,y)=x_{b}\) _and_ \(\mathit{INV}_{\mathcal{F}}(t_{k},b\oplus 1,y)=x_{b\oplus 1}\)_._ 2. _There exists an efficient deterministic procedure_ \(\mathsf{CHK}_{\mathcal{F}}\) _that, on input_ \(k\)_,_ \(b\in\{0,1\}\)_,_ \(x\in\mathcal{X}\) _and_ \(y\in\mathcal{Y}\)_, returns_ \(1\) _if_ \(y\in\mathsf{SUPP}(f^{\prime}_{k,b}(x))\) _and_ \(0\) _otherwise. Note that_ \(\mathsf{CHK}_{\mathcal{F}}\) _is not provided the trapdoor_ \(t_{k}\)_._ 3. 
_For every_ \(k\) _and_ \(b\in\{0,1\}\)_,_ \[\mathbb{E}_{x\leftarrow_{U}\mathcal{X}}\big{[}H^{2}(f_{k,b}(x),f^{\prime}_{k,b}( x))\big{]}\leq\mu(\lambda),\] _for some negligible function_ \(\mu(\cdot)\)_. Here_ \(H^{2}\) _is the Hellinger distance. Moreover, there exists an efficient procedure_ \(\text{SAMP}_{\mathcal{F}}\) _that on input_ \(k\) _and_ \(b\in\{0,1\}\) _prepares the state_ \[\frac{1}{\sqrt{|\mathcal{X}|}}\sum_{x\in\mathcal{X},y\in\mathcal{Y}}\sqrt{(f^{ \prime}_{k,b}(x))(y)}\ket{x}\ket{y}.\] 4. _Adaptive Hardcore Bit._ _For all keys_ \(k\in\mathcal{K}_{\mathcal{F}}\) _the following conditions hold, for some integer_ \(w\) _that is a polynomially bounded function of_ \(\lambda\)_._ 1. _For all_ \(b\in\{0,1\}\) _and_ \(x\in\mathcal{X}\)_, there exists a set_ \(G_{k,b,x}\subseteq\{0,1\}^{w}\) _such that_ \(\Pr_{d\leftarrow_{U}\{0,1\}^{w}}[d\notin G_{k,b,x}]\) _is negligible, and moreover there exists an efficient algorithm that checks for membership in_ \(G_{k,b,x}\) _given_ \(k,b,x\) _and the trapdoor_ \(t_{k}\)_._ 2. _There is an efficiently computable injection_ \(\mathcal{J}:\mathcal{X}\to\{0,1\}^{w}\)_, such that_ \(\mathcal{J}\) _can be inverted efficiently on its range, and such that the following holds. If_ \[H_{k} =\big{\{}(b,x_{b},d,d\cdot(\mathcal{J}(x_{0})\oplus\mathcal{J}(x_ {1})))\,|\,\,b\in\{0,1\},(x_{0},x_{1})\in\mathcal{R}_{k},d\in G_{k,0,x_{0}}\cap G _{k,1,x_{1}}\big{\}},\] \[\overline{H}_{k} =\big{\{}(b,x_{b},d,c)\,|\,\,(b,x_{b},d,c\oplus 1)\in H_{k}\big{\}},\] _then for any quantum polynomial-time procedure_ \(\mathcal{A}\) _there exists a negligible function_ \(\mu(\cdot)\) _such that_ \[\left|\Pr_{(k,t_{k})\leftarrow\text{GEN}_{\mathcal{F}}(1^{\lambda})}[ \mathcal{A}(k)\in H_{k}]-\Pr_{(k,t_{k})\leftarrow\text{GEN}_{\mathcal{F}}(1^{ \lambda})}[\mathcal{A}(k)\in\overline{H}_{k}]\right|\leq\mu(\lambda).\] ### (Extended) NTCF from LWE **Theorem 7.2** ([1, Theorem 4.1][16, Theorem 9.2]).: _Assuming the post-quantum hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\), (extended) NTCF families exist._ The following construction description is mostly taken verbatim from [16]. #### 7.2.1 Parameter Choice Let \(\lambda\) be the security parameter. All other parameters are functions of \(\lambda\). Let \(q\geq 2\) be a prime integer. Let \(\ell,n,m,w\geq 1\) be polynomially bounded functions of \(\lambda\), let \(B_{L},B_{V},B_{P},B_{P^{\prime}}\) be Gaussian parameters, and let \(B_{X},B_{S}\) be norm bounds, such that the following conditions hold: 1. \(n=\Omega(\ell\log q)\) and \(m=\Omega(n\log q)\) 2. \(w=n\lceil\log q\rceil\). 3. \(B_{P}=\frac{q}{2C_{T}\sqrt{mn\log q}}\) where \(C_{T}\) is the constant in Theorem 6.4. 4. \(2\sqrt{n}\leq B_{L}\leq B_{V}\leq B_{P}\leq B_{X}\) 5. The ratios \(\frac{B_{V}}{B_{L}},\frac{B_{P}}{B_{V}},\frac{B_{P^{\prime}}}{B_{P}},\frac{B_ {X}}{B_{S}}\) are all super-polynomial in \(\lambda\). 6. We denote by \([B_{X}]\) the set of all integers taking values in \([-B_{X},-B_{X}+1,\cdots,B_{X}-1,B_{X}]\). Similarly for \([B_{S}]\). \(B_{S}\) can in fact be \(\{0,1\}\). #### 7.2.2 Noisy Trapdoor Claw-Free Families (2-to-1 Mode) Let \(\mathcal{X}=\mathbb{Z}_{q}^{n}\) and \(\mathcal{Y}=\mathbb{Z}_{q}^{m}\). The key space is \(\mathbb{Z}_{q}^{n\times m}\times\mathbb{Z}_{q}^{m}\). 
For \(b\in\{0,1\}\), \(\mathbf{x}\in\mathcal{X}\) and key \(k=(\mathbf{A},\mathbf{s}\mathbf{A}+\mathbf{e})\) where \((\mathbf{A},\mathbf{t}\mathbf{d}_{A})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\), \(\mathbf{s}\leftarrow\mathbb{Z}_{q}^{n},\mathbf{e}\leftarrow\mathcal{D}_{ \mathbb{Z}_{q}^{m},B_{L}}\), the density \(f_{k,b}(\mathbf{x})\) is defined as: \[\forall\mathbf{y}\in\mathcal{Y}:(f_{k,b}(\mathbf{x}))(\mathbf{y})=\mathcal{D}_{ \mathbb{Z}_{q}^{m},B_{P}}(\mathbf{y}-\mathbf{x}\mathbf{A}-b\cdot\mathbf{s} \mathbf{A})\] It follows that: \[\mathrm{SUPP}(f_{k,0}(\mathbf{x})) =\{\mathbf{y}=\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}\,|\,\|\mathbf{ e}^{\prime}\|\leq B_{P}\sqrt{m}\}\] \[\mathrm{SUPP}(f_{k,1}(\mathbf{x})) =\{\mathbf{y}=\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}+\mathbf{s }\mathbf{A}\,|\,\|\mathbf{e}^{\prime}\|\leq B_{P}\sqrt{m}\}\] \((\mathbf{x}_{0},\mathbf{x}_{1})\) is an injective pair with \(f_{k,0}(\mathbf{x}_{0})=f_{k,1}(\mathbf{x}_{1})\) if and only if \(\mathbf{x}_{0}=\mathbf{x}_{1}+\mathbf{s}\). #### 7.2.3 Noisy Trapdoor Injective Families (Injective Mode) We now describe the trapdoor injective functions, or informally, the "injective mode" of trapdoor claw-free functions. Let \(\mathcal{X}=\mathbb{Z}_{q}^{n}\) and \(\mathcal{Y}=\mathbb{Z}_{q}^{m}\). The key space is \(\mathbb{Z}_{q}^{n\times m}\times\mathbb{Z}_{q}^{m}\). For \(b\in\{0,1\}\), \(\mathbf{x}\in\mathcal{X}\) and key \(k=(\mathbf{A},\mathbf{u})\), where \((\mathbf{A},\mathrm{td}_{A})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q)\), \(\mathbf{u}\) is sampled uniformly at random subject to the condition that there _does not_ exist \(\mathbf{s},\mathbf{e}\) such that \(\mathbf{s}^{\top}\mathbf{A}+\mathbf{e}^{\top}=\mathbf{u}\) and \(\|\mathbf{e}\|\leq\frac{q}{C_{T}\sqrt{n\log q}}\), which happens with all but negligible probability. The density \(g_{k,b}(\mathbf{x})\) is defined as: \[\forall\mathbf{y}\in\mathcal{Y}:(g_{k,b}(\mathbf{x}))(\mathbf{y})=\mathcal{D} _{\mathbb{Z}_{q}^{m},B_{P}}(\mathbf{y}-\mathbf{x}\mathbf{A}-b\cdot\mathbf{u})\] The injective trapdoor functions \(g_{k,b}\) are supported as follows: \[\mathrm{SUPP}(g_{k,0}(\mathbf{x})) =\{\mathbf{y}=\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}\,|\,\| \mathbf{e}^{\prime}\|\leq B_{P}\sqrt{m}\}\] \[\mathrm{SUPP}(g_{k,1}(\mathbf{x})) =\{\mathbf{y}=\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}+\mathbf{ u}\,|\,\|\mathbf{e}^{\prime}\|\leq B_{P}\sqrt{m}\}\] \(\mathrm{SUPP}(g_{k,0}(\mathbf{x}))\) and \(\mathrm{SUPP}(g_{k,1}(\mathbf{x}))\) are disjoint as long as \(B_{P}\leq\frac{q}{2C_{T}\sqrt{mn\log q}}\). There is also an inversion function in the injective mode: \(\mathsf{INV}(\mathrm{td},b,\mathbf{y})\rightarrow\mathbf{x}\) gives the unique \(\mathbf{x}\) on input \((\mathrm{td},b,\mathbf{y})\). **Lemma 7.3**.: _[_12_]_ _The 2-to-1 mode and injective mode are computationally indistinguishable under \(\mathsf{LWE}_{n,m,q,B_{L}}\)._ #### 7.2.4 Efficient Range Preparation for NTCF with Small LWE Secrets We refer the reader to Section 4.3 of [1] for a detailed description of the \(\mathrm{SAMP}_{\mathsf{LWE}}\) procedure that efficiently prepares the claw state \(\frac{1}{\sqrt{2}}(|0,\mathbf{x}_{0}\rangle+|1,\mathbf{x}_{1}\rangle)\) together with \(\mathbf{y}\). We describe it here briefly for the sake of coherence in presentation. In order for the security parameters in our reduction to go through, we deviate slightly from the exact parameters on \(\mathcal{X}\) and \(\mathbf{s}\) in [1]. We work with small secrets: \(\mathcal{X}=[B_{X}]^{n}\) and \(\mathbf{s}\in[B_{S}]^{n}\). 
At the first step, the procedure creates the following superposition: \[\sum_{\mathbf{e}^{\prime}\in\mathbb{Z}_{q}^{m}}\sqrt{\mathcal{D}_{\mathbb{Z}_ {q}^{m},B_{P}}(\mathbf{e}^{\prime})}\,|\mathbf{e}^{\prime}\rangle\] In step 2 of \(\mathrm{SAMP}_{\mathsf{LWE}}\), we prepare a uniform superposition over \(\mathbf{x}\in[B_{X}]^{n}\) and \(b\in\{0,1\}\): \[\frac{1}{\sqrt{2|B_{X}|^{n}}}\sum_{b\in\{0,1\},\mathbf{x},\mathbf{e}^{\prime}} \sqrt{\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P}}(\mathbf{e}^{\prime})}\left|b,\mathbf{x }\right>\left|\mathbf{e}^{\prime}\right>\] The rest of the steps are the same. In step 3, we apply the key \((\mathbf{A},\mathbf{u})\), controlled by the bit \(b\) indicating whether to use \(\mathbf{u}\), where \(\mathbf{u}=\mathbf{s}\mathbf{A}+\mathbf{e}\) in the 2-to-1 setting and \(\mathbf{u}\stackrel{{\$}}{{\leftarrow}}\mathbb{Z}_{q}^{m}\) in the injective mode. We get the state: \[\frac{1}{\sqrt{2|B_{X}|^{n}}}\sum_{b\in\{0,1\},\mathbf{x},\mathbf{e}^{\prime} }\sqrt{\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P}}(\mathbf{e}^{\prime})}\left|b,\mathbf{ x}\right>\left|\mathbf{e}^{\prime}\right>\left|\mathbf{x}\mathbf{A}+\mathbf{e}^{ \prime}+b\cdot\mathbf{u}\right>\] Next, we measure the last register to obtain \(\mathbf{y}=\mathbf{x}_{b}\mathbf{A}+\mathbf{e}^{\prime}+b\cdot \mathbf{s}\mathbf{A}\). We can then uncompute the register containing \(\mathbf{e}^{\prime}\) using the information in the register containing \(\mathbf{x}\), the key \((\mathbf{A},\mathbf{u})\) and the last register. It is easy to observe that the efficient range preparation in [1, Section 4.3] and the acquisition of the claw state also work in our setting, with our choice of parameters having \(B_{X}/B_{S}\) superpolynomially large, or simply letting \(B_{S}=\{0,1\}\) and \(B_{X}\) superpolynomial. With probability \(1-\mathsf{negl}(\lambda)\), when one measures the image register to obtain a value \(\mathbf{y}\), we obtain the state \(\frac{1}{\sqrt{2}}(\left|0,\mathbf{x}_{0}\right>+\left|1,\mathbf{x}_{1}\right>)\) where \(f_{k,0}(\mathbf{x}_{0})=f_{k,1}(\mathbf{x}_{1})=\mathbf{y}\). ### Parallel Repetition of An NTCF-based Protocol We first define the single-instance game from [14]. The game is abstracted as a "1-of-2" puzzle with "2-of-2 soundness", where the verifier randomly asks the prover to output a preimage \(\mathbf{x}\in\mathcal{X}\) or an adaptive hardcore bit for the same image \(\mathbf{y}\in\mathcal{Y}\). **Definition 7.4** (1-of-2 Puzzle from NTCF [14]).: _The protocol proceeds as follows, using the notations from Section 7.1._ * _The verifier samples a key_ \((k,\mathsf{td})\leftarrow\mathsf{GEN}_{\mathcal{F}}(1^{\lambda})\) _and sends_ \(k\) _to the prover. The verifier keeps the trapdoor_ \(\mathsf{td}\)_._ * _The prover sends back a committed image value_ \(\mathbf{y}\)_._ * _The verifier samples a random bit_ \(\delta\in\{0,1\}\) _and sends_ \(\delta\) _to the prover._ * _If_ \(\delta=0\)_, the prover sends back some_ \(\mathbf{x}\in\mathcal{X}\)_; else if_ \(\delta=1\)_, the prover sends back a string_ \((c,\mathbf{d})\)_._ * _The verifier does the following checks on each_ \((\mathbf{y},\mathbf{x})\) _or_ \((\mathbf{y},c,\mathbf{d})\)_:_ * _When_ \(\delta=0\)_: Check_ \(\mathbf{x}\in\mathsf{INV}(\mathsf{td},b\in\{0,1\},\mathbf{y})\)__7_._ Footnote 7: This step can also be performed publicly using \(\mathsf{CHK}_{\mathcal{F}}\). * _When_ \(\delta=1\)_: Find both_ \(\mathbf{x}_{0},\mathbf{x}_{1}\) _using_ \(\mathsf{INV}(\mathsf{td},b\in\{0,1\},\mathbf{y})\)_. 
Check if_ \(c=\mathbf{d}\cdot(\mathcal{J}(\mathbf{x}_{0})\oplus\mathcal{J}(\mathbf{x}_{1}))\)_._ [14] showed the following properties for the above protocol using the LWE-based NTCF from [1]. **1-of-2 Completeness:** Any BQP prover will answer one of the challenges for \(\delta=0\) or \(\delta=1\) with probability 1. **2-of-2 Soundness:** The 2-of-2 soundness error in the above protocol is the probability that a prover can provide both the 0-challenge answer \(\mathbf{x}\) and the 1-challenge answer \((c,\mathbf{d})\) correctly. The above protocol has 2-of-2 soundness error \(1/2\) for any BQP prover [14, 1]. **Parallel Repetition.** We now describe a special type of parallel-repeated protocol based on the NTCF. In this protocol, we only consider the "2-of-2" setting: the verifier samples multiple keys independently; for every single key, the verifier simply asks the prover to provide both the answer to the 0-challenge and the answer to the 1-challenge. Its parallel repetition soundness was shown in [14]. **Definition 7.5** (Parallel-Repeated 2-of-2 NTCF-protocol).: _The protocol proceeds as follows, using the notations from Section 7.1._ * _The verifier samples_ \(\ell\) _keys_ \((k_{i},\mathsf{td}_{i})\leftarrow\mathsf{GEN}_{\mathcal{F}}(1^{\lambda}),i\in[ \ell]\) _independently and sends_ \(\{k_{i}\}_{i\in[\ell]}\) _to the prover. The verifier keeps the trapdoors_ \(\{\mathsf{td}_{i}\}_{i\in[\ell]}\)_._ * _The prover sends back_ \(\ell\) _tuples of values_ \(\{(\mathbf{y}_{i},\mathbf{x}_{i},c_{i},\mathbf{d}_{i})\}_{i\in[\ell]}\)_._ * _The verifier does the following checks on each_ \((\mathbf{y}_{i},\mathbf{x}_{i},c_{i},\mathbf{d}_{i})\) _for_ \(i\in[\ell]\)_:_ * _Find both_ \(\mathbf{x}_{i,0},\mathbf{x}_{i,1}\) _using_ \(\mathsf{INV}(\mathsf{td}_{i},b\in\{0,1\},\mathbf{y}_{i})\)_._ * _Check if_ \(c_{i}=\mathbf{d}_{i}\cdot(\mathcal{J}(\mathbf{x}_{i,0})\oplus\mathcal{J}( \mathbf{x}_{i,1}))\)_._ * _If all the checks pass, the verifier outputs 1; else outputs 0._ **Theorem 7.6** (Parallel Repetition Soundness of NTCF-based Protocol, [14] Theorem 15 rephrased).: _The above protocol has soundness error \(\mathsf{negl}(\ell)\) against any BQP prover._ **Remark 7.7**.: _Note that in our construction in Section 9.2 we let the verifier further check whether \(\mathbf{y}_{i},\mathbf{x}_{i}\) are well-formed. This will not affect the above soundness because it only puts more restrictions on the prover/adversary._ **Remark 7.8**.: _In this work, we only need the soundness of the above simple protocol, where we require the adversary to produce both the "preimage answer" and the "adaptive hardcore bit answer" at the same time. 
Clearly, the "completeness" of the above protocol is not well-defined, but we omit this property in our setting._ _We do not need the more complicated version of repetition in [11] studied in [11]._ ## 8 Secure Key Leasing with Classical Communication: Definition ### Secure Key Leasing PKE with Classical Lessor **Definition 8.1** (Public Key Encryption with Classcal Leaser).: _A PKE with secure key leasing with a classical vendor consists of the algorithms (Setup, KeyGen, Enc, Dec, Delete, VerDel) defined as follows:_ \(\mathsf{Setup}(1^{\lambda})\)_: take input a security parameter \(\lambda\), output a (classical) master public key \(\mathsf{mpk}\) and a (classical) trapdoor \(\mathsf{td}\)._ \(\mathsf{KeyGen}(1^{\lambda})\)_: take as input a (classical) public key \(\mathsf{mpk}\), output a quantum decryption key \(\rho_{\mathsf{sk}}\) and a classical public key \(\mathsf{pk}\)._ \(\mathsf{Enc}(\mathsf{pk},\mu)\)_: given a public key_ \(\mathsf{pk}\) _and a plaintext_ \(\mu\in\{0,1\}\)_, output a ciphertext_ \(\mathsf{ct}\)_._ \(\mathsf{Dec}(\rho_{\mathsf{sk}},\mathsf{ct})\)_: given a quantum state_ \(\rho_{\mathsf{pk}}\) _and a ciphertext_ \(\mathsf{ct}\)_, output a message_ \(\mu\) _and the state_ \(\rho^{\prime}_{\mathsf{sk}}\)__ \(\mathsf{Delete}(\rho_{\mathsf{sk}})\)_: given the quantum state_ \(\rho_{\mathsf{sk}}\)_, output a classical deletion certificate_ \(\mathsf{cert}\)__ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\)_: given a public key_ \(\mathsf{pk}\)_, a classical certificate_ \(\mathsf{cert}\) _and the trapdoor_ \(\mathsf{td}\)_, output_ \(\mathsf{Valid}\) _or_ \(\mathsf{Invalid}\)_._ We refer the readers to Appendix A on a few side comments and remarks about the definition. Interactive Key GenerationIn our actual classical lessor protocol, the lessor and lessee run an interactive protocol \(\mathsf{InKeyGen}\): * \(\mathsf{InKeyGen}(1^{\lambda})\): an interactive protocol that takes in a security parameter \(1^{\lambda}\); \(\mathcal{A}\) interacts with the challenger (classically) and they either output a public key \(\mathsf{pk}\), a trapdoor \(\mathsf{td}\) and a quantum secret key \(\rho_{\mathsf{sk}}\) or output a symbol \(\bot\). The necessity of an interactive protocol is to prevent "impostors" who sees the master public key and generate public key-decryption key pairs on their own without consent of the lessor. See Appendix A for details. This security is orthogonal to our security defined in Definition 8.2 and Definition 8.4 because our adversary implicitly commits to one \(\mathsf{pk}\). CorrectnessA PKE Scheme with secure quantum key leasing \((\mathsf{Setup},\mathsf{KeyGen},\mathsf{Enc},\mathsf{Dec},\mathsf{Delete},\mathsf{ VerDel})\) satisfies correctness if the following hold. Decryption Correctness: There exists a negligible function \(\mathsf{negl}(\cdot)\), for all \(\lambda\in\mathbb{N}\), for all \(\mu\in\mathcal{M}\): \[\Pr\left[\mathsf{Dec}(\rho_{\mathsf{sk}},\mathsf{ct})=\mu:(\mathsf{pk},\rho_ {\mathsf{sk}})\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\right]\geq 1-\mathsf{negl}(\lambda)\] Reusabilitythe above decryption correctness should hold for an arbitrary polynomial number of uses. 
Verifying Deletion Correctness: There exists a negligible function \(\mathsf{negl}(\cdot)\) such that for all \(\lambda\in\mathbb{N}\): \[\Pr\left[\mathsf{Valid}\leftarrow\mathsf{VerDel}(\mathsf{pk},\mathsf{td}, \mathsf{cert}):(\mathsf{mpk},\mathsf{td})\leftarrow\mathsf{Setup}(1^{\lambda}),\,(\mathsf{pk},\rho_{\mathsf{sk}})\leftarrow\mathsf{KeyGen}( \mathsf{mpk}),\,\mathsf{cert}\leftarrow\mathsf{Delete}(\rho_{\mathsf{sk}})\right]\geq 1-\mathsf{negl}(\lambda)\] **IND-SKL-PKE Security.** We give a "classically friendly" security definition, the same as the one used in [1, 1], except that we have a classical lessor. In Section 8.2, we then present a "strong" security definition; the latter is the one we actually use in the proof, and it implies the following IND-PKE-SKL security. **Definition 8.2** (IND-PKE-SKL Security (Classical Client)).: _The experiment IND-PKE-SKL\((\mathcal{A},1^{\lambda},b\in\{0,1\})\) between a challenger and the adversary \(\mathcal{A}\) is defined as follows:_ * _The challenger runs_ \(\mathsf{Setup}(1^{\lambda})\rightarrow(\mathsf{mpk},\mathsf{td})\)_. It sends_ \(\mathsf{mpk}\) _to the adversary_ \(\mathcal{A}\)_._ \(\mathcal{A}\) _computes_ \((\mathsf{pk},\rho_{\mathsf{sk}})\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) _and publishes_ \(\mathsf{pk}\)_._ * _The challenger requests that_ \(\mathcal{A}\) _runs the deletion algorithm_ \(\mathsf{Delete}\)_._ \(\mathcal{A}\) _returns a deletion certificate_ \(\mathsf{cert}\)_._ * _The challenger runs_ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) _and continues if_ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) _outputs_ \(\mathsf{Valid}\)_; else the challenger outputs_ \(\bot\) _and aborts._ * \(\mathcal{A}\) _submits a plaintext_ \(\mu\in\{0,1\}^{\ell}\) _to the challenger._ * _The challenger proceeds according to the bit_ \(b\) _given as a parameter of the experiment._ * _If_ \(b=0\)_, the challenger sends back the ciphertext_ \(\mathsf{ct}\leftarrow\mathsf{Enc}(\mathsf{pk},\mu)\)_. If_ \(b=1\)_, the challenger sends a random ciphertext from the space of all possible ciphertexts for_ \(\ell\)_-bit messages,_ \(\mathsf{ct}\xleftarrow{\$}\mathcal{C}\)_._ * _Output_ \(\mathcal{A}\)_'s guess for the bit_ \(b\)_,_ \(b^{\prime}\)_._ _A PKE scheme with Secure Key Leasing and fully classical communication (\(\mathsf{Setup},\mathsf{KeyGen}\), \(\mathsf{Enc}\), \(\mathsf{Dec}\), \(\mathsf{Delete}\), \(\mathsf{VerDel}\)) is secure if for every \(\mathsf{QPT}\) adversary \(\mathcal{A}\) there exists a negligible function \(\mathsf{negl}(\cdot)\) such that for all \(\lambda\in\mathbb{N}\):_ \[\left|\Pr\left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=0)=1\right]-\Pr \left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=1)=1\right]\right|\leq \mathsf{negl}(\lambda)\] **Remark 8.3**.: _Regarding the security in Definition 8.2: in other words, in order to win, the adversary \(\mathcal{A}\) needs to achieve both of the following for some noticeable \(\epsilon_{1},\epsilon_{2}\):_ 1. \(\left|\Pr\left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=0)=1\right]-\Pr \left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=1)=1\right]\right|\geq \epsilon_{1}(\lambda)\) _and_ 2. 
\(\Pr[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b\in\{0,1\})\neq\bot]\geq \epsilon_{2}(\lambda)\)__ _We need the second inequality above to hold because in the case where \(\Pr[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b\in\{0,1\})=\bot]\geq 1-\mathsf{ negl}_{2}(\lambda)\), for some negligible \(\mathsf{negl}_{2}(\cdot)\), the probabilities \(\Pr\left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=0)=1\right]\) and \(\Pr\left[\text{IND-PKE-SKL}(\mathcal{A},1^{\lambda},b=1)=1\right]\) will also have negligible difference._ ### Strong SKL-PKE Security: Threshold Implementation Version In this section, we define a security notion we call Strong SKL-PKE, which is described via the measurement \(\mathsf{TI}\) from Section 5. We show that it implies the regular security notion in the previous section. To define the strong (implementable) security, we first define what it means to test the success probability of a quantum decryptor. **Definition 8.4** (Testing a quantum decryptor).: _Let \(\gamma\in[0,1]\). Let \(\mathsf{pk}\) be a public key and \(\mu\) be a message. We refer to the following procedure as a test for a \(\gamma\)-good quantum decryptor with respect to \(\mathsf{pk}\) and \(\mu\):_ * _The procedure takes as input a quantum decryptor_ \(\rho\)_._ * _Let_ \(\mathcal{P}=(P,I-P)\) _be the following mixture of projective measurements (in terms of Definition_ 5.5_) acting on some quantum state_ \(\rho\)_:_ * _Compute_ \(\mathsf{ct}_{0}\leftarrow\mathsf{Enc}(\mathsf{pk},\mu)\)_, the encryption of message_ \(\mu\in\{0,1\}\)_._ * _Compute_ \(\mathsf{ct}_{1}\leftarrow\mathcal{C}\)_, a random ciphertext from the space of all possible ciphertexts for_ \(1\)_-bit messages._ * _Sample a uniform_ \(b\leftarrow\{0,1\}\)_._ * _Run the quantum decryptor_ \(\rho\) _on input_ \(\mathsf{ct}_{b}\)_. Check whether the outcome is_ \(b\)_. If so, output_ \(1\)_, otherwise output_ \(0\)_._ * _Let_ \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P})\) _be the threshold implementation of_ \(\mathcal{P}\) _with threshold value_ \(\frac{1}{2}+\gamma\)_, as defined in Definition_ 5.3_. Run_ \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P})\) _on_ \(\rho\)_, and output the outcome. If the output is_ \(1\)_, we say that the test passed, otherwise the test failed._ By Lemma 5.4, we have the following corollary. **Corollary 8.5** (\(\gamma\)-good Decryptor).: _Let \(\gamma\in[0,1]\). Let \(\rho\) be a quantum decryptor. Let \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P})\) be the test for a \(\gamma\)-good decryptor defined above. Then, the post-measurement state conditioned on output \(1\) is a mixture of states which are in the span of all eigenvectors of \(P\) with eigenvalues at least \(1/2+\gamma\)._ Now we are ready to define the strong \(\gamma\)-secure-key-leasing game. **Definition 8.6** (\(\gamma\)-Strong Secure Key Leasing Security Game).: _Let \(\lambda\in\mathbb{N}^{+}\), and \(\gamma\in[0,1]\). The strong \(\gamma\)-PKE-SKL game is the following game between a challenger and an adversary \(\mathcal{A}\)._ 1. _The challenger runs_ \(\mathsf{Setup}(1^{\lambda})\rightarrow(\mathsf{mpk},\mathsf{td})\)_. It sends_ \(\mathsf{mpk}\) _to the adversary_ \(\mathcal{A}\)_._ \(\mathcal{A}\) _computes_ \((\mathsf{pk},\rho_{\mathsf{sk}})\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) _and publishes_ \(\mathsf{pk}\)_._ 2. 
_The challenger requests that_ \(\mathcal{A}\) _runs the deletion algorithm_ \(\mathsf{Delete}(\rho_{\mathsf{sk}})\)_._ \(\mathcal{A}\) _returns a deletion certificate_ \(\mathsf{cert}\) _to the challenger._ 3. _The challenger runs_ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) _and continues if_ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) _returns_ \(\mathsf{Valid}\)_; otherwise (i.e., if_ \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) _returns_ \(\mathsf{Invalid}\)_) it outputs_ \(\bot\) _and aborts._ 4. \(\mathcal{A}\) _outputs a message_ \(\mu\) _and a (possibly mixed) state_ \(\rho_{\mathsf{Delete}}\) _as a quantum decryptor._ 5. _The challenger runs the test for a_ \(\gamma\)_-good decryptor on_ \(\rho_{\mathsf{Delete}}\) _with respect to_ \(\mathsf{pk}\) _and_ \(\mu\)_. The challenger outputs_ \(1\) _if the test passes, otherwise outputs_ \(0\)_._ _We denote by \(\mathsf{StrongSKL}(1^{\lambda},\gamma,\mathcal{A})\) a random variable for the output of the game._ **Definition 8.7** (Strong PKE-SKL Security).: _Let \(\gamma:\mathbb{N}^{+}\rightarrow[0,1]\). A secure key leasing scheme satisfies strong \(\gamma\)-SKL security if for any QPT adversary \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\cdot)\) such that for all \(\lambda\in\mathbb{N}\):_ \[\Pr\left[b=1:b\leftarrow\mathsf{StrongSKL}(1^{\lambda},\gamma( \lambda),\mathcal{A})\right]\leq\mathsf{negl}(\lambda) \tag{3}\] Next, we show that the strong PKE-SKL definition (Definition 8.7) implies the IND-PKE-SKL definition (Definition 8.2). **Claim 8.8**.: _Suppose a secure key leasing scheme satisfies strong \(\gamma\)-SKL security (Definition 8.7) for any inverse polynomial \(\gamma\); then it also satisfies IND-PKE-SKL security (Definition 8.2)._ Proof.: We refer the reader to Appendix A for the proof. ## 9 Secure Key Leasing with Classical Lessor/Client: Construction ### Parameters We present our parameter requirements for the following construction. All the parameters are the same as in Section 7.2.1, with a few new parameters added. Let \(\lambda\) be the security parameter. All other parameters are functions of \(\lambda\). Let \(q\geq 2\) be a prime integer. Let \(\ell,k,n,m,w\geq 1\) be polynomially bounded functions of \(\lambda\), let \(B_{L},B_{V},B_{P},B_{P^{\prime}}\) be Gaussian parameters, and let \(B_{X},B_{S}\) be norm parameters such that the following conditions hold: 1. \(n=\Omega(\ell\log q)\) and \(m=\Omega(n\log q)\) 2. \(k\) can be set to \(\lambda\). 3. \(w=\lceil n\log q\rceil\). 4. \(B_{P}=\frac{q}{2C_{T}\sqrt{m\log q}}\) where \(C_{T}\) is the constant in Theorem 6.4. 5. \(2\sqrt{n}\leq B_{L}\leq B_{V}\leq B_{P}\leq B_{X}\leq B_{P^{\prime}}\) 6. The ratios \(\frac{B_{V}}{B_{L}},\frac{B_{P}}{B_{V}},\frac{B_{P^{\prime}}}{B_{P}},\frac{B_{X} }{B_{S}}\), and \(\frac{B_{P^{\prime}}}{B_{X}^{2}\cdot B_{P}}\) are super-polynomial in \(\lambda\). 7. \(B_{P^{\prime}}\cdot m^{d}\leq q\) where \(d\) is the depth of the circuit in FHE evaluation, to be further discussed in Appendix B. 8. We denote by \([B_{X}]\) the set of all integers taking values in \([-B_{X},\cdots,B_{X}]\). Similarly for \([B_{S}]\). We can also simply take \(B_{S}:=\{0,1\}\) in our scheme. ### Scheme Construction We first present our construction for a secure key leasing protocol. We provide the protocol description where the lessor is completely classical in Section 9.4. 
* \(\mathsf{Setup}(1^{\lambda})\): On input the security parameter \(1^{\lambda}\), the \(\mathsf{Setup}\) algorithm works as follows: * Sample \(k=\lambda\) matrices \(\mathbf{A}_{i}\in\mathbb{Z}_{q}^{n\times m}\) along with their trapdoors \(\mathsf{td}_{i}\) using the procedure \(\mathsf{GenTrap}(1^{n},1^{m},q)\) (Theorem 6.4): \((\mathbf{A}_{i},\mathsf{td}_{i})\stackrel{{\$}}{{\leftarrow}} \mathsf{GenTrap}(1^{n},1^{m},q),\) \(\forall i\in[k]\). * Sample \(\mathbf{s}_{i}\stackrel{{\$}}{{\leftarrow}}[B_{S}]^{n},\forall i \in[k]\) and \(\mathbf{e}_{i}\stackrel{{\$}}{{\leftarrow}}\mathcal{D}_{\mathbb{Z}_{q}^{m },B_{V}},\forall i\in[k]\). * Output \(\mathsf{mpk}=\{f_{i,0},f_{i,1}\}_{i=1}^{k}=\{(\mathbf{A}_{i},\mathbf{s}_{i} \mathbf{A}_{i}+\mathbf{e}_{i})\}_{i=1}^{k}\) and the trapdoor \(\mathsf{td}=\{\mathsf{td}_{i}\}_{i=1}^{k}\). * \(\mathsf{KeyGen}(\mathsf{mpk})\): Take in \(\mathsf{mpk}=\{f_{i,0},f_{i,1}\}_{i=1}^{k}=\{(\mathbf{A}_{i},\mathbf{s}_{i} \mathbf{A}_{i}+\mathbf{e}_{i})\}_{i=1}^{k}\). * Prepare the quantum key of the form \(\rho_{\mathsf{sk}}=\bigotimes_{i=1}^{k}\left(\frac{1}{\sqrt{2}}(|0,\mathbf{x} _{i,0}\rangle+|1,\mathbf{x}_{i,1}\rangle)\right)\) along with \(\{\mathbf{y}_{i}\}_{i\in[k]}\), where \(\mathbf{y}_{i}=f_{i,0}(\mathbf{x}_{i,0})=f_{i,1}(\mathbf{x}_{i,1})\) for each \(i\in[k]\), according to the procedure in Section 7.2.4. Note that \(f_{i,b_{i}}(\mathbf{x}_{i,b_{i}})=\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}+\mathbf{e}_{i}^{ \prime}+b_{i}\cdot\mathbf{s}_{i}\mathbf{A}_{i}\), where \(\mathbf{e}_{i}^{\prime}\stackrel{{\$}}{{ \leftarrow}}\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P}}\) and \(\mathbf{x}_{i,b_{i}}\in[B_{X}]^{n}\), \(\forall i \in[k]\). * Output the public key \(\mathsf{pk}=\{(\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}, \mathbf{y}_{i})\}_{i=1}^{k}\) and the quantum decryption key \(\rho_{\mathsf{sk}}\). * \(\mathsf{Enc}(\mathsf{pk},\mu)\): On input a public key \(\mathsf{pk}=\{(\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}, \mathbf{y}_{i})\}_{i=1}^{k}\) and a plaintext \(\mu\in\{0,1\}\), the algorithm samples \(\mathbf{R}\stackrel{{\$}}{{\leftarrow}}\{0,1\}^{m\times m}\) and computes a ciphertext as follows: \[\mathsf{ct}=\begin{bmatrix}\mathbf{s}_{1}\mathbf{A}_{1}+\mathbf{e}_{1}\\ \mathbf{A}_{1}\\ \cdots\\ \mathbf{s}_{k}\mathbf{A}_{k}+\mathbf{e}_{k}\\ \mathbf{A}_{k}\\ \sum_{i\in[k]}\mathbf{y}_{i}\end{bmatrix}\cdot\mathbf{R}+\mathbf{E}+\mu\cdot \mathbf{G}_{(n+1)k+1}\] where \(\mathbf{G}_{(n+1)k+1}\) is the gadget matrix of dimensions \((nk+k+1)\times m\). \(\mathbf{E}\in\mathbb{Z}_{q}^{(nk+k+1)\times m}\) is a matrix with all rows being \(\mathbf{0}^{m}\) except the last row being \(\mathbf{e}^{\prime\prime}\), where \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{ \prime}}}\). Output \(\mathsf{ct}\). * \(\mathsf{Dec}(\rho_{\mathsf{sk}},\mathsf{ct})\): On input the quantum decryption key \(\rho_{\mathsf{sk}}=\bigotimes_{i=1}^{k}\left(\frac{1}{\sqrt{2}}(|0,\mathbf{x}_{ i,0}\rangle+|1,\mathbf{x}_{i,1}\rangle)\right)\) and a ciphertext \(\mathsf{ct}\in\mathbb{Z}_{q}^{(nk+k+1)\times m}\), the decryption is performed coherently as follows. 
* View the key as a vector of dimension \(1\times(n+1)k\); pad the key with one bit of classical information, the value \(-1\), at the end, to obtain a vector of dimension \(1\times(nk+k+1)\): \[\left[\tfrac{1}{\sqrt{2}}(|0,\mathbf{x}_{1,0}\rangle+|1, \mathbf{x}_{1,1}\rangle)|\cdots|\tfrac{1}{\sqrt{2}}(|0,\mathbf{x}_{k,0} \rangle+|1,\mathbf{x}_{k,1}\rangle)|\,|\!-\!1\rangle\right]\] \[=\frac{1}{\sqrt{2^{k}}}\sum_{\mathbf{b}_{j}\in\{0,1\}^{k}}\!\!| \mathbf{b}_{j,1},\mathbf{x}_{1,\mathbf{b}_{j,1}},\cdots,\mathbf{b}_{j,k}, \mathbf{x}_{k,\mathbf{b}_{j,k}}\rangle\,|\!-\!1\rangle\] Denote the above vector by \(\mathbf{sk}\). * Compute \((-\mathbf{sk})\cdot\mathsf{ct}\cdot\mathbf{G}^{-1}(\mathbf{0}^{nk+k}|\lfloor \tfrac{q}{2}\rfloor)\) coherently and write the result on an additional empty register \(\mathsf{work}\) (a working register other than the one holding \(\mathbf{sk}\)). * In another additional register \(\mathsf{out}\): write \(\mu^{\prime}=0\) if the value in the \(\mathsf{work}\) register is less than \(\tfrac{q}{4}\); write \(\mu^{\prime}=1\) otherwise. Uncompute the \(\mathsf{work}\) register from the previous step using \(\mathbf{sk}\) and \(\mathsf{ct}\). Measure the final \(\mathsf{out}\) register and output the measurement outcome. * \(\mathsf{Delete}(\rho_{\mathsf{sk}})\): On input the quantum decryption key \(\rho_{\mathsf{sk}}\): * For convenience, we name the register holding the state \(\frac{1}{\sqrt{2}}(|0,\mathbf{x}_{i,0}\rangle+|1,\mathbf{x}_{i,1}\rangle)\) in \(\rho_{\mathsf{sk}}\) as register \(\mathsf{reg}_{i}\). * For each register \(\mathsf{reg}_{i},i\in[k]\): apply the invertible function \(\mathcal{J}:\mathcal{X}\to\{0,1\}^{w}\), where \(\mathcal{J}(x)\) returns the binary decomposition of \(x\). Since it is invertible, we can uncompute the original \(\mathbf{x}_{i,b_{i}}\)'s in \(\mathsf{reg}_{i}\) and leave only \(|0,\mathcal{J}(\mathbf{x}_{i,0})\rangle+|1,\mathcal{J}(\mathbf{x}_{i,1})\rangle\). * Apply a quantum Fourier transform over \(\mathbb{F}_{2}\) to all \(k(w+1)\) qubits in registers \(\{\mathsf{reg}_{i}\}_{i\in[k]}\). * Measure in the computational basis to obtain a string \((c_{1},\mathbf{d}_{1},\cdots,c_{k},\mathbf{d}_{k})\in\{0,1\}^{wk+k}\). * \(\mathsf{VerDel}(\mathsf{td},\mathsf{pk},\mathsf{cert})\): On input a deletion certificate \(\mathsf{cert}=\{c_{i},\mathbf{d}_{i}\}_{i=1}^{k}\), a public key \(\mathsf{pk}=\{f_{i,0},f_{i,1},\mathbf{y}_{i}\}_{i\in[k]}\) and the trapdoor \(\mathsf{td}=\{\mathsf{td}_{i}\}_{i\in[k]}\): * Compute \(\mathbf{x}_{i,b_{i}}\leftarrow\mathsf{INV}(\mathsf{td}_{i},b_{i},\mathbf{y}_{i})\) for both \(b_{i}=0,1\). * Check if \(\mathbf{x}_{i,b_{i}}\in[B_{X}]^{n}\) for all \(i\in[k],b_{i}\in\{0,1\}\). If not, output \(\mathsf{Invalid}\). If yes, continue. * Check if \(\|\mathbf{y}_{i}-\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}-b_{i}\cdot\mathbf{s}_{i}\mathbf{A}_{i} \|\leq B_{P}\sqrt{m}\) for all \(i\in[k],b_{i}\in\{0,1\}\). If not, output \(\mathsf{Invalid}\). If yes, continue. * Output \(\mathsf{Valid}\) if \(c_{i}=\mathbf{d}_{i}\cdot(\mathcal{J}(\mathbf{x}_{i,0})\oplus\mathcal{J}( \mathbf{x}_{i,1}))\pmod{2}\), \(\forall i\in[k]\), and \(\mathsf{Invalid}\) otherwise. We defer the proof of correctness to Section 9.3 and the security proof to Section 10. ### Correctness **Decryption Correctness.** The above scheme satisfies decryption correctness. Proof.: Note that we can write out the entire quantum key as \(\rho_{\mathsf{sk}}=\frac{1}{\sqrt{2^{k}}}\sum_{\mathbf{b}_{j}\in\{0,1\}^{k}}| \mathbf{b}_{j,1},\mathbf{x}_{1,\mathbf{b}_{j,1}},\cdots,\mathbf{b}_{j,k}, \mathbf{x}_{k,\mathbf{b}_{j,k}}\rangle\). 
By applying the decryption procedure, we will have:
\[\frac{1}{\sqrt{2^{k}}}\sum_{\mathbf{b}_{j}\in\{0,1\}^{k}}\left|\mathbf{b}_{j,1},\mathbf{x}_{1,\mathbf{b}_{j,1}},\cdots,\mathbf{b}_{j,k},\mathbf{x}_{k,\mathbf{b}_{j,k}}\right\rangle\left|\Big(-\sum_{i\in[k]}\big(\mathbf{x}_{i,\mathbf{b}_{j,i}}\mathbf{A}_{i}+\mathbf{b}_{j,i}(\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i})\big)+\sum_{i\in[k]}\mathbf{y}_{i}\Big)\cdot\mathbf{R}\,\mathbf{G}^{-1}(\mathbf{0}|\lfloor\tfrac{q}{2}\rfloor)+\mathbf{e}^{\prime\prime}\cdot\mathbf{G}^{-1}(\mathbf{0}|\lfloor\tfrac{q}{2}\rfloor)+\lfloor\tfrac{q}{2}\rfloor\cdot\mu\right\rangle_{\text{work}}\left|0\right\rangle_{\text{out}}\]
\[=\frac{1}{\sqrt{2^{k}}}\sum_{\mathbf{b}_{j}\in\{0,1\}^{k}}\left|\mathbf{b}_{j,1},\mathbf{x}_{1,\mathbf{b}_{j,1}},\cdots,\mathbf{b}_{j,k},\mathbf{x}_{k,\mathbf{b}_{j,k}}\right\rangle\left|\Big(\sum_{i\in[k]}(-\mathbf{b}_{j,i}\cdot\mathbf{e}_{i}+\mathbf{e}_{i}^{\prime})\cdot\mathbf{R}+\mathbf{e}^{\prime\prime}\Big)\mathbf{G}^{-1}(\mathbf{0}|\lfloor\tfrac{q}{2}\rfloor)+\lfloor\tfrac{q}{2}\rfloor\cdot\mu\right\rangle_{\text{work}}\left|0\right\rangle_{\text{out}}\]
\[\rightarrow\frac{1}{\sqrt{2^{k}}}\sum_{\mathbf{b}_{j}\in\{0,1\}^{k}}\left|\mathbf{b}_{j,1},\mathbf{x}_{1,\mathbf{b}_{j,1}},\cdots,\mathbf{b}_{j,k},\mathbf{x}_{k,\mathbf{b}_{j,k}}\right\rangle\left|\mu\right\rangle_{\text{out}}\quad\text{(round, write on out, and uncompute work)}\]
Since \(\|\sum_{i}(\mathbf{e}_{i}^{\prime}+\mathbf{e}_{i})\|\leq 2k\cdot\sqrt{m}\cdot B_{P}\), \(\|\mathbf{e}^{\prime\prime}\|\leq\sqrt{m}\cdot B_{P^{\prime}}\), and \(\|(\sum_{i\in[k]}(-\mathbf{b}_{j,i}\cdot\mathbf{e}_{i}+\mathbf{e}_{i}^{\prime})\cdot\mathbf{R}+\mathbf{e}^{\prime\prime})\mathbf{G}^{-1}(\mathbf{0}|\lfloor\tfrac{q}{2}\rfloor)\|\leq\frac{q}{4}\) for every branch \(\mathbf{b}_{j}\in\{0,1\}^{k}\) in the support of the above state, we will obtain \(\mu\) with all but negligible probability on every branch. We write the final output \(\mu\) in the third (rightmost) register and use \(\mathsf{ct}\) to uncompute the second register, recovering \(\rho_{\mathsf{sk}}\).

**Reusability.** Reusability of the decryption key follows from correctness and the gentle measurement lemma [1].

**Deletion Verification Correctness.** The first two steps in \(\mathsf{VerDel}\) pass for any honestly prepared \(\{\mathbf{y}_{i}\}_{i\in[k]}\) with probability \((1-\mathsf{negl}(\lambda))\). The \(\mathsf{Delete}\) procedure operates on the state \(\rho_{\mathsf{sk}}\) as follows. For each \(\frac{1}{\sqrt{2}}(\left|0,\mathbf{x}_{i,0}\right\rangle+\left|1,\mathbf{x}_{i,1}\right\rangle)\), \(i\in[k]\), the \(\mathsf{Delete}\) procedure turns the state into:
\[\frac{1}{\sqrt{2}}\big(\left|0,\mathcal{J}(\mathbf{x}_{i,0})\right\rangle+\left|1,\mathcal{J}(\mathbf{x}_{i,1})\right\rangle\big)\quad\text{after applying }\mathcal{J}(\cdot)\text{ and uncomputing the }\mathbf{x}_{i,b}\text{ register}\]
\[\rightarrow\frac{1}{\sqrt{2^{w+2}}}\sum_{\mathbf{d}_{i}\in\{0,1\}^{w},\,u,b\in\{0,1\}}(-1)^{\mathbf{d}_{i}\cdot\mathcal{J}(\mathbf{x}_{i,b})\oplus ub}\left|u\right\rangle\left|\mathbf{d}_{i}\right\rangle\quad\text{after the QFT}\]
\[=\frac{1}{\sqrt{2^{w}}}\sum_{\mathbf{d}_{i}\in\{0,1\}^{w}}(-1)^{\mathbf{d}_{i}\cdot\mathcal{J}(\mathbf{x}_{i,0})}\left|\mathbf{d}_{i}\cdot(\mathcal{J}(\mathbf{x}_{i,0})\oplus\mathcal{J}(\mathbf{x}_{i,1}))\right\rangle\left|\mathbf{d}_{i}\right\rangle\]
A measurement in the computational basis will give the result \((c_{i}=\mathbf{d}_{i}\cdot(\mathcal{J}(\mathbf{x}_{i,0})\oplus\mathcal{J}(\mathbf{x}_{i,1})),\mathbf{d}_{i})\), for each \(i\in[k]\). Correctness of deletion verification thus follows.
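For concreteness, the following is a minimal Python/NumPy sketch of the purely classical pieces of the scheme: the \(\mathsf{Setup}\) sampling, the ciphertext assembly in \(\mathsf{Enc}\), and the final certificate equation checked by \(\mathsf{VerDel}\) (the range and norm checks are omitted). All quantum steps, \(\mathsf{GenTrap}\), \(\mathsf{INV}\), and the gadget matrix are abstracted into caller-supplied callables/parameters, and the noise magnitudes are toy stand-ins for the discrete Gaussians; this is an illustrative sketch of the data flow, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def setup(k, n, m, q, B_s, B_V, gen_trap):
    """Sample k LWE instances (A_i, s_i A_i + e_i); gen_trap abstracts GenTrap."""
    mpk, td = [], []
    for _ in range(k):
        A, trap = gen_trap(n, m, q)                   # A in Z_q^{n x m}, with trapdoor
        s = rng.integers(0, B_s, size=(1, n))         # s_i <- [B_s]^n (range convention ours)
        e = rng.integers(-B_V, B_V + 1, size=(1, m))  # toy stand-in for D_{Z_q^m, B_V}
        mpk.append((A, (s @ A + e) % q))
        td.append(trap)
    return mpk, td

def enc(pk, mu, q, G):
    """Assemble ct = M . R + E + mu * G; the gadget matrix G is caller-supplied."""
    m = pk[0][0].shape[1]
    rows = []
    for (A, u, y) in pk:                              # pk_i = (A_i, s_i A_i + e_i, y_i)
        rows.append(u)                                # row: u_i = s_i A_i + e_i
        rows.append(A)                                # rows: A_i
    rows.append(sum(y for (_, _, y) in pk) % q)       # last row: sum_i y_i
    M = np.vstack(rows)                               # shape (nk + k + 1) x m
    R = rng.integers(0, 2, size=(m, m))
    E = np.zeros_like(M)
    E[-1] = rng.integers(-5, 6, size=m)               # toy stand-in for e'' ~ B_{P'}
    return (M @ R + E + mu * G) % q

def verdel_cert_check(cert, x0_bits, x1_bits):
    """Final VerDel check: c_i = d_i . (J(x_{i,0}) xor J(x_{i,1})) mod 2, all i."""
    for (c, d), j0, j1 in zip(cert, x0_bits, x1_bits):
        if c != int(np.dot(d, (j0 ^ j1)) % 2):
            return "Invalid"
    return "Valid"
```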
**Remark 9.1**.: _Note that in our construction, one can also decrypt using \(\mathsf{td}\). It is easy to define a decryption procedure using \(\mathsf{td}\) via the \(\mathsf{INV}(\mathsf{td},b,\mathbf{y})\) procedure in Section 7.2. We omit the details here._

### Secure Key Leasing with Classical Lessor: Protocol

Our one-message key generation protocol and key revocation/deletion protocol follow from our construction in Section 9.2 and use a post-quantum digital signature scheme.

* **Interactive Key Generation**:
  * The lessor runs \(\mathsf{Setup}\) and sends the classical \(\mathsf{mpk}=\{\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}\}_{i\in[k]}\) to the lessee. It keeps the trapdoor \(\mathsf{td}\) private.
  * The lessee runs \(\mathsf{KeyGen}(\mathsf{mpk})\) to obtain \(\rho_{\mathsf{sk}}=\bigotimes_{i=1}^{k}\frac{1}{\sqrt{2}}(|0,\mathbf{x}_{i,0}\rangle+|1,\mathbf{x}_{i,1}\rangle)\) and \(\{\mathbf{y}_{i}\}_{i\in[k]}\). It sends back the public key \(\mathsf{pk}=(\mathsf{mpk},\{\mathbf{y}_{i}\}_{i\in[k]})\).
  * The lessor produces a signature \(\sigma\) on \(\mathsf{pk}\) and publishes \(\mathsf{pk}^{\prime}=(\mathsf{mpk},\{\mathbf{y}_{i}\}_{i\in[k]},\sigma)\) as the final public key.
* **Encryption**: The encryption algorithm \(\mathsf{Enc}^{\prime}(\mathsf{pk}^{\prime},\mu)\) accordingly has a first step of verifying the signature \(\sigma\) in \(\mathsf{pk}^{\prime}\); if it is not valid, it aborts encryption and outputs \(\bot\); otherwise it runs the encryption of Section 9.2.
* **Key Revocation/Deletion**:
  * The lessor sends a message to the lessee asking it to delete.
  * The lessee runs \(\mathsf{Delete}(\rho_{\mathsf{sk}})\) to obtain \(\mathsf{cert}=(c_{1},\mathbf{d}_{1},\cdots,c_{k},\mathbf{d}_{k})\). It sends \(\mathsf{cert}\) to the lessor.
  * The lessor runs \(\mathsf{VerDel}(\mathsf{td},\mathsf{pk},\mathsf{cert})\) and outputs \(\mathsf{Valid}\) or \(\mathsf{Invalid}\).

As discussed, the interaction in the Interactive Key Generation protocol addresses the impostor security discussed in Appendix A; we also prove its security there.

## 10 Security Proof for SKL-PKE

**Theorem 10.1** (Security).: _Assuming the post-quantum subexponential hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\) with the parameter choice in Section 7.2.1, the construction in Section 9.2 satisfies the \(\gamma\)-strong SKL-PKE security defined in Definition 8.7 for any noticeable \(\gamma\)._

To prove security we consider two hybrids. The first hybrid, \(\mathbf{Hybrid}_{0}\), corresponds to the real security game, whereas \(\mathbf{Hybrid}_{1}\) corresponds to a modified game. We will show that these hybrids are indistinguishable, so the winning probabilities in the two hybrids are negligibly close, and then show that in the final hybrid \(\mathbf{Hybrid}_{1}\) the winning probability must be negligible. We will prove the following statements:

1. The probabilities of winning in \(\mathbf{Hybrid}_{0}\) and \(\mathbf{Hybrid}_{1}\) differ by a negligible amount if \(\delta\) is set to \(\lambda^{-\omega(1)}\) for a tiny super-constant \(\omega(1)\) (we can in fact set it to be exponentially small).
2. We will then prove that if \(\mathsf{LWE}\) satisfies subexponential security, then for the set parameters \(\mathsf{LWE}_{n,m,q,B_{L}}\) the probability of winning in \(\mathbf{Hybrid}_{1}\) is negligible.

Together these claims imply that the probability of winning in \(\mathbf{Hybrid}_{0}\) is negligible.
Claim 1 follows from Lemma 5.7: if the inefficient \(\gamma\)-good decryptor test outputs \(1\) with probability \(p\) on a state \(\rho\), then the efficient \(\mathsf{ATI}_{\mathcal{P},\mathcal{D},1/2+\gamma-\epsilon}^{\epsilon,\delta}\) will output \(1\) on the state \(\rho\) with probability at least \(p-\delta\). Since \(\delta\) is negligible, \(\mathcal{A}\)'s overall winning probability changes by at most a negligible amount.

**Hybrid 0** (Figure 1): In this hybrid, the adversary and the challenger play the security game as in Definition 8.6.

1. The challenger runs \(\mathsf{Setup}(1^{\lambda})\rightarrow(\mathsf{mpk},\mathsf{td})\). It sends \(\mathsf{mpk}\) to the adversary \(\mathcal{A}\). \(\mathcal{A}\) computes \((\mathsf{pk},\rho_{\mathsf{sk}})\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) and publishes \(\mathsf{pk}\).
2. The challenger requests that \(\mathcal{A}\) run the deletion algorithm \(\mathsf{Delete}(\rho_{\mathsf{sk}})\). \(\mathcal{A}\) returns a deletion certificate \(\mathsf{cert}\) to the challenger.
3. The challenger runs \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\) and continues if it returns \(\mathsf{Valid}\); otherwise it outputs \(\bot\) and aborts.
4. \(\mathcal{A}\) outputs a message \(\mu\) and a (possibly mixed) state \(\rho_{\mathsf{Delete}}\) as a quantum decryptor.
5. The challenger runs the test for a \(\gamma\)-good decryptor on \(\rho_{\mathsf{Delete}}\) with respect to \(\mathsf{pk}\) and \(\mu\) (using \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}})\)). The challenger outputs \(1\) if the test passes, otherwise it outputs \(0\).

**Hybrid 1** (Figure 2): In this hybrid, we replace the check for a \(\gamma\)-good decryptor with the efficient check \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma}\), where we set \(\delta\) to be \(\lambda^{-\omega(1)}\) for a tiny super-constant \(\omega(1)\) and \(\epsilon\) to be an inverse polynomial smaller than \(\gamma\) by a large constant factor, e.g. \(\epsilon=\frac{\gamma}{100}\).

1. The challenger runs \(\mathsf{Setup}(1^{\lambda})\rightarrow(\mathsf{mpk},\mathsf{td})\). It sends \(\mathsf{mpk}\) to the adversary \(\mathcal{A}\). \(\mathcal{A}\) computes \((\mathsf{pk},\rho_{\mathsf{sk}})\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) and publishes \(\mathsf{pk}\).
2. The challenger requests that \(\mathcal{A}\) run the deletion algorithm \(\mathsf{Delete}(\rho_{\mathsf{sk}})\). \(\mathcal{A}\) returns a deletion certificate \(\mathsf{cert}\) to the challenger.
3. The challenger runs \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})\); if it returns \(\mathsf{Valid}\), the challenger continues and will output the bit \(z\) computed in step 5; if it returns \(\mathsf{Invalid}\), the challenger outputs \(\bot\) and aborts.
4. \(\mathcal{A}\) outputs a message \(\mu\) and a (possibly mixed) state \(\rho_{\mathsf{Delete}}\) as a quantum decryptor.
5. The challenger runs the (efficient) test \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma+1/2-\epsilon}(\rho_{\mathsf{Delete}})\) with respect to \(\mu\) and \(\mathsf{pk}\). The challenger sets \(z=1\) if the test passes, otherwise it sets \(z=0\).

### Winning Probability in Hybrid 1

Next, we show that \(\Pr[\mathbf{Hybrid}_{1}=1]\leq\mathsf{negl}(\lambda)\) for some negligible \(\mathsf{negl}(\cdot)\). We will reduce to the security of the parallel repeated NTCF game in Theorem 7.6.

**Lemma 10.2**.: _Assuming post-quantum subexponential hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\) with the parameter choice in Section 7.2.1, we have \(\Pr[\mathbf{Hybrid}_{1}=1]\leq\mathsf{negl}(\lambda)\) for some negligible \(\mathsf{negl}(\cdot)\)._

To show that the winning probability in **Hybrid 1** is negligible, we consider a world where we **do not check the deletion certificate and let the adversary pass all the time**. In this world, we argue through a sequence of hybrid games; we call them **Games** instead of Hybrids to distinguish them from Hybrids 0 and 1 above. Later, we will show how to put the deletion-certificate check back into our analysis via the following argument.

**Notations for Events.** For simplicity, we introduce notation for the events that take place in **Hybrid 1**:

* We denote the event that the adversary hands in a valid deletion certificate, i.e. \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})=\mathsf{Valid}\), as \(\mathsf{CertPass}\).
* We denote the event that the test \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma+1/2}(\rho_{\mathsf{Delete}})\) outputs \(1\) with respect to \(\mu\) and \(\mathsf{pk}\) as \(\mathsf{GoodDecryptor}\). To simplify notation, we define the new \(\gamma\) here to be the \(\gamma-\epsilon\) from Hybrid 1.
* We denote by \(\mathsf{Ext}\) the event that we can obtain the preimages \(\{\mathbf{x}_{i}\}_{i\in[k]}\in\{\mathrm{INV}(\mathsf{td}_{i},b\in\{0,1\},\mathbf{y}_{i})\}_{i\in[k]}\) (from the state remaining after the measurement \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma+1/2}(\rho_{\mathsf{Delete}})\)).

Suppose the probability that the final output is \(1\) in Hybrid 1 (Figure 2) is some noticeable \(\epsilon_{1}\); then we must have \(\Pr[\mathsf{CertPass}\wedge\mathsf{GoodDecryptor}]\geq\epsilon_{1}\). To build a reduction that breaks the security of the parallel repeated NTCF game (Theorem 7.6), we need the following statement to hold: \(\Pr[\mathsf{CertPass}\wedge\mathsf{Ext}]\geq\epsilon^{\prime}\) for some noticeable \(\epsilon^{\prime}\), because in this case the reduction can obtain both the deletion certificates \(\{c_{i},\mathbf{d}_{i}\}_{i\in[k]}\) and the preimages \(\{\mathbf{x}_{i,b}\}_{i\in[k]}\), which allow it to win the parallel repeated NTCF game.

Our proof outline is as follows: we will show that when \(\mathsf{GoodDecryptor}\) happens, \(\mathsf{Ext}\) _always happens_ except with negligible probability. Therefore, we have \(\Pr[\mathsf{CertPass}\wedge\mathsf{Ext}]\geq\epsilon_{1}-\mathsf{negl}(\lambda)\) by a simple probability observation (Claim C.1). We analyze the probabilities by defining some games in the world where we do not check the deletion certificate and reasoning about them.

**Game 0.** This experiment is the same as the one in Figure 2, using the construction in Section 9.2, except that _the challenger does not perform the deletion-certificate check, i.e., the \(\mathsf{VerDel}\) step in Figure 2_.

1. The challenger runs \(\mathsf{Setup}(1^{\lambda})\): the challenger prepares \(\mathsf{mpk}=\{\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}\}_{i\in[k]}\), where \((\mathbf{A}_{i},\mathsf{td}_{i})\leftarrow\mathsf{GenTrap}(1^{n},1^{m},q),\forall i\in[k]\), and sends it to \(\mathcal{A}\). The challenger keeps \(\mathsf{td}=\{\mathsf{td}_{i}\}_{i\in[k]}\) private.
2.
\(\mathcal{A}\) receives \(\mathsf{mpk}\) and obtains the classical public key \(\mathsf{pk}=\{\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i},\mathbf{y}_{i}\}_{i\in[k]}\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) and one copy of the quantum decryption key \(\rho_{\mathsf{sk}}\). \(\mathcal{A}\) publishes \(\mathsf{pk}\).
3. \(\mathcal{A}\) outputs a message \(\mu\) and a (possibly mixed) state \(\rho_{\mathsf{Delete}}\) as a quantum decryptor.
4. The challenger runs the (efficient) test \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma+1/2}(\rho_{\mathsf{Delete}})\) with respect to \(\mu\) and \(\mathsf{pk}\). The challenger outputs \(1\) if the test passes, otherwise it outputs \(0\).

**Game 1.** This is the same as Game 0 except that all \(\mathbf{A}_{i}\) are sampled uniformly at random, without trapdoors.

1. The challenger runs \(\mathsf{Setup}(1^{\lambda})\): the challenger prepares \(\mathsf{mpk}=\{\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}\}_{i\in[k]}\), where \(\mathbf{A}_{i}\leftarrow\mathbb{Z}_{q}^{n\times m},\forall i\in[k]\), and sends it to \(\mathcal{A}\).
2. \(\mathcal{A}\) receives \(\mathsf{mpk}\) and obtains the classical public key \(\mathsf{pk}=\{\mathbf{A}_{i},\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i},\mathbf{y}_{i}\}_{i\in[k]}\leftarrow\mathsf{KeyGen}(\mathsf{mpk})\) and one copy of the quantum decryption key \(\rho_{\mathsf{sk}}\). \(\mathcal{A}\) publishes \(\mathsf{pk}\).
3. \(\mathcal{A}\) outputs a message \(\mu\) and a (possibly mixed) state \(\rho_{\mathsf{Delete}}\) as a quantum decryptor.
4. The challenger runs the (efficient) test \(\mathsf{ATI}_{\mathcal{P},\mathcal{D},\gamma+1/2}^{\epsilon,\delta}(\rho_{\mathsf{Delete}})\) with respect to \(\mu\) and \(\mathsf{pk}\). The challenger outputs \(1\) if the test passes, otherwise it outputs \(0\).

**Game 2.j** (for \(j=1,\cdots,k\)). This is the same as Game 1 except the following:

1. During \(\mathsf{Setup}\):
   * For \(i\leq j\): the challenger prepares \(\mathsf{mpk}_{i}=(\mathbf{A}_{i},\mathbf{u}_{i})\), where \(\mathbf{A}_{i}\leftarrow\mathbb{Z}_{q}^{n\times m}\) and \(\mathbf{u}_{i}\leftarrow\mathbb{Z}_{q}^{1\times m}\) are uniformly random.
   * For \(i>j\): the challenger prepares \(\mathsf{mpk}_{i}=(\mathbf{A}_{i},\mathbf{u}_{i}=\mathbf{s}_{i}\mathbf{A}_{i}+\mathbf{e}_{i})\), the same as in Game 1.
2. \(\mathcal{A}\) accordingly obtains the public key \(\mathsf{pk}=\{\mathbf{A}_{i},\mathbf{u}_{i},\mathbf{y}_{i}\}_{i\in[k]}\) and one copy of the quantum decryption key \(\rho_{\mathsf{sk}}\). \(\mathcal{A}\) publishes \(\mathsf{pk}\).
3. \(\mathcal{A}\) outputs a message \(\mu\) and a (possibly mixed) state \(\rho_{\mathsf{Delete}}\) as a quantum decryptor.
4. The challenger runs the (efficient) test \(\mathsf{ATI}_{\mathcal{P},\mathcal{D},\gamma+1/2}^{\epsilon,\delta}(\rho_{\mathsf{Delete}})\) with respect to \(\mu\) and \(\mathsf{pk}\). The challenger outputs \(1\) if the test passes, otherwise it outputs \(0\).

We then prove the following claims about the above games:

**Claim 10.3**.: _Game 0 and Game 1 are statistically indistinguishable._

This follows directly from the property of \(\mathsf{GenTrap}\) in Theorem 6.4.

**Claim 10.4**.: _Assuming the hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\), Game 1 and Game 2.k are indistinguishable._

Proof.: We claim that each adjacent pair (Game 1, Game 2.1), (Game 2.1, Game 2.2), ..., (Game 2.(k-1), Game 2.k) is indistinguishable.
If any one of these pairs were distinguishable, then there would exist some \(j\) and an adversary that distinguishes \((\mathbf{A}_{j},\mathbf{s}_{j}\mathbf{A}_{j}+\mathbf{e}_{j})\) from \((\mathbf{A}_{j},\mathbf{u}_{j}\leftarrow\mathbb{Z}_{q}^{1\times m})\), contradicting the hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\). 

**Remark 10.5**.: _The above property can also be obtained directly from the indistinguishability between the 2-to-1 mode and the injective mode of NTCFs [18] (see Lemma 7.3). Note that after switching to \((\mathbf{A}_{i},\mathbf{u}_{i})\), some of the \(\mathbf{y}_{i}\)'s in the public key \(\{\mathbf{A}_{i},\mathbf{u}_{i},\mathbf{y}_{i}\}_{i\in[k]}\) may have the format \(\mathbf{y}_{i}=\mathbf{x}_{i,1}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}+\mathbf{u}_{i}\) or \(\mathbf{y}_{i}=\mathbf{x}_{i,0}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}\)._

_Thus, an honestly encrypted ciphertext \(\mathsf{ct}\) in Game \(2.k\) for message \(\mu\) should have the following format:_

\[\mathsf{ct}=\begin{bmatrix}\mathbf{u}_{1}\\ \mathbf{A}_{1}\\ \cdots\\ \mathbf{u}_{k}\\ \mathbf{A}_{k}\\ \sum_{i\in[k]}(\mathbf{x}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}+b_{i}\cdot\mathbf{u}_{i})\end{bmatrix}\cdot\mathbf{R}+\mathbf{E}+\mu\cdot\mathbf{G}_{(n+1)k+1} \tag{4}\]

_where \(\mathbf{A}_{i}\leftarrow\mathbb{Z}_{q}^{n\times m}\), \(\mathbf{u}_{i}\leftarrow\mathbb{Z}_{q}^{1\times m}\), \(\mathbf{R}\leftarrow\{0,1\}^{m\times m}\), and the \((nk+k+1)\)-th row of \(\mathbf{E}\) is \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Note that \(b_{i}\in\{0,1\}\), \(i\in[k]\), are some adversarially chosen bits that come in the \(\{\mathbf{y}_{i}\}_{i\in[k]}\) part of \(\mathsf{pk}\)._

**Switching to Plaintext 0.** From now on, without loss of generality, we always consider encrypting the message \(\mu=0\). The analysis for the case \(\mu=1\) follows symmetrically.

### Extraction of Preimages via LWE Search-to-Decision Reduction

Now we are ready to argue that in Game \(2.k\), if the game outputs \(1\), then there exists an extractor that extracts all preimages \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[k]}\), \(b_{i}\in\{0,1\}\), for \(\{\mathbf{y}_{i}\}_{i\in[k]}\).
**Theorem 10.6**.: _In the last Game \(2.k\), if \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},1/2+\gamma}(\rho_{\mathsf{Delete}})\) outputs \(1\) for some noticeable \(\gamma\), then there exists an extractor \(\mathsf{Ext}\) and a negligible function \(\mathsf{negl}(\cdot)\) such that:_

\[\Pr[\mathsf{Ext}(\rho_{\mathsf{Delete}},\mathsf{pk})\rightarrow(\mathbf{x}_{1},\cdots,\mathbf{x}_{k}):\mathbf{x}_{i}\text{ is the secret in }\mathbf{x}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}]\geq 1-\mathsf{negl}(\lambda)\]

_\(\mathsf{Ext}\) runs in time \(T^{\prime}=T\cdot knB_{\mathbf{x}}\cdot\mathsf{poly}(1/\epsilon,\log(1/\delta))\), where \(T\) is the running time of the decryptor \(\rho_{\mathsf{Delete}}\), \(B_{\mathbf{x}}=\max_{i,j}|\mathbf{x}_{i,j}|\), and \(\mathbf{x}_{i,j}\) is the \(j\)-th entry of the vector \(\mathbf{x}_{i}\)._

_In other words, we have in Game 2.k:_

\[\Pr\left[\mathsf{Ext}(\rho_{\mathsf{Delete}},\mathsf{pk})\rightarrow(\mathbf{x}_{1},\cdots,\mathbf{x}_{k}):\mathbf{x}_{i}\text{ is the secret in }\mathbf{x}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}\ \middle|\ \mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}(\rho_{\mathsf{Delete}})=1\right]\geq 1-\mathsf{negl}(\lambda)\]

We then obtain the following corollary from Theorem 10.6:

**Corollary 10.7**.: _Assuming the subexponential post-quantum hardness of \(\mathsf{LWE}_{n,m,q,B_{L}}\), in Game 0, for any QPT \(\mathcal{A}\) with auxiliary quantum input, there exists some negligible \(\mathsf{negl}(\cdot)\) such that:_

\[\Pr\left[\mathsf{Ext}(\rho_{\mathsf{Delete}},\mathsf{pk})\rightarrow(\mathbf{x}_{1},\cdots,\mathbf{x}_{k}):\mathbf{x}_{i}\in\mathrm{INV}(\mathsf{td}_{i},b\in\{0,1\},\mathbf{y}_{i})\ \middle|\ \mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D}_{0},1/2+\gamma}(\rho_{\mathsf{Delete}})=1\right]\geq 1-\mathsf{negl}(\lambda)\]

We refer to Appendix C for proof details.

#### 10.2.1 Proof Outline

We first give a high-level description of the proof of Theorem 10.6. As discussed above, we need to show the following: given a good decryptor \(\rho_{\mathsf{Delete}}\), we have to extract (from the state remaining after the "good-decryptor" measurement) all preimages \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[k]}\), \(b_{i}\in\{0,1\}\), for \(\{\mathbf{y}_{i}\}_{i\in[k]}\) with probability negligibly close to \(1\).

The procedure will be a quantum analogue of an LWE search-to-decision reduction with extraction probability close to \(1\), where the input distributions to the search-game adversary are not exactly LWE versus random, but statistically close to such.

**Three \(\mathsf{TI}\) Distributions.** For clarity, we consider a few simplifications:

* We use the ideal, projective threshold implementation \(\mathsf{TI}\) in our algorithm.
* We consider the number of instances/repetitions in the protocol to be \(k=1\). In the full proof, we will use the efficient \(\mathsf{ATI}\) and polynomial \(k\), via similar arguments.
To prove our theorem, it suffices to show that \(\Pr[\text{Extraction of }\mathbf{x}]\geq 1-\mathsf{negl}(\lambda)\) in a world where we are given the following condition at the beginning of the algorithm:

\[\Pr[\mathsf{TI}_{\gamma+1/2}(\mathcal{P}_{\mathcal{D}_{\mathsf{ct}}})\,\rho_{\mathsf{Delete}}\to 1]=1\]

where \(\mathcal{P}_{\mathcal{D}_{\mathsf{ct}}}\) is the following mixture of projections, acting on the state \(\rho_{\mathsf{Delete}}\):

* Compute \(\mathsf{ct}_{0}\leftarrow\mathsf{Enc}(\mathsf{pk},0)\) _in Game \(2.k\)_. \(\mathsf{ct}_{0}\) will have the format of Equation (4) with \(\mu=0\) and \(k=1\):

\[\mathsf{ct}_{0}=\begin{bmatrix}\mathbf{u}\\ \mathbf{A}\\ \mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}+b\cdot\mathbf{u}\end{bmatrix}\cdot\mathbf{R}+\mathbf{E}=\begin{bmatrix}\mathbf{u}\mathbf{R}\\ \mathbf{A}\mathbf{R}\\ \mathbf{x}\mathbf{A}\mathbf{R}+\mathbf{e}^{\prime}\mathbf{R}+b\cdot\mathbf{u}\mathbf{R}+\mathbf{e}^{\prime\prime}\end{bmatrix}\]

where \((\mathbf{A},\mathbf{u})\) are already given in the public key \(\mathsf{pk}\); the rest consists of sampling \(\mathbf{R}\leftarrow\{0,1\}^{m\times m}\), and the \((n+2)\)-th row of \(\mathbf{E}\) is \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Note that \(b\in\{0,1\}\) is an adversarially chosen bit that comes in the \(\mathbf{y}\) part of \(\mathsf{pk}\).

* Compute \(\mathsf{ct}_{1}\leftarrow\mathcal{C}\), a random ciphertext from the possible space of all ciphertexts for 1-bit messages.
* Sample a uniform bit \(\ell\leftarrow\{0,1\}\). (Footnote: to distinguish it from the bit \(b\) in \(\mathsf{ct}\), we use the notation \(\ell\) for this random coin.)
* Run the quantum decryptor \(\rho\) on input \(\mathsf{ct}_{\ell}\). Check whether the outcome is \(\ell\). If so, output \(1\), otherwise output \(0\).

We then consider a second threshold implementation, \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\), where \(\mathcal{P}_{\mathcal{D}(g_{i})}\) is the following mixture of measurements (we denote the distribution we sample from as \(\mathcal{D}(g_{i})\)):

* Let \(g_{i}\) be a guess for the \(i\)-th entry of the vector \(\mathbf{x}\).
* Sample a random \(\mathbf{c}\leftarrow\mathbb{Z}_{q}^{1\times m}\), and let \(\mathbf{C}\in\mathbb{Z}_{q}^{n\times m}\) be the matrix whose \(i\)-th row is \(\mathbf{c}\) and whose remaining rows are \(\mathbf{0}\).
* Prepare \(\mathsf{ct}_{0}\) as follows:

\[\mathsf{ct}_{0}=\begin{bmatrix}\mathbf{u}\mathbf{R}\\ \mathbf{A}\mathbf{R}+\mathbf{C}\\ \mathbf{x}\mathbf{A}\mathbf{R}+\mathbf{e}^{\prime}\mathbf{R}+b\cdot\mathbf{u}\mathbf{R}+\mathbf{e}^{\prime\prime}+g_{i}\cdot\mathbf{c}\end{bmatrix}\]

where \((\mathbf{A},\mathbf{u})\) are already given in the public key \(\mathsf{pk}\); \(\mathbf{R}\leftarrow\{0,1\}^{m\times m}\) and \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Note that \(b\in\{0,1\}\) is an adversarially chosen bit that comes in the \(\mathbf{y}\) part of \(\mathsf{pk}\).

* Compute \(\mathsf{ct}_{1}\leftarrow\mathcal{C}\), a random ciphertext from the possible space of all ciphertexts for 1-bit messages. In our case, that is: \(\mathsf{ct}_{1}\leftarrow\mathbb{Z}_{q}^{(n+2)\times m}\).
* Flip a bit \(\ell\leftarrow\{0,1\}\).
* Run the quantum distinguisher \(\rho\) on input \(\mathsf{ct}_{\ell}\). Check whether the outcome is \(\ell\). If so, output \(1\), otherwise output \(0\).
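The reason a correct guess \(g_{i}=\mathbf{x}_{i}\) makes \(\mathcal{D}(g_{i})\) look like a real ciphertext distribution is the row-planting identity \(\mathbf{x}(\mathbf{A}^{\prime}+\mathbf{C})=\mathbf{x}\mathbf{A}^{\prime}+\mathbf{x}_{i}\cdot\mathbf{c}\) when \(\mathbf{C}\) is zero outside its \(i\)-th row. A few lines of NumPy confirm this cancellation; the parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, m = 257, 8, 16

x = rng.integers(0, 5, size=(1, n))          # secret row vector with small entries
A = rng.integers(0, q, size=(n, m))          # stands in for A R after the LHL step
c = rng.integers(0, q, size=m)               # the planted uniform row
i = 3
C = np.zeros((n, m), dtype=int)
C[i] = c                                     # zero everywhere except the i-th row

g_correct = int(x[0, i])
lhs = (x @ A + g_correct * c) % q            # last-row term built with a correct guess
rhs = (x @ (A + C)) % q                      # a genuine sample under the matrix A + C
assert np.array_equal(lhs, rhs)              # correct guess: the two coincide exactly

# For an incorrect guess g, the leftover (g - x_i) * c term is a fresh mask that is
# uniform over Z_q^m (for prime q), which is why D(g_i) then matches D_unif.
```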
We finally consider a third threshold implementation, which we call \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}_{\text{unif}}})\):

* Compute both \(\mathsf{ct}_{0},\mathsf{ct}_{1}\leftarrow\mathcal{C}\), as random ciphertexts from the possible space of all ciphertexts for 1-bit messages. In our case, that is: \(\mathsf{ct}_{0},\mathsf{ct}_{1}\leftarrow\mathbb{Z}_{q}^{(n+2)\times m}\).
* Flip a bit \(\ell\leftarrow\{0,1\}\).
* Run the quantum distinguisher \(\rho\) on input \(\mathsf{ct}_{\ell}\). Check whether the outcome is \(\ell\). If so, output \(1\), otherwise output \(0\).

**The Extraction Algorithm.** We describe the extraction algorithm as follows. It takes as input \(\mathsf{pk}=(\mathbf{A},\mathbf{u})\) and a quantum state \(\rho_{\mathsf{Delete}}\).

* Set our guess for \(\mathbf{x}\) as \(\mathbf{x}^{\prime}=\mathbf{0}\in\mathbb{Z}_{q}^{1\times n}\), where \(\mathbf{x}_{i}^{\prime}\) denotes the \(i\)-th entry of \(\mathbf{x}^{\prime}\).
* For \(i=1,2,\cdots,n\):
  * For \(g_{i}\in[-B_{\mathbf{x}},B_{\mathbf{x}}]\), where \([-B_{\mathbf{x}},B_{\mathbf{x}}]\) is the possible value range for \(\mathbf{x}_{i}\in\mathbb{Z}_{q}\):
    1. Let \(\rho_{\mathsf{Delete}}\) be the current state from the quantum distinguisher.
    2. Run \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) on \(\rho_{\mathsf{Delete}}\) with respect to \(\mathsf{pk}\).
    3. If \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(1\), then set \(\mathbf{x}_{i}^{\prime}:=g_{i}\) and let \(i:=i+1\).
    4. If \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(0\), then let \(g_{i}:=g_{i}+1\) and go to step 1.
* Output \(\mathbf{x}^{\prime}\).

**State-Preserving Properties of the Extraction Algorithm.** We will show that the output \(\mathbf{x}^{\prime}\) of the above algorithm is equal to the \(\mathbf{x}\) in \(\mathbf{x}\mathbf{A}+\mathbf{e}^{\prime}\) with probability \((1-\mathsf{negl}(\lambda))\). First recall that at the beginning of our algorithm, we are given the following condition:

\[\Pr[\mathsf{TI}_{\gamma+1/2}(\mathcal{P}_{\mathcal{D}_{\mathsf{ct}}})\,\rho_{\mathsf{Delete}}\to 1]=1\]

We then prove the following claims:

1. In the above algorithm, when our guess \(g_{i}=\mathbf{x}_{i}\), we have:
   * \(\mathcal{D}(g_{i})\) is statistically indistinguishable from \(\mathcal{D}_{\mathsf{ct}}\).
   * \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(1\) on the current state \(\rho_{\mathsf{Delete}}\) with probability \((1-\mathsf{negl}(\lambda))\).
   * After \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(1\), the remaining state \(\rho_{\mathsf{Delete}}\) is negligibly close in trace distance to the state before we performed \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\).
2. In the above algorithm, when our guess \(g_{i}\neq\mathbf{x}_{i}\), we have:
   * \(\mathcal{D}(g_{i})\) is statistically indistinguishable from \(\mathcal{D}_{\mathsf{unif}}\).
   * \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(0\) on the current state \(\rho_{\mathsf{Delete}}\) with probability \((1-\mathsf{negl}(\lambda))\).
   * After \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(0\), the remaining state \(\rho_{\mathsf{Delete}}\) is negligibly close in trace distance to the state before we performed \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\).
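Before combining these claims, it may help to see the classical control flow of the extraction loop in one place. The sketch below treats \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) as an opaque, stateful oracle, which is exactly how the analysis uses it; the function and oracle names are our own illustrative scaffolding, not the paper's.

```python
def extract_secret(ti_oracle, n, B_x):
    """Recover x entry by entry; ti_oracle(i, g) runs TI_{1/2+gamma}(P_{D(g)})
    on the current decryptor state and returns its 1-bit outcome."""
    x_guess = [0] * n
    for i in range(n):
        for g in range(-B_x, B_x + 1):       # scan the value range of x_i
            if ti_oracle(i, g) == 1:         # outputs 1 (whp) iff g == x_i
                x_guess[i] = g
                break                        # state almost undisturbed; next entry
            # outcome 0: guess rejected, state almost undisturbed; try g + 1
    return x_guess
```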
Combining the above arguments, we can conclude that the algorithm outputs \(\mathbf{x}\) with probability \((1-\mathsf{negl}(\lambda))\): whenever \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(1\), we know our guess is correct with probability \((1-\mathsf{negl}(\lambda))\) and our state is almost undisturbed, so we can move on to guessing the next entry; whenever \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}(g_{i})})\) outputs \(0\), we know our guess is incorrect with probability \((1-\mathsf{negl}(\lambda))\) and our state is nevertheless almost undisturbed, so we can move on to the next value for our guess. These invariants are preserved throughout the algorithm. Negligible errors accumulate in the trace distance of \(\rho_{\mathsf{Delete}}\) after every \(\mathsf{TI}\)-measurement, but they stay negligible throughout the algorithm with our choice of parameters. By these guarantees, we obtain \(\mathbf{x}\) with probability \((1-\mathsf{negl}(\lambda))\) in the end. We refer the reader to the following section, Section 11, for the full algorithm and analysis.

### Reduction to Parallel NTCF Soundness

Now we prove Lemma 10.2; Theorem 10.1 follows accordingly. We now add back the condition that **the challenger checks the validity of the deletion certificate**. Recall our notations for the events happening in Hybrid 1:

* We denote the event that the adversary hands in a valid deletion certificate, i.e. \(\mathsf{VerDel}(\mathsf{pk},\mathsf{td},\mathsf{cert})=\mathsf{Valid}\), as \(\mathsf{CertPass}\).
* We denote the event that the test \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D},\gamma+1/2}(\rho_{\mathsf{Delete}})\) outputs \(1\) with respect to \(\mu\) and \(\mathsf{pk}\) as \(\mathsf{GoodDecryptor}\).
* We denote the event that we can obtain the preimages \(\{\mathbf{x}_{i}\}_{i\in[k]}\in\{\mathrm{INV}(\mathsf{td}_{i},b\in\{0,1\},\mathbf{y}_{i})\}_{i\in[k]}\) as \(\mathsf{Ext}\).

Suppose the adversary wins the \(\gamma\)-strong SKL-PKE game in Definition 8.7; then we must have \(\Pr[\mathsf{CertPass}\wedge\mathsf{GoodDecryptor}]\geq 1/p\) for some noticeable \(1/p\). By Corollary 10.7, we have \(\Pr[\mathsf{Ext}\,|\,\mathsf{GoodDecryptor}]\geq 1-\mathsf{negl}(\lambda)\). Therefore we have \(\Pr[\mathsf{CertPass}\wedge\mathsf{Ext}]\geq 1/p-\mathsf{negl}^{\prime}(\lambda)\) for some negligible \(\mathsf{negl}^{\prime}(\lambda)\). The relation can be easily observed from a Venn diagram; we deduce it formally in Appendix C.3.

Now we can build a reduction to break the security of the parallel-repeated NTCF-based protocol in Theorem 7.6 as follows: the reduction plays the challenger in the strong-SKL game. It passes the \(\{f_{i,b}\}_{i\in[k],b\in\{0,1\}}\) from the NTCF challenger to the adversary \(\mathcal{A}\) as \(\mathsf{mpk}\). It takes \(\{\mathbf{y}_{i}\}_{i\in[k]}\) and the deletion certificate \(\{(c_{i},\mathbf{d}_{i})\}_{i\in[k]}\) from the adversary and runs the \(\gamma\)-good test on its post-deletion state \(\rho_{\mathsf{Delete}}\). If the test fails, it aborts; if the test passes, it runs the extractor \(\mathsf{Ext}\) (from Theorem 10.6) on the post-test state \(\rho^{\prime}_{\mathsf{Delete}}\) to obtain \(\{\mathbf{x}_{i}\}_{i\in[k]}\). It then gives \(\{\mathbf{y}_{i},(c_{i},\mathbf{d}_{i}),\mathbf{x}_{i}\}_{i\in[k]}\) to the NTCF-protocol challenger.
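Schematically, and with every object abstracted into a placeholder callable (none of these method names come from the paper), the reduction just described looks as follows.

```python
def reduction(ntcf_challenger, adversary, gamma_good_test, extractor):
    """Wrap the SKL adversary into a player for the parallel repeated NTCF game."""
    fs = ntcf_challenger.functions()            # {f_{i,0}, f_{i,1}}_{i in [k]}, used as mpk
    ys, cert, rho = adversary.run(mpk=fs)       # adversary publishes the y_i's and later
                                                # hands in cert and its state rho_Delete
    if gamma_good_test(rho) != 1:               # the efficient ATI gamma-good test
        return None                             # abort: nothing to extract
    xs = extractor(rho, fs, ys)                 # Ext from Theorem 10.6 on the post-test state
    # Winning the NTCF game requires both a valid certificate and matching preimages.
    return ntcf_challenger.submit(ys, cert, xs)
```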
By the above argument, if \(\mathcal{A}\) wins with probability \(1/p\), then the reduction wins with probability \(1/p-\mathsf{negl}(\lambda)\).

## 11 Proof for Theorem 10.6: Quantum LWE Search-to-Decision Algorithm with Almost-Perfect Extraction (Noisy Version)

In this section, we provide the full extraction algorithm in Game \(2.k\) for extracting the \(\{\mathbf{x}_{i,b_{i}}\}_{i\in[k]}\), together with its analysis, to prove Theorem 10.6. We call it a "noisy" search-to-decision algorithm because the distributions in Theorem 10.6 are not exactly LWE versus random, but statistically close to such. In Appendix D we provide a cleaner version of the algorithm and analysis where the input distributions are real LWE instances versus real uniformly random instances, so that we obtain a first quantum LWE search-to-decision reduction with almost-perfect extraction, even with auxiliary quantum inputs. We believe such a statement for the (plain) LWE search-to-decision reduction will be of independent use.

**Notations.** For clarity, we drop the index \(b_{i}\) in the vector \(\mathbf{x}_{i,b_{i}}\), since obtaining \(\mathbf{x}_{i,0}\) or \(\mathbf{x}_{i,1}\) does not affect our analysis. From now on, the subscripts \(i,j\) in \(\mathbf{x}_{i,j}\) represent the \(j\)-th entry of the \(i\)-th vector \(\mathbf{x}_{i}\).

### Three Distributions for \(\mathsf{ATI}\)

Similar to the proof outline in Section 10.2.1, we first describe three \(\mathsf{ATI}\)'s with respect to different distributions \(\mathcal{D}\) (and, accordingly, mixtures of projections \(\mathcal{P}\)).

**\(\mathsf{ATI}\) for \(\mathcal{D}_{2.k}\):** \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}^{\epsilon,\delta}\) is the approximate threshold implementation algorithm for the following mixture of projections \(\mathcal{P}_{\mathcal{D}_{2.k}}\), acting on the state \(\rho_{\mathsf{Delete}}\):

* Compute \(\mathsf{ct}_{0}\leftarrow\mathsf{Enc}(\mathsf{pk},0)\) _in Game \(2.k\)_. \(\mathsf{ct}_{0}\) will have the format of Equation (4) with \(\mu=0\):

\[\mathsf{ct}_{0}=\begin{bmatrix}\mathbf{u}_{1}\\ \mathbf{A}_{1}\\ \cdots\\ \mathbf{u}_{k}\\ \mathbf{A}_{k}\\ \sum_{i\in[k]}(\mathbf{x}_{i}\mathbf{A}_{i}+\mathbf{e}_{i}^{\prime}+b_{i}\cdot\mathbf{u}_{i})\end{bmatrix}\cdot\mathbf{R}+\mathbf{E}=\begin{bmatrix}\mathbf{u}_{1}\cdot\mathbf{R}\\ \mathbf{A}_{1}\cdot\mathbf{R}\\ \cdots\\ \mathbf{u}_{k}\cdot\mathbf{R}\\ \mathbf{A}_{k}\cdot\mathbf{R}\\ \sum_{i}(\mathbf{x}_{i}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i}^{\prime}\mathbf{R}+b_{i}\cdot\mathbf{u}_{i}\mathbf{R})+\mathbf{e}^{\prime\prime}\end{bmatrix}\]

where \(\{\mathbf{A}_{i},\mathbf{u}_{i}\}_{i\in[k]}\) are given in the public key \(\mathsf{pk}\); \(\mathbf{R}\leftarrow\{0,1\}^{m\times m}\) and the \((nk+k+1)\)-th row of \(\mathbf{E}\) is \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Note that \(b_{i}\in\{0,1\}\), \(i\in[k]\), are adversarially chosen bits that come in the \(\{\mathbf{y}_{i}\}_{i\in[k]}\) part of \(\mathsf{pk}\).

* Compute \(\mathsf{ct}_{1}\leftarrow\mathcal{C}\), a random ciphertext from the possible space of all ciphertexts for 1-bit messages. In our case, that is: \(\mathsf{ct}_{1}\leftarrow\mathbb{Z}_{q}^{(nk+k+1)\times m}\).
* Sample a uniform bit \(b\leftarrow\{0,1\}\).
* Run the quantum decryptor \(\rho\) on input \(\mathsf{ct}_{b}\). Check whether the outcome is \(b\). If so, output \(1\), otherwise output \(0\).
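For concreteness, the blockwise layout of \(\mathsf{ct}_{0}\) under \(\mathcal{D}_{2.k}\), namely \(k\) row-blocks \((\mathbf{u}_{i}\mathbf{R},\mathbf{A}_{i}\mathbf{R})\) followed by a single noisy last row, can be assembled as in the NumPy sketch below. The noise magnitudes are toy stand-ins for the discrete Gaussians and the function name is ours; this illustrates the data layout only, not reference code.

```python
import numpy as np

def ct0_game2k(pk, xs, bs, q, rng):
    """pk: list of (A_i, u_i, y_i) with A_i in Z_q^{n x m}, u_i in Z_q^{1 x m};
    xs: secrets x_i in Z_q^{1 x n}; bs: adversarial bits b_i."""
    n, m = pk[0][0].shape
    R = rng.integers(0, 2, size=(m, m))
    rows = []
    last = np.zeros((1, m), dtype=int)
    for (A, u, _), x, b in zip(pk, xs, bs):
        rows.append(u @ R % q)                           # row: u_i R
        rows.append(A @ R % q)                           # rows: A_i R (one block per i)
        e_prime = rng.integers(-3, 4, size=(1, m))       # toy stand-in for e_i'
        last = (last + x @ A @ R + e_prime @ R + b * (u @ R)) % q
    e_pp = rng.integers(-3, 4, size=(1, m))              # toy stand-in for e'' ~ B_{P'}
    return np.vstack(rows + [(last + e_pp) % q])         # shape (nk + k + 1) x m
```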
**\(\mathsf{ATI}\) for \(\mathcal{D}(g_{\ell,j})\):** We then consider a second approximate threshold implementation \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma}^{\epsilon,\delta}\), where \(\mathcal{P}_{\mathcal{D}(g_{\ell,j})}\) is the following mixture of measurements (we denote the distribution we sample from as \(\mathcal{D}(g_{\ell,j})\)):

* Let \(g_{\ell,j}\) be a guess for the \(j\)-th entry of the vector \(\mathbf{x}_{\ell}\in(\mathbf{x}_{1},\cdots,\mathbf{x}_{k})\).
* Sample a random \(\mathbf{c}\leftarrow\mathbb{Z}_{q}^{1\times m}\), and let \(\mathbf{C}\in\mathbb{Z}_{q}^{n\times m}\) be the matrix whose \(j\)-th row is \(\mathbf{c}\) and whose remaining rows are \(\mathbf{0}\).
* Prepare \(\mathsf{ct}_{0}\) as follows:

\[\mathsf{ct}_{0}=\begin{bmatrix}\mathbf{u}_{1}\mathbf{R}\\ \mathbf{A}_{1}\mathbf{R}\\ \cdots\\ \mathbf{u}_{\ell}\mathbf{R}\\ \mathbf{A}_{\ell}\mathbf{R}+\mathbf{C}\\ \cdots\\ \mathbf{u}_{k}\mathbf{R}\\ \mathbf{A}_{k}\mathbf{R}\\ \sum_{i\in[k]}(\mathbf{x}_{i}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i}^{\prime}\mathbf{R}+b_{i}\cdot\mathbf{u}_{i}\mathbf{R})+\mathbf{e}^{\prime\prime}+g_{\ell,j}\cdot\mathbf{c}\end{bmatrix}\]

where \(\{\mathbf{A}_{i},\mathbf{u}_{i}\}_{i\in[k]}\) are given in the public key \(\mathsf{pk}\); \(\mathbf{R}\leftarrow\{0,1\}^{m\times m}\) and the \((nk+k+1)\)-th row of \(\mathbf{E}\) is \(\mathbf{e}^{\prime\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Note that \(b_{i}\in\{0,1\}\), \(i\in[k]\), are adversarially chosen bits that come in the \(\{\mathbf{y}_{i}\}_{i\in[k]}\) part of \(\mathsf{pk}\).

* Compute \(\mathsf{ct}_{1}\leftarrow\mathcal{C}\), a random ciphertext from the possible space of all ciphertexts for 1-bit messages. In our case, that is: \(\mathsf{ct}_{1}\leftarrow\mathbb{Z}_{q}^{(nk+k+1)\times m}\).
* Flip a bit \(b\leftarrow\{0,1\}\).
* Run the quantum distinguisher \(\rho\) on input \(\mathsf{ct}_{b}\). Check whether the outcome is \(b\). If so, output \(1\), otherwise output \(0\).

**\(\mathsf{ATI}\) for \(\mathcal{D}_{\mathsf{unif}}\):** We finally consider a third threshold implementation, which we call \(\mathsf{ATI}^{\epsilon,\delta}_{\mathcal{P},\mathcal{D}_{\mathsf{unif}},1/2+\gamma}\):

* Compute both \(\mathsf{ct}_{0},\mathsf{ct}_{1}\leftarrow\mathcal{C}\), as random ciphertexts from the possible space of all ciphertexts for 1-bit messages. In our case, that is: \(\mathsf{ct}_{0},\mathsf{ct}_{1}\leftarrow\mathbb{Z}_{q}^{(nk+k+1)\times m}\).
* Flip a bit \(b\leftarrow\{0,1\}\).
* Run the quantum distinguisher \(\rho\) on input \(\mathsf{ct}_{b}\). Check whether the outcome is \(b\). If so, output \(1\), otherwise output \(0\).

### Extraction Algorithm

We describe the extraction algorithm as follows. It takes as input \(\mathsf{pk}=\{\mathbf{A}_{i},\mathbf{u}_{i}\}_{i\in[k]}\) and a quantum state \(\rho_{\mathsf{Delete}}\).

* Let \(\mathbf{x}_{\ell,j}^{\prime}\) be the register that stores the final guess for the \(j\)-th entry of \(\mathbf{x}_{\ell}\).
* For \(\ell=1,\cdots,k\):
  * For \(j=1,2,\cdots,n\):
    * For \(g_{\ell,j}\in[-B_{x},B_{x}]\), where \([-B_{x},B_{x}]\) is the possible value range for \(\mathbf{x}_{\ell,j}\in\mathbb{Z}_{q}\):
      1. Let \(\rho_{\mathsf{Delete}}\) be the current state from the quantum distinguisher.
      2. Run \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) on \(\rho_{\mathsf{Delete}}\) with respect to \(\mathsf{pk}\), for some inverse-polynomial \(\epsilon=\gamma/100\).
      3. If \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) outputs \(1\), then set \(\mathbf{x}_{\ell,j}^{\prime}:=g_{\ell,j}\) and move on to the next coordinate: let \(j:=j+1\) if \(j<n\), else let \(\ell:=\ell+1\) and \(j:=1\).
      4. If \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) outputs \(0\), then let \(g_{\ell,j}:=g_{\ell,j}+1\) and go to step 1.
* Output \(\mathbf{x}^{\prime}\).

### Analysis of the Extractor

We make the following claims:

**Claim 11.1**.: _When the guess \(g_{\ell,j}=\mathbf{x}_{\ell,j}\), the distributions \(\mathcal{D}_{2.k},\mathcal{D}(g_{\ell,j})\) are statistically close, with distance \(\eta_{0}=\mathsf{poly}(k,m)\cdot(\frac{1}{q^{n}}+\frac{B_{P}}{B_{P^{\prime}}})\)._

Proof.: Note that the two distributions are the same except in how they sample \(\mathsf{ct}_{0}\). The ciphertext distribution for \(\mathsf{ct}_{0}\) in \(\mathcal{D}_{2.k}\) is statistically close to the following distribution:

\[\mathsf{ct}_{0}=\begin{bmatrix}\mathbf{u}_{1}^{\prime}\\ \mathbf{A}_{1}^{\prime}\\ \cdots\\ \mathbf{u}_{\ell}^{\prime}\\ \mathbf{A}_{\ell}^{\prime}\\ \cdots\\ \mathbf{u}_{k}^{\prime}\\ \mathbf{A}_{k}^{\prime}\\ \sum_{i}(\mathbf{x}_{i}\mathbf{A}_{i}^{\prime}+\mathbf{e}_{i,0}^{\prime}+b_{i}\cdot\mathbf{u}_{i}^{\prime})\end{bmatrix} \tag{5}\]

where \(\mathbf{A}_{i}^{\prime}\leftarrow\mathbb{Z}_{q}^{n\times m}\) and \(\mathbf{u}_{i}^{\prime}\leftarrow\mathbb{Z}_{q}^{1\times m}\) are uniformly random; \(\mathbf{e}_{i,0}^{\prime}\leftarrow\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\), where \(B_{P^{\prime}}/B_{P}\) is superpolynomial, for all \(i\in[k]\); and \(b_{i}\in\{0,1\}\), \(i\in[k]\), are some arbitrary, adversarially chosen bits.

We can view the distribution change as two differences. The first change replaces the last row \(\sum_{i}\mathbf{y}_{i}\cdot\mathbf{R}+\mathbf{e}^{\prime\prime}=\sum_{i}(\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i}^{\prime}\mathbf{R}+b_{i}\cdot\mathbf{u}_{i}\mathbf{R})+\mathbf{e}^{\prime\prime}\) of Equation (4) with \(\sum_{i}(\mathbf{x}_{i}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i,0}^{\prime}+b_{i}\cdot\mathbf{u}_{i}\mathbf{R})\), where both \(\mathbf{e}_{i,0}^{\prime}\) and \(\mathbf{e}^{\prime\prime}\) are sampled from \(\mathcal{D}_{\mathbb{Z}_{q}^{m},B_{P^{\prime}}}\). Since \(B_{P^{\prime}}/B_{P}\) is superpolynomial (\(B_{P}\) being the Gaussian parameter of the errors \(\mathbf{e}_{i}^{\prime}\) in the \(\mathbf{y}_{i}\)'s), we can apply noise flooding/smudging (Lemma 6.5), and the two are statistically close within \(k\cdot B_{P}/B_{P^{\prime}}\).

The second difference is going from \((\mathbf{A}_{i}\mathbf{R},\mathbf{u}_{i}\mathbf{R},\sum_{i}(\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}\mathbf{R}+\mathbf{e}_{i,0}^{\prime}+b_{i}\cdot\mathbf{u}_{i}\mathbf{R}))\) to \((\mathbf{A}_{i}^{\prime}\leftarrow\mathbb{Z}_{q}^{n\times m},\mathbf{u}_{i}^{\prime}\leftarrow\mathbb{Z}_{q}^{1\times m},\sum_{i}(\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}^{\prime}+\mathbf{e}_{i,0}^{\prime}+b_{i}\cdot\mathbf{u}_{i}^{\prime}))\). These two are \(k\cdot q^{-n}\)-close by the leftover hash lemma.

We then show that \(\mathsf{ct}_{0}\) in \(\mathcal{D}(g_{\ell,j})\) is also close to this distribution. First, we likewise apply noise flooding to replace every \(\mathbf{e}_{i}^{\prime}\mathbf{R}+\mathbf{e}^{\prime\prime}\) with \(\mathbf{e}_{i,0}^{\prime}\).
We next replace the \(\mathbf{A}_{i}\mathbf{R},\mathbf{u}_{i}\mathbf{R}\)'s by the LHL with uniformly random \(\mathbf{A}_{i}^{\prime},\mathbf{u}_{i}^{\prime}\). We can ignore the term \(\sum_{i}b_{i}\cdot\mathbf{u}_{i}^{\prime}\) in the last row of our distribution, because the \(b_{i}\)'s are known to the adversary and the term is the same in both distributions.

Then we observe that when \(g_{\ell,j}=\mathbf{x}_{\ell,j}\): let \(\mathbf{A}_{\ell}^{\prime\prime}=\mathbf{A}_{\ell}^{\prime}+\mathbf{C}\), where \(\mathbf{C}\) is everywhere \(0\) except for its \(j\)-th row, which is the uniformly random \(\mathbf{c}\). Since \(\mathbf{C}\) is zero outside the \(j\)-th row, we have \(\mathbf{x}_{\ell}\mathbf{C}=\mathbf{x}_{\ell,j}\cdot\mathbf{c}\), and hence

\[\mathbf{x}_{\ell}\mathbf{A}_{\ell}^{\prime}+\mathbf{e}_{\ell,0}^{\prime}+g_{\ell,j}\cdot\mathbf{c}=\mathbf{x}_{\ell}(\mathbf{A}_{\ell}^{\prime}+\mathbf{C})+\mathbf{e}_{\ell,0}^{\prime}=\mathbf{x}_{\ell}\mathbf{A}_{\ell}^{\prime\prime}+\mathbf{e}_{\ell,0}^{\prime}.\]

Entrywise, the \(t\)-th entry equals \(\mathbf{x}_{\ell,j}(\mathbf{A}_{\ell,j,t}^{\prime}+\mathbf{c}_{t})+\sum_{i\neq j}\mathbf{x}_{\ell,i}\mathbf{A}_{\ell,i,t}^{\prime}\) plus noise, where \(\mathbf{A}_{\ell,i,t}^{\prime}\) denotes the entry in the \(i\)-th row and \(t\)-th column of \(\mathbf{A}_{\ell}^{\prime}\). Thus a correct guess yields exactly a genuine sample with respect to the uniformly random matrix \(\mathbf{A}_{\ell}^{\prime\prime}\). 

**Claim 11.2**.: _When the guess \(g_{\ell,j}\neq\mathbf{x}_{\ell,j}\), the distributions \(\mathcal{D}(g_{\ell,j}),\mathcal{D}_{\text{unif}}\) are statistically close, with distance \(\eta_{1}=\mathsf{poly}(k,m)\cdot(\frac{1}{q^{n}}+\frac{B_{P}}{B_{P^{\prime}}})\)._

Proof.: The two distributions are the same except in how they sample \(\mathsf{ct}_{0}\). \(\mathsf{ct}_{0}\) in \(\mathcal{D}_{\text{unif}}\) is uniformly sampled from \(\mathbb{Z}_{q}^{(nk+k+1)\times m}\). It remains to show that \(\mathsf{ct}_{0}\) in \(\mathcal{D}(g_{\ell,j})\) is close to this distribution.

Similar to Claim 11.1, we first apply noise flooding and then the LHL to \(\mathsf{ct}_{0}\). Now let \(\mathbf{A}_{\ell}^{\prime\prime}=\mathbf{A}_{\ell}^{\prime}+\mathbf{C}\), with \(\mathbf{A}_{\ell}^{\prime}\leftarrow\mathbb{Z}_{q}^{n\times m}\). \(\mathbf{A}_{\ell}^{\prime\prime}\) is uniformly random because \(\mathbf{A}_{\ell}^{\prime}\) is. As before, we can ignore the term \(\sum_{i}b_{i}\cdot\mathbf{u}_{i}^{\prime}\) in the last row, because the \(b_{i}\)'s are known to the adversary and the term is the same in both distributions.

We consider the vector

\[\mathbf{w}=\mathbf{x}_{\ell}\mathbf{A}_{\ell}^{\prime}+\mathbf{e}_{\ell,0}^{\prime}+g_{\ell,j}\cdot\mathbf{c}=\mathbf{x}_{\ell}\mathbf{A}_{\ell}^{\prime\prime}+(g_{\ell,j}-\mathbf{x}_{\ell,j})\cdot\mathbf{c}+\mathbf{e}_{\ell,0}^{\prime}.\]

Since \(g_{\ell,j}\neq\mathbf{x}_{\ell,j}\) and \(\mathbf{c}\) is uniformly random (and hidden given \(\mathbf{A}_{\ell}^{\prime\prime}\)), the term \((g_{\ell,j}-\mathbf{x}_{\ell,j})\cdot\mathbf{c}\) is a fresh uniform mask, so the entire \(\mathbf{w}\) becomes uniformly random. Since the last row of \(\mathsf{ct}_{0}\) in \(\mathcal{D}(g_{\ell,j})\) is \(\sum_{i\neq\ell}(\mathbf{x}_{i,b_{i}}\mathbf{A}_{i}^{\prime}+\mathbf{e}_{i,0}^{\prime}+b_{i}\cdot\mathbf{u}_{i}^{\prime})+\mathbf{w}\), it is masked by \(\mathbf{w}\) and becomes uniformly random. 

Let us denote the probability of the measurement **outputting 1** on \(\rho\) by \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D},1/2+\gamma}^{\epsilon,\delta}\rho]\).
Accordingly, \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D},1/2+\gamma}^{\epsilon,\delta}\rho]:=\Pr[\mathsf{ATI}_{\mathcal{P},\mathcal{D},1/2+\gamma}^{\epsilon,\delta}\rho\to 0]\).

**Corollary 11.3**.: _We make the following two claims. For any inverse polynomial \(\epsilon\) and exponentially small \(\delta\), there exists an exponentially small \(\delta^{\prime}\) such that:_

1. _If \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}^{\epsilon,\delta}\rho]=1-\delta\), then \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-\epsilon}^{\epsilon,\delta}\rho]\geq 1-\delta^{\prime}-\eta_{0}^{\prime}\)._
2. _If \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{\text{unif}},1/2+\gamma}^{\epsilon,\delta}\rho]=1-\delta\), then \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-\epsilon}^{\epsilon,\delta}\rho]\geq 1-\delta^{\prime}-\eta_{1}^{\prime}\)._

_where \(\eta_{0}^{\prime}=O(\eta_{0}\cdot\mathsf{poly}(\lambda))\), \(\eta_{1}^{\prime}=O(\eta_{1}\cdot\mathsf{poly}(\lambda))\), and \(\eta_{0},\eta_{1}\) are the statistical distances between \((\mathcal{D}_{2.k},\mathcal{D}(g_{\ell,j}))\) and between \((\mathcal{D}_{\text{unif}},\mathcal{D}(g_{\ell,j}))\), respectively._

This follows directly from Claim 11.1, Claim 11.2, and Theorem 5.10.

**Claim 11.4**.: _For all \(\ell\in[k],j\in[n]\): when the guess \(g_{\ell,j}\neq\mathbf{x}_{\ell,j}\) in the above mixture of projections \((\mathcal{P}_{\mathcal{D}(g_{\ell,j})},\mathbf{I}-\mathcal{P}_{\mathcal{D}(g_{\ell,j})})\), for any noticeable \(\gamma\) and any quantum distinguisher \(\rho\), there exists some function \(\eta_{1}^{\prime}=O(\eta_{1}\cdot\mathsf{poly}(\lambda))\) such that \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-\epsilon}^{\epsilon,\delta}\rho]=1-\eta_{1}^{\prime}(\lambda)\), where \(\eta_{1}\) is the statistical distance between \((\mathcal{D}(g_{\ell,j}),\mathcal{D}_{\text{unif}})\)._

Proof.: We first consider the perfect projective threshold implementation for the distribution \(\mathcal{D}_{\text{unif}}\), namely \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}_{\text{unif}}})\). Since in this distribution \(\mathsf{ct}_{0},\mathsf{ct}_{1}\) are sampled from identical distributions, no algorithm can have a noticeable advantage in distinguishing them. That is, all possible states will be projected onto the result \(0\) when one applies the projective implementation \(\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}_{\text{unif}}})\) for any noticeable \(\gamma\): \(\Pr[\mathsf{TI}_{1/2+\gamma}(\mathcal{P}_{\mathcal{D}_{\text{unif}}})\rho\to 0]=1\).

Then we move on to using \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{\text{unif}},1/2+\gamma-\epsilon}^{\epsilon,\delta}\), and we have \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{\text{unif}},1/2+\gamma-\epsilon}^{\epsilon,\delta}\rho]\geq 1-\delta\) for some exponentially small \(\delta\). By Corollary 11.3, we have \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-2\epsilon}^{\epsilon,\delta}\rho]\geq 1-\eta_{1}^{\prime}\), where \(\eta_{1}^{\prime}=O(\eta_{1}\cdot\mathsf{poly}(\lambda))\) and \(\eta_{1}\) is the statistical distance between \((\mathcal{D}(g_{\ell,j}),\mathcal{D}_{\text{unif}})\).
**Lemma 11.5** (Invariant Through Measurements).: _Suppose we are given, at the beginning of the algorithm, that \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}^{\epsilon,\delta}\rho\) **outputs 1** for some inverse polynomial \(\epsilon\) and exponentially small \(\delta\). Then for all \(\ell\in[k],j\in[n]\) and each \(g_{\ell,j}\in[-B_{x},B_{x}]\), letting \(\rho\) be the state of the distinguisher before the measurement \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) in the above algorithm, the following holds:_

* _when the guess \(g_{\ell,j}=\mathbf{x}_{\ell,j}\), there exists some function \(\eta_{0}^{\prime}(\cdot)\) such that_
\[\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-2\epsilon}^{\epsilon,\delta}\rho]=1-\eta_{0}^{\prime}(\lambda)\]
* _when the guess \(g_{\ell,j}\neq\mathbf{x}_{\ell,j}\), there exists some function \(\eta_{1}^{\prime}(\cdot)\) such that_
\[1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-2\epsilon}^{\epsilon,\delta}\rho]=1-\eta_{1}^{\prime}(\lambda)\]

Proof.: We are given that the distinguisher \(\rho_{\mathsf{Delete}}\) (which we call \(\rho\) from now on, for brevity) satisfies \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}^{\epsilon,\delta}\rho\to 1\), for some exponentially small \(\delta\) and inverse polynomial \(\epsilon\) of our own choice (e.g. \(\epsilon=\frac{\gamma}{100}\)), according to our assumption in Theorem 10.6. Note that since we have obtained this outcome, we must have already applied \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma}^{\epsilon,\delta}\) once on \(\rho\) and obtained some remaining state \(\rho^{\prime}\). The remaining state \(\rho^{\prime}\) will satisfy the condition \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}_{2.k},1/2+\gamma-3\epsilon}^{\epsilon,\delta}\rho^{\prime}]\geq 1-3\delta\), by items 2 and 3 of Lemma 5.7.

Consider the first time we apply \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) (when \(\ell,j=1\) and \(g_{\ell,j}=-B_{x}\)) in the above algorithm, and suppose we have guessed \(g_{\ell,j}\) correctly at this point. By Corollary 11.3, we have \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\rho^{\prime}]\geq 1-3\delta-\delta^{\prime}-\eta_{0}^{\prime}\), where \(\delta,\delta^{\prime}\) are exponentially small and \(\eta_{0}^{\prime}=O(\eta_{0})=O(\mathsf{poly}(n,k)\cdot(q^{-n}+B_{P}/B_{P^{\prime}}))\), according to Claim 11.1. Since \(q^{-n}\) is exponentially small and \(B_{P}/B_{P^{\prime}}\) is inverse superpolynomial, \(\eta_{0}^{\prime}\) is negligible. We can then apply the gentle measurement lemma (Lemma 4.1) and recover the post-measurement state to some \(\rho^{\prime\prime}\) that satisfies \(\|\rho^{\prime}-\rho^{\prime\prime}\|_{\operatorname{Tr}}\leq\sqrt{\eta_{0}^{\prime}}\). Since \(\delta,\delta^{\prime}\) are exponentially small, we absorb them into \(\eta_{0}^{\prime}\) from now on.

Suppose instead that the guess \(g_{\ell,j}\) is incorrect the first time we apply \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\); then, as shown in Claim 11.4, we have \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\rho^{\prime}]\geq 1-3\delta-\delta^{\prime}-\eta_{1}^{\prime}\). The magnitude of \(\eta_{1}^{\prime}\) is the same as that of \(\eta_{0}^{\prime}\).
Since we obtain outcome \(0\) with probability \((1-O(\eta_{1}^{\prime}))\), we can therefore also apply the gentle measurement lemma and recover the post-measurement state to within trace distance \(O(\sqrt{\eta_{1}^{\prime}})\) of the pre-measurement state. Since in our case \(\eta_{0}^{\prime}=\eta_{1}^{\prime}\), we use \(\eta^{\prime}\) to denote both of them from now on.

We can then perform induction: assuming the statements hold after the \(L\)-th measurement, the state after the \(L\)-th step of the loop, \(\rho_{L}\), is \(L\cdot O(\sqrt{\eta^{\prime}})\)-close to the distinguisher's state \(\rho^{\prime}\) at the beginning of the algorithm. When the \((L+1)\)-th measurement uses a correct \(g_{\ell,j}\): by the fact that \(|\operatorname{Tr}(\mathcal{P}\rho^{\prime})-\operatorname{Tr}(\mathcal{P}\rho_{L})|\leq\|\rho^{\prime}-\rho_{L}\|_{\operatorname{Tr}}\) for all POVM measurements \(\mathcal{P}\), we have \(\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\rho_{L}]\geq 1-(L+1)\cdot O(\sqrt{\eta^{\prime}})\). Similarly, when the \((L+1)\)-th measurement uses an incorrect \(g_{\ell,j}\), we have \(1-\operatorname{Tr}[\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\rho_{L}]\geq 1-(L+1)\cdot O(\sqrt{\eta^{\prime}})\).

Note that in our case, the total number of loop iterations is \(L=B_{x}\cdot nk\). Since \(B_{P^{\prime}}/(B_{P}\cdot B_{x}^{2})\) is superpolynomial and \(O(\sqrt{\eta^{\prime}})=O(\sqrt{B_{P}/B_{P^{\prime}}})\), the disturbance of the state \(\rho_{L}\) from the original \(\rho^{\prime}\) stays negligible throughout the entire algorithm.

**Conclusion.** By Lemma 11.5, we know that for every correct guess \(g_{\ell,j}=\mathbf{x}_{\ell,j}\), we will get result \(1\) from our measurement \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma-4\epsilon}^{\epsilon,\delta}\) on the current distinguisher's state with probability \(1-\mathsf{negl}(\lambda)\); for every incorrect guess \(g_{\ell,j}\neq\mathbf{x}_{\ell,j}\), we will get result \(0\) from our measurement with probability \(1-\mathsf{negl}(\lambda)\). Therefore, we learn the value of every \(\mathbf{x}_{\ell,j}\) with probability \(1-\mathsf{negl}(\lambda)\), for all \(\ell\in[k],j\in[n]\). By the union bound, we can obtain the entire \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{k}\}\) with probability \(1-\mathsf{negl}(\lambda)\).

**Running Time.** By Lemma 5.7, the running time of each \(\mathsf{ATI}_{\mathcal{P},\mathcal{D}(g_{\ell,j}),1/2+\gamma}^{\epsilon,\delta}\) is \(T\cdot\mathsf{poly}(1/\epsilon,\log(1/\delta))\), where \(T\) is the running time of the quantum algorithm \(\rho\), so the entire algorithm runs in time \(T\cdot nk\cdot B_{x}\cdot\mathsf{poly}(1/\epsilon,\log(1/\delta))\), where \(B_{x}\) is quasipolynomial, \(\epsilon\) is an inverse polynomial, and \(\delta\) can be an inverse exponential function. Since we rely on a subexponential hardness assumption, such a running time is acceptable.
Disk settling and dynamical heating: histories of Milky Way-mass stellar disks across cosmic time in the FIRE simulations ###### Abstract We study the kinematics of stars both at their formation and today within 14 Milky Way (MW)-mass galaxies from the FIRE-2 cosmological zoom-in simulations. We quantify the relative importance of cosmological disk settling and post-formation dynamical heating. We identify three eras: a Pre-Disk Era (typically \(\gtrsim 8\) Gyr ago), when stars formed on dispersion-dominated orbits; an Early-Disk Era (\(\approx 8-4\) Gyr ago), when stars started to form on rotation-dominated orbits but with high velocity dispersion, \(\sigma_{\rm form}\); and a Late-Disk Era (\(\lesssim 4\) Gyr ago), when stars formed with low \(\sigma_{\rm form}\). \(\sigma_{\rm form}\) increased with time during the Pre-Disk Era, peaking \(\approx 8\) Gyr ago, then decreased throughout the Early-Disk Era as the disk settled and remained low throughout the Late-Disk Era. By contrast, the velocity dispersion measured today, \(\sigma_{\rm now}\), increases monotonically with age because of stronger post-formation heating for Pre-Disk stars. Importantly, most of \(\sigma_{\rm now}\) was in place at formation, not added post-formation, for stars younger than \(\approx 10\) Gyr. We compare the evolution of the three velocity components: at all times, \(\sigma_{\rm R,form}>\sigma_{\phi,form}>\sigma_{\rm Z,form}\). Post-formation heating primarily increased \(\sigma_{\rm R}\) at ages \(\lesssim 4\) Gyr but acted nearly isotropically for older stars. The lookback time that the disk began to settle correlates with its dynamical state today: earlier-settling galaxies currently form colder disks. Young stars in FIRE-2 are kinematically hotter than the MW but broadly agree with M31 and M33. Including stellar cosmic-ray feedback does not significantly change the amount of disk rotational support at fixed stellar mass. keywords: galaxies: kinematics and dynamics -- galaxies: evolution -- Galaxy: disk -- Galaxy: formation -- methods: numerical ## 1 Introduction The present-day kinematics of stars encode a galaxy's dynamical history. A star's orbit is set by both its initial dynamics at birth that it inherited from the star-forming interstellar medium (ISM) and any post-formation dynamical perturbations that it experienced over its lifetime. Thus, stellar kinematics provide dual insight into the past state of the ISM and the dynamical processes that shaped the galaxy. However, the extent to which a star's current motion results from its initial state or from post-formation perturbations remains poorly understood. The inability to reliably disambiguate these two remains a significant obstacle in connecting the kinematics and morphology of a galaxy today to its formation history (for example Somerville and Dave, 2015; Naab and Ostriker, 2017). Overcoming this ambiguity requires understanding the kinematics with which stars formed throughout the entire life of a galaxy. In the solar neighborhood, the youngest stellar populations share the kinematics of the gas disk from which they formed: young stars are on nearly circular orbits with total velocity dispersions, \(\sigma_{\rm tot}\approx 25-30\) km s\({}^{-1}\)(Edvardsson et al., 1993; Casagrande et al., 2011). 
However, the kinematics of stars that formed at earlier times are likely different than those forming in the Milky Way (MW) today, based on observations of high-redshift galaxies, which provide statistics on how the initial state of stars evolved over cosmic time. Such observations reveal that MW-like thin disks today are the endpoint of a metamorphosis: galactic disks at higher redshifts (\(z\gtrsim 1-2\)) were thicker, clumpier, and more turbulent than their thin counterparts at lower redshifts (for example Forster Schreiber et al., 2009; Stott et al., 2016; Rizzo et al., 2020; Birkin et al., 2023). The high turbulence of MW-mass progenitors probably arose from the combination of stellar feedback, given their higher star formation rates (SFRs) (for example Madau and Dickinson, 2014; Forster Schreiber and Wuyts, 2020), gravitational instabilities (for example Elmegreen et al., 2007; Dekel et al., 2009; Lehnert et al., 2009), and higher merger and accretion rates, which lead to higher gas fractions and surface densities (for example Genzel et al., 2015; Tacconi et al., 2020). Thus, high-redshift observations show that disk galaxies 'settled' over cosmic time (Kassin et al., 2012), becoming less turbulent and more rotationally supported since \(z\sim 1-2\) (for example Wisnioski et al., 2015; Johnson et al., 2018); that is, the velocity dispersion, \(\sigma\), of gas disks decreased while their rotational velocity, \(v_{\phi}\), increased. Although most works used emission lines from ionized gas to measure high-redshift kinematics, recent works indicate that the dispersions of atomic and molecular gas, while \(\approx 15\) km s\({}^{-1}\) lower, evolved similarly (for example Ubler et al., 2019; Girard et al., 2021). Thus, stars, which are born from cold dense gas, formed with larger dispersions at earlier times when turbulence was higher, and later formed with progressively lower dispersions as turbulence decreased and the galaxy's disk settled. In addition to understanding the orbits of stars at birth, it is equally important to understand how stellar orbits change post-formation. Because stellar populations are effectively collisionless and dissipationless, they gain random orbital energy over time via 'dynamical heating' that increases \(\sigma\), the result of repeated scattering interactions with density perturbations within the disk, such as giant molecular clouds (GMCs) (for example Spitzer and Schwarzschild, 1951; Wielen, 1977; Lacey, 1984), spiral arms (Barbanis and Woltjer, 1967; Carlberg and Sellwood, 1985; Minchev and Quillen, 2006), and bars (Saha et al., 2010; Grand et al., 2016). Additional heating arises from global asymmetries, including torquing from the misalignment of the stellar and gaseous disks (Roskar et al., 2010), feedback-driven gaseous outflows (El-Badry et al., 2016), cosmic accretion, mergers, and interactions with satellites (for example Quinn et al., 1993; Brook et al., 2004; Villalobos and Helmi, 2008; Belokurov et al., 2018). However, the relative impact of each mechanism across cosmic time remains uncertain. The magnitude of each component of \(\sigma\) (along \(R\), \(\phi\), or \(Z\)) provides insight into the relative strength of different mechanisms: spiral structure increases the in-plane dispersion but is inefficient at increasing the vertical dispersion, whereas GMCs heat stars more isotropically and can redirect in-plane heating into vertical motions (for example Carlberg and Innanen, 1987; Jenkins and Binney, 1990; Sellwood, 2013).
Furthermore, the scattering environment of the disk evolves across cosmic time. At earlier times, a galaxy's gas fraction and gas surface density were higher (for example Shapley, 2011; Swinbank et al., 2012; Tacconi et al., 2020), such that scattering by GMCs was likely more frequent. Prior to the formation of a disk, galaxies generally lacked strong spiral structure and thus spiral-driven heating (for example Margalef-Bentabol et al., 2022). On the other hand, certain processes that can drive rapid, non-adiabatic heating were more prevalent earlier, including mergers (for example Conselice et al., 2005) and feedback-driven outflows (for example Genzel et al., 2015). One can understand the relative strength of these heating mechanisms by comparing the observed relationship between velocity dispersion and stellar age with predictions from various heating models. The MW is an ideal test-bed for such comparisons because it offers unparalleled access to the 3D motions, positions, and ages of individual stars. In particular, the observed monotonic increase in velocity dispersion with age for stars in the solar neighborhood provides an important benchmark (for example Stromberg, 1946; Wielen, 1977; Casagrande et al., 2011). Nordstrom et al. (2004) and Holmberg et al. (2009) argued that this observed trend is consistent with MW stars forming on 'disky' orbits that then underwent dynamical heating by the combination of GMCs and spiral arms. Mackereth et al. (2019) and Ting and Rix (2019) extended this analysis to include stars located outside the solar neighborhood (\(R=3-14\) kpc) and found similar results: the observed relations are consistent with stars younger than \(\approx 8\) Gyr having formed with a low, constant dispersion and subsequently being heated by both GMCs and spiral arms. However, as before, the observed dispersions of older stars (age \(\gtrsim 8\) Gyr) are higher than those predicted by heating models (Mackereth et al., 2019). This discrepancy could arise if old stars were born kinematically hotter, or if their dynamical heating differed from what current models predict. Likely, both factors contribute, but disentangling this requires insight into the MW's early history. Fortunately, recent MW observations now allow us to characterize this era. Astrometric data from Gaia (Gaia Collaboration et al., 2016) along with stellar properties from large spectroscopic surveys, such as APOGEE (Majewski et al., 2017), GALAH (De Silva et al., 2015; Buder et al., 2018), and LAMOST (Cui et al., 2012), now provide 6D kinematics and elemental abundances for stars throughout the disk. Recent works suggest that the MW developed a thick but continuously-settling disk early in its history, with estimates for the onset of coherent rotation ranging from \(\approx 10\) to \(13\) Gyr ago (Belokurov and Kravtsov, 2022; Conroy et al., 2022; Xiang and Rix, 2022), and that the MW's thin disk began to form later, \(\approx 8-9\) Gyr ago. However, post-formation dynamical heating leads to ambiguity about the nature and timing of these transitions, highlighting the difficulties inherent in separating birth kinematics from post-formation dynamical heating, especially for old, metal-poor stars.
Cosmological simulations of the formation of MW-mass galaxies provide important tools for understanding the relative impacts of cosmological disk settling and post-formation dynamical heating, and thus for interpreting observations of the MW and nearby galaxies today (see Naab and Ostriker, 2017; Vogelsberger et al., 2020, for recent reviews). Cosmological zoom-in simulations provide the high resolution needed to model a galaxy's internal structure, including the multi-phase ISM that sets the birth kinematics of stars and thus the dynamical structure of the disk, within an accurate cosmological context, including non-equilibrium dynamical perturbations, mergers, mass growth, and so forth. Many works used such cosmological zoom-in simulations to show that (1) MW-like galaxies form radially 'inside-out' and vertically 'upside-down', transitioning from a thick, turbulent galaxy at early times to a thin, rotationally dominated disk at later times, and (2) the present-day kinematics of stars result from their kinematics at birth and subsequent dynamical heating, both of which change over cosmic time (for example Bird et al., 2012; Brook et al., 2012; Martig et al., 2014; Grand et al., 2016; Bonaca et al., 2017; Ceverino et al., 2017; Ma et al., 2017; Buck et al., 2020; Agertz et al., 2021; Bird et al., 2021). We build upon a series of papers that used FIRE-2 simulations to study the formation of galactic disks. Stern et al. (2021) found that virialization of the inner circumgalactic medium (CGM), that is, the onset of a stable hot halo, coincided with the formation of a thin disk, a change from 'bursty' to 'steady' star formation, and the suppression of gaseous outflows; the presence of a hot halo can allow infalling gas to align coherently with a galaxy's disk _before_ it accretes onto the disk (Hafen et al., 2022). Additional works found that the virialization of the inner CGM and the associated transition from bursty to steady star formation coincided with the transition from 'thick-disk' to 'thin-disk' formation (Yu et al., 2021, 2022) and the emergence of a rotationally-supported thin gaseous disk from a previously quasi-isotropic ISM (Gurvich et al., 2023). More recently, Hopkins et al. (2023) investigated the physical causes of such transitions. While the initial formation of a disk depends on the development of a sufficiently centrally concentrated mass profile, the transition from bursty to smooth star formation generally arises later, when the absolute depth of the gravitational potential (within the radius of star formation) crosses a critical threshold. This threshold is similar to the threshold for virialization of the inner CGM. In this work, we use the FIRE-2 cosmological zoom-in simulations to study cosmological disk settling and dynamical heating across the formation histories of MW-mass galaxies. We study the kinematics of stars both at birth and today, emphasizing different phases of disk-galaxy formation. We examine how stellar formation kinematics manifest themselves today, that is, how post-formation dynamical heating alters the initial orbits of stars. By doing so, we aim to understand how the current kinematics of the MW and nearby galaxies relate to their formation histories. ## 2 Methods ### FIRE-2 simulations We analyze 14 cosmological zoom-in simulations of MW/M31-mass galaxies from the Feedback in Realistic Environments (FIRE) project1, which include both dark matter and baryons (gas and stars).
The simulations use the GIZMO gravity plus hydrodynamics code (Hopkins, 2015) with the meshless finite-mass (MFM) Godunov method, which provides adaptive spatial resolution while maintaining excellent conservation of energy and (angular) momentum. Footnote 1: FIRE project web site: [http://fire.northwestern.edu](http://fire.northwestern.edu) We ran these simulations using the FIRE-2 physics model (Hopkins et al., 2018). FIRE-2 models the dense, multi-phase ISM and incorporates metallicity-dependent radiative cooling and heating processes for gas across temperatures \(10-10^{10}\) K, including free-free, photoionization and recombination, Compton, photoelectric and dust collisional, cosmic ray, molecular, metal-line, and fine structure processes. The simulations self-consistently generate and track 11 elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, Fe), including a model for sub-grid mixing/diffusion via turbulence (Hopkins, 2016; Su et al., 2017; Escala et al., 2018). FIRE-2 also includes photoionization and heating from a spatially uniform, redshift-dependent UV background (Faucher-Giguere et al., 2009) that reionizes the Universe at \(z\approx 10\). Crucially for this work, FIRE-2 simulations resolve the phase structure of the ISM, allowing gas to collapse into giant molecular clouds and form stars (Benincasa et al., 2020; Guszejnov et al., 2020). Star formation occurs in locally self-gravitating, Jeans-unstable, dense (\(n_{\rm SF}>1000\,{\rm cm^{-3}}\)), molecular (following Krumholz and Gnedin, 2011) gas. After reaching these criteria, a gas cell converts to a star particle in a local free-fall time. Each star particle represents a single stellar population sampled from a Kroupa (2001) initial mass function, with mass and metallicity inherited from its progenitor gas cell. Once formed, star particles follow stellar evolution models that tabulate feedback event rates, luminosities, energies, and mass-loss rates from STARBURST99 v7.0 (Leitherer et al., 1999). FIRE-2 implements the major channels for stellar feedback, including core-collapse and white-dwarf (Ia) supernovae, stellar winds, radiation pressure, photoionization, and photoelectric heating. We generated cosmological zoom-in initial conditions at \(z\approx 99\) using MUSIC (Hahn and Abel, 2011). Each zoom-in region is embedded within a cosmological box of side length \(70.4-172\) Mpc. The simulations use a flat \(\Lambda\)CDM cosmology with parameters broadly consistent with Planck Collaboration et al. (2020): \(h=0.68-0.71\), \(\Omega_{\Lambda}=0.69-0.734\), \(\Omega_{\rm m}=0.266-0.31\), \(\Omega_{\rm b}=0.0455-0.048\), \(\sigma_{8}=0.801-0.82\) and \(n_{\rm s}=0.961-0.97\). Our simulation set consists of 14 MW/M31-mass galaxies: 8 isolated galaxies from the _Latte_ suite, and 6 galaxies from the _ELVIS on FIRE_ suite of Local Group (LG)-like pairs of galaxies. The _Latte_ suite, with the exception of m12r, has an initial baryon particle mass of \(7070\,{\rm M}_{\odot}\), although stellar mass loss leads to star particles having typical masses \(\approx 5000\,{\rm M}_{\odot}\), and a dark matter particle mass of \(3.5\times 10^{5}\,{\rm M}_{\odot}\). m12r has an initial baryon particle mass of \(4200\,{\rm M}_{\odot}\) and a dark matter particle mass of \(2.1\times 10^{5}\,{\rm M}_{\odot}\). The _ELVIS on FIRE_ suite has mass resolution \(\sim 2\times\) better than the _Latte_ suite: Romeo and Juliet have initial baryon particle masses of \(3500\,{\rm M}_{\odot}\), while Romulus and Remus and Thelma and Louise have \(4000\,{\rm M}_{\odot}\).
Both star and dark-matter particles have fixed gravitational force softenings (comoving at \(z>9\) and physical thereafter) with a Plummer equivalent of \(\epsilon_{\rm star}=4\) pc and \(\epsilon_{\rm dm}=40\) pc. Gas cells have adaptive softening, which matches the hydrodynamic kernel smoothing, reaching a minimum of 1 pc and a typical value of \(\sim 40\) pc in the cold ISM. For our main results (Sections 3.1, 3.3, 3.4, and 3.5), we present properties averaged across our galaxy sample, not including the lower-mass galaxies m12z and m12r, or Juliet from the _ELVIS on FIRE_ suite. However, in Sections 3.6 and 3.7, where we explore dependence on galaxy mass, we include all 14 galaxies. m12z and m12r have stellar masses of \(1.8\) and \(1.5\times 10^{10}\,{\rm M}_{\odot}\), lower than the MW's and M31's stellar masses of \(\approx 5\times 10^{10}\) and \(\approx 10^{11}\,{\rm M}_{\odot}\), respectively. Furthermore, m12z and m12r have stellar kinematics that are weakly or never dominated by disk-like motion. Juliet has a stellar mass closer to the MW's, at \(3.3\times 10^{10}\,{\rm M}_{\odot}\), and disk-dominated kinematics at \(z=0\), but we exclude it because a rapid gas accretion event late in its history leads to anomalous properties that would skew the sample-averaged properties towards trends unique to Juliet. We will examine these cases in more detail in future work. ### Measuring Stellar Dynamics Today and at Formation Motivated primarily by measurements of the MW, we select stars that are in each galaxy's 'solar annulus': \(R=6-10\) kpc and \(|Z|<3\) kpc today. In future work, we will examine trends with radius, to understand the drivers of post-formation dynamical heating and their application to MW observations in greater detail; briefly, we find that the dispersion today generally decreases with \(R\), but that neither the dispersion at formation nor the dispersion today depends strongly on \(R\) at \(R\gtrsim 1-2\) kpc. The effect of the vertical selection on our results is minimal, as we show in Appendix A. A key advantage of our simulations is our ability to study properties both today and when stars formed. In both cases, we select stars on the basis of their _current_ coordinates. However, when determining properties at formation, we only include stars that formed in situ, which we define following Bellardini et al. (2022) as forming within a total distance of 30 kpc comoving from the galactic center. We do not apply this in-situ selection when measuring properties today, to more directly compare with observations. Across our simulations, ex-situ stars comprise only \(\sim 4\%\) of all stars currently within the disk (\(R<20\) kpc and \(|Z|<3\) kpc), but they can make up a non-trivial percentage within our oldest age bins: we show the effect of our in-situ selection on velocity dispersion versus age in Appendix B. We compute stellar positions and velocities relative to the galaxy's center. We determine the orientation of the galaxy at each snapshot (independently) by computing the rotation tensor and axis ratios of the principal axes of the stellar disk, which we define via the moment of inertia tensor of the 25% youngest star particles inside a radius enclosing 90% of the total stellar mass within 10 kpc physical. We compute properties at formation using the galaxy's orientation at that time, not at \(z=0\).
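As a concrete illustration of this orientation step, here is a minimal numpy sketch of determining the principal axes from the moment of inertia tensor of the youngest 25% of star particles (a simplification: it assumes galactocentric positions and omits the iterative 90%-mass-radius selection described above; `disk_orientation` is an illustrative helper, not FIRE analysis code):

```python
import numpy as np

def disk_orientation(pos, mass, age, young_frac=0.25):
    """Disk normal and axis ratios from the mass-weighted second-moment
    (inertia-like) tensor of the youngest `young_frac` of star particles.
    `pos` is (N, 3) galactocentric positions [kpc]."""
    young = age <= np.quantile(age, young_frac)   # youngest 25% of particles
    p, m = pos[young], mass[young]
    # mass-weighted tensor: sum_i m_i x_i x_i^T / sum_i m_i
    tensor = np.einsum('i,ij,ik->jk', m, p, p) / m.sum()
    eigval, eigvec = np.linalg.eigh(tensor)       # eigenvalues in ascending order
    z_axis = eigvec[:, 0]                         # shortest axis ~ disk normal
    axis_ratios = np.sqrt(eigval / eigval[-1])    # ratios relative to longest axis
    return z_axis, axis_ratios

# usage with mock data: a flattened 'disk' of 10^4 equal-mass particles
rng = np.random.default_rng(0)
pos = rng.normal(scale=(5.0, 5.0, 0.4), size=(10000, 3))
z_axis, ratios = disk_orientation(pos, np.ones(10000), rng.uniform(0, 13, 10000))
print(z_axis, ratios)   # z_axis should be close to (0, 0, +/-1)
```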
Our simulations store 600 snapshots from \(z=99\) to 0, with a time between snapshots of \(20-25\) Myr, and the past 20 Myr (immediately before today) includes 10 snapshots spaced by 2 Myr. We measure a star's 'formation' properties at the first snapshot after it formed, so a star particle on average experienced \(\approx 10\) Myr of evolution when we record its properties. This limits to some extent our ability to measure the earliest dynamical evolution, especially because, in our simulations, stars are born clustered in GMCs (for example Benincasa et al., 2020; Guszejnov et al., 2020), where they can be subject to various gravitational interactions on short timescales. Thus, 'formation' properties in our analysis necessarily refer to the combination of birth properties and early dynamical evolution. When computing the velocities of a stellar population, we select star particles using 250 Myr bins of age, which corresponds to the dynamical/orbital time at the solar annulus today. When plotting velocities, we use the mass-weighted median velocity of all star particles within a given age range; we find nearly identical results using the mass-weighted mean instead. To ensure that outlier stars do not skew our results, we compute the velocity dispersion as half of the difference between the mass-weighted 16th and 84th-percentile velocities of all stars in a given age range, which is equivalent to the standard deviation for a normal distribution. We compute these quantities (in each age bin) individually for each galaxy and then show the mean and standard deviation across our 11 galaxies.
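A minimal numpy sketch of these measurement definitions, assuming the solar-annulus and in-situ selections have already been applied (the helper names below are illustrative, not FIRE analysis code):

```python
import numpy as np

def weighted_percentile(values, weights, q):
    """Mass-weighted percentile (q in [0, 100]) via the weighted CDF."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = (np.cumsum(w) - 0.5 * w) / w.sum()
    return np.interp(q / 100.0, cdf, v)

def sigma_in_age_bins(vel, mass, age, bin_width=0.25, age_max=13.75):
    """Per 250 Myr age bin: the mass-weighted median velocity and the
    'dispersion' defined as half the difference between the mass-weighted
    84th and 16th percentiles (the standard deviation for a Gaussian)."""
    edges = np.arange(0.0, age_max + bin_width, bin_width)
    med, sig = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (age >= lo) & (age < hi)
        if sel.sum() < 10:                 # skip sparsely populated bins
            med.append(np.nan)
            sig.append(np.nan)
            continue
        v, m = vel[sel], mass[sel]
        med.append(weighted_percentile(v, m, 50))
        sig.append(0.5 * (weighted_percentile(v, m, 84)
                          - weighted_percentile(v, m, 16)))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.array(med), np.array(sig)
```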
## 3 Results ### Stellar Kinematics versus Age #### 3.1.1 Defining the Eras of Disk Evolution Motivated by the kinematic trends that we explore and by previous analyses of these galaxies (for example Garrison-Kimmel et al., 2018; Ma et al., 2017; Faucher-Giguere, 2018; Yu et al., 2021; Gurvich et al., 2023), we divide our galaxies' histories into three eras, which we term the 'Pre-Disk', 'Early-Disk', and 'Late-Disk' Eras. Each of our galaxies began in the Pre-Disk Era, when the gas lacked coherent rotation, such that stars formed on nearly isotropic orbits. Then, each galaxy formed a stable, coherent disk and transitioned into the Early-Disk Era. Specifically, the start of the Early-Disk Era marks the time after which all subsequent generations of stars formed with rotation-dominated kinematics, with \((v_{\phi}>\sigma_{\rm tot})_{\rm form}\). Table 1 lists the lookback time, \(t_{\rm lb}\), that each galaxy transitioned into the Early-Disk Era, \(t_{\rm lb}\left[\left(v_{\phi}/\sigma_{\rm tot}\right)_{\rm form}>1\right]\). These disks were kinematically hot, with high \(\sigma\), at the start of the Early-Disk Era, but they became increasingly settled, with increasingly cold and rotationally-dominated kinematics, throughout it. As a result, \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) grew most rapidly throughout this era. Eventually, the growth of \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) slowed and the galaxies transitioned into the Late-Disk Era, when the kinematics at formation did not evolve much as stars formed on highly rotational orbits in a dynamically cold disk. On average, galaxies in our sample transitioned from the Pre-Disk Era \(\approx 8\) Gyr ago and from the Early-Disk to the Late-Disk Era \(\approx 4\) Gyr ago, though with significant galaxy-to-galaxy scatter in these ages. Hopkins et al. (2023) argued that two aspects of the galaxy's gravitational potential drive the transitions between these eras. The _shape_ of the potential, specifically the formation of a sufficiently centrally-concentrated mass profile, drives the initial formation of a disk, that is, the Pre-Disk to Early-Disk transition. Separately, the _absolute depth_ of the potential (within the radius of star formation) drives the transition from bursty to smooth star formation, which roughly coincides with the transition from the Early-Disk to Late-Disk Era, although this connection may be less direct. Similarly, our Pre-Disk and Early-Disk Eras generally correspond to the 'bursty star formation' phase in Stern et al. (2021), Gurvich et al. (2023), and Yu et al. (2021), while our Late-Disk Era corresponds to their 'steady star formation' phase. #### 3.1.2 Median velocity versus age We start by analyzing the general dynamical evolution of our simulated galaxies. Figure 1 (top) shows the sample-averaged azimuthal, radial, vertical, and total median velocities at formation (dashed lines) and at present (solid lines) versus stellar age. We label the Late-, Early-, and Pre-Disk Eras at the top of each column, and we mark the start of the Early- and Late-Disk Eras with shaded bars. The time that one era ended and another began is not exact, in part because Figure 1 shows sample-averaged quantities, so the shaded bars represent the approximate transition time and not an instantaneous boundary. Figure 1 (top left) shows the median azimuthal (rotational) velocity, \(v_{\phi}\), versus age. For most of the Pre-Disk Era, stars formed with \(v_{\phi,{\rm form}}\approx 0-20\) km s\({}^{-1}\): in the early galaxy, stars did not coherently rotate. Eventually, \(v_{\phi,{\rm form}}\) began to increase as the disk began to settle, and the galaxy transitioned into the Early-Disk Era. The slight growth of \(v_{\phi,{\rm form}}\) towards the end of the Pre-Disk Era arises because we average over multiple galaxies, so earlier-transitioning galaxies bias the sample's mean: within individual galaxies, the rise of \(v_{\phi,{\rm form}}\) is generally (but not always) sharp and coincident with the transition to the Early-Disk Era (see Figure 3). \(v_{\phi,{\rm form}}\) rose strongly throughout the Early-Disk Era, but its growth weakened once the galaxy transitioned into the Late-Disk Era (\(\approx 4\) Gyr ago), likely because the total mass within our radial range changed little during this period (Garrison-Kimmel et al., 2018). \(v_{\phi,{\rm now}}\) primarily follows the same trends with age as \(v_{\phi,{\rm form}}\). However, stars that formed in the Pre-Disk Era have significantly larger \(v_{\phi}\) now than at formation, because they were torqued onto more coherent rotational orbits as the disk subsequently formed and settled. Likely, gas-rich mergers that seeded the disk's orientation also torqued up these existing stars (for example Santistevan et al., 2021). Bellardini et al., in preparation, show that this increase in \(v_{\phi}\) corresponds to an increase in specific angular momentum for these pre-existing stars, that is, they were indeed torqued. In contrast, stars that formed in the Late-Disk Era on nearly circular orbits now have slightly smaller \(v_{\phi}\) than at formation because of dynamical heating/scattering. Figure 1 (top row, columns 2 and 3) shows the median radial velocity, \(v_{\rm R}\), and the median vertical velocity, \(v_{\rm Z}\), versus stellar age.
Previous work analyzing FIRE simulations found that bursty star formation in the Pre-Disk galaxy drove significant gaseous outflows (for example Muratov et al., 2015; Angles-Alcazar et al., 2017; Feldmann et al., 2017; Sparre et al., 2017; Pandya et al., 2021; Stern et al., 2021). These outflows sometimes remained star-forming and produced stars that inherited their progenitor's outflowing dynamics (El-Badry et al., 2016; Yu et al., 2021). We find complementary results: stars that were born in the Pre-Disk Era had positive (outward) \(v_{\rm R,form}\), indicating the presence of star-forming outflows, and \(v_{\rm Z,form}\) that rapidly fluctuated around 0. Once the early disk formed, \(v_{\rm R,form}\) decreased to 0, in part because feedback-driven outflows declined and eventually ceased: throughout the rest of the Early-Disk and Late-Disk Eras, stars had \(v_{\rm R,form}\approx v_{\rm Z,form}\approx 0\). At present, both \(v_{\rm R,now}\) and \(v_{\rm Z,now}\approx 0\) for stars that formed in _all_ eras, meaning that old stars' subsequent dynamical evolution overrode any preference for outward/upward/downward motion. This difference also reflects our selection of stars that remain within the disk today. Our galaxies largely formed radially 'inside-out' (for example Garrison-Kimmel et al., 2018), such that old in-situ stars typically formed inside of our current radial range of \(6-10\) kpc, and as El-Badry et al. (2016) and Ma et al. (2017) discussed for the FIRE simulations, old in-situ stars typically reside at larger radii today, compared to when they formed, because bursts of feedback-driven outflows generated large fluctuations in the gravitational potential at early times that moved these stars outward. As such, old stars in our sample are more likely than young stars to be near their apocenters, and as a result, have \(v_{\rm R,now}\) closer to 0. Bellardini et al., in preparation, will examine this radial redistribution in detail. Figure 1 (top right) shows the median total velocity, \(v_{\rm tot}\), versus age. The growth of the galaxy's mass drove much of the evolution of \(v_{\rm tot}\), because the underlying gravitational potential determines its virial velocity. In turn, the evolution of \(v_{\rm tot,form}\) indicates that the galaxy's mass increased over time, most rapidly in the Pre-Disk Era and only minimally in the Late-Disk Era. The evolution of \(v_{\rm tot,form}\) largely resembles that of \(v_{\phi,\rm form}\), with the exception of the Pre-Disk Era, when \(v_{\rm tot,form}\) grew steadily while \(v_{\phi,\rm form}\) remained near 0. This is not surprising: \(v_{\rm tot}\) depends only on the magnitude of each component, while \(v_{\phi}\) also depends on the sign. In the Pre-Disk Era, stars rotated, just not coherently, such that the median \(v_{\phi,\rm form}\approx 0\) while \(\sqrt{\langle v_{\phi,\rm form}^{2}\rangle}>0\). However, once the galaxy transitioned into the Early-Disk Era, stars had larger and larger fractions of their (increasing) \(v_{\rm tot,form}\) in rotation. At present, \(v_{\rm tot}\) has almost no dependence on age, with all stars near \(v_{\rm tot,now}\approx v_{\rm virial}\approx 200-240\) km s\({}^{-1}\), as a result of virialization and dynamical heating. #### 3.1.3 Velocity dispersion versus age Figure 1 (bottom) shows the azimuthal, radial, vertical, and total stellar velocity dispersion, \(\sigma\), versus age, at both the time of the star's formation and at present.
All components exhibit similar trends with age, albeit with different normalizations. Here we focus on general trends with age; we compare the 3 components in Section 3.3. \(\sigma_{\rm form}\) reflects the turbulence in star-forming gas and shows distinct behavior in each kinematic era. The oldest stars formed with the lowest \(\sigma\), reflecting their low \(v_{\rm tot,form}\) in the low-mass galaxy progenitor at early times. As the galaxy grew during the Pre-Disk Era, \(\sigma_{\rm form}\) grew, reflecting the increase in absolute gas turbulence from accretion, mergers, bursty and more vigorous star formation, and outflows (for example Ma et al., 2017; El-Badry et al., 2018; Hung et al., 2019; Flores Velazquez et al., 2021). \(\sigma_{\rm form}\) reached a maximum during the transition to the Early-Disk Era. This peak in \(\sigma_{\rm form}\) reflects the competing effects of the galaxy continuing to become more massive (increasing \(v_{\rm tot,form}\)) but also starting to settle into a disk, which shifted an increasing share of \(v_{\rm tot,form}\) from dispersion into coherent rotation. \(\sigma_{\rm form}\) steadily decreased throughout the Early-Disk Era as the disk settled. This steady decrease in turbulence could occur only after the initial onset of the disk, when it could self-regulate to maintain a turbulent Toomre \(Q\sim 1\) in star-forming gas (for example Hopkins et al., 2012; Faucher-Giguere et al., 2013; Wisnioski et al., 2015; Krumholz et al., 2018; Gurvich et al., 2020; Orr et al., 2020). As such, equilibrium models imply that a gas disk maintains \(\sigma/v_{\phi}\approx M_{\rm gas}(<r)/M_{\rm tot}(<r)\approx f_{\rm gas}\), so \(\sigma\) decreased as \(f_{\rm gas}\) did. Finally, upon the transition to the Late-Disk Era, \(\sigma_{\rm form}\) reached a floor and remained nearly constant, reflecting the plateauing gas fraction and correspondingly steady and low turbulence and SFR during this era (Ma et al., 2017; Yu et al., 2021; Garrison-Kimmel et al., 2018; Hung et al., 2019; Gurvich et al., 2023). While \(\sigma_{\rm form}\) shows distinct trends in each era, \(\sigma_{\rm now}\) shows markedly different behavior, monotonically increasing with age across all eras. _Thus, the additional dispersion from post-formation dynamical heating, in combination with \(\sigma_{\rm form}\), produces the observed continuous increase in dispersion with stellar age; neither process alone explains it. This also means that \(\sigma_{\rm now}\) does not directly and simply reflect the formation history of the galaxy._ This difference is most evident for stars that formed in the Pre-Disk Era, which have the highest values of \(\sigma_{\rm now}\) today but formed with the smallest values of \(\sigma_{\rm form}\). By contrast, stars forming in the Early-Disk Era exhibit an age evolution that follows a similar slope at formation and at present, modulo a normalization offset. Finally, stars that formed in the Late-Disk Era have \(\sigma_{\rm now}\) increasing with age while their \(\sigma_{\rm form}\) is constant. We examine the age dependence of post-formation dynamical heating in detail in Section 3.4. In conclusion, the age dependence of \(\sigma_{\rm form}\) differed in the Pre-Disk, Early-Disk, and Late-Disk Eras, reflecting the distinct kinematics of these eras. At present, \(\sigma_{\rm now}\) increases monotonically with age, with a relation to \(\sigma_{\rm form}\) that differs for each era.
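As a toy numeric illustration of the equilibrium scaling invoked above, with made-up gas fractions and rotation velocities rather than FIRE-2 measurements:

```python
# Marginally stable (Toomre Q ~ 1) equilibrium disks satisfy, roughly,
# sigma / v_phi ~ f_gas, so sigma falls as the gas fraction declines.
# Illustrative values only:
for f_gas, v_phi in [(0.5, 120.0), (0.3, 180.0), (0.1, 230.0)]:
    sigma = f_gas * v_phi
    print(f"f_gas = {f_gas:.1f}, v_phi = {v_phi:5.1f} km/s -> sigma ~ {sigma:5.1f} km/s")
```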
#### 3.1.4 Rotational support versus age The ratio \(v_{\phi}/\sigma\) is a dimensionless measure of the degree of rotational support, that is, how 'disky' or 'dynamically cold' a disk is.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline galaxy name & \(M_{\rm star}^{90}\) & \(v_{\phi}\) & \(\sigma_{\rm Z}\) & \(\sigma_{\rm tot}\) & \(t_{\rm lb}\left[\left(v_{\phi}/\sigma_{\rm tot}\right)_{\rm form}>1\right]\) & \(t_{\rm lb}\left[\left(v_{\phi}/\sigma_{\rm tot}\right)_{\rm now}>1\right]\) & \(t_{\rm lb}\left[\left(v_{\phi}/\sigma_{\rm tot}\right)_{\rm now}>\sqrt{3}\right]\) & \(t_{\rm lb}^{\rm burst}\) \\
 & \([10^{10}\,{\rm M}_{\odot}]\) & [km/s] & [km/s] & [km/s] & [Gyr] & [Gyr] & [Gyr] & [Gyr] \\
\hline m12m & 10.0 & 288 & 16.7 & 50.1 & 9.21 & 6.90 & 7.68 & 3.81 \\
Romulus & 8.0 & 265 & 20.4 & 55.9 & 7.42 & 7.16 & 7.16 & 4.90 \\
m12b & 7.3 & 272 & 17.9 & 56.6 & 7.42 & 7.42 & 7.42 & 6.32 \\
m12f & 6.9 & 256 & 19.9 & 74.9 & 7.42 & 6.39 & 6.39 & 5.01 \\
Thelma & 6.3 & 230 & 21.2 & 62.8 & 4.35 & 4.35 & 4.25 & 2.57 \\
Romeo & 5.9 & 249 & 14.1 & 40.3 & 11.0 & 10.2 & 10.2 & 6.52 \\
m12i & 5.3 & 238 & 20.1 & 52.4 & 6.65 & 6.39 & 6.39 & 3.14 \\
m12c & 5.1 & 240 & 23.1 & 62.8 & 6.49 & 6.14 & 6.39 & 3.70 \\
m12w & 4.8 & 163 & 16.6 & 83.9 & 4.09 & 4.6 & 5.89 & 0 \\
Remus & 4.0 & 216 & 13.9 & 40.3 & 7.93 & 7.93 & 7.93 & 5.88 \\
Juliet & 3.3 & 211 & 16.6 & 45.6 & 4.35 & 4.61 & 4.61 & 4.40 \\
Louise & 2.3 & 177 & 14.6 & 37.5 & 7.16 & 6.9 & 7.16 & 5.56 \\
m12z & 1.8 & 92.6 & 21.4 & 67.6 & 0.51 & - & - & - \\
m12r & 1.5 & 127 & 17.4 & 47.1 & 5.9 & 0.26 & 2.3 & - \\
\hline sample mean & 6.0 & 236 & 18.0 & 56.1 & 7.19 & 6.76 & 6.98 & 4.31 \\
\hline
\end{tabular}
\end{table} Table 1: **Stellar properties at \(z=0\) of the FIRE-2 simulations of MW/M31-mass galaxies that we analyze**, in order of decreasing stellar mass. The first column lists the galaxy name: "m12" indicates an isolated galaxy; otherwise the galaxy is in a Local Group-like pair. \(M_{\rm star}^{90}\) is the stellar mass within \(R_{\rm star}^{90}\), the radius enclosing 90% of the stellar mass within 20 kpc. \(v_{\phi}\), \(\sigma_{\rm Z}\), and \(\sigma_{\rm tot}\) are the median azimuthal velocity, vertical velocity dispersion, and total velocity dispersion, respectively, of stars younger than 250 Myr at cylindrical radius \(R=6-10\) kpc and \(|Z|<3\) kpc from the midplane. The first 3 lookback times mark the onset of disk formation, defined as the oldest age/lookback time at which stars permanently passed above a dynamical threshold: \(v_{\phi}/\sigma_{\rm tot}>1\) at formation, \(v_{\phi}/\sigma_{\rm tot}>1\) measured at present, and \(v_{\phi}/\sigma_{\rm tot}>\sqrt{3}\) measured at present. The last column lists the lookback time when the galaxy transitioned from bursty to steady star formation, from Yu et al. (2021). The last row shows the average quantities for our sample (excepting m12r, m12z, and Juliet, see text).

Figure 2 (top) shows the ratio of median \(v_{\phi}\) to \(\sigma_{\rm Z}\) and \(\sigma_{\rm tot}\), versus stellar age, averaged across our 11 galaxies. We do not show the ratios with \(\sigma_{\rm R}\) or \(\sigma_{\phi}\), because they exhibit qualitatively similar trends as \(\sigma_{\rm tot}\), albeit with higher normalizations. In the Pre-Disk Era, \((v_{\phi}/\sigma_{\rm tot})_{\rm form}<1\) (by definition).
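The onset times in Table 1 follow a 'permanent exceedance' rule; a minimal sketch of that rule, with toy arrays rather than simulation output (`disk_onset_lookback` is an illustrative helper):

```python
import numpy as np

def disk_onset_lookback(t_lb, vphi, sigma_tot, threshold=1.0):
    """Oldest lookback time after which v_phi/sigma_tot stays above the
    threshold in every younger age bin ('permanent exceedance').
    `t_lb` must be ascending: youngest bin first, in Gyr."""
    ratio = np.asarray(vphi) / np.asarray(sigma_tot)
    fails = np.flatnonzero(ratio <= threshold)
    if fails.size == 0:
        return t_lb[-1]              # disky in every bin we can measure
    if fails[0] == 0:
        return None                  # not (yet) permanently above threshold
    return t_lb[fails[0] - 1]        # oldest bin of the unbroken disky run

# toy example: disky (ratio > 1) only for lookback times < ~7.4 Gyr
t = np.arange(0.125, 13.0, 0.25)
r = np.where(t < 7.4, 3.0 - 0.2 * t, 0.5)
print(disk_onset_lookback(t, r, np.ones_like(r)))   # -> 7.375
```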
When \(v_{\phi,\rm form}\) permanently exceeded \(\sigma_{\rm tot,form}\) (\(\approx 8\) Gyr ago on average), a galaxy transitioned into the Early-Disk Era, when successively younger populations were born with increasing \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) and \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\). After the galaxy transitioned into the Late-Disk Era (\(\approx 4\) Gyr ago on average), \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) grew only slightly, so stars during this era formed with the same degree of total rotational support. By contrast, \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\) continued to increase throughout the Late-Disk Era, with only slightly slower growth than in the Early-Disk Era, reflecting the more significant reduction in \(\sigma_{\rm Z,form}\) over time. Thus, we find more significant _vertical_ 'settling' during the Late-Disk Era. Relative to formation, stars have lower \(v_{\phi}/\sigma\) as measured today. To better understand how the post-formation change in \(v_{\phi}/\sigma\) depends on age, Figure 2 (bottom) shows the ratio \((v_{\phi}/\sigma)_{\rm form}/(v_{\phi}/\sigma)_{\rm now}\) versus age. This ratio indicates how 'disky' a stellar population formed relative to how disky it is today, from the combined effect of post-formation dynamical heating and coherent torquing. Younger stars, which formed in the Late-Disk Era, became increasingly less disky with age, as post-formation dynamical heating increased their dispersion. Stars older than \(\approx 2\) Gyr exhibit ratios near 2, meaning they are currently half as disky as when they formed. For stars that formed in the Early-Disk Era, this trend _reverses_, as the post-formation change in diskiness _lessens_ with age, and the oldest stars in this era are now equally as disky as when they formed. This arises because \((v_{\phi}/\sigma)_{\rm form}\) decreased with age _faster_ than the post-formation change in \(v_{\phi}/\sigma\) increased. Notably, the ratio with \(\sigma_{\rm tot}\) exhibits nearly identical behavior to the ratio with \(\sigma_{\rm Z}\), indicating that both '3D diskiness' and 'vertical diskiness' evolved after formation identically relative to their values at formation. Stars that formed in the Pre-Disk Era have values of \((v_{\phi}/\sigma)_{\rm form}/(v_{\phi}/\sigma)_{\rm now}\) that fluctuate around 1. Some stars have higher \(v_{\phi}/\sigma\) now than when they formed, from post-formation torquing by events like mergers. Also, dividing intrinsically small velocity values amplifies these fluctuations. ### Case Studies While Figures 1 and 2 show trends averaged across 11 galaxies, Figure 3 shows these trends for 3 individual galaxies: m12f (blue), m12m (pink), and Romeo (green). Romeo is one of our closest analogues to the MW today, residing in a LG-like pair and having a stellar mass of \(5.9\times 10^{10}\,{\rm M}_{\odot}\), similar to the MW. Furthermore, recent observational analysis suggests that the MW's disk began to form \(\approx 12\) Gyr ago (Belokurov and Kravtsov, 2022; Conroy et al., 2022; Xiang and Rix, 2022), and as we quantify further below, of all our galaxies, Romeo's disk began to form the earliest and thus nearest in time to this estimate for the MW, likely because it formed in a LG-like environment (Garrison-Kimmel et al., 2018; Santistevan et al., 2020).
Figure 3 (left) shows that Romeo's \(v_{\phi,\rm form}\) rapidly increased and its \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\) began to rise \(\approx 12\) Gyr ago; it transitioned into its Early-Disk Era \(\approx 11\) Gyr ago. Although Romeo transitioned the earliest, its evolution largely follows the same trends as most of our other galaxies.

Figure 1: **Median velocities and velocity dispersions for stars versus age**, within cylindrical \(R=6-10\) kpc and \(|Z|<3\) kpc at \(z=0\). Age bins have a width of 250 Myr. The lines show the average and the shaded regions show the standard deviation across 11 MW-mass galaxies, with velocities measured at \(z=0\) (solid) and at the time of formation (dashed, including only in-situ stars). The shaded vertical bars show when our galaxies transitioned from the Pre-Disk to the Early-Disk Era (\(\approx 8\) Gyr ago) and from the Early-Disk to the Late-Disk Era (\(\approx 4\) Gyr ago), on average; we define these eras in Section 3.1.1. **Top row**: Median velocity versus age. For most of the Pre-Disk Era, stars formed with little net rotation, but \(v_{\phi,\rm form}\) rapidly increased in the Early-Disk Era when the disk initially formed, and it increased at a lower rate in the Late-Disk Era when the disk further settled. Stars that formed in the Pre-Disk Era have \(v_{\phi,\rm now}>v_{\phi,\rm form}\) from dynamical torquing after formation, while stars that formed in the Late-Disk Era now have slightly smaller \(v_{\phi,\rm now}\) than at formation from post-formation dynamical heating. \(v_{\rm R}\) and \(v_{\rm Z}\) fluctuated at formation in the Pre- and Early-Disk Eras, because of frequent mergers, rapid accretion, bursty star formation, and gaseous outflows, but today they are 0 across all ages. \(v_{\rm tot,form}\) steadily increased at all times, but \(v_{\rm tot,now}\) is nearly independent of age, because all stars have been dynamically heated to orbit near the virial velocity. **Bottom row**: Velocity dispersion versus age. All 3 components and the total show the same trends with age. \(\sigma_{\rm form}\) increased throughout the Pre-Disk Era as the galaxy grew, peaking \(\approx 8\) Gyr ago, but decreased throughout the Early-Disk Era and remained constant throughout the Late-Disk Era. By contrast, \(\sigma_{\rm now}\) monotonically increases with stellar age because of the dynamical heating of stars after their formation. Thus, _the kinematics of stars today do not simply reflect their kinematics at formation._

m12f highlights the effect of major mergers on stellar kinematics. Its \(v_{\phi,\rm form}\) and \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\) began to increase \(\approx 10\) Gyr ago, during its transition to the Early-Disk Era. However, both decreased \(\approx 8\) Gyr ago, during a close galaxy flyby. After this, \(v_{\phi,\rm form}\) and \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\) increased again. m12f then experienced a major merger \(\approx 6.9\) Gyr ago. Although the dispersion was high during the merger, afterwards \(\sigma_{\rm form}\) sharply decreased while \((v_{\phi}/\sigma)_{\rm form}\) increased. Thus, a major merger led to a more rotationally-supported disk and may have triggered its transition to the Late-Disk Era.
This agrees with previous work that found that gas-rich mergers can trigger the settling of galactic disks, particularly if they occur when the potential is sufficiently deep to prevent outflows after bursts of merger-triggered star formation (for example Hopkins et al., 2009; Moreno et al., 2021; Santistevan et al., 2021; McElroy et al., 2022; He et al., 2023). At present, the high dispersion during the merger and the sharp decline after it remain discernible. Although the merger more dramatically impacts the in-plane components of the dispersion (notably, \(\sigma_{\phi}\) at present spikes from 60 km s\({}^{-1}\) to 150 km s\({}^{-1}\) and back down to 55 km s\({}^{-1}\) within 1 Gyr during the merger), for consistency, we display only the vertical component. Thus, the present-day kinematics of stars, specifically 'spikes' in the velocity dispersion versus age, can reveal past merger events, as we will explore in future work. After its disk began to settle, m12f underwent a smaller, prograde merger \(\approx 1.5\) Gyr ago that triggered a significant burst of star formation (Yu et al., 2021), increased \(\sigma_{\rm form}\), and decreased \((v_{\phi}/\sigma)_{\rm form}\). Thus, while certain mergers occurring in the Pre- or Early-Disk Eras increased the galaxy's rotational support, mergers occurring in the Late-Disk Era decreased the disk's rotational support. We next highlight m12m, whose evolution deviates from our other galaxies. m12m has the largest stellar mass at \(z=0\) in our sample, \(\approx 10^{11}\,{\rm M}_{\odot}\), more similar to M31. Furthermore, m12m underwent numerous early mergers, including two particularly massive mergers \(\approx 10\) and 9 Gyr ago, but it had a quiescent merger history over the past 8 Gyr. m12m has our second earliest transition into the Early-Disk Era: its \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) permanently surpassed unity \(\approx 9\) Gyr ago. Despite this, m12m has yet, at \(z=0\), to transition fully into its Late-Disk Era: its \(v_{\phi,\rm form}\) continues to increase (albeit only slightly), all but the vertical component of \(\sigma_{\rm form}\) have yet to become constant, and \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) continues to strongly increase. This may be because m12m began to develop a bar \(\approx 4.8\) Gyr ago, leading to a bar-buckling event that formed a significant X-shaped bulge composed primarily of younger stars (age \(<4.8\) Gyr) (Debattista et al., 2019). m12m's kinematics are also unique at present; most notably, its \(v_{\phi,\rm now}\) exhibits no significant decrease for old stars; instead, all ages have \(v_{\phi,\rm now}\gtrsim 160\) km s\({}^{-1}\), which is almost \(3\times\) faster than our sample's average, likely as a result of its multiple major mergers. Of our 3 case studies, m12m generally had the lowest \(\sigma_{\rm Z,form}\) for stars older than \(\approx 8\) Gyr, but at present m12m has the highest \(\sigma_{\rm Z,now}\) at almost all ages, because its stars have been more dynamically heated. Comparing these case studies provides useful context and caution for interpreting population-wide trends. First, the dispersion of young stars does not necessarily reflect the dispersion of older stars: across the three galaxies, the difference in \(\sigma_{\rm Z,now}\) is only a few km s\({}^{-1}\) for the youngest stars (age \(<250\) Myr), but jumps to 14 km s\({}^{-1}\) at 1 Gyr old and 57 km s\({}^{-1}\) at 6 Gyr old. In fact, the rate at which \(\sigma_{\rm now}\) increases with age varies between galaxies.
On a related note, how \(\sigma_{\rm form}\) compares between galaxies can differ from how \(\sigma_{\rm now}\) does. That is, stars of a certain age may now have larger/smaller dispersions than those in another galaxy despite forming with smaller/larger dispersions. For example, Romeo generally had the highest values of \(\sigma_{\rm Z,form}\) for stars older than 6 Gyr while m12m had the lowest. However, this ordering flips at present: today, \(\sigma_{\rm Z}\) in m12m is higher than in Romeo. Thus, the dispersion with which stars formed does not set their present dispersion, because the amount and age dependence of post-formation heating can differ significantly between galaxies. In contrast, \(v_{\phi}/\sigma_{\rm Z}\) provides a similar comparison between galaxies at both formation and present day. \((v_{\phi}/\sigma_{\rm Z})_{\rm now}\) maintains its relative ordering between galaxies from formation and even largely preserves the trends with age exhibited by each individual galaxy: Romeo still has the highest values for stars younger than \(\approx 10\) Gyr, m12f and m12m still have similar values for intermediate-aged stars, and m12f still has the lowest values for young stars. Although the differences between each galaxy's \(v_{\phi}/\sigma_{\rm Z}\) are less pronounced today than at formation, _present-day observations of \((v_{\phi}/\sigma)_{\rm now}\) (versus age) give a better representation of a galaxy's formation history than \(\sigma_{\rm now}\) and \(v_{\phi,\rm now}\) alone_. ### Comparing Velocity Components All three spatial components of velocity dispersion share similar trends with age, but we now compare them in detail. Figure 4 shows the radial, azimuthal, and vertical velocity dispersions, averaged across the 11 galaxies, at formation (left) and at present (right). Figure 4 (top row) compares the dispersions. The relative order of the components at formation is constant with age: \(\sigma_{\rm R,form}>\sigma_{\phi,\rm form}>\sigma_{\rm Z,form}\). At present, \(\sigma_{\rm R,now}\) dominates at all ages.

Figure 2: **Top**: The ratio of median \(v_{\phi}\) to \(\sigma\), that is, how rotationally supported or 'disky' the stars are, versus age. We average across 11 galaxies, and the shaded region shows the galaxy-to-galaxy standard deviation. The solid purple line shows \((v_{\phi}/\sigma_{\rm tot})_{\rm now}\), while the solid red line shows \((v_{\phi}/\sigma_{\rm Z})_{\rm now}\); the dashed light-purple and orange lines show the same ratios measured when the stars formed. In the Pre-Disk Era, stars formed with \(v_{\phi}/\sigma\sim 0\), but, once the disk began to form, \((v_{\phi}/\sigma_{\rm Z})_{\rm form}\) rapidly increased until today, so vertical settling continued until today. By contrast, \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) also initially increased rapidly during the Early-Disk Era, but it remained roughly constant in the Late-Disk Era. **Bottom**: \((v_{\phi}/\sigma)_{\rm form}/(v_{\phi}/\sigma)_{\rm now}\) for \(\sigma_{\rm tot}\) (dotted navy) and \(\sigma_{\rm Z}\) (solid red), which measures how 'disky' stars were when they formed relative to today. Both ratios have similar values and exhibit similar trends with age. Stars that formed in the Early- and Late-Disk Eras have ratios above 1 (and near 2 for most ages), because post-formation dynamical heating increased their dispersions. Pre-Disk stars have \((v_{\phi}/\sigma)_{\rm now}\approx(v_{\phi}/\sigma)_{\rm form}\approx 0\), with fluctuations from mergers/accretion and feedback.
However, \(\sigma_{\rm Z,now}\) matches \(\sigma_{\phi,\rm now}\) for stars that formed in the Early- and Pre-Disk Eras. Thus, the similarity between \(\sigma_{\rm Z,now}\) and \(\sigma_{\phi,\rm now}\) for intermediate-aged and old stars arises from vertical post-formation dynamical heating, not from formation. Measurements of \(\sigma_{\rm now}\) in the Solar neighborhood find \(\sigma_{\rm R}>\sigma_{\phi}>\sigma_{\rm Z}\) (for example Wielen, 1977; Dehnen and Binney, 1998; Nordstrom et al., 2004). While our results agree for stars that formed in the Late-Disk Era, we find that \(\sigma_{\phi,\rm now}\approx\sigma_{\rm Z,now}\) for stars that formed in the Early- and Pre-Disk Eras, potentially indicating that post-formation heating of \(\sigma_{\rm Z}\) is greater for older stars in our galaxies than in the MW. Figure 4 (second row) shows the ratio of each component's \(\sigma^{2}\) to the total \(\sigma^{2}\) at that age, which compares each component's contribution to the overall kinetic energy in dispersion. Because \(\sigma_{\rm tot}^{2}\equiv\sigma_{\rm R}^{2}+\sigma_{\phi}^{2}+\sigma_{\rm Z}^{2}\), the sum of this ratio over all three components is 1. Remarkably, the ordering and relative value of each component at formation are independent of age. That is, \(\sigma_{\rm form}\) was distributed across its 3 components the same way across the galaxy's entire formation history, with \(\sigma_{\rm R,form}^{2}\), \(\sigma_{\phi,\rm form}^{2}\), and \(\sigma_{\rm Z,form}^{2}\) accounting for \(\approx 51\%\), \(32\%\), and \(17\%\) of \(\sigma_{\rm tot,form}^{2}\), on average. This roughly constant partitioning across cosmic time is striking, given that it persists across the eras of disk formation, including the transition from dispersion-dominated to rotation-dominated orbits, the transition from bursty to steady star formation, and the abatement of galactic outflows. This constancy may indicate that radial outflows prevalent in the early galaxy had a similar relative impact on \(\sigma_{\rm R,form}^{2}\) as spiral arms in the later galaxy; however, we relegate testing this hypothesis to future work. Currently, \(\sigma_{\rm R,now}^{2}\) dominates at all ages, as Figure 4 (top) shows. However, unlike at formation, the relative contributions of \(\sigma_{\rm R,now}^{2}\) and \(\sigma_{\rm Z,now}^{2}\) are constant with age only for stars older than \(\approx 6\) Gyr, which formed in the Pre-Disk Era and the beginning of the Early-Disk Era. \(\sigma_{\rm R,now}^{2}\) still contributes \(\approx 50\%\) of the random kinetic energy of these old stars, while \(\sigma_{\phi,\rm now}^{2}\) and \(\sigma_{\rm Z,now}^{2}\) both contribute \(\approx 25\%\). For stars younger than 6 Gyr, the relative contribution of \(\sigma_{\rm R,now}^{2}\) increases for younger stars, even surpassing its contribution at formation, because young stars have been heated most strongly along their radial component, as we show below. In contrast, the relative contribution of \(\sigma_{\rm Z,now}^{2}\) decreases for younger stars, more directly reflecting the low \(\sigma_{\rm Z,form}\) of these young stars. Interestingly, stars of all ages have approximately 25% of their random kinetic energy from \(\sigma_{\phi,\rm now}^{2}\), slightly less than when they formed. We define \(\sigma_{0}\) as the dispersion of our youngest stars (age \(<250\) Myr at \(z=0\)).
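Before turning to the normalized trends, note that the kinetic-energy partition and the \(\sigma_{0}\) normalizations reduce to simple arithmetic; a minimal sketch with illustrative component dispersions (toy values, not the measured FIRE-2 averages):

```python
import numpy as np

# Toy per-age-bin component dispersions [km/s], young -> old (illustrative).
sigma_R   = np.array([25., 45., 70., 95.])
sigma_phi = np.array([18., 33., 50., 68.])
sigma_Z   = np.array([12., 25., 48., 66.])
sigma_tot = np.sqrt(sigma_R**2 + sigma_phi**2 + sigma_Z**2)

components = [("R", sigma_R), ("phi", sigma_phi), ("Z", sigma_Z)]

# fractional kinetic-energy partition sigma_i^2 / sigma_tot^2 (sums to 1)
for name, s in components:
    print(f"{name:3s} partition:", np.round(s**2 / sigma_tot**2, 2))

# the two normalizations used in Figure 4: absolute change (sigma - sigma_0)
# and fractional change (sigma / sigma_0), with sigma_0 the youngest bin
for name, s in components:
    print(f"{name:3s} sigma - sigma_0:", s - s[0], " sigma / sigma_0:", np.round(s / s[0], 2))
```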
To examine the degree of self-similarity in the evolution of the different velocity components, Figure 4 (third row) shows \(\sigma-\sigma_{0}\), the _absolute change_ in the velocity dispersion with age. All velocity components have more similar evolution in \(\sigma-\sigma_{0}\) than in \(\sigma\). This is most self-similar at formation, but it also holds to a lesser degree at present. In particular, \(\sigma_{\rm R,form}\) exhibits a greater absolute change only for intermediate-aged stars (\(\approx 6-10\) Gyr), whereas \(\sigma_{\rm R,now}\) has a greater absolute change at all ages, because \(\sigma_{\rm R,now}\) had the largest amount of dispersion added by post-formation dynamical heating for all but intermediate-aged stars, as we discuss below.

Figure 3: **Median azimuthal velocity and velocity dispersion of stars versus age for 3 galaxies**: m12f (blue), m12m (pink), and Romeo (green). Left shows velocities at the time of formation, while right shows velocities today. **Top**: Median azimuthal velocity, \(v_{\phi}\). Romeo is the earliest-settling galaxy in our sample; it exhibited rapid growth in \(v_{\phi}\) at formation \(\approx 2\) Gyr before either m12m or m12f. However, by \(z=0\) the later-settling m12m exhibits higher \(v_{\phi}\) at almost all ages, including its oldest stars. _Thus, kinematics at a given age as measured today do not necessarily indicate those at formation._ **Middle**: Vertical velocity dispersion, \(\sigma_{\rm Z}\). At formation, all of the galaxies show large fluctuating velocity dispersions for old stars, which peaked at different times, but this peak does not reflect itself in the current velocity dispersion versus age, at right. m12f underwent a major merger \(\approx 7\) Gyr ago, corresponding to a spike in the velocity dispersion in both panels. **Bottom**: Ratio of median azimuthal velocity to vertical velocity dispersion, as a measure of rotational support. Romeo is the most rotationally supported galaxy at all ages, measured both today and at formation, though m12m catches up for young stars today. Despite differing histories, all 3 galaxies have similar velocity dispersions for young stars today, although they are more separated in \(v_{\phi}/\sigma_{\rm Z}\).

Figure 4 (bottom row) shows \(\sigma/\sigma_{0}\), the _fractional change_ in the velocity dispersion with age, relative to today. At formation, this exhibits significant self-similarity, with all components showing near-identical evolution for stars that formed in the Early- and Late-Disk Eras. In particular, at the onset of the Early-Disk Era, stars formed with \(\approx 3\times\) the dispersion as young stars today for all velocity components. However, the striking self-similarity at formation does _not_ hold at present. Although all components evolve relatively similarly for ages \(0-3\) Gyr, the vertical component increasingly dominates for older stars, with the oldest stars having vertical dispersions \(\approx 7\times\sigma_{\rm Z,0}\). Because \(\sigma_{\rm Z}\) had similar fractional evolution at formation, its distinctive behavior at present must arise from a greater fractional amount of post-formation heating for stars that formed in the Early- and Pre-Disk Eras, which we explore next. ### Disk Settling versus Post-Formation Heating As we have shown, both disk settling and dynamical heating shape the relation between \(\sigma_{\rm now}\) and stellar age. We now quantify the relative importance of these two processes versus age.
Figure 5 shows the absolute and fractional impact of post-formation dynamical heating on the present-day radial, azimuthal, vertical, and total velocity dispersion of stars versus age. Most notably, both absolute and fractional post-formation dynamical heating follow distinct trends with age for stars that formed in each of our three kinematic eras.

Figure 5 (top) shows the difference between dispersion at present and formation, \(\sigma_{\rm now}-\sigma_{\rm form}\), versus age, which directly conveys the amount of dispersion added post-formation. We focus primarily on the total dispersion, noting that the radial and azimuthal components show similar behavior. However, we discuss the vertical dispersion separately below, because it shows qualitatively different behavior. For stars that formed in the Late-Disk Era, the amount of dispersion added post-formation increases with age, but at a declining rate, consistent with the idea that post-formation dynamical heating from scattering processes causes the observed increase in \(\sigma_{\rm now}\) with age (for example Spitzer & Schwarzschild 1951). However, stars that formed in the Early-Disk Era have been dynamically heated by the same total amount, \(\approx 50\) km s\({}^{-1}\), regardless of age. Considering the monotonic increase of \(\sigma_{\rm tot}\) in Figure 1, one may not intuitively expect this plateau. For example, stars that formed at the start of the Early-Disk Era (\(\approx 8\) Gyr ago) currently have \(\sigma_{\rm tot,now}\) that is almost 80 km s\({}^{-1}\) higher than stars born at the end of the Early-Disk Era (\(\approx 4\) Gyr ago), but have 5 km s\({}^{-1}\) _less_ \(\sigma_{\rm tot}\) added post-formation. Thus, _the monotonic increase of \(\sigma_{\rm tot,now}\) with age for stars that formed in the Early-Disk Era (\(\approx 4-8\) Gyr ago) arose not from the dependence of post-formation heating on age, but rather, from the amount of disk settling in \(\sigma_{\rm form}\) versus age._

Once a disk started to form, scattering with structures within the now well-defined disk plane likely drove most post-formation heating, though interactions with satellite galaxies likely also contributed. As stars were dynamically heated, they experienced larger radial and vertical oscillations. Larger vertical oscillations decrease the amount of time that stars spend near the midplane, and increase their vertical velocities when passing through the midplane. In turn, the ability of stars to be heated along the in-plane direction decreases as vertical oscillations increase. Similarly, stars with large radial eccentricities encounter spiral structure at different orbital phases; the effects of these varied encounters tend to average out, such that spiral-driven heating becomes progressively weaker for increasingly eccentric orbits (Binney & Tremaine 2008). Thus, stars that formed with low dispersions in the Late-Disk Era were heated less efficiently as they aged, which explains the near-exponential behavior of \(\sigma_{\rm tot,now}-\sigma_{\rm tot,form}\), as shown in the top row of Figure 5. Similarly, because stars that formed in the Early-Disk and Pre-Disk Eras had large dispersions at birth, they likely underwent little post-formation heating over the course of the Late-Disk and Early-Disk Eras (though old stars likely experienced significant heating within the Pre-Disk Era).

On the other hand, post-formation _vertical_ heating shows different trends than in-plane heating across the Late- and Early-Disk Eras.
The amount of \(\sigma_{\rm Z}\) added post-formation never plateaus but instead monotonically increases with age by \(\approx 7\) km s\({}^{-1}\) per Gyr. Therefore, _post-formation vertical heating is likely a steady, continuous process_, implying a different dynamical origin than in-plane post-formation heating. We will pursue a detailed analysis of the origin of in-plane and vertical heating in future work.

For stars that formed in the Pre-Disk Era, the amount of post-formation heating increases steadily with age for all components, including vertical. Older stars have more dispersion added post-formation, not only because they have had more time to be heated, but also because the early galaxy experienced the most frequent mergers, starbursts, and potential fluctuations that increased the dispersion of existing stars (for example El-Badry et al. 2018b). Likely, this phase accounted for most of the heating of stars that formed in the Pre-Disk Era. In later eras, more localized, temporally-stable processes primarily heated the stars.

Figure 4: **Comparing the evolution of the 3 components of velocity**: lines show the radial (orange), azimuthal (pink), and vertical (purple) velocity dispersion at formation (left) and today (right) versus age. We average across 11 galaxies, and the shaded regions show the galaxy-to-galaxy standard deviation. **Top row**: Velocity dispersion, \(\sigma\). At formation, all components exhibit constant ordering, with \(\sigma_{\rm R}>\sigma_{\phi}>\sigma_{\rm Z}\), and similar dependence on age. Today, \(\sigma_{\rm R}\) dominates at all ages, while \(\sigma_{\phi}\) and \(\sigma_{\rm Z}\) are near identical for older stars. However, \(\sigma_{\rm Z}\) falls below \(\sigma_{\phi}\) for stars that formed in the Late-Disk Era. **Second row**: Ratio of each component's \(\sigma^{2}\) relative to the total at that age, as a metric of the relative kinetic energy in each component. At both formation and today, \(\sigma_{\rm R}^{2}\) dominates the kinetic energy. At formation, the kinetic-energy partition is nearly independent of age, while today, it is nearly independent for ages \(>6\) Gyr, but because of post-formation heating, the importance of \(\sigma_{\rm Z}^{2}\) decreases while \(\sigma_{\rm R}^{2}\) increases for younger and younger stars. **Third row**: Absolute change in the velocity dispersion relative to \(\sigma_{0}\), the dispersion of the youngest stars (age \(<250\) Myr at \(z=0\)), for each component. At formation and today, all components are nearly self-similar, indicating that each component changes with age by about the same absolute amount (although at present, the radial component is slightly higher, because of its stronger post-formation heating). **Bottom row**: Velocity dispersion normalized by its value for the youngest stars, which quantifies the fractional change in \(\sigma\) over time. At formation, all components display similar values and age dependence, such that intermediate-age stars formed with 3 times higher \(\sigma\) than the youngest or oldest stars did. As measured today, \(\sigma_{\rm Z}\) exhibits the strongest fractional dependence on age, despite showing the weakest absolute dependence on age in the top panels, as a result of post-formation heating.

Figure 5 (bottom) shows the ratio of dispersion at present to dispersion at formation, \(\sigma_{\rm now}/\sigma_{\rm form}\), versus age, which quantifies the fraction of the present dispersion that existed at formation versus arose from post-formation heating.
Values of \(\sigma_{\rm now}/\sigma_{\rm form}>2\) indicate that post-formation heating contributes more to the current dispersion than the dispersion from formation, while values \(<2\) indicate that most of the dispersion was in place at the time of formation. The evolution of the total dispersion largely matches that of the radial and azimuthal components. \(\sigma_{\rm now}/\sigma_{\rm form}\) has an 'S'-shaped dependence on age, somewhat different than in the top panel.

Post-formation heating increases rapidly with age for stars that formed in the Late-Disk Era. This era's oldest stars have current (radial, vertical, and total) dispersions with approximately equal contributions from heating and formation. Furthermore, stars that formed \(\approx 4\) Gyr ago, during the transition to the Late-Disk Era, experienced the largest fractional increase from post-formation heating of all stars younger than 10 Gyr (for all but the vertical dispersion). These stars formed in a dynamically-settled thin disk with low dispersion and had more time to experience dynamical heating than stars that formed afterward.

However, strikingly, this dependence on age _reverses_ in the Early-Disk Era, when \(\sigma_{\rm now}/\sigma_{\rm form}\) decreases with age. As Figure 5 (top row) showed, the absolute amount of dispersion added post-formation is constant throughout this era. However, as Figure 1 showed, \(\sigma_{\rm form}\) increases with age throughout this era. Thus, the fractional contribution of \(\sigma_{\rm form}\) at birth became increasingly important with age in this era. Aside from stars younger than \(\approx 1\) Gyr, the oldest stars from the Early-Disk Era have the lowest dependence on post-formation heating, with only 40% of their current \(\sigma_{\rm tot}\) from post-formation heating, corresponding with their maximum \(\sigma_{\rm form}\) and comparatively low amount of post-formation heating. In essence, stars could not be dynamically heated efficiently during this Early-Disk Era, because they already were born with high dispersions.

For stars that formed in the Pre-Disk Era, \(\sigma_{\rm now}/\sigma_{\rm form}\) increases rapidly with age, because \(\sigma_{\rm form}\) decreases with age (given the galaxy's lower mass) while the amount of post-formation heating (and thus \(\sigma_{\rm now}\)) increases with age (from mergers, starbursts, and strong outflows). Thus, _the current total dispersion of stars is primarily from post-formation heating for stars older than \(\approx 11\) Gyr, and primarily inherited at formation for stars younger than 11 Gyr_.

The evolution of the vertical component largely resembles that of the other components for stars that formed in the Late-Disk Era. However, while \(\sigma_{\rm now}/\sigma_{\rm form}\) of the in-plane components noticeably decreases with age for stars that formed in the Early-Disk Era, the vertical component remains relatively constant near 2. In fact, \(\sigma_{\rm Z,now}/\sigma_{\rm Z,form}\approx 2\) for stars with ages between \(2-10\) Gyr, meaning that the current vertical dispersions of stars with ages \(2-10\) Gyr have equal contributions from post-formation heating and formation. Thus, _for most stars, the fractional impacts of vertical post-formation heating and vertical disk settling are about the same_. For stars younger than 2 Gyr, \(\sigma_{\rm Z,form}\) is more important, while for stars older than 10 Gyr, vertical post-formation heating is.
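The threshold of 2 used above follows directly from the definition of the ratio: \(\sigma_{\rm now}/\sigma_{\rm form}>2\) is equivalent to \(\sigma_{\rm now}-\sigma_{\rm form}>\sigma_{\rm form}\). A minimal sketch, with hypothetical values:

```python
import numpy as np

# Hypothetical per-age-bin dispersions (km/s) at formation and today.
sigma_form = np.array([20.0, 30.0, 80.0, 40.0])
sigma_now  = np.array([30.0, 65.0, 110.0, 120.0])

ratio = sigma_now / sigma_form

# ratio > 2  <=>  (sigma_now - sigma_form) > sigma_form:
# more dispersion was added after formation than was present at birth.
heating_dominated = ratio > 2.0
print(ratio, heating_dominated)
```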
In summary, the monotonic rise of the current velocity dispersion with stellar age arises from the _combined_ effects of _both_ cosmological disk settling and the dynamical processes that heat stars after they form. In understanding the origin of the current _shape_ of the relation between the total velocity dispersion and age, we conclude that: _(1) post-formation heating dominates the dispersion for stars that formed in the Late-Disk Era_ (age \(\lesssim 3.5\) Gyr), when the dispersion at formation was flat with age, (2) _disk settling dominates for stars that formed in the Early-Disk Era_ (age \(\approx 4-8\) Gyr), when post-formation heating was flat with age, and (3) _post-formation heating strongly dominates for stars that formed in the Pre-Disk Era_ (age \(\gtrsim 10\) Gyr). Overall, post-formation dynamical heating only dominates the current dispersion for stars that formed \(\approx 3\) Gyr before the disk started to form, or, equivalently, _the stellar velocity dispersion at present is primarily inherited at formation and not from post-formation dynamical heating_ for stars younger than \(\approx 10\) Gyr.

Our results agree with Yu et al. (2022), who also analyzed these FIRE-2 galaxies and found that most stars currently on 'thin-disk-like', 'thick-disk-like', or 'spheroid-like' orbits were born that way; that is, the type of orbit a star is currently on primarily reflects its formation orbit, not its post-formation evolution.

Figure 5: **Comparison between the velocity dispersion today versus at formation, which quantifies the amount of post-formation dynamical heating, versus age, for different components of velocity.** We average across 11 galaxies, and the shaded region shows the galaxy-to-galaxy standard deviation. **Top:** Absolute change in dispersion from formation to today. The amount of heating increases with age. However, for all components except \(\sigma_{\rm Z}\), the heating is constant for ages \(\approx 3-8\) Gyr; thus, intermediate-age stars were heated by a similar amount. **Bottom:** Ratio of dispersion today relative to that at formation, which shows an 'S'-shaped dependence on age. The dotted horizontal line at 2 divides stars whose current dispersion is primarily from formation (below) versus post-formation (above). The combination of small dispersions at birth and rapid post-formation heating leads to a steadily increasing ratio with age up to \(\approx 2\) Gyr. Intermediate-age stars have a dispersion today mostly from that at formation, because they formed when the intrinsic dispersion within the galaxy was highest. The oldest stars have the largest contribution from post-formation heating. _For stars younger than 10 Gyr, most of their dispersion was in place at formation_, though \(\sigma_{\rm Z}\) shows nearly equal contributions from formation and subsequent heating at ages \(4-8\) Gyr.

### Effective Rate of Post-formation Heating

Stars that formed in the Pre-Disk Era have the most dispersion added post-formation, as Figure 5 (top) shows. One might naively assume that the amount of heating simply scales with age, because older stars have had more time to be heated. However, Figure 5 (top) showed that the amount of post-formation heating does not necessarily increase with age. To provide more insight, we examine the 'effective' rate at which stars today experienced post-formation heating, on average, via \((\sigma_{\rm now}-\sigma_{\rm form})/{\rm age}\). This is not the actual instantaneous rate of increase in dispersion.
Rather, it represents the _average_ rate at which the dispersion increased since formation. Figure 6 shows this effective rate versus stellar age, for radial, vertical, azimuthal, and total dispersions.

First, we focus on the total dispersion: \((\sigma_{\rm tot,now}-\sigma_{\rm tot,form})/{\rm age}\). The youngest stars experienced the largest effective rate of increase, which rapidly decreases with age for stars that formed during the Late-Disk Era. For stars that formed in the Early-Disk Era, the effective rate was low, reaching a minimum near the transition to the Pre-Disk Era. This strong age dependence implies that, for stars that formed in a disk, most dynamical heating occurred shortly after birth, and their heating rate slowed as they became progressively dynamically hotter (excepting the vertical component). By contrast, stars that formed in the Pre-Disk Era had effective rates of heating that _increase_ with age, likely because more dramatic, galaxy-wide, non-equilibrium perturbations, like mergers and accretion, starbursts, and galactic winds, were more prominent.

Furthermore, for stars that formed in the Late-Disk Era, the increase in \(\sigma_{\rm R}\) strongly dominates over the other components, so radial heating dominates the total heating. For example, the youngest stars have effective rates near 10 km s\({}^{-1}\) Gyr\({}^{-1}\) for \(\sigma_{\phi}\) and \(\sigma_{\rm Z}\), but above 30 km s\({}^{-1}\) Gyr\({}^{-1}\) for \(\sigma_{\rm R}\), which may indicate that heating by spiral arms and/or bars dominates most early heating for stars forming in the Late-Disk Era. However, the effective radial rate decreases with age while the other components remain more constant. Thus, stars that formed in the Early- and Pre-Disk Eras have nearly identical rates for all components of \(\sigma\), indicating that _heating was more isotropic in the early galaxy_. Overall, _post-formation heating operated differently over cosmic time_. Any model of post-formation heating thus should treat early cosmic times (prior to the initial formation of the disk) and later cosmic times (after the formation of the disk) separately.

### Disk Kinematics versus Galaxy Mass

So far, we showed trends averaged across our 11 galaxies that are similar to the MW in stellar mass. Now, we analyze all 14 galaxies in our sample individually, including the lower-mass m12r, m12z, and Juliet, which expand the mass range (see Table 1 for details). Figure 7 shows how stellar kinematics depend on the current stellar mass (within \(R_{90}\)) of a galaxy. We focus on \(\sigma\) and \(v_{\phi}/\sigma\), as before. Although we do not show it, \(v_{\phi}\) increases strongly with stellar mass, which follows from the deeper gravitational potential of more massive galaxies (see also Hopkins et al., 2023). We show the isolated galaxies from the _Latte_ suite as circles and the galaxies in LG-like pairs from the _ELVIS on FIRE_ suite as stars.

Figure 7 (top panels) shows \(\sigma_{\rm Z}\) (top row) and \(v_{\phi}/\sigma_{\rm Z}\) (second row), for stars of all ages. If we exclude the two lowest-mass galaxies, m12r and m12z, whose disks have yet to or have just begun to settle, \(\sigma_{\rm Z}\) increases with stellar mass. This trend is remarkably strong among LG-like galaxies, with the exception of Thelma, which underwent the most major mergers and has a higher \(\sigma_{\rm Z}\) than galaxies of similar or higher mass. Unlike \(\sigma_{\rm Z}\), \(v_{\phi}/\sigma_{\rm Z}\) of all stars does not depend on mass at \(>2\times 10^{10}\) M\({}_{\odot}\).
Although we do not show it, we performed an identical analysis for \(\sigma_{\rm R}\) and \(\sigma_{\phi}\) and found similar trends. On average, galaxies in LG-like pairs have lower dispersions and more rotational support than isolated galaxies. These colder kinematics likely relate to their more extended disk sizes at present: LG-like galaxies have \(R_{90}^{*}\approx 1\) kpc larger than isolated galaxies for all stars and \(\approx 3\) kpc larger among stars younger than 250 Myr (Garrison-Kimmel et al., 2018; Bellardini et al., 2022). These larger sizes likely relate to these galaxies transitioning to the Early-Disk Era earlier than our isolated galaxies, on average, as we show below.

Figure 7 (middle panels) shows the same but for stars younger than 250 Myr, which reflects the current dynamical state of the ISM and star formation, less affected by the galaxy's integrated merger and heating histories. Now, \(\sigma\) does not depend on stellar mass, while \(v_{\phi}/\sigma\) increases modestly with stellar mass, which implies that more massive galaxies have more vertically settled gas disks. Figure 7 (bottom panels) shows the same but for \(\sigma_{\rm tot}\) of young stars, which exhibits broadly similar trends as each \(\sigma\) component: \(\sigma_{\rm tot}\) does not depend much on stellar mass while \(v_{\phi}/\sigma_{\rm tot}\) increases moderately, although more weakly than \(v_{\phi}/\sigma_{\rm Z}\). Thus, a galaxy's stellar mass has little effect on the current velocity dispersion of young stars in the solar annulus and at most a modest effect on \(v_{\phi}/\sigma\), for the masses we explore.

Figure 6: **The effective rate of dynamical heating since formation, \((\sigma_{\rm now}-\sigma_{\rm form})/{\rm age}\), versus stellar age**, for each component of velocity. We average across 11 galaxies, and the shaded region shows the galaxy-to-galaxy standard deviation. The youngest stars have the highest effective rate of heating, implying that the rate of dynamical heating was most rapid immediately after stars formed and became more gradual over time. Increases in \(\sigma_{\rm R}\) strongly dominate for stars that formed in the Late-Disk Era, but all components exhibit broadly similar effective rates for stars that formed in the Early- and Pre-Disk Eras.

We show the total dispersion of young stars also to compare our results against observations of the most massive galaxies in the LG. We use the dispersions of the youngest stars from Nordström et al. (2004) for the MW, Dorman et al. (2015) for M31 (Andromeda), and Quirk et al. (2022) for M33 (Triangulum). Although we use the youngest stellar sample in each work, their age cuts are not exactly the same: the maximum age of the sample is 1 Gyr for the MW, 30 Myr for M31, and 80 Myr for M33. We measure stars with age \(<250\) Myr in our simulations, which lies near the average of these age thresholds. Given that the effective rate of post-formation heating is largest immediately after formation, our simulations' values likely include a greater amount of dispersion from post-formation heating than the M31 and M33 values, but a lesser amount than the MW value.

Full 3D kinematics are only available for the MW; measurements of M31 and M33 are limited to line-of-sight dispersions, \(\sigma_{\rm LOS}\). Thus, to estimate a total 3D dispersion, we assume isotropy, such that \(\sigma_{\rm tot}^{2}\approx 3\times\sigma_{\rm LOS}^{2}\). This is a simplification, given that our simulated galaxies show \(\sigma\) that is not isotropic, with \(\sigma_{\rm R}>\sigma_{\phi}>\sigma_{\rm Z}\). However, here we simply aim to compare the observed galaxies against each other and explore whether our simulated galaxies have realistic kinematics for young stars. In McCluskey et al., in preparation, we will pursue a more rigorous comparison with these observations, including modeling the line-of-sight orientation of M31 and M33 and comparing the trend of stellar kinematics across the full measured age range by also modeling stellar age uncertainties.
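A minimal sketch of this isotropy-based conversion; the \(\sigma_{\rm LOS}\) values below are placeholders, not the published measurements from Dorman et al. (2015) or Quirk et al. (2022):

```python
import numpy as np

# Placeholder line-of-sight dispersions (km/s) for the two observed galaxies.
sigma_los = {"M31": 35.0, "M33": 16.0}

# Under isotropy, sigma_tot^2 ~ 3 * sigma_los^2, so sigma_tot ~ sqrt(3) * sigma_los.
sigma_tot = {name: np.sqrt(3.0) * s for name, s in sigma_los.items()}
print(sigma_tot)
```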
(Figure 2 in Sanderson et al. 2020 also compared three FIRE-2 galaxies--m12i, m12f, m12m--against these same observations of the MW and M31; but for M31, they compared the total dispersion in FIRE-2 against just the line-of-sight dispersion of M31. Therefore, our comparison here is more accurate and shows better agreement between FIRE-2 and M31.)

In terms of \(\sigma_{\rm tot}\), Figure 7 shows that young stars in FIRE-2 have \(\sigma_{\rm tot}\approx 10-50\) km s\({}^{-1}\) higher than in the MW and M33 but similar to M31. Young stars in LG-like galaxies have systematically lower \(\sigma_{\rm tot}\), many within \(\approx 10\) km s\({}^{-1}\) of the MW's \(\sigma_{\rm tot}\approx 25-30\) km s\({}^{-1}\). In terms of \(v_{\phi}/\sigma_{\rm tot}\), which is a better, dimensionless metric of 'diskiness' and degree of rotational support, we first emphasize that our observational compilation in Figure 7 (bottom panel) shows that _young stars in the MW are kinematically colder than those in both M31 and M33; thus, within the LG, the MW is a kinematic outlier._ As a result, young stars in our FIRE-2 galaxies have values similar to those in M31 and M33, but young stars in the MW are kinematically colder than those in FIRE-2. Thus, this tentative comparison indicates that young stars in FIRE-2 have statistically realistic/representative values of \(v_{\phi}/\sigma\), in agreement with M33 and M31, but are not as kinematically cold as young stars in the MW.

### Time of Disk Onset

Galaxies transitioned from their Pre-Disk to their Early-Disk Era at their 'time of disk onset', which marks the _start_ of the _continuous_ process of disk settling. Most of our galaxies transitioned \(4-8\) Gyr ago, although Romeo transitioned particularly early, \(\approx 11\) Gyr ago. We now examine the time of disk onset across our sample, including how it relates to a galaxy's stellar kinematics today.

We defined the transition to the Early-Disk Era as when all stellar populations that formed after this time also had \((v_{\phi}/\sigma_{\rm tot})_{\rm form}>1\). Of course, \(v_{\phi}/\sigma\) measured today can differ from its formation value. To examine how transition times derived using present-day kinematics compare to our fiducial formation-based times, we also measure the age when \((v_{\phi}/\sigma_{\rm tot})_{\rm now}\) permanently exceeds 1. Furthermore, we showed above that \(\sigma_{\rm Z}\) evolved somewhat differently post-formation than the other components, so we additionally test this by also finding the oldest age after which \((v_{\phi}/\sigma_{\rm Z})_{\rm now}\) permanently exceeds \(\sqrt{3}\), under the simplifying (and in detail incorrect) approximation of isotropy. Furthermore, for all of these metrics, we tested a range of threshold values, and while this shifts the exact lookback time that a galaxy transitioned, it has little effect on the relative ordering within our sample, which is our primary metric of interest.
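A minimal sketch of the 'permanently exceeds' criterion used in all of these metrics, with a hypothetical age profile; the function and values are illustrative, not our analysis pipeline:

```python
import numpy as np

def disk_onset_age(age, v_phi_over_sigma, threshold=1.0):
    """Return the oldest age after which v_phi/sigma permanently exceeds threshold.

    `age` must be sorted from youngest to oldest; returns None if the
    criterion fails even for the youngest stars.
    """
    above = np.asarray(v_phi_over_sigma) > threshold
    if not above[0]:
        return None                      # never permanently above the threshold
    if above.all():
        return age[-1]                   # above the threshold at every age
    return age[np.argmin(above) - 1]     # last age bin of the youngest contiguous run

# Hypothetical age bins (Gyr, youngest first) and v_phi/sigma_tot values.
age = np.array([0.5, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
vps = np.array([6.0, 5.0, 3.5, 2.0, 1.3, 0.8, 0.5])
print(disk_onset_age(age, vps, threshold=1.0))  # -> 8.0
```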
In summary, we use three thresholds to measure the time of disk onset that defines the transition from the Pre-Disk to the Early-Disk Era:

* \(v_{\phi}/\sigma_{\rm tot}>1\) at formation
* \(v_{\phi}/\sigma_{\rm tot}>1\) as measured today
* \(v_{\phi}/\sigma_{\rm Z}>\sqrt{3}\approx 1.73\) as measured today

Table 1 lists the lookback times/ages of disk onset for each galaxy. These three metrics give times within \(\approx 0.5-1\) Gyr of each other for most galaxies. Times based on \((v_{\phi}/\sigma_{\rm Z})_{\rm now}\) and \((v_{\phi}/\sigma_{\rm tot})_{\rm now}\) agree best with each other. Thus, the present vertical and total kinematics of stars provide similar insights into the disk's formation. However, lookback times from the kinematics at formation are generally earlier, because dynamical heating tends to decrease \(v_{\phi}/\sigma\) of stars post-formation. A few galaxies yield substantially different times for different metrics. Notably, formation kinematics indicate that m12r transitioned 5.9 Gyr ago, while current kinematics indicate that it settled only 0.26 Gyr ago, because m12r experienced a merger with an LMC-mass galaxy \(\approx 0.5\) Gyr ago, which significantly heated existing stars. This lends caution to using present-day kinematics to infer the exact timing of events in a galaxy's formation history.

Figure 7: **Present vertical velocity dispersion, \(\sigma_{\rm Z}\), and median \(v_{\phi}/\sigma_{\rm Z}\), versus stellar mass for 14 galaxies.** We show the isolated (m12) galaxies via circles, the LG-like pairs via stars, and observed galaxies via circled stars. **Top panels:** Measuring stars of all ages in the galaxy. The dispersion for our disk-dominated galaxies increases with mass, such that more massive galaxies have larger dispersions. All galaxies have approximately constant \(v_{\phi}/\sigma_{\rm Z}\) regardless of mass, excepting m12r. **Middle panels:** Measuring young stars with ages \(<250\) Myr. Here, \(\sigma_{\rm Z}\) does not correlate with mass, though \(v_{\phi}/\sigma_{\rm Z}\) shows a weak correlation, such that more massive galaxies have young stars on more circular orbits. **Bottom panels:** Same as above but measuring \(\sigma_{\rm tot}\), which shows similar trends as above, though the correlation for \(v_{\phi}/\sigma_{\rm tot}\) is weaker. Overall, galaxies in LG-like pairs tend to have lower dispersions and more rotational support at a given mass; this difference is strong for stars across all ages and weaker for young stars. We also show observations of young stars in the MW (Nordström et al. 2004), M31 (Dorman et al. 2015), and M33 (Quirk et al. 2022). Young stars in FIRE-2 galaxies have stellar velocity dispersions that are similar to M31 but moderately larger than the MW and M33, while their \(v_{\phi}/\sigma_{\rm tot}\) agree well with both M31 and M33 but are all significantly lower than the MW's. Importantly, young stars in the MW are more rotationally supported than in both M31 and M33.

As Figure 2 shows, once our galaxies transitioned to the Early-Disk Era, their stars formed on progressively cooler orbits as the disk continuously settled. If, after the time of disk onset, \((v_{\phi}/\sigma)_{\rm form}\) grew at a broadly similar rate in each galaxy, then galaxies that had earlier times of disk onset would have higher \(v_{\phi}/\sigma\) among their young stars today, simply from having more time to settle. Figure 8 tests this idea, showing the time of disk onset versus \(v_{\phi}/\sigma_{\rm Z}\) of stars younger than 250 Myr for our 14 galaxies.
The three rows show our three thresholds in \(v_{\phi}/\sigma\) for the onset of disk settling, which all show similar trends. We show isolated galaxies as circles and galaxies in LG-like pairs as stars, as in Figure 7. Figure 8 shows that the time of disk onset was \(\approx 4.5-8\) Gyr ago (\(z\approx 0.4-1\)) for the majority of our galaxies, although Romeo (see Section 3.2) has the earliest time of disk onset, \(\approx 10.2-11\) Gyr ago (\(z\approx 2\)). Figure 8 shows that galaxies that experienced earlier disk onset have young stars (age \(<250\) Myr) with larger \(v_{\phi}/\sigma_{\rm Z}\) today, that is, they currently form stars on dynamically cooler orbits. While we do not show it, we find similarly strong correlations between the time of disk onset and the average \(v_{\phi}/\sigma_{\rm Z}\) and \(v_{\phi}/\sigma_{\rm tot}\) for both all and young stars. Table 2 lists the Pearson correlation coefficients, which are close to 0.9, and their corresponding p-values, which are \(\sim 10^{-5}-10^{-6}\), indicating a strong correlation. We find similar results using instead the Spearman rank coefficients.

We also examined correlations with other galaxy properties. Surprisingly, the time of disk onset does not correlate much with the galaxy's current stellar mass, within our sample. It also does not correlate with any component of \(\sigma\), for either all or young stars. However, the time of disk onset _does_ correlate with the median \(v_{\phi}\) of all or young stars, with Pearson coefficients ranging between 0.75 and 0.85 for all stars and 0.73 and 0.80 for young stars. Despite the lack of correlation with \(\sigma\), each threshold has its highest correlations with \(v_{\phi}/\sigma\), _not_ with \(v_{\phi}\) alone, further supporting that \((v_{\phi}/\sigma)_{\rm now}\) is a more fundamental (and dimensionless) metric than \(\sigma_{\rm now}\) to compare different galaxies' formation histories. The strong correlation with the \(v_{\phi}/\sigma\) of young stars today is particularly striking, considering it only moderately correlates with \(v_{\phi}\) of young stars. This correlation implies that, after disk onset, these disks settled at relatively similar rates, in order to preserve rank ordering. In particular, after disk onset, mergers and internal processes tend not to significantly alter the overall Hubble-time-averaged rate of disk settling.

In general, galaxies in LG-like pairs experienced earlier disk onset than isolated galaxies. This follows from Section 3.6, where LG-like galaxies had lower \(\sigma\) and higher \(v_{\phi}/\sigma\) than isolated galaxies of similar (current) stellar mass, consistent with their larger radii (Bellardini et al., 2022). The earlier settling times for galaxies in LG-like environments follow from the fact that such galaxies assemble (form stars) earlier than those in isolated environments, which in turn follows from the fact that their dark-matter halos formed earlier (Garrison-Kimmel et al., 2019; Santistevan et al., 2020). Thus, galaxies in LG-like environments, despite ending up at similar stellar mass today as galaxies in isolated environments, formed stars earlier, their disks started to settle earlier, and their disks are radially larger and dynamically cooler today. This dependence on LG environment provides important insight into the assembly history of the MW (and M31) compared with other galaxies of similar mass today. As discussed in Section 3.6, young stars in the MW are dynamically colder than those in our galaxies.
If we naively extrapolate and place the MW's \(v_{\phi}/\sigma_{\rm tot}\approx 8\) along the relation in Figure 8, the trends within our simulated sample indicate that the MW disk began to settle earlier than our galaxies' disks (\(\gtrsim 11\) Gyr ago). Remarkably, this simple extrapolation agrees with recent estimates of the age of the MW's thick disk: recent works argue that its coherent rotation may have begun \(\approx 12-13\) Gyr ago (Conroy et al., 2022; Xiang & Rix, 2022). We postulate that this dependence on LG environment provides critical insight into this seemingly early assembly of the MW.

Figure 8: Degree of rotational support, \(v_{\phi}/\sigma_{\rm Z}\), for young stars (age \(<250\) Myr) versus the lookback time of disk onset. Circles show isolated (m12) galaxies, and stars show galaxies in LG-like pairs. We examine 3 metrics to determine the time that each galaxy's disk started to settle, which yield similar but occasionally different times and ordering between galaxies. **Top:** disk onset time defined as the oldest age after which \(v_{\phi}/\sigma_{\rm tot}\) at formation permanently exceeded 1. **Middle:** disk onset time defined as the oldest age after which \(v_{\phi}/\sigma_{\rm tot}\) as measured today permanently exceeded 1. **Bottom:** disk onset time defined as the oldest age after which \(v_{\phi}/\sigma_{\rm Z}\) as measured today permanently exceeded 1.8. Overall, using kinematics at formation versus today, or measuring different velocity components, all lead to similar times of disk onset. _All 3 metrics show that disks that started to settle earlier currently form stars on more rotationally supported orbits_, which implies that the MW, which is unusually dynamically cold today, started to settle unusually early.

### Impact of Cosmic-Ray Feedback on Stellar Kinematics

The fiducial FIRE-2 simulations that we examine do not include self-consistent cosmic ray (CR) injection and transport. However, CRs may be important in regulating the dynamical state of the ISM, and therefore the resultant stellar populations, because in the solar neighborhood the CR energy density is comparable to the gas thermal and turbulent energy densities (for example Boulares & Cox, 1990). Here, we briefly explore the effects of CR feedback from stars on the dynamical state of our MW-mass disks. Part of our motivation is that, as Figure 7 showed, none of our simulated galaxies are as kinematically cold as the MW today, although they are kinematically similar to M31 and M33. One proposed way to form kinematically colder disks in cosmological simulations is the inclusion of previously neglected feedback channels, including CRs, because they can provide a spatially and temporally smoother form of feedback and potentially launch cooler galactic winds.

Previous work using FIRE-2 simulations found that the inclusion of magnetic fields, anisotropic conduction, and viscosity has no appreciable effects on any bulk galaxy properties, but that the inclusion of CRs, within reasonable uncertainty for the treatment of the CR diffusion coefficient, can significantly alter galaxy properties (Su et al., 2017; Hopkins et al., 2020). FIRE-2 simulations with CRs include anisotropic CR transport via streaming, advection, and diffusion (with constant parallel diffusivity, \(\kappa=3\times 10^{29}\,{\rm cm^{2}\,s^{-1}}\)), CR cooling (hadronic and Compton, adiabatic, and streaming losses), CR injection in supernova shocks, and CR-gas coupling (see Chan et al., 2019, for details of the numerical implementation of CRs in these simulations).
Hopkins et al. (2020) showed in the FIRE-2 simulations that the primary effect of CR feedback is to prevent the accretion of halo gas into the galaxy, leading to lower star formation and stellar mass by \(\approx 2-4\times\) in MW-mass galaxies at \(z\lesssim 2\). Furthermore, Chan et al. (2021) found that the inclusion of CRs in FIRE-2 leads to lower gas velocity dispersions, which might suggest dynamically cooler disks, although they did not examine \(v_{\phi}/\sigma\). We show below that the _opposite_ is true for young stars when considering \(v_{\phi}/\sigma\), that is, the inclusion of CRs leads to dynamically hotter stellar disks.

Figure 9 shows the effect of CRs on present-day stellar kinematics versus age. We include only the isolated galaxies from the _Latte_ suite, which have versions with CR feedback. We also include the lower-mass galaxies m12r and m12z, because our goal is only to examine the differential effects of including CRs. Figure 9 (top) shows the current median azimuthal velocity, \(v_{\phi,\rm now}\), versus stellar age, averaged across the same galaxies in the CR simulations and the fiducial simulations. At all ages, galaxies with CR feedback have lower \(v_{\phi}\), with the largest absolute difference for the youngest stars, which arises primarily because of their lower stellar masses.

Figure 9 (middle) shows the present vertical velocity dispersion, \(\sigma_{\rm Z}\), versus age. The CR simulations have lower \(\sigma_{\rm Z}\) at all ages, except for the youngest stars, which may arise from the effects of CRs on the ISM and halo gas occurring over long timescales. We also examined \(\sigma_{\rm R}\), \(\sigma_{\phi}\), and \(\sigma_{\rm tot}\) and found qualitatively similar results. Overall, the inclusion of CR feedback leads to lower velocity dispersions in stars, consistent with analyses of gas in these simulations (Hopkins et al., 2020; Chan et al., 2021).

Figure 9 (bottom) shows the sample-averaged \(v_{\phi}/\sigma_{\rm Z}\) versus age. For most ages, stars in the CR simulations have slightly _less_ rotational support than in the simulations without CR feedback; the youngest stars show the most significant reduction, at \(\approx 30\%\). Thus, including CR feedback leads to fractionally dynamically hotter galaxies, despite their lower absolute velocity dispersions.

However, Figure 10 shows that CR feedback leads to galaxies with \(\approx 3\times\) lower stellar mass at present, as Hopkins et al. (2020) showed. Figure 10 (top row) shows \(v_{\phi}/\sigma_{\rm Z}\) for all stars versus stellar mass, while Figure 10 (bottom row) shows the same but for young stars (age \(<250\) Myr). In general, Figure 10 shows the same trends with stellar mass as for our fiducial simulations without CR feedback in Figure 7. Thus, the inclusion of CR feedback primarily acts to reduce a galaxy's stellar mass; its \(v_{\phi}\) and \(v_{\phi}/\sigma_{\rm Z}\) largely shift along the existing relation. This leads to galaxies that are slightly less disky, with slightly lower \(v_{\phi}/\sigma_{\rm Z}\). We conclude that the inclusion of CR feedback, at least as implemented in FIRE-2 and assuming a constant diffusion coefficient, does not lead to dynamically colder disks.

## 4 Summary and Discussion

### Summary of Results

We summarize the main results from our analysis of stellar disks across the formation histories of 14 MW-mass galaxies from the FIRE-2 simulations.
Most importantly, _the kinematics of stars today do not simply reflect their kinematics at formation, but neither do they simply reflect post-formation dynamical heating; both processes are important_. In particular, although the present-day dispersion of stars, \(\sigma_{\rm now}\), increases monotonically with age, \(\sigma_{\rm form}\) does not. We identified **Three Eras of Disk Formation**, when stars formed with different kinematics and experienced different degrees of post-formation dynamical heating:

**1) Pre-Disk Era** (typically \(\gtrsim 8\) Gyr ago): stars formed on dispersion-dominated orbits, with \((v_{\phi}/\sigma_{\rm tot})_{\rm form}<1\) and \(v_{\phi}\lesssim 20\) km s\({}^{-1}\). \(\sigma_{\rm form}\) increased over time, reflecting the deepening of the gravitational potential. Thus, the oldest stars formed with the _lowest_ dispersion. However, present-day dynamics versus stellar age do not reflect this trend: \(\sigma_{\rm now}\) increases monotonically with age, meaning that the oldest stars now have the _highest_ dispersion. Thus, post-formation dynamical heating is most important during this era. Furthermore, the typical \(v_{\phi,\rm now}\approx 60\) km s\({}^{-1}\), so the oldest stars often have been 'spun up' since formation. As a result of both of these trends, \((v_{\phi}/\sigma_{\rm tot})_{\rm now}\) remains below 1, similar to formation.

**2) Early-Disk Era** (typically \(\approx 4-8\) Gyr ago): began at the time of disk onset, when \(v_{\phi}/\sigma_{\rm tot}\) exceeded 1 and grew rapidly, and it ended when the growth of \(v_{\phi}/\sigma\) slowed significantly. Similarly, \(\sigma_{\rm form}\) peaked at the start of this era, and it decreased steadily throughout, reaching a minimum by the era's end. The amount of dispersion added by post-formation heating was constant with age throughout this era, so the slope of \(\sigma_{\rm now}\) with age is unchanged from that at formation. The majority of \(\sigma_{\rm now}\) for these stars was in place at formation, not added via post-formation dynamical heating.

**3) Late-Disk Era** (typically \(\lesssim 4\) Gyr ago): began when the growth of \((v_{\phi}/\sigma_{\rm tot})_{\rm form}\) slowed. Stars formed with nearly constant \(\sigma_{\rm tot,form}\). \(\sigma_{\rm tot,now}\) increases monotonically with age, because post-formation dynamical heating increases with age, and the oldest stars in this era have \(\sigma_{\rm now}\) with an equal contribution from post-formation heating and formation.

\begin{table}
\begin{tabular}{c c c c c}
\hline\hline
metric & \(v_{\phi}/\sigma_{\rm Z}\) young & \(v_{\phi}/\sigma_{\rm tot}\) young & \(v_{\phi}/\sigma_{\rm Z}\) all & \(v_{\phi}/\sigma_{\rm tot}\) all \\
 & corr. coeff. (p-value) & corr. coeff. (p-value) & corr. coeff. (p-value) & corr. coeff. (p-value) \\
\hline
\(v_{\phi}/\sigma_{\rm Z,now}>1.8\) & \(0.90~{}(1.3\times 10^{-6})\) & \(0.91~{}(8.1\times 10^{-6})\) & \(0.86~{}(6.7\times 10^{-5})\) & \(0.92~{}(3.3\times 10^{-6})\) \\
\(v_{\phi}/\sigma_{\rm tot,now}>1\) & \(0.91~{}(6.1\times 10^{-6})\) & \(0.86~{}(8.8\times 10^{-5})\) & \(0.87~{}(5.1\times 10^{-5})\) & \(0.93~{}(2.4\times 10^{-6})\) \\
\(v_{\phi}/\sigma_{\rm tot,form}>1\) & \(0.89~{}(1.7\times 10^{-5})\) & \(0.89~{}(2.4\times 10^{-5})\) & \(0.81~{}(4.0\times 10^{-4})\) & \(0.88~{}(3.0\times 10^{-5})\) \\
\(t_{\rm burst}\) & \(0.78~{}(3.0\times 10^{-3})\) & \(0.70~{}(0.011)\) & \(0.55~{}(0.067)\) & \(0.63~{}(0.028)\) \\
\hline\hline
\end{tabular}
\end{table}
Table 2: Correlations and p-values between the degree of ordered motion of stars and different lookback times of galaxy transitions. The first three rows are the lookback times of the onset of disk settling, using three different metrics, while the fourth row is the lookback time corresponding to the galaxy's transition from bursty to steady star formation from Yu et al. (2021).
**We examined the evolution of the three different components of velocity.** At formation, \(\sigma_{\rm R,form}>\sigma_{\phi,\rm form}>\sigma_{\rm Z,form}\) for all ages. Remarkably, across all eras and ages, the relative amount in each component is relatively unchanged. At present, \(\sigma_{\rm R,now}\) continues to dominate at all ages, but \(\sigma_{\rm Z,now}\approx\sigma_{\phi,\rm now}\) for stars that formed in the Early- and Pre-Disk Eras (ages \(\gtrsim 4\) Gyr). Post-formation dynamical heating primarily increased \(\sigma_{\rm R}\) at ages \(\lesssim 4\) Gyr but acted nearly isotropically for older stars.

**We compared our 14 simulated galaxies to observations of 3 galaxies in the Local Group**, compiling measurements of \(\sigma\) and \(v_{\phi}/\sigma\) for young stars (age \(\lesssim 1\) Gyr) in the MW, M31, and M33. The MW is significantly kinematically colder than both M31 and M33. For young stars, all of our simulations are dynamically hotter than the MW but exhibit kinematics similar to those of M33 and M31.

**We examined the dependence of stellar kinematics on galaxy stellar mass.** \(\sigma_{\rm now}\) of all stars increases with galaxy stellar mass, but \((v_{\phi}/\sigma)_{\rm now}\) of all stars is nearly constant. Young stars (age \(<250\) Myr) exhibit opposite trends: \(\sigma_{\rm now}\) is independent of mass, while \((v_{\phi}/\sigma)_{\rm now}\) increases with mass.

**We quantified the time of disk onset**, the lookback time that each galaxy's disk began to settle, using thresholds in \(v_{\phi}/\sigma\), measured both today and at formation. The degree of rotational support of young stars today correlates strongly with the time of disk onset: earlier-settling galaxies currently form colder disks. Most of our galaxies began to settle \(\approx 4-8\) Gyr ago, with galaxies in LG-like environments generally starting to settle earlier than isolated galaxies. Romeo, a galaxy in an LG-like pair, began to settle the earliest, \(\approx 10-11\) Gyr ago, making it likely the closest analog to the MW in this respect.

**We examined the impact of including self-consistent cosmic-ray injection/transport from stars**, assuming a constant diffusion coefficient, which in the FIRE-2 simulations leads to galaxies with lower stellar masses, lower rotational velocities, and lower dispersions. Overall, the inclusion of cosmic rays does not significantly change \(v_{\phi}/\sigma\) at a given stellar mass.
### Discussion

#### 4.2.1 Caveats

We first discuss key limitations and caveats to our analysis of 14 MW-mass galaxies from the FIRE-2 simulations. First, we reiterate the selection function of our simulated sample. We selected isolated or LG-like paired dark-matter halos from dark-matter-only simulations at \(z=0\) with \(M_{200}\approx 1-2\times 10^{12}\,\rm M_{\odot}\), with no additional selection based on galaxy properties or merger/formation history, etc. Thus, we did not choose our galaxies specifically to be analogues of the MW (or M31 or M33). As such, our galaxy-averaged results represent randomly sampled formation histories of MW-mass galaxies, which should fairly sample the cosmological scatter in formation histories.

Figure 9: **Effects of cosmic-ray feedback on disk dynamical history**: current stellar kinematics versus age, averaged across 8 isolated MW-mass galaxies simulated with fiducial FIRE-2 physics (solid) and additionally including cosmic ray (CR) injection and transport from stars (dotted), assuming a constant effective diffusion coefficient \(\kappa=3\times 10^{29}\,\rm cm^{2}\,s^{-1}\). Shaded region shows the galaxy-to-galaxy standard deviation. Top: median azimuthal velocity, \(v_{\phi}\). The addition of CR feedback reduces overall star formation, leading to lower-mass galaxies with lower velocities at all ages. Middle: vertical velocity dispersion, \(\sigma_{\rm Z}\). Given their lower stellar mass, the CR simulations have lower velocity dispersions, though the CR simulations converge to the fiducial simulations for the youngest stars. Bottom: median \(v_{\phi}/\sigma_{\rm Z}\), as a measure of rotational support and 'diskiness'. Given the reduction of both \(v_{\phi}\) and \(\sigma_{\rm Z}\) in the CR simulations, \(v_{\phi}/\sigma_{\rm Z}\) is similar at nearly all ages for the fiducial and CR simulations. For the youngest stars, the CR simulations are fractionally dynamically hotter and less rotationally supported. We find similar trends for the other velocity components and the total dispersion.

Figure 10: **Effects of cosmic-ray feedback on current stellar \(v_{\phi}/\sigma_{\rm Z}\) versus stellar mass**, across 8 isolated MW-mass galaxies simulated with fiducial FIRE-2 physics (circles) and additionally with CR injection and transport (triangles). CR feedback leads to galaxies with \(\gtrsim 3\) times lower stellar mass. Top row: \(v_{\phi}/\sigma_{\rm Z}\) for stars of all ages. The CR simulations have lower \(\sigma_{\rm Z}\) but also lower stellar masses, leading them to land on a similar relation in \(v_{\phi}/\sigma_{\rm Z}\). Bottom row: same but for young stars with ages \(<250\) Myr. CR simulations have slightly lower \(v_{\phi}/\sigma_{\rm Z}\) for 7 of the galaxies, though again, they remain on a similar scaling relation. Adding CR feedback does not lead to dynamically colder disks; it primarily reduces the stellar mass formed and moves galaxies along a similar relation.

Second, these FIRE-2 simulations do not model all of the physics that may be relevant for disk formation, in particular, AGN feedback from black holes.
The degree to which AGN may impact stellar dynamics in the solar annuli of MW-mass galaxies (which in this work we define as \(R=6-10\) kpc and \(|Z|<3\) kpc) is unclear: likely, AGN feedback is most relevant for more massive galaxies and most significantly alters stellar dynamics in the innermost regions (Mercedes-Feliz et al., 2023; Wellons et al., 2023), but given stellar radial redistribution and AGN feedback's probable impact on quenching and global galaxy properties such as stellar mass and SFR, its effects may impact our results.

Additionally, our analysis of birth kinematics has limitations. We determined a star particle's position and velocity relative to the galaxy's major axes, which are stable and well-defined at present, but not necessarily at earlier times, when accretion and mergers were more significant and even the notion of a single main galaxy progenitor can be ill-defined (Wechsler et al., 2002; Giocoli et al., 2007; Santistevan et al., 2020). Thus, cleanly separating the \(R\), \(\phi\), and \(Z\) components is difficult at early times. However, all components of the dispersion at formation display similar trends as \(\sigma_{\rm tot}\), which is independent of the axes' determination, implying that this limitation's impact on our overall qualitative results is likely minor.

Furthermore, as we discussed in Section 2.2, our simulations store snapshots every \(20-25\) Myr, so our star 'formation' properties include \(\approx 10\) Myr of post-formation dynamical evolution, on average. This early heating can be significant. Using an idealized simulation of an isolated MW-like galaxy with sub-parsec resolution (0.05 pc), Renaud et al. (2013) found that stars that formed with \(\sigma_{\rm tot,form}\approx 10\) km s\({}^{-1}\) were heated to 15 km s\({}^{-1}\) in just 10 Myr. Similarly, our Figure 6 shows that \(\sigma_{\rm tot}\) most rapidly increased post-formation by \(\approx 30\) km s\({}^{-1}\) in \(\sim 125\) Myr. We tested whether this early evolution could change our conclusions about the relative importance of 'formation' dispersion versus post-formation heating. For stars younger than 10 Gyr, \(\sigma_{\rm tot,now}/\sigma_{\rm tot,form}<1.95\) for all but \(\approx 3.5\) Gyr old stars, meaning that post-formation heating is less than 95% of their \(\sigma_{\rm tot,form}\). If we assume that stars were heated at the same effective rate as the youngest stars in Figure 6 for \(\approx 12\) Myr, our revised \(\sigma_{\rm tot,now}/\sigma_{\rm tot,form}\) still remains below 1.95 for the same ages. Even if we assume an initial effective heating rate that is \(4\times\) larger, \(\sigma_{\rm tot,now}/\sigma_{\rm tot,form}\) remains less than 2 for the same ages. Thus, our results about the relative importance of post-formation heating remain unchanged.

Lastly, our analysis did not account for uncertainties in stellar age. Stellar ages are difficult to measure accurately, and uncertainties on stellar ages for large populations are generally \(\gtrsim 20-30\)% (for example Soderblom, 2010), though the synergy of astrometric and spectroscopic surveys with asteroseismological data has driven tremendous progress in the past decade (for example Silva Aguirre et al., 2018; Mackereth et al., 2019).
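To illustrate how such age uncertainties can 'smear' trends with age (discussed next), consider the following toy sketch; all values are made up, and for simplicity we treat the dispersion as a deterministic function of true age:

```python
import numpy as np
rng = np.random.default_rng(0)

# Hypothetical mock population: true ages (Gyr) and a rising dispersion-age relation.
age_true = rng.uniform(0.0, 12.0, 100_000)
sigma = 15.0 * np.exp(age_true / 8.0)  # illustrative per-star dispersion proxy (km/s)

# Perturb ages by 25% Gaussian uncertainties, mimicking observational errors.
age_obs = age_true + 0.25 * age_true * rng.standard_normal(age_true.size)

bins = np.linspace(0.0, 12.0, 13)
true_rel = [sigma[(age_true >= lo) & (age_true < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
obs_rel  = [sigma[(age_obs  >= lo) & (age_obs  < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]

# Binning by noisy ages mixes populations across bins, flattening the inferred
# relation and biasing exponential fits toward longer e-folding times.
print(np.round(true_rel, 1))
print(np.round(obs_rel, 1))
```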
Previous works showed that the inclusion of observationally-motivated age uncertainties of \(20-30\)% in simulations can (1) obscure features from mergers in the age-velocity dispersion relationship (Martig et al., 2014; Buck et al., 2020), and (2) decrease inferred exponential heating rates via the 'smearing' of age bins (Aumer et al., 2016). In McCluskey et al., in preparation, we will examine the kinematic trends of our simulations versus age at \(z=0\), incorporating realistic age uncertainties, which will allow us to compare to observational trends in much more detail.

#### 4.2.2 Disk Galaxy Formation in FIRE Simulations

As we discussed in the Introduction, our work builds upon a series of analyses of FIRE simulations that studied the formation histories of MW-mass galaxies, in particular, the co-evolution of their stellar and gaseous dynamics, morphology, star formation burstiness, metallicity spatial variations, and virialization of the inner CGM (Ma et al., 2017; Garrison-Kimmel et al., 2018; Yu et al., 2021; Stern et al., 2021; Bellardini et al., 2022; Gurvich et al., 2023; Hafen et al., 2022; Trapp et al., 2022; Yu et al., 2022; Hopkins et al., 2023). We further quantified the notion that MW-mass disk galaxy formation occurred in distinct phases, showing that stellar kinematics, both at formation and at present, follow broadly contemporaneous evolutionary phases. In particular, what we define as the transition from the Early-Disk to the Late-Disk Era coincides with the virialization of the inner CGM (Stern et al., 2021), the transition from bursty to steady star formation (Yu et al., 2021), and a transformation in the energetics and angular momentum of both accreted gas (Hafen et al., 2022) and the ISM (Gurvich et al., 2023). During this stage of steady star formation in the Late-Disk Era, \(\sigma_{\rm form}\) was low and constant, with the gas turbulence reaching a floor maintained by the thermal and turbulent pressure of the warm neutral/ionized medium (for example Leroy et al., 2008), stellar feedback, and spiral arms.

Yu et al. (2021) quantified the lookback time that star formation transitioned from bursty to steady (the 'burst time', \(t_{\rm b}\)) for 12 of the 14 galaxies that we analyze. Table 1 lists each galaxy's \(t_{\rm b}\) from Yu et al. (2021). The transition to steady star formation typically occurred \(\approx 2\) Gyr after our 'time of disk onset', when stellar \(v_{\phi}/\sigma_{\rm tot}>1\). Nominally, this implies that star formation remained bursty for a while during the Early-Disk Era, in agreement with our finding that \(\sigma_{\rm form}\) remained high (but declining) throughout this era. However, our times of disk onset are based on stellar dynamics within the solar annulus today (\(R=6-10\) kpc and \(|Z|<3\) kpc) and only moderately correlate with \(t_{\rm b}\), having Spearman coefficients between \(0.75-0.78\). By contrast, Yu et al. (2021) included all stars that formed within 20 kpc physical of the galactic center. If instead we extend our radial range to \(R<12\) kpc, this yields times that (1) are on average 1.2 Gyr later than our original times, because of the hotter kinematics of the inner region, and (2) more strongly correlate with \(t_{\rm b}\), with Spearman coefficients between 0.94 and 0.96 for our three dynamical thresholds. Thus, these estimates for when a galaxy's disk began to form/settle can depend on the selection of stars today, with stars in the solar neighborhood yielding earlier settling times.
We will investigate radial trends in these simulations in future work.

Using the same FIRE-2 simulations, Yu et al. (2022) argued that they can subdivide their bursty star-formation era into two eras: an early bursty and chaotic phase, when stars formed with irregular morphology, and a subsequent bursty phase when stars formed with disky morphology. These bursty star-formation eras generally correspond to our Pre-Disk and Early-Disk Eras. Yu et al. (2022) also studied how both the birth and present-day orbits of stars reflect the era when they formed, classifying stars as belonging to the thin-disc, thick-disc, or isotropic spheroid based on their orbital circularity at both formation and at \(z=0\). They found that, in general, a star's orbital classification at formation reflects the era in which it formed: stars primarily formed on isotropic spheroid orbits in the early bursty star-formation phase, thick-disc orbits in the later bursty star-formation phase, and thin-disc orbits in the steady star-formation phase. However, Yu et al. (2022) noted that this is not a one-to-one mapping: \(\approx 34\)% of stars that formed with thin-disc orbits now have thick-disc orbits, while a similar fraction of stars that formed with thick-disc orbits now have isotropic spheroid orbits. Furthermore, the fraction of stars that were born 'thin' but are now 'thick' increases with age. Our results agree with theirs, and we further quantify the degree of post-formation heating/perturbations that stars experienced.

#### 4.2.3 Comparison to Other Cosmological Zoom-in Simulations

Various works have used different cosmological zoom-in simulations to study the formation histories of MW-mass galaxies (for example Bird et al., 2012; Brook et al., 2012; Martig et al., 2014; Grand et al., 2016; Buck et al., 2020; Agertz et al., 2021; Bird et al., 2021). Across more than a decade of progress, and across a wide range of numerical implementations of hydrodynamics, star formation, and feedback, such cosmological simulations consistently agree that the dynamical history of a MW-mass disk arises from _both_ cosmological disk settling and post-formation dynamical heating; one cannot neglect one or the other. As we discuss below, most current-generation zoom-in simulations of MW-mass galaxies also broadly agree in both the _typical_ velocity dispersion of young stars at \(z\approx 0\) and the shape/magnitude of the stellar age dependence, as measured today.

Our results agree with the VINTERGATAN simulation (Agertz et al., 2021), which used the same initial conditions as m12i but simulated with the adaptive mesh refinement code RAMSES (Teyssier, 2002). Their \(\sigma_{\rm Z,now}\) versus age (Figure 14 of Agertz et al., 2021) is nearly identical to ours for m12i, both in value and shape: both of our simulations show near-exponential increases in \(\sigma_{\rm Z,now}\) with age for stars younger than \(6\) Gyr and a jump at \(6\) Gyr. Our results also agree well with the NIHAO-UHD suite (Buck et al., 2020): their \(5\) galaxies, which span \(M_{\rm star}=1.5-16\times 10^{10}\,\rm M_{\odot}\), have \(\sigma_{\rm Z,now}\approx 20-35\,\rm km\,s^{-1}\) for young stars, and \(\approx 60-120\,\rm km\,s^{-1}\) for stars \(12\) Gyr old, similar to ours. Differences in the exact age dependence of each galaxy's dispersion follow from its unique formation history. Grand et al. (2016) analyzed \(16\) MW-mass galaxies from the Auriga simulations, also finding broadly similar \(\sigma_{\rm Z,now}\) and \(\sigma_{\rm Z,form}\) as we do.
Similarly to FIRE, NIHAO-UHD, and VINTERGATAN, Auriga simulations have typical \(\sigma_{\rm Z,now}\approx 20-25\,\rm km\,s^{-1}\). They also concluded that cosmological disk settling was generally the primary effect that set the relation between the velocity dispersion and the age of stars today. However, Grand et al. (2016) found that bars were the strongest contributor to vertical dynamical heating. This seemingly contradicts our results, because stars in _all_ of our galaxies had significant post-formation increases to \(\sigma_{\rm Z}\), including stars in unbarred or weakly-barred galaxies. Furthermore, even our galaxies that do host significant bars near \(z=0\) (such as m12m) did not at early times, and our results indicate that heating is most significant for stars that formed in the Pre-Disk Era. Although we do not study the effect of bars in this work, Ansar et al. (in preparation) will quantify the incidence of bars in the FIRE-2 simulations, and in future work we will address the drivers of dynamical heating in our simulations, including bars. While it is possible that the bar heated these old stars more recently, bars most likely do not heat _only_ old stars.

Our results may differ from those of Grand et al. (2016) because of numerical differences. First, the Auriga simulations in Grand et al. (2016) had baryonic mass resolution of \(4\times 10^{4}\,\rm M_{\odot}\), \(8-10\) times larger than ours, and they used lower spatial resolution, with gravitational force softenings of \(375\) pc physical for stars and gas (which is comparable to the thin-disk scale height of the MW), while our FIRE-2 simulations use \(2.7-4\) pc for star particles and \(\approx 1\) pc minimum and \(\sim 40\) pc ISM-averaged for gas cells. Auriga also uses the subgrid ISM model from Springel & Hernquist (2003), treating the ISM as a two-phase medium with an effective equation of state that inhibits the formation of dense gas, and star formation occurs at much lower density, \(n>0.13\) cm\({}^{-3}\). Thus, Auriga simulations do not resolve the dense ISM, including important dynamical structures like GMCs. Finally, the modeling of stellar feedback in Auriga is markedly different: supernovae launch 'wind particles' that are temporarily decoupled from the hydrodynamics until the particle encounters sufficiently low-density gas, which does not couple that feedback directly to dense gas in a star-forming region. That said, the Auriga simulations do model certain physics that these FIRE-2 simulations do not include, most importantly AGN feedback.

Grand et al. (2016) also found that spiral arms did not considerably heat the disk, while we proposed that spiral arms are likely responsible for the significant increase in \(\sigma_{\rm tot,now}\) of young stars, in part because \(\sigma_{\rm R}\) dominates the overall dispersion (see also Orr et al., 2022). However, Grand et al. (2016) only considered _vertical_ heating. Because the Auriga simulations do not resolve GMCs, spiral arms likely have negligible effects on the vertical dispersion, but may have more significant impacts on the in-plane dispersions. For example, while the FIRE-2 simulations resolve GMCs, young stars in FIRE-2 have vertical effective heating rates that are \(\approx 4\times\) lower than their radial rates, as Figure 6 showed.
Most previous works only examined the vertical velocity dispersion, but because vertical post-formation dynamical heating operates differently than total and in-plane heating, and because we find that the vertical dispersion typically is the smallest, the vertical dispersion does not describe the total heating.

As we discussed above, nearly all current cosmological zoom-in simulations form MW-mass galaxies with typical \(\sigma_{\rm Z,now}\gtrsim 20\) km s\({}^{-1}\), not as dynamically cold as the MW today. However, Bird et al. (2021) analyzed a single MW-mass galaxy, h277, selected from the \(g14\) suite of simulations (Christensen et al., 2012), generated using the parallel \(N\)-body+SPH code GASOLINE (Wadsley et al., 2004). That simulation had \(\sigma_{\rm Z,now}<10\) km s\({}^{-1}\) for stars younger than \(1\) Gyr, similar to the MW (for example Nordstrom et al., 2004; Mackereth et al., 2019). The more MW-like kinematics of h277 most likely reflect its more MW-like formation history: its disk began to settle early, \(\approx 9\) Gyr ago, and it had no major mergers since \(z\sim 2\), similar to estimates for the MW's history (for example Belokurov et al., 2018; Helmi et al., 2018; Conroy et al., 2022; Xiang & Rix, 2022). That said, one of our galaxies, Romeo, has a similar time of disk onset and merger history as h277 but has a larger \(\sigma_{\rm Z}\approx 15\) km s\({}^{-1}\) for young stars today. Therefore, differences in modeling the ISM, star formation, and/or stellar feedback also may contribute: for example, Hopkins et al. (2012) and Agertz et al. (2013) show how the structure of the ISM can vary depending on the feedback model. Although the baryonic mass resolution, modeling of dense ISM, and molecular-based star-formation criteria are broadly similar between our FIRE-2 simulations and h277, we note key numerical differences: h277 used SPH for the hydrodynamics solver (instead of MFM); used larger gravitational force softening of \(173\) pc for all particle species; used the 'blastwave' feedback model for core-collapse supernovae, in which cooling is turned off for gas within the blast-wave radius to mimic the adiabatic expansion of a remnant (Stinson et al., 2006); used only thermal energy injection for white-dwarf (Type Ia) supernovae; and did not include stellar winds or radiative feedback. By contrast, FIRE-2 accounts for energy and momentum injection for both core-collapse and white-dwarf supernovae, and includes stellar winds, radiation pressure, and photo-ionization. However, we again emphasize that both \(\sigma_{\rm Z,form}\) and \(\sigma_{\rm Z,now}\) exhibited similar trends with age in h277 and our FIRE-2 simulations.

#### 4.2.4 Connections to the Early Formation of the MW

As we discussed in Sections 3.6 and 3.7, the cold kinematics of MW stars may indicate that the MW's disk began to settle unusually early in its history. For example, using our predicted result that the kinematics of young stars today correlate with when the disk began to settle, and placing the MW's value of \(v_{\phi}/\sigma_{\rm tot}\approx 8\) along the relation from our simulations in Figure 8, this alone implies that the MW's time of disk onset was \(\gtrsim 11\) Gyr ago, which agrees with recent estimates from observations (for example Belokurov & Kravtsov, 2022; Conroy et al., 2022; Xiang & Rix, 2022). Thus, our analysis of disk settling times in cosmological simulations and its relation to present-day kinematics complements and augments these works.
This also suggests that the FIRE-2 simulations do not produce a galaxy as kinematically cold as the MW today primarily because no galaxy in our sample started to settle as early as the MW. We now discuss connections with some of these recent observational works.

Conroy et al. (2022) combined Gaia astrometry and H3 Survey spectroscopy to posit that the MW disk formed in three dynamical eras, broadly consistent with our Pre-Disk, Early-Disk, and Late-Disk Eras. Conroy et al. (2022) proposed that the MW began in a 'simmering phase', during which the star-formation efficiency (SFE) was low and stars formed kinematically hot, but that \(\approx 12-13\) Gyr ago the MW transitioned into a 'boiling phase', during which the SFE strongly increased and stars formed with increasingly 'disky' dynamics, marking the 'birth of the Galactic disk'. This thick disk continued to grow and settle in the 'boiling phase' for \(\approx 3-4\) Gyr (from \(z\approx 4\) to \(z\approx 1\)) until the Gaia-Sausage-Enceladus (GSE) merger, after which the star-formation efficiency decreased and stars formed in a dynamically colder, thin disk until today. Similarly, Belokurov & Kravtsov (2022) combined Gaia astrometry with spectroscopy from the APOGEE Data Release 17. Although Belokurov & Kravtsov (2022) lacked empirical age estimates, their general picture agrees with ours: old, metal-poor, in-situ stars were born in a 'messy' phase, then the MW 'spun up' as stars rapidly became metal-rich and rotationally supported earlier than \(\approx 8\) Gyr ago. Belokurov & Kravtsov (2022) posited that this rapid formation of the MW's disk took place over \(1-2\) Gyr, with the GSE merger occurring after the initial formation of a thick disk, such that the merger heated disk stars onto halo-like orbits. On the other hand, Xiang & Rix (2022) conclude that the formation of the in-situ halo and thick disk overlapped, that is, stars formed on halo-like orbits for \(\approx 2\) Gyr after the initial formation of the thick disk. In this picture, the GSE merger then occurred \(\approx 11\) Gyr ago, \(1-2\) Gyr earlier than previous estimates, and did not strongly heat pre-existing disk stars, or cause a transition to a thin disk, but instead enhanced thick-disk formation.

Our results help connect these present-day observations to understanding the MW's formation history. In particular, _the presence of coherent rotation in a stellar population today does not necessarily indicate the presence of coherent rotation at the time of formation_. Observations show that older, more metal-poor stars in the MW rotate slower than younger stars, but they still display some coherent rotation. In our simulations, this is not primarily the result of these Pre-Disk stars being born with coherent rotation; rather, it tends to arise from post-formation torquing (see Figure 1), for example, caused by mergers (see Bellardini et al., in preparation). m12m provides an illustrative example: it experienced several major mergers at early times, broadly similar to estimates of when the GSE merger occurred. As Figure 3 shows, m12m has old stars with \(v_{\phi,\rm now}\approx 200\) km s\({}^{-1}\), despite having \(v_{\phi,\rm form}\approx 0-50\) km s\({}^{-1}\). Thus, mergers like the GSE merger could have torqued old stars in the MW onto more coherently-rotating orbits. Ultimately, the MW's kinematics reflect its individual formation history.
While the MW provides a useful benchmark, the fact that most current cosmological simulations form galaxies with hotter kinematics than the MW may speak to the MW's early settling time rather than to limitations of the simulations. Indeed, the comparison of the MW to M31 and M33 in Figure 7 affirms that the MW may be an outlier among galaxies at its mass. Therefore, future works that study the kinematics of stars and their relation to stellar ages in external galaxies will provide excellent opportunities to place the MW in a statistical context, to study the combination of cosmological disk settling and dynamical heating across a range of formation histories (Dorman et al., 2015; Leaman et al., 2017; Quirk et al., 2022).

## Acknowledgements

We thank Jonathan Stern for helpful comments. FM and AW received support from: NSF via CAREER award AST-2045928 and grant AST-2107772; NASA ATP grant 80NSSC20K0513; HST grants AR-15809, GO-15902, GO-16273 from STScI; and a Scialog Award from the Heising-Simons Foundation. CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; by CXO through grant TM2-23005X; and by the Research Corporation for Science Advancement through a Cottrell Scholar Award. We performed some of this work at the Aspen Center for Physics, supported by NSF grant PHY-1607611. We ran simulations using: XSEDE, supported by NSF grant ACI-1548562; Blue Waters, supported by the NSF; Frontera allocations AST21010 and AST20016, supported by the NSF and TACC; Pleiades, via the NASA HEC program through the NAS Division at Ames Research Center.

## Data Availability

All of the Python code that we used to generate these figures is available at [https://fmccluskey.github.io](https://fmccluskey.github.io), which uses the publicly available package [https://bitbucket.org/awetzel/gizmo_analysis](https://bitbucket.org/awetzel/gizmo_analysis) (Wetzel & Garrison-Kimmel, 2020). The FIRE-2 simulations are publicly available (Wetzel et al., 2022) at [http://flathub.flatironinstitute.org/fire](http://flathub.flatironinstitute.org/fire). Additional FIRE simulation data is available at [https://fire.northwestern.edu/data](https://fire.northwestern.edu/data). A public version of the Gizmo code is available at [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html).
2309.01120
Double Clipping: Less-Biased Variance Reduction in Off-Policy Evaluation
"Clipping" (a.k.a. importance weight truncation) is a widely used variance-reduction technique for counterfactual off-policy estimators. Like other variance-reduction techniques, clipping reduces variance at the cost of increased bias. However, unlike other techniques, the bias introduced by clipping is always a downward bias (assuming non-negative rewards), yielding a lower bound on the true expected reward. In this work we propose a simple extension, called $\textit{double clipping}$, which aims to compensate this downward bias and thus reduce the overall bias, while maintaining the variance reduction properties of the original estimator.
Jan Malte Lichtenberg, Alexander Buchholz, Giuseppe Di Benedetto, Matteo Ruffini, Ben London
2023-09-03T09:10:10Z
http://arxiv.org/abs/2309.01120v1
# Double Clipping: Less-Biased Variance Reduction in Off-Policy Evaluation

###### Abstract.

"Clipping" (a.k.a. importance weight truncation) is a widely used variance-reduction technique for counterfactual off-policy estimators. Like other variance-reduction techniques, clipping reduces variance at the cost of increased bias. However, unlike other techniques, the bias introduced by clipping is always a downward bias (assuming non-negative rewards), yielding a lower bound on the true expected reward. In this work we propose a simple extension, called _double clipping_, which aims to compensate this downward bias and thus reduce the overall bias, while maintaining the variance reduction properties of the original estimator.

Keywords: off-policy evaluation, OPE, inverse propensity scoring, IPS, clipping

## 1. Introduction

Off-policy evaluators are a crucial component in the development of many real-world recommender systems. They allow us to estimate the performance of a new _target_ recommendation policy based on interaction data logged from a different _logging_ policy (for instance, the current production recommender), thereby reducing the need to run slow and costly A/B tests. Many counterfactual off-policy estimators are based on the inverse propensity scoring (IPS) principle (Bouquet et al., 2009; Chen et al., 2010; Chen et al., 2011; Chen et al., 2012). Given a stochastic logging policy and some mild assumptions, IPS-based estimators are unbiased, but often suffer from high variance. This is true even at industrial-scale data sizes, in particular if the logging policy is close to being deterministic. Intuitively speaking, most IPS estimators contain propensity ratio weights of the form \(w=p_{\text{target}}/p_{\text{logging}}\), where \(p_{\text{target}}\) is a target propensity (e.g., the probability that the target policy recommends a particular action to the user) and \(p_{\text{logging}}\) is the logging propensity (e.g., the probability that the logging policy recommended that same action to the user). These ratios can become arbitrarily large for small logging propensities, which then leads to high variance in the overall estimate.

The literature has proposed various variance-reduction techniques for IPS-style estimators, including weight clipping (Bouquet et al., 2009; Chen et al., 2011; Chen et al., 2012), self-normalization (Kang et al., 2013), doubly-robust estimators (Bouquet et al., 2009; Chen et al., 2011; Chen et al., 2012), as well as generalizations of those ideas (Bouquet et al., 2009; Chen et al., 2011; Chen et al., 2012). In this article we revisit weight clipping, which is still used extensively due to its simplicity (it does not require a reward model) and its generality (it is readily applicable to IPS-style estimators used in more complex real-world applications, such as ranking (Bouquet et al., 2009; Chen et al., 2011) or slate recommendation (Kang et al., 2013), where self-normalized or doubly-robust estimators are not available or difficult to implement). The basic idea of weight clipping is to simply avoid large propensity weight ratios by (hard-)clipping the ratios by a constant upper bound \(U\), which is usually treated as a hyper-parameter for the estimation procedure.
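The following minimal NumPy sketch (ours, not from the paper) shows the mechanics of the propensity ratio weights and of hard clipping; the propensities and rewards are made-up placeholders, so it only illustrates how a few tiny logging propensities can dominate the unclipped estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
p_logging = rng.uniform(0.01, 1.0, size=n)  # hypothetical logging propensities
p_target = rng.uniform(0.01, 1.0, size=n)   # hypothetical target propensities
rewards = rng.binomial(1, 0.1, size=n)      # non-negative (binary) rewards

w = p_target / p_logging                    # propensity ratio weights
print("largest weight:", w.max())           # occasionally huge -> high variance
print("unclipped average:", np.mean(rewards * w))
print("clipped at U=10:  ", np.mean(rewards * np.minimum(w, 10.0)))
```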
Just like other variance-reduction techniques, the clipping procedure effectively reduces the variance of the IPS estimator at the cost of introducing a bias. Unlike other techniques, however, the bias introduced by clipping is always pessimistic. In other words, on average, the estimator underestimates the true expected reward (under the technical assumption that rewards are always non-negative), as illustrated in Figure 1(a). In this work, we exploit this property of the clipping bias, so as to obtain more accurate estimates. Specifically, we clip the propensity ratios from both sides rather than just from above, thereby potentially correcting pessimistic underestimates with optimistic overestimates. Experiments with synthetic data show that this approach leads to a reduction in MSE.

## 2. Background and related work

We focus on off-policy estimation in the standard contextual multi-armed bandit setting, but we note that our work is applicable to counterfactual learning-to-rank (Rendle, 2015; Rene, 2016) and slate recommendation (Lichtenberg et al., 2017).

### Off-policy evaluation in the contextual bandit.

Consider a contextual bandit setting, where a stochastic _logging policy_ \(\pi_{0}(y|x)\) (e.g., the currently deployed recommender system) repeatedly selects an _action_ \(y\in\mathcal{Y}\) based on a given _context_ \(x\sim P(\mathcal{X})\) (e.g., user history, action features, etc.). The system then observes a non-negative _reward_ \(r\sim R(x,y)\geq 0\), which depends on the action that was selected and the context. The system does not observe the rewards for any action that was not selected by the logging policy. After \(n\) rounds, the logged data set is given by \(\mathcal{D}=\{(x_{i},y_{i},r_{i},\pi_{0}(y_{i}|x_{i}))\}_{i=1}^{n}\), where \(r_{i}\) is the observed reward and \(\pi_{0}(y_{i}|x_{i})\) is the _propensity_ (i.e., probability) of action \(y_{i}\) to be selected by the logging policy for context \(x_{i}\). The goal of off-policy evaluation is to estimate the expected reward of a new _target_ policy \(\pi\), given by \(R(\pi)=\mathbb{E}_{x\sim P(\mathcal{X})}\mathbb{E}_{y\sim\pi(\cdot|x)}\mathbb{E}_{r\sim R(x,y)}[r]\), based on the logging data set \(\mathcal{D}\). The challenge is that the logged data only contains rewards for actions selected by \(\pi_{0}\), which may be different from those selected by \(\pi\). Thus, we are faced with a counterfactual estimation problem.

### Counterfactual off-policy estimators.

The standard inverse propensity scoring (IPS) (Bang et al., 2015; Bang et al., 2016; Lichtenberg et al., 2017; Lichtenberg et al., 2017) estimate for the contextual bandit problem is given by

\[\hat{R}_{\text{IPS}}(\pi)=\frac{1}{n}\sum_{i=1}^{n}r(x_{i},y_{i})\frac{\pi(y_{i}|x_{i})}{\pi_{0}(y_{i}|x_{i})}=\frac{1}{n}\sum_{i=1}^{n}r(x_{i},y_{i})w(x_{i},y_{i}). \tag{1}\]

Figure 1. Comparison of clipped IPS (cIPS, blue) and doubly-clipped IPS (dcIPS, orange) in a synthetic bandit experiment (detailed setup described in Section 5). Figure 1(a): Mean (solid line) and corresponding standard error bands of reward estimates across 100 repetitions as a function of clipping constants \(U\) (for both cIPS and dcIPS) and \(L=U\) (only for dcIPS). The dashed red line shows the true reward of the target policy, i.e., the estimation target. The dotted grey line shows the average logging reward observed in the data set. Figure 1(b): mean squared error (MSE) between reward estimate and true target reward across 100 repetitions. The dashed lines show the variance components, the dotted lines show the squared-bias components for both estimators.
The estimator \(\hat{R}_{\text{IPS}}(\pi)\) is an unbiased estimator of \(R(\pi)\), given the overlap assumption: \(\pi_{0}(y|x)>0\) whenever \(\pi(y|x)>0\). To satisfy the overlap assumption, the logging policy is usually randomized, leading to the following dilemma. Too much randomization can degrade user experience, but too little randomization leads to high variance in IPS off-policy estimation: little randomization means that some propensity values \(\pi_{0}(y_{i}|x_{i})\) are tiny, which in turn leads to the occasional huge weighting factor \(w(x_{i},y_{i})\).

A widely used technique to reduce the variance of the standard IPS estimator is to simply clip (some authors also say "truncate" or "trim") large importance weight ratios. Specifically, we use the clipped IPS estimator (cIPS) that clips the entire ratio, that is,

\[\hat{R}_{\text{cIPS}}(\pi,U)=\frac{1}{n}\sum_{i=1}^{n}r(x_{i},y_{i})\min\{w(x_{i},y_{i}),U\}, \tag{2}\]

where \(U\geq 1\) is the _upper clipping constant_.

## 3. Clipped IPS is always downward biased

Clearly, for \(U=\infty\), the clipped IPS estimator (Eq. 2) is equivalent to the un-clipped IPS estimator from Eq. 1. With non-negative rewards, decreasing \(U\) effectively reduces variance, at the cost of a downward bias, as illustrated in Figure 1(a). The following proposition confirms this intuition about the downward bias.

Proposition 3.1.: _Let \(w(x,y)>0\) \(\forall x,y\); then the bias of \(\hat{R}_{\text{cIPS}}(\pi,U)\) is given by (proof in the Appendix)_

\[\text{Bias}(\hat{R}_{\text{cIPS}}(\pi,U))=\mathbb{E}_{x}\mathbb{E}_{y\sim\pi}\bigg{[}\underbrace{\mathbf{1}_{\{w(x,y)>U\}}}_{\text{Only clipped records}}\underbrace{\left(\frac{U}{w(x,y)}-1\right)}_{\text{Always }<0}\underbrace{\mathbb{E}_{r}\big{[}r(x,y)|x,y\big{]}}_{\text{Expected reward}}\bigg{]}. \tag{3}\]

If the clipping constant \(U\) is higher than the highest attainable propensity weight ratio \(w(x,y)\) across all requests, then the clipped IPS estimator essentially becomes the standard, unbiased IPS estimator. As soon as the clipping constant becomes "active" in the sense that it starts clipping propensity weight ratios, the bias is always strictly negative assuming non-negative rewards (ignoring the trivial case in which all clipped requests have zero expected reward).

Many machine learning practitioners are happy to accept a small bias to reduce the variance of their estimators. Ideally, one would like to remove the bias from the variance reduction. However, this is difficult because often neither the sign nor the magnitude of the bias can be inferred from the variance-reduction method. In the case of the cIPS estimator, however, Proposition 3.1 showed that the bias introduced is always negative (assuming non-negative rewards). This raises the question of whether we can exploit this property to find a less bias-inducing variance-reduction method for off-policy estimation. In the following section we introduce a somewhat naive, yet effective, method to do so.

## 4. Two-sided double clipping

We define the _two-sided double-clipping IPS_ (dcIPS) estimator as

\[\hat{R}_{\text{dcIPS}}(\pi,U,L)=\frac{1}{n}\sum_{i=1}^{n}r(x_{i},y_{i})\max\left\{\min\{w(x_{i},y_{i}),U\},\frac{1}{L}\right\}, \tag{4}\]

where \(U\geq 1\) is the upper clipping constant and \(L\geq 1\) is the lower clipping constant.
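The three estimators (Eqs. 1, 2 and 4) reduce to a few lines of NumPy. The sketch below is ours, and the function and argument names are illustrative, not taken from any released code.

```python
import numpy as np

def ips(r, w):
    """Standard IPS (Eq. 1): average of reward times propensity ratio."""
    return np.mean(r * w)

def cips(r, w, U):
    """Clipped IPS (Eq. 2): ratios clipped from above at U >= 1."""
    return np.mean(r * np.minimum(w, U))

def dcips(r, w, U, L):
    """Double-clipped IPS (Eq. 4): clip above at U and below at 1/L."""
    return np.mean(r * np.maximum(np.minimum(w, U), 1.0 / L))
```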
The dcIPS estimator subsumes the cIPS estimator; both estimators are equivalent for \(L\to\infty\). On the other extreme, for both clipping constants approaching \(1\), the dcIPS estimator converges to the mean of rewards logged in the data set:

\[\mathbb{E}\left[\hat{R}_{\text{dcIPS}}(\pi,U,L)\right]\to R_{\text{logging}}\text{ for }U,L\to 1. \tag{5}\]

This is illustrated in Figure 1(a), where the dcIPS estimate (orange line) converges to the true logging reward (grey dotted line). This allows the intuitive interpretation of dcIPS as an estimator that regularizes towards the mean reward of the logging policy, with the strength of the regularization determined by both clipping constants \(U\) and \(L\). Under this regularization perspective, it makes sense to shrink the weights towards a positive constant (\(1\) in this case) rather than to \(0\), because all weights are known to be positive (Krishnan et al., 2017).

Proposition 4.1.: _Let \(w(x,y)>0\) \(\forall x,y\); then the bias of the dcIPS estimator with clipping constants \(U\) and \(L\) is given by_

\[\text{Bias}(\hat{R}_{\text{dcIPS}}(\pi,U,L))=\mathbb{E}_{x}\mathbb{E}_{y\sim\pi}\left[\left(\underbrace{\mathbf{1}_{\{w(x,y)>U\}}\left(\frac{U}{w(x,y)}-1\right)}_{\text{Always }\leq 0,\text{ only depends on }U}+\underbrace{\mathbf{1}_{\{w(x,y)L<1\}}\left(\frac{1}{w(x,y)L}-1\right)}_{\text{Always }\geq 0,\text{ only depends on }L}\right)\underbrace{\mathbb{E}_{r}\left[r(x,y)|x,y\right]}_{\text{Expected reward}}\right]. \tag{6}\]

Equation 6 shows that the two clipping constants contribute separately, and in opposing directions, to the overall bias of the dcIPS estimator. In other words, we can try to tune the lower clipping constant \(L\) so as to compensate the bias introduced by the upper clipping constant \(U\).

## 5. Off-Policy Evaluation Experiments

The synthetic experiments demonstrate that dcIPS is able to compensate the bias introduced by cIPS and can lead to lower estimation errors overall. We used a synthetic data setting (explained in detail in the Appendix), where we collect logging data \(\mathcal{D}\) from a linear stochastic logging policy that plays a multi-armed bandit environment for \(n=300\) rounds. Based on \(\mathcal{D}\), we estimate the expected reward of a new target policy using clipped IPS evaluators with different clipping constants. For dcIPS, we choose the heuristic of moving \(U\) and \(L\) in unison (i.e., treating them as a single hyper-parameter), but more sophisticated methods to select \(U\) and \(L\) should be investigated.

We show the distribution of reward estimates (Fig. 1(a)) and estimation error components (Fig. 1(b)) as a function of the clipping constants. The figures are best interpreted in conjunction and going from right to left on the x-axis: for large clipping constants (\(U=L=100\)), both cIPS and dcIPS are basically equivalent to the unclipped IPS estimator: they are unbiased but show high variance. As the clipping constants decrease, the variances of both estimates (dashed lines in Fig. 1(b)) decrease monotonically, whereas the biases (dotted lines in Fig. 1(b)) increase. The lower clipping of dcIPS compensates some of the large bias suffered by the cIPS evaluator (for a given point on the x-axis, both estimators use the same upper clipping constant \(U\), and thus the difference in biases reflects the bias compensation from using lower clipping as well). Thanks to this bias compensation, the dcIPS evaluator leads to lower MSE overall (solid lines in Fig. 1(b)); a toy version of this sweep is sketched below.
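Since the exact experimental setup lives in the paper's appendix, the following self-contained sketch uses a simple two-action bandit of our own (all probabilities and rewards are assumptions) purely to reproduce the qualitative effect: sweeping \(U=L\) downward trades variance for bias, with dcIPS partially cancelling the downward bias of cIPS.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_rep = 300, 100
p_log, p_tgt = 0.9, 0.3      # assumed prob. of action 1 under logging / target
q0, q1 = 0.2, 0.5            # assumed expected reward of actions 0 and 1
true_reward = p_tgt * q1 + (1 - p_tgt) * q0

for U in [1.0, 2.0, 5.0, 100.0]:
    est_c, est_dc = [], []
    for _ in range(n_rep):
        a = rng.binomial(1, p_log, size=n)                  # logged actions
        r = rng.binomial(1, np.where(a == 1, q1, q0))       # binary rewards
        w = np.where(a == 1, p_tgt / p_log, (1 - p_tgt) / (1 - p_log))
        est_c.append(np.mean(r * np.minimum(w, U)))                      # cIPS
        est_dc.append(np.mean(r * np.maximum(np.minimum(w, U), 1 / U)))  # dcIPS, L=U
    for name, est in [("cIPS", est_c), ("dcIPS", est_dc)]:
        est = np.asarray(est)
        print(f"U=L={U:>5}: {name:5s} bias={est.mean() - true_reward:+.3f} "
              f"MSE={np.mean((est - true_reward) ** 2):.4f}")
```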
## 6. Discussion and Outlook

We analyze the bias of the clipped IPS estimator and find that its strictly negative bias provides potential for less-biased variance-reduction techniques. We propose a simple method, doubly-clipped IPS, that can compensate the bias of single clipping. One limitation is that we lack a principled mechanism to select the clipping constants. We plan to study algorithms to select clipping constants for dcIPS in a data-driven way (Bleiner et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2018) and to investigate theoretically when the bias of double clipping is smaller than that of standard clipping.

###### Acknowledgements.

We thank 3 anonymous reviewers for their correction of a false statement and their useful suggestions. We also thank Yannik Stein, Vito Bellini, Matej Jakimov, Thorsten Joachims, and Harrie Oosterhuis for fruitful discussion and valuable feedback given in the context of an early talk about this project.
2305.16951
CUQIpy: II. Computational uncertainty quantification for PDE-based inverse problems in Python
Inverse problems, particularly those governed by Partial Differential Equations (PDEs), are prevalent in various scientific and engineering applications, and uncertainty quantification (UQ) of solutions to these problems is essential for informed decision-making. This second part of a two-paper series builds upon the foundation set by the first part, which introduced CUQIpy, a Python software package for computational UQ in inverse problems using a Bayesian framework. In this paper, we extend CUQIpy's capabilities to solve PDE-based Bayesian inverse problems through a general framework that allows the integration of PDEs in CUQIpy, whether expressed natively or using third-party libraries such as FEniCS. CUQIpy offers concise syntax that closely matches mathematical expressions, streamlining the modeling process and enhancing the user experience. The versatility and applicability of CUQIpy to PDE-based Bayesian inverse problems are demonstrated on examples covering parabolic, elliptic and hyperbolic PDEs. This includes problems involving the heat and Poisson equations and application case studies in electrical impedance tomography and photo-acoustic tomography, showcasing the software's efficiency, consistency, and intuitive interface. This comprehensive approach to UQ in PDE-based inverse problems provides accessibility for non-experts and advanced features for experts.
Amal M A Alghamdi, Nicolai A B Riis, Babak M Afkham, Felipe Uribe, Silja L Christensen, Per Christian Hansen, Jakob S Jørgensen
2023-05-26T14:03:04Z
http://arxiv.org/abs/2305.16951v2
# CUQIpy - Part II: computational uncertainty quantification for PDE-based inverse problems in Python

###### Abstract

Inverse problems, particularly those governed by Partial Differential Equations (PDEs), are prevalent in various scientific and engineering applications, and uncertainty quantification (UQ) of solutions to these problems is essential for informed decision-making. This second part of a two-paper series builds upon the foundation set by the first part, which introduced CUQIpy, a Python software package for computational UQ in inverse problems using a Bayesian framework. In this paper, we extend CUQIpy's capabilities to solve PDE-based Bayesian inverse problems through a general framework that allows the integration of PDEs in CUQIpy, whether expressed natively or using third-party libraries such as FEniCS. CUQIpy offers concise syntax that closely matches mathematical expressions, streamlining the modeling process and enhancing the user experience. The versatility and applicability of CUQIpy to PDE-based Bayesian inverse problems are demonstrated on examples covering parabolic, elliptic and hyperbolic PDEs. This includes problems involving the heat and Poisson equations and application case studies in electrical impedance tomography (EIT) and photo-acoustic tomography (PAT), showcasing the software's efficiency, consistency, and intuitive interface. This comprehensive approach to UQ in PDE-based inverse problems provides accessibility for non-experts and advanced features for experts.

## 1 Introduction

Inverse problems arise in various scientific and engineering applications, where the goal is to infer un-observable features from indirect observations. These problems are often ill-posed, making the inferred solution parameters sensitive to noise in observed data and inaccuracies in forward models [15, 19]. Characterizing and evaluating uncertainties due to this sensitivity is crucial when making decisions based on inferred results. To address these challenges, the field of Uncertainty Quantification (UQ) for inverse problems is in a phase of rapid growth [5, 33]. In medical imaging, for instance, UQ analysis allows experts to evaluate the uncertainty in cancer detection, which can directly impact patient treatment decisions [34]. In flood control and disaster management applications, UQ is needed to assess the risk of floods in specific regions, informing planning and resource allocation [24].

One particularly important category of inverse problems involves those governed by Partial Differential Equations (PDEs). These problems are encountered in various applications such as medical imaging [23, 35], seismic imaging [6, 7, 32], subsurface characterization [2, 3, 16], and non-destructive testing [28]. PDE-based inverse problems involve inferring parameters in PDE models from observed data, which introduces unique challenges in UQ due to the complex nature of the governing equations. The Bayesian framework is widely used for UQ in both PDE-based and non-PDE-based inverse problems, as it enables the systematic incorporation of prior information, forward models, and observed data by characterizing the so-called posterior distribution [2, 3, 7, 8, 25]. This framework provides a comprehensive and unified approach for addressing the unique challenges of UQ in inverse problems.

### Computational UQ for PDE-based inverse problems with CUQIpy

In this two-part series, we propose a Python software package CUQIpy, for Computational Uncertainty Quantification in Inverse problems.
The package aims to make UQ analysis accessible to non-experts while still providing advanced features for UQ experts. The first paper [29] introduces the core library and components, and presents various test cases. This second paper focuses on using CUQIpy to solve Bayesian inverse problems where the forward models are governed by PDEs.

While numerous software tools exist for modeling and solving PDE systems, such as FEniCS [22], FiPy [14], PyClaw [20], scikit-fem [13], and Firedrake [27], only few tools are specifically designed for PDE-based Bayesian inverse problems. The FEniCS-based package hIPPYlib [36, 37] is an example of a package that excels in this task. To make UQ for PDE-based inverse problems more accessible, we propose a general framework for integrating PDE modeling tools into CUQIpy by defining an application programming interface (API) allowing PDE modeling libraries to interact with CUQIpy, regardless of the underlying PDE discretization scheme and implementation. This is possible because a major concept behind the design of CUQIpy is that the core components remain independent from specific forward modeling tools. On the other hand, plugins provide a flexible way to interface with third-party libraries, and in this paper we present CUQIpy-FEniCS as an example of a PDE-based plugin.

We introduce modules and classes in CUQIpy that enable solving PDE-based Bayesian inverse problems, such as sampler, distribution, and the cuqi.pde module. In the latter, the cuqi.pde.PDE class provides an abstract interface for integrating PDE modeling implementations like FEniCS with CUQIpy, simplifying the construction of PDE-based Bayesian inverse problems. The modules cuqi.geometry and cuqipy_fenics.geometry play an essential role, allowing the software to use information about the spaces on which the parameters and data are defined.

We demonstrate the versatility and applicability of CUQIpy through a variety of PDE-based examples, highlighting the integration and capabilities of the software. One example solves a Bayesian inverse problem governed by a one-dimensional (1D) heat equation, which underscores the intuitiveness of CUQIpy's interface and its correspondence to the mathematical problem description. We present an elaborate electrical impedance tomography (EIT) case study using the CUQIpy-FEniCS plugin, illustrating integration with third-party PDE modeling libraries. Finally, we examine a photo-acoustic tomography (PAT) case, which shows CUQIpy's ability to handle black-box forward models, emphasizing its adaptability to a wide range of applications in PDE-based Bayesian inverse problems. These examples effectively represent different classes of PDEs: parabolic, elliptic, and hyperbolic PDEs, respectively.

### A motivating example

We give a brief introductory example of the use of CUQIpy to solve a PDE-based inverse problem with the Poisson equation modelled in FEniCS using the CUQIpy-FEniCS plugin. More details of the underlying computational machinery are provided in section 3.
The inverse problem we consider is to infer a two-dimensional electric conductivity field \(\sigma(\boldsymbol{\xi})\) of a unit square medium that lies in the domain \(\bar{\Gamma}=[0,1]^{2}\), from a noisy observation of the electric potential measured everywhere in the domain; we denote by \(u(\boldsymbol{\xi})\) the electric potential and by \(y(\boldsymbol{\xi})\) the observation of the electric potential, which in this case coincides with the solution \(u\) on the entire domain, but in general may be available on a subset of the domain or as a derived quantity. The electric potential spatial distribution is governed by the 2D Poisson equation and is driven by a source term \(f(\boldsymbol{\xi})\) and prescribed boundary conditions. The Poisson equation can be used to model other physical systems. For example, \(\sigma\) can represent the thermal conductivity of a medium and \(u\) its temperature; alternatively, \(\sigma\) can represent the permeability of a porous medium and \(u\) the pore-fluid pressure. The 2D Poisson equation we consider can be written as

\[\nabla\cdot\left(e^{w(\boldsymbol{\xi})}\nabla u(\boldsymbol{\xi})\right)=f(\boldsymbol{\xi})\qquad\mbox{for}\qquad\boldsymbol{\xi}\in\Gamma=(0,1)^{2}, \tag{1}\]

written here in terms of the log-conductivity field, i.e., \(w(\boldsymbol{\xi})=\log\sigma(\boldsymbol{\xi})\), to ensure positivity of the inferred conductivity field. In this example, we assume zero boundary conditions on the left and right boundaries of the square domain and zero Neumann boundary conditions on the top and bottom boundaries; and a source term \(f(\boldsymbol{\xi})=1\).

The forward problem concerns determining the observation \(y(\boldsymbol{\xi})\) from a given log-conductivity \(w(\boldsymbol{\xi})\). The inverse problem becomes the problem of inferring the log-conductivity \(w(\boldsymbol{\xi})\) from an observed realization of \(y(\boldsymbol{\xi})\). In CUQIpy we consider the discretized form of this problem,

\[\boldsymbol{y}=\boldsymbol{A}(\boldsymbol{x}), \tag{2}\]

where \(\boldsymbol{A}\) is a nonlinear forward model, which corresponds to solving the discretized PDE to produce the observation \(\boldsymbol{y}\) from a log-conductivity given in terms of a parameter \(\boldsymbol{x}\). CUQIpy (and in this case CUQIpy-FEniCS) provides a collection of demonstration test problems, including one from which the present forward model can be obtained as:

```
A = FEniCSPoisson2D(dim=(32,32), field_type="KL", ...).model
```

Here, for brevity, we have only specified a couple of the inputs to configure the problem. The PDE (1) is discretized using the finite-element method (FEM) and implemented using FEniCS on a structured triangular mesh on the physical domain \(\Gamma\). The PDE solution and log-conductivity are approximated on a first-order Lagrange polynomial space, see, e.g., [11]. In this example, we consider the log-conductivity parameterized in terms of a Karhunen-Loeve (KL) expansion [10] to enforce smoothness, and \(\boldsymbol{x}=[x_{1},\ldots,x_{n_{\mathrm{KL}}}]\) is the vector of expansion coefficients, here truncated at \(n_{\mathrm{KL}}=32\).

Figure 1: Results for the 2D Poisson problem; a prior sample \(\boldsymbol{x}^{\mathrm{true}}\) is used as the exact solution. The noise level in the data is 1%. (a) The exact log-conductivity \(\boldsymbol{w}^{\mathrm{true}}=\boldsymbol{G}_{\mathrm{KL}}(\boldsymbol{x}^{\mathrm{true}})\). (b) The exact data \(\boldsymbol{y}^{\mathrm{exact}}\). (c) The observed noisy data \(\boldsymbol{y}^{\mathrm{obs}}\). (d) The log-conductivity mean: the posterior samples mean mapped through \(\boldsymbol{G}_{\mathrm{KL}}\). (e) The log-conductivity variance computed from the posterior samples. (f) The CI plot showing the 97% CIs for the 32 KL coefficients (blue vertical lines), the exact KL coefficients \(\boldsymbol{x}^{\mathrm{true}}\) (orange circles), and the KL coefficient means (blue circles).

In CUQIpy we consider \(\boldsymbol{x}\) and \(\boldsymbol{y}\) to be vector-valued random variables representing the parameter to be inferred and the data, respectively. To specify a Bayesian inverse problem we express statistical assumptions on variables and the relations between them. Here, we assume an i.i.d. standard normal distribution on the KL expansion coefficients \(\boldsymbol{x}\) and additive i.i.d. Gaussian noise with known standard deviation \(s_{\mathrm{noise}}\) on the data:

\[\boldsymbol{x}\sim\mathrm{Gaussian}(\boldsymbol{0},\boldsymbol{I}) \tag{3a}\]
\[\boldsymbol{y}\sim\mathrm{Gaussian}(\boldsymbol{A}(\boldsymbol{x}),s_{\mathrm{noise}}^{2}\boldsymbol{I}). \tag{3b}\]

We can specify this in CUQIpy as

```
x = Gaussian(np.zeros(n_KL), 1, geometry=G_KL)
y = Gaussian(A(x), s_noise**2, geometry=G_FEM)
```

We note the close similarity between the mathematical expressions and the syntax. Additionally, the distributions have been equipped with so-called geometry objects G_KL and G_FEM, which capture the interpretation of \(\boldsymbol{x}\) as KL coefficients and \(\boldsymbol{y}\) as FEM expansion coefficients; this is elaborated in section 3.

We consider a true log-conductivity as a sample from the prior distribution on \(\boldsymbol{x}\), which we conveniently generate and plot (fig. 1(a)) by

```
x_true = x.sample()
x_true.plot()
```

where we note this is displayed as the log-conductivity FEM function, made possible by \(\boldsymbol{x}\) being equipped with the G_KL geometry. The exact data \(\boldsymbol{y}^{\mathrm{exact}}\) arising from \(\boldsymbol{x}^{\mathrm{true}}\) can be determined (and plotted) as A(x_true).plot(), while a noisy data realization \(\boldsymbol{y}^{\mathrm{obs}}\) can be obtained by sampling \(\boldsymbol{y}\) conditioned on \(\boldsymbol{x}^{\mathrm{true}}\) (fig. 1(b) and fig. 1(c)):

```
y_obs = y(x=x_true).sample()
y_obs.plot()
```

Again, knowledge of the geometry object \(\boldsymbol{G}_{\mathrm{FEM}}\), in this case, enables visualizing \(\boldsymbol{y}^{\mathrm{exact}}\) and \(\boldsymbol{y}^{\mathrm{obs}}\) as FEM functions.

CUQIpy provides a framework for specifying and solving Bayesian inverse problems through posterior MCMC sampling. In the most high-level case we simply specify a BayesianProblem from the random variables \(\boldsymbol{y}\) and \(\boldsymbol{x}\), provide the observed data \(\boldsymbol{y}^{\mathrm{obs}}\) and run the UQ() method:

```
BP = BayesianProblem(y, x).set_data(y=y_obs)
BP.UQ()
```

Under the hood, CUQIpy applies Bayes' theorem to construct the posterior distribution, selects a suitable sampler based on the problem structure (in this case the NUTS sampler [17]), samples the posterior and produces posterior mean and UQ plots (fig. 1). The results show that the mean is visually a reasonable approximation of the true conductivity.
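For reference, the snippets above can be collected into one script. This is a sketch under assumptions: the import paths and the geometry accessors (A.domain_geometry, A.range_geometry) are our guesses at the CUQIpy/CUQIpy-FEniCS API, and the value of s_noise is an illustrative stand-in for whatever yields the stated 1% noise level; none of these appear verbatim in the text.

```python
import numpy as np
from cuqi.distribution import Gaussian
from cuqi.problem import BayesianProblem
from cuqipy_fenics.testproblem import FEniCSPoisson2D  # assumed import path

n_KL = 32
s_noise = 0.01  # assumed value; the text only states a 1% noise level

A = FEniCSPoisson2D(dim=(32, 32), field_type="KL").model
G_KL, G_FEM = A.domain_geometry, A.range_geometry      # assumed accessors

x = Gaussian(np.zeros(n_KL), 1, geometry=G_KL)   # prior on KL coefficients
y = Gaussian(A(x), s_noise**2, geometry=G_FEM)   # data distribution

x_true = x.sample()              # synthetic truth drawn from the prior
y_obs = y(x=x_true).sample()     # noisy data realization

BP = BayesianProblem(y, x).set_data(y=y_obs)
BP.UQ()                          # sample the posterior and plot UQ results
```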
The variance magnitude is very small and tends to zero as \(\boldsymbol{\xi}\) gets closer to the left and right boundaries, on which the PDE boundary conditions \(u=0\) are prescribed. Additionally, the computed credibility intervals (CIs) enclose the exact KL expansion coefficients. Approximately the first 10 KL expansion coefficients are inferred with high certainty, and the general trend is that the uncertainty increases as the expansion mode number \(i\) increases.

### Overview and notation

Having introduced and given a motivating example of UQ with CUQIpy for a PDE-based inverse problem in the present section, we present in section 2 our general framework for integrating PDE-based inverse problems in CUQIpy and illustrate this framework on an inverse problem governed by the 1D heat equation. In section 3, we describe our CUQIpy-FEniCS plugin that extends CUQIpy to allow UQ on PDE-based inverse problems modelled in FEniCS. We finish with two more elaborate case studies: First, in section 4 we demonstrate how Electrical Impedance Tomography (EIT) with multiple layers of solution parametrization can be modelled with CUQIpy-FEniCS. Second, in section 5 we show how user-provided black-box PDE solvers can be used in CUQIpy in an example of Photo-Acoustic Tomography (PAT). Finally, in section 6 we conclude the paper.

We use the following notation: Calligraphic font such as \(\mathcal{A}\) denotes a continuous operator. Bold upper case such as \(\boldsymbol{A}\) denotes a discrete operator, with \(\boldsymbol{I}_{\ell}\) denoting the \(\ell\times\ell\) identity matrix; bold lower case such as \(\boldsymbol{x}\) denotes a vector, and lower case such as \(s\) and \(f\) denotes a scalar or a scalar function, with \(p\) denoting a probability density function. We use the same notation for vectors and scalars as for random vectors and scalars; these can be distinguished by context. We denote by \(\xi\) and \(\boldsymbol{\xi}=[\xi^{1},\xi^{2}]^{\mathsf{T}}\) the spatial coordinates in \(\mathbb{R}\) and \(\mathbb{R}^{2}\), respectively; and we denote by \(\tau\) the time. In the context of solving Bayesian inverse problems, we refer to the unknown quantity to be inferred as the _parameter_ and the measured or observed quantities as the _data_, both considered random variables. When a superscript is provided for a parameter or a data vector, e.g., \(\boldsymbol{x}^{\text{true}}\), it indicates a particular realization of the parameter or the data, respectively. We refer to a particular noisy data realization that we use in the inversion, e.g., \(\boldsymbol{y}^{\text{obs}}\), as the _observed data_.

## 2 Framework for PDE-based Bayesian inverse problems in CUQIpy

In this section, we present our general framework for integrating PDE-based Bayesian inverse problems in CUQIpy. The framework is designed to be as generic as possible, allowing - in principle - any PDE-based inverse problem to be handled. This includes PDEs expressed natively in CUQIpy (detailed in the present section), using a third-party PDE library such as FEniCS [22] (see section 3 and section 4) and through a user-provided black-box PDE solver (see section 5). The critical components of this framework are provided by the cuqi.pde module, which contains the PDE class and its subclasses, the supporting Geometry classes, and the PDEModel class, see table 1. The PDE class provides an abstract interface for representing PDEs; a subclass hereof is LinearPDE, representing linear PDE problems.
At present two concrete classes have been implemented: SteadyStateLinearPDE and TimeDependentLinearPDE, from which a broad selection of steady-state and time-dependent linear PDEs can be handled. The Geometry classes allow us to parametrize the unknown in terms of various expansions to enforce desired properties on the solution, such as smoothness or a step-wise structure. The PDEModel class provides the interface to use the PDE as a CUQIpy Model for Bayesian inference. A PDEModel combines a PDE with two Geometry classes, for the domain and range geometry, to form a forward model of an inverse problem.

We illustrate this framework by an example: a Bayesian inverse problem governed by a 1D heat equation, section 2.1-section 2.5. We emphasize that a much wider variety of PDEs can be handled; an example is used only for concreteness of the presentation.

### The 1D heat equation inverse problem

We consider the inverse problem of reconstructing an initial temperature profile \(g(\xi)\) at time \(\tau=0\) of a medium from temperature measurements \(y(\xi)\) at a later time \(\tau=\tau^{\max}\). We assume a medium that can be approximated by a 1D interval, \(\xi\in[0,1]\). An example of such a medium is a thin metal rod. The measurements are obtained over the interval \((0,1)\) or a subset of it, and they are typically polluted by a measurement error. The heat propagation in the medium from time \(\tau=0\) to time \(\tau=\tau^{\max}\) can be modeled by a one-dimensional (1D) initial-boundary value heat equation, which can be written as:

\[\frac{\partial u(\xi,\tau)}{\partial\tau}-c^{2}\frac{\partial^{2}u(\xi,\tau)}{\partial\xi^{2}}=f(\xi,\tau),\quad\xi\in[0,1],\quad 0\leq\tau\leq\tau^{\max}, \tag{4a}\]
\[u(0,\tau)=u(1,\tau)=0, \tag{4b}\]
\[u(\xi,0)=g(\xi), \tag{4c}\]

where \(u(\xi,\tau)\) is the temperature at time \(\tau\) and location \(\xi\), \(c^{2}\) is the thermal conductivity (here taken to be a constant for simplicity), and \(f\) is the source term. We assume zero boundary conditions, (4b), and an initial heat profile \(g(\xi)\), (4c).

We define the _parameter-to-solution operator_ \(\mathcal{S}\) that maps the unknown parameter of the inverse problem, \(g(\xi)\), to the PDE solution \(u(\xi,\tau)\), for \(0<\tau\leq\tau^{\max}\). Applying this operator is equivalent to solving the PDE (4a)-(4c) for a given initial condition \(g(\xi)\). We also define the _observation operator_ \(\mathcal{O}\) that maps the PDE solution \(u(\xi,\tau)\) to the observed quantities, the temperature measurements \(y(\xi)\).

### The discretized heat equation in CUQIpy

We discretize the system (4a)-(4c) in space using finite differences (FD). We discretize the solution \(u(\xi,\tau)\) at a given time \(\tau\) on a regular 1D grid of \(n_{\mathrm{grid}}=100\) interior nodes. The grid spacing \(h\) is approximately \(0.01\). We create a NumPy array to represent the grid:

```
grid = np.linspace(h, 1-h, n_grid)
```

For simplicity, we use forward Euler for time stepping. For the choice \(\tau^{\max}=0.01\), we discretize the time interval \([0,0.01]\) into \(n_{\tau}=225\) uniform time steps, each of length \(\Delta\tau\). We create a NumPy array to represent the time steps:

```
tau = np.linspace(0, tau_max, n_tau)
```

We write the \(k\)th forward Euler step as follows:

\[\boldsymbol{u}^{k+1}=\boldsymbol{u}^{k}+\Delta\tau\left(\boldsymbol{D}_{c}\boldsymbol{u}^{k}+\boldsymbol{f}^{k}\right),\quad\text{for }k=0,\ldots,n_{\tau}-1, \tag{5}\]

where \(\boldsymbol{u}^{0}\coloneqq\boldsymbol{g}\) is the initial condition \(g\) discretized on the 1D grid,
i.e., the \(i^{\text{th}}\) element of \(\boldsymbol{g}\) is \(g(\xi_{i})\), where \(\xi_{i}\) is the coordinate of the \(i^{\text{th}}\) grid node. Similarly, \(\boldsymbol{u}^{k}\) and \(\boldsymbol{f}^{k}\) are the PDE solution and the source term, respectively, at time \(\tau=k\Delta\tau\), discretized on the 1D grid. \(\boldsymbol{D}_{c}\) is the discretized diffusion operator \(c^{2}\partial^{2}/\partial\xi^{2}\), obtained using the centered-difference method. We create NumPy arrays to represent the right-hand side vector \(\boldsymbol{f}^{k}\) (zero in this case) and the differential operator \(\boldsymbol{D}_{c}\), and fix \(c=1\) for this example:

```
f = np.zeros(n_grid)
D_c = c**2 * ( np.diag(-2*np.ones(n_grid), 0)
             + np.diag(np.ones(n_grid-1), -1)
             + np.diag(np.ones(n_grid-1), 1) ) / h**2
```

We denote by \(\boldsymbol{S}\) the discretized parameter-to-solution operator which maps the discretized initial condition \(\boldsymbol{g}\) to the discretized PDE solution \(\boldsymbol{u}\); \(\boldsymbol{u}\) denotes the column vector of the time step solutions \(\boldsymbol{u}^{1},...,\boldsymbol{u}^{k},...,\boldsymbol{u}^{n_{\tau}}\) stacked vertically. Additionally, we denote by \(\boldsymbol{O}\) the discretized observation operator that maps the discretized PDE solution \(\boldsymbol{u}\) to the observation \(\boldsymbol{y}\in\mathbb{R}^{m}\), where \(m\) is the number of measurements at locations \(\{\xi_{j}^{\text{obs}}\}_{j=1}^{m}\). These locations might or might not correspond to the 1D grid points. In this example, they coincide with the set of grid points \(\{\xi_{i}\}_{i=1}^{n_{\text{grid}}}\) or a subset of it.

To represent this discretized PDE equation in CUQIpy, we need to create a PDE-type object that encapsulates the details of the PDE equation and provides an implementation of the operators \(\boldsymbol{S}\) and \(\boldsymbol{O}\), table 1. Creating a PDE-type class requires a user-provided function that represents the components of the PDE in a standardized form, denoted by PDE_form. For time-dependent problems, the PDE_form function takes as inputs the unknown parameter that we want to infer (in this case g) and a scalar value for the current time, tau_current:

```
def PDE_form(g, tau_current):
    return (D_c, f, g)
```

The PDE_form returns a tuple of the differential operator, the right-hand side at time tau_current, and the initial condition. Note that in this example, both the differential operator and the right-hand side (zero in this case) are independent of the time \(\tau\). For this 1D time-dependent heat equation, we create a TimeDependentLinearPDE object from the specific PDE_form, the time step vector tau and the spatial grid grid:

```
PDE = TimeDependentLinearPDE(PDE_form, tau, grid_sol=grid)
```

The TimeDependentLinearPDE object calls the PDE_form every time step and passes the current time of the stepping method. The user can specify additional arguments when initializing the TimeDependentLinearPDE object, e.g., the spatial grid for observations, the time discretization scheme, and the linear solver to be used if the scheme is implicit. By default, the forward Euler method is used for time stepping and the observations are obtained at time \(\tau^{\max}\) on the entire solution grid. We can print the PDE object using print(PDE), which gives information about the object class and its PDE_form:

```
CUQI TimeDependentLinearPDE.
PDE form expression:
def PDE_form(g, tau_current):
    return (D_c, f, g)
```
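To make the discretization concrete, here is a standalone NumPy version (ours, independent of CUQIpy) of the forward map that the PDE object encapsulates: forward Euler stepping of Eq. (5) plays the role of \(\boldsymbol{S}\), and taking the final time step plays the role of the default observation operator \(\boldsymbol{O}\). We assume \(h=1/(n_{\mathrm{grid}}+1)\), which matches the stated spacing of approximately \(0.01\).

```python
import numpy as np

n_grid, n_tau, tau_max, c = 100, 225, 0.01, 1.0
h = 1.0 / (n_grid + 1)           # assumed grid spacing (~0.01)
grid = np.linspace(h, 1 - h, n_grid)
dtau = tau_max / (n_tau - 1)     # step length implied by tau = linspace(...)

# Centered-difference diffusion operator with zero Dirichlet boundaries.
D_c = c**2 * (np.diag(-2 * np.ones(n_grid))
              + np.diag(np.ones(n_grid - 1), -1)
              + np.diag(np.ones(n_grid - 1), 1)) / h**2

def forward(g, f=None):
    """Apply O o S: march Eq. (5) forward and observe the final time step."""
    f = np.zeros(n_grid) if f is None else f
    u = g.copy()
    for _ in range(n_tau - 1):
        u = u + dtau * (D_c @ u + f)
    return u

# Example: observe a simple initial profile (illustrative choice of g).
g = np.sin(np.pi * grid)
y = forward(g)
```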
All CUQIpy PDE-type classes implement three methods: (1) assemble, which performs any assembling that might be needed to prepare the matrices and vectors required to solve the PDE problem; (2) solve, which solves the PDE problem using the assembled components and is equivalent to applying the parameter-to-solution operator \(\boldsymbol{S}\); and (3) observe, which computes the observations from the PDE solution and is equivalent to applying the observation operator \(\boldsymbol{O}\).

| Class name | Description |
| --- | --- |
| **cuqi.pde module:** | |
| PDE | A class that represents the PDE; it implements the discretized maps \(\boldsymbol{S}\) and \(\boldsymbol{O}\) |
| LinearPDE | A class for linear PDE problems |
| SteadyStateLinearPDE | A class for steady-state linear PDE problems |
| TimeDependentLinearPDE | A class for time-dependent linear PDE problems |
| **cuqi.geometry module:** | |
| Continuous1D | A class that represents a 1D continuous space |
| Continuous2D | A class that represents a 2D continuous space |
| KLExpansion | A class for Karhunen-Loeve expansion of functions |
| StepExpansion | A class for step functions |
| **cuqi.array module:** | |
| CUQIarray | A class for data arrays, subclassed from NumPy array |
| **cuqi.model module:** | |
| PDEModel | Forward model that uses a PDE-type class through calling its assemble, solve and observe methods |
| **cuqi.testproblem module:** | |
| Poisson_1D | 1D Poisson test problem (finite diff. discretization) |
| Heat_1D | 1D Heat test problem (finite diff. discretization) |

Table 1: A subset of CUQIpy classes that support integrating PDE-based problems. For a comprehensive list of classes and modules, see the companion paper [29].

To illustrate these methods, let us consider an initial condition given by the expression

\[g^{\mathrm{custom}}(\xi)=\frac{1}{30}\left(1-\cos\left(2\pi\frac{1-\xi}{1}\right)+e^{-200(\xi-0.5)^{2}}+e^{-200(\xi-0.8)^{2}}\right). \tag{6}\]

We denote by \(\boldsymbol{g}^{\mathrm{custom}}\) the discretization of \(g^{\mathrm{custom}}\) on the grid (see fig. 2(a)). We call the method assemble, then apply the operator \(\boldsymbol{S}\) by calling the method solve:

```
PDE.assemble(g_custom)
u_custom, info = PDE.solve()
```

We show the solution u_custom in fig. 2(b), where we plot selected time steps for illustration. Now we can apply the observation operator \(\boldsymbol{O}\), which in this case corresponds, conceptually, to a matrix that extracts the final time step solution \(\boldsymbol{u}^{n_{\tau}}\) from the entire PDE solution \(\boldsymbol{u}\). We denote the observation by \(\boldsymbol{y}^{\mathrm{custom}}\coloneqq\boldsymbol{u}^{n_{\tau}}\) and show it in fig. 2(c):

```
y_custom = PDE.observe(u_custom)
```

For time-dependent problems, PDE-type classes additionally implement the method assemble_step to assemble components that are needed to propagate the solution in time each time step, e.g., the discretized source term evaluated at time \(\tau\). Furthermore, PDE-type classes can be equipped with the gradient of \(\boldsymbol{O}\circ\boldsymbol{S}\) with respect to its input, \(\boldsymbol{g}\) in this case, in a given direction.

Figure 2: Illustration of the TimeDependentLinearPDE object's assemble, solve and observe methods. (a) Initial condition \(\boldsymbol{g}^{\mathrm{custom}}\), (6), used as input to the assemble method. (b) PDE solution \(\boldsymbol{u}^{\mathrm{custom}}=\boldsymbol{S}(\boldsymbol{g}^{\mathrm{custom}})\), shown for selected times \(\tau\) of the legend, obtained by the solve method. (c) Observation \(\boldsymbol{y}^{\mathrm{custom}}=(\boldsymbol{O}\circ\boldsymbol{S})(\boldsymbol{g}^{\mathrm{custom}})\), i.e., the PDE solution at time \(\tau^{\mathrm{max}}=0.01\) in this case, obtained by the observe method.
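To clarify the contract that PDEModel relies on, the following toy class (ours, not CUQIpy source code) implements the assemble/solve/observe interface for the heat example; n_tau and dtau are as in the NumPy sketch above. For brevity this toy keeps only the current time step rather than the full stacked solution, so its observe is the identity.

```python
class ToyTimeDependentLinearPDE:
    """Illustration only: the three-method interface a PDE-type class exposes."""

    def __init__(self, pde_form):
        self.pde_form = pde_form   # user-supplied PDE_form(g, tau_current)
        self._g = None

    def assemble(self, g):
        # Store the parameter; heavier setups would build matrices here.
        self._g = g

    def solve(self):
        # Apply S: forward Euler stepping driven by the PDE form.
        D, f, u = self.pde_form(self._g, 0.0)
        for k in range(1, n_tau):
            D, f, _ = self.pde_form(self._g, k * dtau)
            u = u + dtau * (D @ u + f)
        return u, None             # (solution, solver info)

    def observe(self, u):
        # Apply O: here the solution at the final time is observed directly.
        return u
```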
Figure 2: Illustration of the TimeDependentLinearPDE object's assemble, solve and observe methods. (a) Initial condition \(\mathbf{g}^{\mathrm{custom}}\), (6), used as input to the assemble method. (b) PDE solution \(\mathbf{u}^{\mathrm{custom}}=\mathbf{S}(\mathbf{g}^{\mathrm{custom}})\), shown for the selected times \(\tau\) of the legend, obtained by the solve method. (c) Observation \(\mathbf{y}^{\mathrm{custom}}=(\mathbf{O}\circ\mathbf{S})(\mathbf{g}^{\mathrm{custom}})\), i.e., the PDE solution at time \(\tau^{\mathrm{max}}=0.01\) in this case, obtained by the observe method.

### The 1D heat forward problem in CUQIpy

We define the discretized forward model of the 1D heat inverse problem
\[\mathbf{y}=\mathbf{A}(\mathbf{g})\coloneqq(\mathbf{O}\circ\mathbf{S})(\mathbf{g}), \tag{7}\]
where \(\mathbf{A}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\). To represent the forward model in CUQIpy, we create an object from the class PDEModel, which is a subclass of Model. To set up a PDEModel we need to specify which spaces are to be used for the domain and range of \(\mathbf{A}\); this is done using the geometry class. In the simplest case, the parameter \(\mathbf{g}\) and observation \(\mathbf{y}\) are simply vectors on the \(\xi\) grid, which is specified by a Continuous1D geometry:

G_cont = Continuous1D(grid)

We can now set up the PDE model as

A = PDEModel(PDE, range_geometry=G_cont, domain_geometry=G_cont)

The PDEModel object encapsulates the PDE-type object and implements the forward method, which corresponds to \(\mathbf{A}\). The PDEModel is agnostic to the underlying details of the PDE, e.g., the discretization method, the type of the PDE, and the third-party PDE modeling library used in implementing the PDE Python methods. It uses the PDE object through calling the methods assemble, solve, and observe. One could continue with the present \(\mathbf{A}\) and solve directly for \(\mathbf{g}\); however, here we demonstrate how to parametrize \(\mathbf{g}\) to enforce some desired properties on the solution.

### Parametrization by the geometry class

The domain Geometry object represents the domain space of the forward model \(\mathbf{A}\). It can also be used to parametrize the unknown parameter, here \(\mathbf{g}\). As an example, we consider parameterization in terms of coefficients \(\mathbf{x}=[x_{1},...,x_{n_{\text{step}}}]^{\mathsf{T}}\) of an expansion
\[\mathbf{g}=\sum_{i=1}^{n_{\text{step}}}x_{i}\mathbf{\chi}_{i}, \tag{8}\]
where \(\mathbf{\chi}_{i}\) for \(i=1,\ldots,n_{\text{step}}\) is the characteristic function of the \(i\)th interval of a total of \(n_{\text{step}}\) intervals in an equidistant partitioning of the domain \([0,1]\). With this parameterization of \(\mathbf{g}\), the unknown parameter of the inverse problem becomes the vector of coefficients \(\mathbf{x}\). We denote by \(\mathbf{G}_{\text{step}}\) the discrete operator that maps \(\mathbf{x}\) to \(\mathbf{g}\). Thus we redefine the forward operator as
\[\mathbf{A}(\mathbf{x})\coloneqq(\mathbf{O}\circ\mathbf{S}\circ\mathbf{G}_{\text{step}})\left(\mathbf{x}\right), \tag{9}\]
where now \(\mathbf{A}:\mathbb{R}^{n_{\text{step}}}\rightarrow\mathbb{R}^{m}\).
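For intuition, \(\mathbf{G}_{\text{step}}\) can be sketched as a dense matrix whose columns are the discretized characteristic functions \(\mathbf{\chi}_{i}\); CUQIpy's StepExpansion handles this internally, so the following is illustrative only:

```
import numpy as np

n_step = 3
idx = np.minimum((grid * n_step).astype(int), n_step - 1)  # interval index per node
G_mat = np.zeros((len(grid), n_step))
G_mat[np.arange(len(grid)), idx] = 1.0                     # columns are chi_i
g_example = G_mat @ np.array([0.0, 1.0, 0.5])              # g for x = [0, 1, 0.5]
```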
To specify the parameterization (8) in CUQIpy, we set up the domain geometry as a StepExpansion geometry object, passing the 1D grid and our choice of the number of steps, n_steps=3, as arguments:

G_step = StepExpansion(grid, n_steps=3)

We can represent a function in this expansion by our fundamental data structure CUQIarray, which essentially contains a coefficient vector and the geometry, e.g.,

x_step = CUQIarray([0, 1, 0.5], geometry=G_step)

CUQIarray has a dual representation: a _parameter value_, referring to the coefficient vector, here [0, 1, 0.5], and a _function value_, here the three-step function obtained by treating the parameters as expansion coefficients in the chosen geometry. A Geometry-type class implements the method par2fun, an implementation of the operator \(\boldsymbol{G}\), which maps the parameter value to the function value. It might also implement the method fun2par, the inverse map \(\boldsymbol{G}^{-1}\) from the function value to the parameter value, if it exists, as well as the gradient of \(\boldsymbol{G}\) with respect to \(\boldsymbol{x}\) in a given direction. A CUQIarray allows convenient plotting of the object in the context of the geometry:

x_step.plot()

By default, plot plots the function value representation of the variable, fig. 3(a). That is, the call x_step.plot() results in calling the underlying Geometry-type object's par2fun method with the array values as the input and plotting its output, \(\boldsymbol{g}^{\mathrm{step}}=\boldsymbol{G}_{\mathrm{step}}(\boldsymbol{x}^{\mathrm{step}})\). To plot the parameter value representation of the variable x_step, plot_par=True can be passed as an argument to the plot method, fig. 3(b). To employ the step function expansion we pass it as the domain geometry:

A = PDEModel(PDE, range_geometry=G_cont, domain_geometry=G_step)

We can print the model A with print(A) and get:

CUQI PDEModel: StepExpansion(3,) -> Continuous1D(100,).
Forward parameters: ['x'].
PDE: TimeDependentLinearPDE.

By default, the forward model input name is x. We can apply the forward model on \(\mathbf{x}^{\text{step}}\) and plot the result \(\mathbf{y}^{\text{step}}=\mathbf{A}(\mathbf{x}^{\text{step}})\):

```
y_step = A(x=x_step)
y_step.plot()
```

The returned y_step is a CUQIarray object equipped with the G_cont geometry (see fig. 3(c)). Note that in this case we choose \(\tau^{\text{max}}=0.02\), doubling the number of time steps.

### Specifying and solving the PDE-based Bayesian inverse problem

In our discussion of the Bayesian modeling, we consider \(\mathbf{x}\) and \(\mathbf{y}\) to be random variables representing the unknown parameter and the data, respectively. We are interested in a statistical characterization--the posterior distribution--of the unknown parameter \(\mathbf{x}\), given a prior distribution of \(\mathbf{x}\), a distribution of the data \(\mathbf{y}\), and a realization of the noisy data \(\mathbf{y}^{\text{obs}}\); see the companion paper for background on Bayesian modeling [29]. We define the Bayesian inverse problem, assuming additive Gaussian noise, as
\[\mathbf{x} \sim\text{Gaussian}(\mathbf{0},\mathbf{I}_{n_{\text{step}}}),\]
\[\mathbf{y} \sim\text{Gaussian}(\mathbf{A}(\mathbf{x}),s_{\text{noise}}^{2}\mathbf{I}_{m}),\]
where \(s_{\text{noise}}\) is the standard deviation of the data distribution, which we specify to dictate a desired noise level relative to the observed data; we assume a 10% noise level in this case.
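One common convention for turning a relative noise level into \(s_{\text{noise}}\) is to scale by the norm of the exact data; this is an illustrative choice on our part, and the companion code may differ:

```
import numpy as np

noise_level = 0.1   # 10% relative noise
s_noise = noise_level * np.linalg.norm(y_step) / np.sqrt(len(y_step))
```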
We use CUQIpy to create the distributions of \(\mathbf{x}\) and \(\mathbf{y}\) as follows:

```
x = Gaussian(np.zeros(n_step), 1, geometry=G_step)
y = Gaussian(A(x), s_noise**2, geometry=G_cont)
```

We pass the argument geometry=G_step when initializing x to specify that samples from this distribution are expansion coefficients of the step expansion (8). Similarly, we pass the argument geometry=G_cont when initializing y. The argument A(x) represents that \(\mathbf{y}\) is conditioned on \(\mathbf{x}\) through the forward model, as shown by print(y):

```
CUQI Gaussian. Conditioning variables ['x'].
```

We can draw five samples from the prior distribution and display these by:

```
prior_samples = x.sample(5)
prior_samples.plot()
```

Here, prior_samples is a Samples object with the generated samples. It obtains the Geometry-type object from the prior x, here G_step. These samples are shown in fig. 4(b). By default, the function values of the samples are plotted, i.e., the step functions. We assume that the true solution is the step function with the coefficients \(\mathbf{x}^{\text{step}}\), fig. 3(b). We then generate synthetic noisy data \(\mathbf{y}^{\text{obs}}\) by drawing a sample from the data distribution y conditioned on \(\mathbf{x}=\mathbf{x}^{\text{step}}\):

y_obs = y(x=x_step).sample()

Figure 4(d) shows the exact solution \(\mathbf{g}^{\rm step}\), the exact data \(\mathbf{y}^{\rm step}\), and the noisy data \(\mathbf{y}^{\rm obs}\). Now we have all the components we need to create the posterior distribution. We achieve this in CUQIpy by creating a joint distribution of the uncertain parameter \(\mathbf{x}\) and the data \(\mathbf{y}\) using the JointDistribution class; we then condition the joint distribution on the data \(\mathbf{y}^{\rm obs}\) to obtain the posterior distribution. The joint distribution is given by
\[p(\mathbf{x},\mathbf{y})=p(\mathbf{y}|\mathbf{x})p(\mathbf{x}), \tag{10}\]
where \(p(\mathbf{x})\) is the prior probability density function (PDF) and \(p(\mathbf{y}|\mathbf{x})\) is the data distribution PDF. In CUQIpy, this translates to

joint = JointDistribution(x, y)
posterior = joint(y=y_obs)  # condition on y=y_obs

Figure 4: Results for the Bayesian inverse problem governed by the 1D heat equation in which we use the StepExpansion geometry and choose \(\tau^{\rm max}=0.02\). (a) Discretized characteristic functions \(\mathbf{\chi}_{1},\mathbf{\chi}_{2}\), and \(\mathbf{\chi}_{3}\), the basis functions of the expansion (8) for \(n_{\rm step}=3\). (b) Prior samples plotted on the continuous domain. (c) Posterior samples plotted on the continuous domain. (d) The exact solution, exact data and noisy data. (e) The posterior sample mean and CI on the continuous domain. (f) The posterior sample means and CIs for the step expansion coefficients.

Calling print(joint), for example, gives:

JointDistribution(
    Equation: p(x,y) = p(x)p(y|x)
    Densities:
        x ~ CUQI Gaussian.
        y ~ CUQI Gaussian. Conditioning variables ['x'].
)

CUQIpy uses MCMC sampling methods, provided by its Sampler classes, to approximate the posterior and compute its moments, in particular the mean and variance. In this example, we use a component-wise Metropolis-Hastings (CWMH) sampler [29, §2] and set up an instance of it by simply passing the posterior as input:

my_sampler = CWMH(posterior)

Sampler-type classes implement the methods sample and sample_adapt. The latter adjusts the sampling scale (step size) to achieve a target acceptance rate, which is method dependent. For the CWMH sampler, the target acceptance rate is approximately
\(23\%\). We generate \(50,000\) samples using the CWMH sampler:

posterior_samples = my_sampler.sample_adapt(50000)

posterior_samples is a Samples object which contains, in addition to the samples and their corresponding geometry object, the sampling acceptance and rejection information.

### Posterior samples analysis and visualization

The Samples class provides analysis and visualization methods that can be used to study the posterior samples. Some of these methods integrate functionalities from ArviZ, a Python package for exploratory analysis of Bayesian models [21]. For brevity, we only show some of the visualization features and refer the reader to CUQIpy's documentation for more information on visualization. A basic Samples operation is to plot selected samples (fig. 4(c)):

posterior_samples.plot([2000, 3000, 4000, 5000, 6000])

We visualize the samples' credibility interval (CI) using the method plot_ci, which generates a plot of the samples' CI, the sample mean, and the exact solution of the Bayesian inverse problem, if provided:

posterior_samples.plot_ci(95, exact=x_step)

The first argument is the CI expressed in percent, a \(95\%\) CI in this case, and the second optional argument is the exact solution. In fig. 4(e), we show the CI plot. Note that in this plot the CI is plotted over the continuous domain \((0,1)\) and that the CI encloses the exact solution. We can alternatively plot the CI for the coefficients \(x_{i}\) by passing the argument plot_par=True to the plot_ci function; see fig. 4(f) for the coefficient CI plot. In this coefficient CI plot, we also note that \(\mathbf{x}^{\rm step}\) lies within the CI.

### Parameterizing the initial condition using a Karhunen–Loève (KL) expansion

Here we present a different parameterization of the unknown initial condition \(g(\xi)\) to elaborate on CUQIpy's modeling capabilities; we use a truncated Karhunen–Loève (KL) expansion [18, 38]. Using this representation we are able to impose some regularity and spatial correlation on \(g(\xi)\) and reduce the dimension of the discretized unknown parameter from \(n\) to \(n_{\rm KL}\), where \(n_{\rm KL}\ll n\). To do this we wish to express our \(\mathbf{g}\) as a vector-valued random variable following a zero-mean Gaussian distribution with a carefully constructed covariance matrix \(\mathbf{C}\) capturing the desired variance and spatial correlation. In this particular case, \(\mathbf{C}\) will be constructed as \(\mathbf{C}=\frac{1}{a^{2}}\mathbf{E}\mathbf{\Lambda}^{2}\mathbf{E}^{T}\), where the matrix \(\mathbf{E}\) is an \(n\times n_{\rm KL}\) matrix with orthonormal columns, \(1/a^{2}\) is the variance, and \(\mathbf{\Lambda}\) is an \(n_{\rm KL}\times n_{\rm KL}\) diagonal matrix with diagonal elements \(\lambda_{i}=1/i^{\gamma}\), where \(\gamma\) is a constant that controls the decay rate of the diagonal elements. The columns of \(\mathbf{E}\) are often chosen to be a discretization of continuous functions on a grid. Here, we choose sinusoidal basis functions. This choice ensures that the boundary condition (4b) is imposed on the initial condition \(g\). It can be shown that \(\mathbf{g}\) follows the desired distribution if we express it as
\[\mathbf{g}=\frac{1}{a}\sum_{i=1}^{n_{\rm KL}}x_{i}\sqrt{\lambda_{i}}\mathbf{e}_{i}. \tag{11}\]
Here, \(x_{i}\), \(i=1,\ldots,n_{\rm KL}\), are independent standard normal random variables, known as the KL expansion coefficients, and \(\mathbf{e}_{i}\) is the \(i\)th column of \(\mathbf{E}\).
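To connect (11) to code, the following NumPy sketch builds \(\mathbf{E}\) from sinusoidal columns (using the decay rate, normalizer, and truncation chosen below) and draws one realization of \(\mathbf{g}\); the exact basis and normalization used by CUQIpy's KLExpansion may differ, so this is illustrative only:

```
import numpy as np

n_KL, gamma, a = 20, 1.5, 10.0                  # truncation, decay rate, normalizer
E = np.stack([np.sin((i + 1)*np.pi*grid) for i in range(n_KL)], axis=1)
E /= np.linalg.norm(E, axis=0)                  # normalize the columns
lam = 1.0 / np.arange(1, n_KL + 1)**gamma       # lambda_i = 1/i^gamma
x = np.random.randn(n_KL)                       # standard normal KL coefficients
g = (E * np.sqrt(lam)) @ x / a                  # one realization of (11)
```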
We show a few basis functions \(\mathbf{e}_{i}\) discretized on the grid in fig. 5(a). The expansion in (11) is known as the KL expansion and, if \(n_{\rm KL}<n\), the expansion is truncated and \(n_{\rm KL}\) is the truncation size. We denote by \(\mathbf{G}_{\rm KL}\) the operator which maps the KL expansion coefficient vector \(\mathbf{x}=[x_{1},...,x_{n_{\rm KL}}]\) to the approximated discretized initial condition \(\mathbf{g}\). We set up the domain geometry as a KLExpansion object and pass the arguments decay_rate=1.5, normalizer=10 and num_modes=20 for \(\gamma\), \(a\), and \(n_{\rm KL}\), respectively:

G_KL = KLExpansion(grid, decay_rate=1.5, normalizer=10, num_modes=20)

As in the case of the step expansion, we then set up the prior as a Gaussian distribution with zero mean and identity covariance, passing also the argument geometry=G_KL, and we sample the prior and plot its samples, fig. 5(b). We use the custom initial condition \(g^{\rm custom}\) in (6) as the true solution. Then, following the steps in section 2.5, we create the corresponding synthesized data \(\mathbf{y}^{\rm obs}\). We study three cases using this initial condition: a \(0.1\%\) noise case (fig. 5, second row), a \(5\%\) noise case (fig. 5, third row), and a \(5\%\) noise case with data available only on the first half of the domain (fig. 5, fourth row). In the first two cases, we have data measurements everywhere in the domain. To specify the limited observation in the third case, we pass grid_obs=grid[:50] to the TimeDependentLinearPDE initializer. We also pass grid[:50] when creating the range geometry, instead of passing the entire grid. We then create the posterior and sample it using the CWMH sampler; posterior samples of the second case are shown in fig. 5(c). We note that as the noise level increases, the width of the continuous CI increases and fewer modes are reconstructed with high certainty. Also, observing only on the first half of the domain leads to a significantly wider CI in the part of the domain where we do not have data, fig. 5(k), and higher uncertainties in the mode reconstructions, fig. 5(l).

Figure 5: Results for the 1D heat equation-based Bayesian inverse problem in which we use the KLExpansion geometry and the function \(g^{\rm custom}\) (6) as the exact solution, with \(\tau^{\rm max}=0.01\). We study three cases: a \(0.1\%\) noise level case (second row), a \(5\%\) noise level case (third row), and a \(5\%\) noise level case with data available on the interval \((0,0.5)\) only (fourth row). For the first two cases the data is available everywhere in the domain. (a) The KL expansion (11) basis functions \(\mathbf{e}_{i}\), for \(i=1,2,3,4\). (b) Prior samples plotted on the continuous domain. (c) Posterior samples plotted on the continuous domain for the second case. For each case, the first column shows the exact solution \(\mathbf{g}^{\rm custom}\), the exact data \(\mathbf{y}^{\rm custom}\) and the observed noisy data \(\mathbf{y}^{\rm obs}\); the second column shows the posterior sample mean and CI on the continuous domain; and the third column shows the posterior sample means and CIs for the KL expansion coefficients.

This concludes our overview of solving PDE-based Bayesian inverse problems with CUQIpy. We emphasize that the heat equation example was for demonstration and that the framework can be applied to a wide variety of PDE-based inverse problems. In the next section, we show how to handle problems modelled in the FEM platform FEniCS.
## 3 CUQIpy-FEniCS plugin

FEniCS [22] is a popular Python package for solving PDEs using the FEM. Its broad user base, in both academia and industry, makes a dedicated CUQIpy interface for FEniCS highly desirable. Here we present our interface plugin, CUQIpy-FEniCS, and revisit the 2D Poisson example discussed in section 1.2 to unpack the underlying CUQIpy and CUQIpy-FEniCS components that are used in building the example. In section 4, we present an elaborate test case of using CUQIpy together with CUQIpy-FEniCS to solve an EIT problem with multiple data sets, and in section 5 we showcase using some of the CUQIpy-FEniCS features in solving a PAT problem with a user-provided forward model. We use the main modules of FEniCS: ufl, the FEniCS unified form language module, and dolfin, the Python interface of the computational high-performance FEniCS C++ backend, DOLFIN. We can import these modules as follows:

```
import ufl
import dolfin as dl
```

The CUQIpy-FEniCS plugin structure can be adopted to create CUQIpy plugins integrating other PDE-based modeling libraries, e.g., the new version FEniCSx [30].

### PDE-type classes

The CUQIpy-FEniCS plugin defines PDE-type classes, see table 2, that represent PDE problems implemented using FEniCS. To view the underlying PDE-type class that is used in the 2D Poisson example, we call print(A.pde), where A is the CUQIpy PDEModel defined in section 1.2, and obtain:

```
def form(w, u, p):
    return ufl.exp(w)*ufl.inner(ufl.grad(u), ufl.grad(p))*ufl.dx - f*p*ufl.dx
```

Specifically, the Poisson PDE is represented by the SteadyStateLinearFEniCSPDE class. Similar to the core CUQIpy PDE-type classes, a CUQIpy-FEniCS PDE-type class contains a PDE form, which is a user-provided Python function that uses FEniCS syntax to express the PDE weak form; more discussion on building weak forms is provided in section 4 in the context of the EIT example. The inputs of the Python function form are w, the unknown parameter (the log-conductivity in the 2D Poisson example), u, the state variable (or trial function), and p, the adjoint variable (or test function); f is the FEniCS expression of the source term. The CUQIpy-FEniCS PDE-type classes follow the interface defined by the core CUQIpy abstract PDE class by implementing the methods assemble, solve, and observe. The assemble method builds, from the provided PDE form, the discretized PDE system to be solved. In the Poisson example, section 1.2, the system that results from discretizing the weak form of the PDE (1) using the FEM can be written as
\[\mathbf{K_{w}u}=\mathbf{f}, \tag{12}\]
where \(\mathbf{K_{w}}\) is the discretized diffusion operator, given the discretized log-conductivity field \(\mathbf{w}\). The vector \(\mathbf{u}\) is the discretized PDE solution, the potential, and \(\mathbf{f}\) is the discretized source term.
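Schematically, this FEniCS-backed PDE object is driven through the same three-method protocol as in section 2; the calls below mirror the documented interface and are not a complete script:

```
A.pde.assemble(w)         # build K_w from the log-conductivity parameter w
u, info = A.pde.solve()   # apply S: solve K_w u = f
y = A.pde.observe(u)      # apply O: extract the observations
```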
\begin{table}
\begin{tabular}{l l}
\hline
**Class name** & **Description** \\
\hline
cuqipy\_fenics.pde module: & \\
\hline
FEniCSPDE & A base (abstract) class that represents PDE problems defined using FEniCS \\
SteadyStateLinearFEniCSPDE & A class representation of steady-state linear PDE problems defined using FEniCS \\
\hline
cuqipy\_fenics.geometry module: & \\
\hline
FEniCSContinuous & A class representing FEniCS function spaces \\
FEniCSMappedGeometry & A class with an additional mapping applied to the function values \\
MaternKLExpansion & A class that builds a spectral representation of a Matérn covariance operator on a given space, represented by a FEniCSContinuous geometry \\
\hline
cuqipy\_fenics.testproblem module: & \\
\hline
FEniCSDiffusion1D & 1D diffusion PDE problem defined using FEniCS \\
FEniCSPoisson2D & 2D Poisson PDE problem defined using FEniCS \\
\hline
\end{tabular}
\end{table}
Table 2: Modules and classes of the CUQIpy-FEniCS plugin.

The solve method solves the linear system (12) using a FEniCS linear solver, which can be specified by the user. As discussed in section 2, this method represents the discretized solution operator \(\mathbf{S}\), which in this case maps the log-conductivity \(\mathbf{w}\) used in assembling \(\mathbf{K}_{\mathbf{w}}\) to the PDE solution \(\mathbf{u}\). Similarly, as discussed in section 2, the method observe represents the discretized observation operator \(\mathbf{O}\). Since, in this case, the observations are obtained on the entire domain, \(\mathbf{O}\) is just an identity operator that maps the full solution \(\mathbf{u}\) to the observations \(\mathbf{y}=\mathbf{u}\). In general, however, \(\mathbf{O}\) can represent observing parts of the solution only, cf. section 2, and/or a derived quantity of interest, cf. section 4 for example. The SteadyStateLinearFEniCSPDE class additionally implements the method gradient_wrt_parameter, which computes the gradient of \(\mathbf{O}\circ\mathbf{S}\) with respect to the parameter w in a given direction, using an adjoint-based approach [12]. The software design of the PDE form above and the adjoint-based gradient computation of the PDE form follow closely the approach used in hIPPYlib [36, 37]. For brevity we do not provide code for building the SteadyStateLinearFEniCSPDE object here, as it is provided by the CUQIpy-FEniCS test problem FEniCSPoisson2D and stored in A.pde. How to build PDE-like objects is shown in section 2 for the core CUQIpy and in section 4 for the CUQIpy-FEniCS plugin.

### Geometry-type classes in CUQIpy-FEniCS

Geometry-type classes, as we discussed in section 2, mainly serve three purposes. First, they interface forward models with samplers and optimizers by providing a dual representation of variables: parameter value and function value representations. Second, they provide visualization capabilities for both representations. Lastly, they allow re-parameterizing the Bayesian inverse problem parameter, e.g., in terms of coefficients of an expansion for a chosen basis. CUQIpy-FEniCS Geometry-type classes serve the same goals; see table 2 for a list of these classes. There are two main data structures in FEniCS, Function and Vector. The former is a class representation of the FEM approximation of a continuous function, and the latter represents the corresponding coefficients of expansion. CUQIpy-FEniCS Geometry-type classes, subclassed from cuqi.geometry.Geometry, interpret these data structures and interface them with CUQIpy.
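For orientation, the raw dolfin side of these two data structures can be sketched as follows; these are standard dolfin calls, independent of CUQIpy, and mesh is assumed to be a dolfin mesh:

```
import dolfin as dl
import numpy as np

V = dl.FunctionSpace(mesh, "CG", 1)          # a FEM function space
w_fun = dl.Function(V)                       # Function: a FEM function
w_vec = w_fun.vector().get_local()           # Vector contents as a NumPy array
w_fun.vector().set_local(np.ones(V.dim()))   # set coefficients from an array
```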
Additionally, these Geometry-type classes provide plotting methods by seamlessly utilizing the FEniCS plotting capabilities. This enables CUQIpy-FEniCS to visualize the function value representation of variables, as in fig. 1(a), as well as the parameter value representation of variables, as in fig. 1(f). The plotting implementation details are hidden from the user, who is provided with the simple CUQIpy plotting interface shown, for example, in section 2. CUQIpy-FEniCS Geometry-type classes provide useful parameterization and mapping functionalities. Here we discuss the FEniCSContinuous and the MaternKLExpansion Geometry-type classes; both are used in the 2D Poisson, the EIT, and the PAT examples. The most basic CUQIpy-FEniCS Geometry-type class is the FEniCSContinuous geometry, which represents FEniCS FEM function spaces. We can write a FEM approximation \(w^{\text{FEM}}(\mathbf{\xi})\) of a continuous function \(w(\mathbf{\xi})\) as
\[w(\mathbf{\xi})\approx w^{\text{FEM}}(\mathbf{\xi})=\sum_{i=1}^{n_{\text{FEM}}}w_{i}^{\text{FEM}}e_{i}^{\text{FEM}}(\mathbf{\xi}), \tag{13}\]
where \(\{e_{i}^{\text{FEM}}(\mathbf{\xi})\}_{i=1}^{n_{\text{FEM}}}\) are FEM basis functions defined on a given mesh, \(\mathbf{w}^{\text{FEM}}=[w_{1}^{\text{FEM}},w_{2}^{\text{FEM}},...,w_{n_{\text{FEM}}}^{\text{FEM}}]^{\mathsf{T}}\) is the vector of the corresponding FEM coefficients of expansion, and \(n_{\text{FEM}}\) is the number of basis functions. The FEniCSContinuous.par2fun method converts a NumPy array to a FEniCS Function object representing \(w^{\text{FEM}}(\mathbf{\xi})\). This is achieved by interpreting the array elements as the FEM expansion coefficients \(\mathbf{w}^{\text{FEM}}\). The method fun2par converts a FEniCS Function object representing \(w^{\text{FEM}}(\mathbf{\xi})\) to a NumPy array of the FEM expansion coefficients \(\mathbf{w}^{\text{FEM}}\). We denote by \(\mathbf{G}_{\text{FEM}}\) the operator implemented by the par2fun method, which maps \(\mathbf{w}^{\text{FEM}}\) to \(w^{\text{FEM}}(\mathbf{\xi})\). For simplicity, we use the FEM coefficient vector notation \(\mathbf{w}^{\text{FEM}}\) when referring to the FEM function \(w^{\text{FEM}}(\mathbf{\xi})\). To create an object of the FEniCSContinuous class, which we use, for example, to represent the observations \(\mathbf{y}\) in the Poisson example and refer to as G_FEM, we first define the FEniCS function space on which the parameter is represented:

parameter_space = dl.FunctionSpace(mesh, "CG", 1)

Here, mesh is the FEniCS computational mesh representing the physical domain of the problem, and parameter_space is a FEniCS first-order Lagrange polynomial space defined on mesh. We are now ready to create the FEniCSContinuous object as follows:

G_FEM = FEniCSContinuous(parameter_space)

In some cases, re-parameterizing the Bayesian inverse problem parameter is needed to enforce certain types of solutions. One such re-parameterization, used in the Poisson example, enforces smooth solutions through a KL expansion. In CUQIpy-FEniCS, a KL parameterization can be represented by a MaternKLExpansion geometry.
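Before turning to the KL parameterization, the dual representation provided by G_FEM can be exercised directly; this is an illustrative round trip using the documented par2fun and fun2par methods:

```
import numpy as np

w_coeffs = np.random.randn(parameter_space.dim())  # some FEM coefficients
w_fun = G_FEM.par2fun(w_coeffs)  # parameter value -> FEniCS Function (G_FEM)
w_back = G_FEM.fun2par(w_fun)    # FEniCS Function -> parameter value (inverse)
```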
The MaternKLExpansion geometry is used to approximate the FEM coefficient of expansion vector \(\mathbf{w}^{\text{FEM}}\) by a truncated KL expansion \(\mathbf{w}\):
\[\mathbf{w}^{\text{FEM}}\approx\mathbf{w}=\sum_{i=1}^{n_{\text{KL}}}x_{i}\sqrt{\lambda_{i}}\mathbf{e}_{i}^{\text{KL}}. \tag{14}\]
Here, \(\mathbf{x}=[x_{1},x_{2},...,x_{n_{\text{KL}}}]^{\mathsf{T}}\) is the KL expansion coefficient vector, \(\{\lambda_{i}\}_{i=1}^{n_{\text{KL}}}\) is a decreasing sequence of positive real numbers, and \(\{\mathbf{e}_{i}^{\text{KL}}\}_{i=1}^{n_{\text{KL}}}\) is a set of FEM coefficient vectors of orthonormal functions. MaternKLExpansion constructs this KL expansion by discretizing a covariance operator--specifically, a Matérn-class covariance operator \((\frac{1}{\ell^{2}}I-\Delta)^{-(\frac{\nu}{2}+\frac{d}{4})}\) with length scale \(\ell>0\), smoothness parameter \(\nu>1\), and physical domain spatial dimension \(d=1,2\) or \(3\) [10]--on a FEM function space, parameter_space in this case. We then exploit FEniCS eigenvalue solvers to obtain the approximate eigenpairs \(\{(\sqrt{\lambda_{i}},\mathbf{e}_{i}^{\text{KL}})\}_{i=1}^{n_{\text{KL}}}\). We refer to (14) as the KL parameterization of \(\mathbf{w}\) with the KL coefficients \(\mathbf{x}\). Note that choosing \(n_{\text{KL}}\ll n_{\text{FEM}}\) reduces the dimension of the parameter space, which simplifies solving the Bayesian inverse problem and is typically an accurate approximation when representing smooth fields. We denote by \(\mathbf{G}_{\text{KL\_VEC}}\) the map from the KL expansion coefficients \(\mathbf{x}\) to the FEM expansion coefficients \(\mathbf{w}\). The MaternKLExpansion object thus represents the map \(\mathbf{G}_{\text{KL}}\coloneqq\mathbf{G}_{\text{FEM}}\circ\mathbf{G}_{\text{KL\_VEC}}\). In the 2D Poisson example, section 1.2, the MaternKLExpansion is internally created by the FEniCSPoisson2D test problem as

G_KL = MaternKLExpansion(G_FEM, length_scale=0.1, num_terms=32)

and it is used as the domain geometry of the model A to approximately parametrize the log-conductivity \(w^{\text{FEM}}(\mathbf{\xi})\) by the KL expansion coefficients \(\mathbf{x}\). The MaternKLExpansion class additionally implements the method gradient, which computes the gradient of the map \(\mathbf{G}_{\text{KL}}\) with respect to the coefficients \(\mathbf{x}\) in a given direction.

### Integration into CUQIpy through the PDEModel class

The CUQIpy-FEniCS PDE-type and Geometry-type objects provide the building blocks required to create the forward map \(\mathbf{A}\), e.g., (2). The CUQIpy PDEModel combines these FEniCS-dependent objects and interfaces them to the core CUQIpy library. We run print(A), where A is the CUQIpy model defined in section 1.2, to see its contents:

CUQI PDEModel: MaternKLExpansion FEniCSContinuous(1089,) -> FEniCSContinuous(1089,).
Forward parameters: ['x'].
PDE: SteadyStateLinearFEniCSPDE.

We note that its domain geometry (in the first line) corresponds to G_KL and its range geometry (second line) is G_FEM. We write \(\mathbf{A}\) in terms of its components as
\[\mathbf{A}(\mathbf{x})=(\mathbf{O}\circ\mathbf{S}\circ\mathbf{G}_{\text{KL}})(\mathbf{x}). \tag{15}\]
PDEModel provides the forward method that corresponds to applying \(\mathbf{A}\).
Additionally, it provides a gradient method to compute the gradient of \(\mathbf{A}\) with respect to the parameter \(\mathbf{x}\), if the underlying Geometry and PDE objects have gradient implementations. This enables gradient-based posterior sampling, such as with the NUTS sampler that we use in section 1.2. The following section gives an elaborate case study of using CUQIpy-FEniCS to solve an EIT problem. We provide details of constructing the software components needed to build and solve the problem using CUQIpy equipped with CUQIpy-FEniCS.

## 4 CUQIpy-FEniCS example: Electrical Impedance Tomography (EIT)

EIT is an imaging technique for inferring the conductivity of an object from measurements of the electrical current on its boundary. It is a non-invasive approach in medical imaging for detecting abnormal tissues. Similar techniques are used in many other applications, such as industrial inspection. The underlying mathematical model for EIT is an elliptic PDE. Such PDEs are among the most popular models for PDE-based inverse problems, e.g., in modelling subsurface flow in a porous medium and the inverse steady-state heat transfer problem with unknown heat conductivity. Hence, the EIT model can be modified for use in a wide range of PDE-based inverse problems.

### Mathematical model of EIT

We follow the 2D EIT model presented in [31]. Let \(\Gamma\subset\mathbb{R}^{2}\) be the open unit disk representing an object, and let \(\sigma:\bar{\Gamma}\to\mathbb{R}\) represent the conductivity of the object, where \(\bar{\Gamma}\) is the closure of the set \(\Gamma\). Suppose we impose a sequence of electric potentials \(g_{k}(\boldsymbol{\xi})\), \(k\in\mathbb{N}^{+}\), at the boundary \(\partial\Gamma\). We can then find the distribution of the electric potential \(u_{k}\), associated with \(g_{k}\), inside the object from the elliptic PDE
\[-\nabla\cdot(\sigma(\boldsymbol{\xi})\nabla u_{k}(\boldsymbol{\xi}))=0, \qquad\boldsymbol{\xi}\in\Gamma,\ k\in\mathbb{N}^{+}, \tag{16a}\]
\[u_{k}(\boldsymbol{\xi})=g_{k}(\boldsymbol{\xi})=\sin(k\arctan(\frac{\xi^{2}}{\xi^{1}})), \qquad\boldsymbol{\xi}\in\partial\Gamma,\ k\in\mathbb{N}^{+}. \tag{16b}\]
Here, \(\sigma(\boldsymbol{\xi})\) is the conductivity as a function of the Cartesian coordinates \(\boldsymbol{\xi}=[\xi^{1},\xi^{2}]^{\mathsf{T}}\), and \(k\in\mathbb{N}^{+}\) is a spatial frequency of the boundary electric potential. Note that the boundary condition (16b) for the electric potential is chosen from [4] and is an approximation of the full-electrode model introduced in [31]. The EIT problem is the inverse problem of inferring the conductivity \(\sigma\) by imposing multiple boundary potentials \(g_{k}(\boldsymbol{\xi})\), e.g. for \(k=1,2,3,\dots\), and measuring the corresponding current \(y_{k}(\boldsymbol{\xi})\), defined by
\[y_{k}(\boldsymbol{\xi}):=\sigma(\boldsymbol{\xi})\frac{\partial u_{k}(\boldsymbol{\xi})}{\partial\mathbf{n}}, \tag{17}\]
at the boundary \(\partial\Gamma\). Here, \(\mathbf{n}\) is the outward unit vector, orthogonal to the boundary \(\partial\Gamma\). This EIT model corresponds to the Dirichlet-to-Neumann EIT model, also known as the shunt model [31]. We are interested in a piece-wise constant conductivity \(\sigma\) with background conductivity value \(\sigma^{-}=1\) and foreground conductivity value \(\sigma^{+}=10\). This contrast between foreground and background is a common difference between a healthy and an unhealthy (e.g., cancerous) tissue [4].
We also assume that the foreground represents an inclusion far from the boundary, simplifying the boundary measurement to
\[y_{k}(\boldsymbol{\xi})=\frac{\partial u_{k}(\boldsymbol{\xi})}{\partial\mathbf{n}},\quad\boldsymbol{\xi}\in\partial\Gamma, \tag{18}\]
since \(\sigma(\mathbf{\xi})\equiv\sigma^{-}=1\) on the boundary. We define the parameter-to-solution operator \(\mathcal{S}_{k}\) as the mapping from the conductivity \(\sigma(\mathbf{\xi})\) to the solution \(u_{k}(\mathbf{\xi})\) of the PDE (16). We also define the observation operator \(\mathcal{O}\) that maps the PDE solution \(u_{k}(\mathbf{\xi})\) to the boundary current measurement \(y_{k}(\mathbf{\xi})\), with \(\mathbf{\xi}\in\partial\Gamma\). Note that the observation operator \(\mathcal{O}\) does not explicitly depend on the frequency \(k\). In practice only a finite number of frequencies \(k\) in (16) is considered; in this section we only consider \(k=1,2,3\) and \(4\).

### Finite element discretization and FEniCS implementation of EIT

Let \(H^{1}(\Gamma)\) [1] be the Hilbert space to which we expect the solution \(u_{k}\) of equations (16) to belong. We now reformulate (16) to obtain an elliptic PDE with homogeneous Dirichlet boundary conditions. This can be achieved, e.g., using the lifting method [26]. This approach simplifies the finite element approximation of (16). We define lifting functions \(u_{k}^{\rm lift}\in H^{1}(\Gamma)\), \(k=1,2,3\) and \(4\), that satisfy the boundary input in (16), i.e. \(u_{k}^{\rm lift}(\mathbf{\xi})|_{\partial\Gamma}=u_{k}(\mathbf{\xi})|_{\partial\Gamma}=g_{k}(\mathbf{\xi})\), and vanish away from the boundary. Introducing a new variable \(v_{k}=u_{k}-u_{k}^{\rm lift}\) allows us to reformulate (16) as
\[-\nabla\cdot(\sigma(\mathbf{\xi})\nabla v_{k}(\mathbf{\xi}))=\Delta u_{k}^{\rm lift}(\mathbf{\xi}), \qquad\mathbf{\xi}\in\Gamma,\ k=1,2,3,4, \tag{19a}\]
\[v_{k}(\mathbf{\xi})=0, \qquad\mathbf{\xi}\in\partial\Gamma. \tag{19b}\]
The potential \(u_{k}\) is now recovered from the relation \(u_{k}=v_{k}+u_{k}^{\rm lift}\), for \(k=1,2,3\) and \(4\). Taking a test function \(t(\mathbf{\xi})\in H^{1}(\Gamma)\) that vanishes on \(\partial\Gamma\), we form the weak formulation [26] of (19) as
\[\int_{\Gamma}\sigma\nabla v_{k}\cdot\nabla t\ \mathrm{d}\,\mathbf{\xi}=-\int_{\Gamma}\nabla u_{k}^{\rm lift}\cdot\nabla t\ \mathrm{d}\,\mathbf{\xi}. \tag{20}\]
Similarly, we let \(H^{1}_{\rm p}(\partial\Gamma)\) be the space to which the observation functions \(y_{k}\) belong; the subscript "p" denotes the Hilbert space of periodic functions. Taking a test function \(w\in H^{1}_{\rm p}(\partial\Gamma)\) we can form the weak form for the boundary measurement as
\[\int_{\partial\Gamma}y_{k}w\ \mathrm{d}\,\mathbf{\xi}=\int_{\partial\Gamma}\frac{\partial(v_{k}+u_{k}^{\rm lift})}{\partial\mathbf{n}}w\ \mathrm{d}\,\mathbf{\xi}. \tag{21}\]
Here, we used the relation \(\partial u_{k}/\partial\mathbf{n}=\partial(v_{k}+u_{k}^{\rm lift})/\partial\mathbf{n}\). Note that due to the lifting approach, the observation now depends on the frequency \(k\). We emphasize this by introducing the subscript \(k\) for the observation operator, i.e. we define \(\mathcal{O}_{k}\) to be the mapping from \(v_{k}\) to \(y_{k}\). We discretize the domain \(\Gamma\) using a triangulated mesh. Furthermore, we choose first-order Lagrangian polynomial functions [26] to approximate the basis functions of \(H^{1}(\Gamma)\) and \(H^{1}_{\rm p}(\partial\Gamma)\).
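The triangulated mesh itself can be generated, for instance, with the mshr package distributed alongside legacy FEniCS; this is an assumption on our part, and the companion code is authoritative:

```
import dolfin as dl
import mshr

# One possible triangulation of the unit disk (the resolution 32 is illustrative)
domain = mshr.Circle(dl.Point(0.0, 0.0), 1.0)
mesh = mshr.generate_mesh(domain, 32)
```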
We implement the left-hand side and the right-hand side of (20) using FEniCS as

form_lhs = lambda sigma, v, t: dl.inner(sigma*dl.grad(v), dl.grad(t))*dl.dx
form_rhs1 = lambda sigma, t: -dl.inner(dl.grad(u_lift_1), dl.grad(t))*dl.dx

Here, the functions form_lhs and form_rhs1 return the FEniCS weak forms of the left-hand side and the right-hand side (for \(k=1\)) of (20), respectively. Furthermore, the FEniCS function u_lift_1 contains the user-defined lifting function. We refer the reader to the codes accompanying this paper for more details. Note that since \(v_{k}\) is the solution to the PDE (20), we may use the same form_lhs and input v for all frequencies \(k=1,2,3\) and \(4\). We construct similar functions form_rhs2, form_rhs3, and form_rhs4 for the input frequencies \(k=2,3\) and \(4\). We now implement the observation function (21). Let give_bnd_vals be a Python function that computes function values at the boundaries of \(\Gamma\). The observation function then takes the form

def observation1(sigma, v1):
    obs_form = dl.inner(dl.grad(v1), n)*w*dl.ds
    assembled_form = dl.assemble(obs_form)
    boundary_values = give_bnd_vals(assembled_form)
    return boundary_values

Here, n is a FEniCS vector containing the unit outward normal vectors to the cell boundaries, v1 is a FEniCS function of the solution \(v_{1}\), and w is a FEniCS test function. We construct similar functions observation2, observation3, and observation4 for the input frequencies \(k=2,3\) and \(4\). The FEM discretization of (20) results in a finite-dimensional system of equations
\[\mathbf{M}_{\mathbf{\sigma}}\mathbf{v}_{k}=\mathbf{b}_{k},\qquad k=1,2,3,4, \tag{22}\]
and the discretized observation model
\[\mathbf{y}_{k}=\mathbf{O}_{k}(\mathbf{v}_{k}),\qquad k=1,2,3,4. \tag{23}\]
Here, \(\mathbf{\sigma}\), \(\mathbf{v}_{k}\) and \(\mathbf{u}_{k}^{\rm lift}\) are the FEM expansion coefficients for \(\sigma\), \(v_{k}\), and \(u_{k}^{\rm lift}\) for the \(k\)th frequency, respectively. Furthermore, \(\mathbf{M}_{\mathbf{\sigma}}\) is the FEM stiffness matrix, i.e. the discretization of the elliptic operator \(\nabla\cdot\sigma\nabla\), which depends on the unknown parameter \(\mathbf{\sigma}\), and \(\mathbf{b}_{k}\) is the right-hand-side vector containing the assembled integrals on the right-hand side of (20). Furthermore, \(\mathbf{y}_{k}\) is the observation vector, and \(\mathbf{O}_{k}\) is a discretization of the observation map \(\mathcal{O}_{k}\). We now define the EIT forward maps \(\mathbf{A}_{k}\) to be
\[\mathbf{y}_{k}=\mathbf{A}_{k}(\mathbf{\sigma}):=\mathbf{O}_{k}\circ\mathbf{S}_{k}(\mathbf{\sigma}),\qquad k=1,2,3,4, \tag{24}\]
where \(\mathbf{S}_{k}\) is the FEM discretization of \(\mathcal{S}_{k}\) discussed above.

### Parameterization of the conductivity \(\boldsymbol{\sigma}\)

We consider the level-set parameterization for the conductivity \(\sigma\) proposed in [10]. This approach comprises multiple layers of parameterization, which we implement using the geometry module in CUQIpy-FEniCS. Let us first define the FEniCS function space to which we expect \(\boldsymbol{\sigma}\) to belong:

parameter_space = dl.FunctionSpace(mesh, "CG", 1)

Here, mesh is the computational mesh used for FEniCS, and parameter_space is a FEniCS function space with linear continuous elements.
Similarly, we can define a FEniCS function space solution_space on which the solutions \(\boldsymbol{v}_{k}\) are defined. Now we define a discrete Gaussian random field \(\boldsymbol{r}\) on \(\Gamma\), i.e. realizations of \(\boldsymbol{r}\) are FEM expansion coefficients of random functions defined on \(\Gamma\). One way to define such a random function is to use a truncated KL expansion with a Matérn covariance, as discussed in section 3.2, to approximate \(\boldsymbol{r}\) the same way \(\boldsymbol{w}\) is approximated in (14). To construct the geometry associated with the KL parameterization, we first consider the operator \(\boldsymbol{G}_{\text{FEM}}\), the map from FEM expansion coefficients to a FEniCS function (see section 3.2). The corresponding geometry is defined as

G_FEM = FEniCSContinuous(parameter_space)

We now construct the geometry associated with the KL parameterization. This geometry is associated with the operator \(\boldsymbol{G}_{\text{KL}}\) defined in section 3.2:

G_KL = MaternKLExpansion(G_FEM, length_scale=0.2, num_terms=64)

Here length_scale is the length scale constant of the Matérn covariance and num_terms is \(n_{\text{KL}}\), the number of terms in the KL expansion. The geometry G_KL is now the implementation of \(\boldsymbol{G}_{\text{KL}}\), which maps \(\boldsymbol{x}\), the vector containing the KL expansion coefficients in (14), to the vector \(\boldsymbol{r}\). Note that we used parameter_space as the FEniCS function space associated with \(\boldsymbol{r}\). Now, to relate the Gaussian function \(\boldsymbol{r}\) to the conductivity \(\boldsymbol{\sigma}\), we define the Heaviside map [39] \(\boldsymbol{G}_{\text{Heavi}}\) as an additional layer of parameterization:
\[\boldsymbol{\sigma}=\boldsymbol{G}_{\text{Heavi}}(\boldsymbol{r}):=\frac{1}{2}\left(\sigma^{+}(1-\text{sign}(\boldsymbol{r}))+\sigma^{-}(1+\text{sign}(\boldsymbol{r}))\right). \tag{25}\]
The map \(\boldsymbol{G}_{\text{Heavi}}\) constructs a piece-wise constant conductivity \(\boldsymbol{\sigma}\). Note that, strictly speaking, the Heaviside map must be applied to the function \(r\), the FEM function associated with \(\boldsymbol{r}\), which constructs a conductivity \(\sigma\). However, in the case of linear Lagrangian FEM elements, we can directly apply the Heaviside map to the expansion coefficients \(\boldsymbol{r}\) and obtain \(\boldsymbol{\sigma}\) as in (25). We can construct this parameterization in CUQIpy-FEniCS as

G_Heavi = FEniCSMappedGeometry(G_KL, map=heaviside)

Here, heaviside is a Python function that applies the Heaviside map (25); see the companion code for more implementation details. By passing map=heaviside, FEniCSMappedGeometry applies heaviside to G_KL. We redefine the forward operators to use the parameterizations discussed:
\[\boldsymbol{A}_{k}=\boldsymbol{O}_{k}\circ\boldsymbol{S}_{k}\circ\boldsymbol{G}_{\text{Heavi}}\circ\boldsymbol{G}_{\text{KL}},\qquad k=1,2,3,4. \tag{26}\]
We define the range geometry as a Continuous1D geometry:

G_cont = Continuous1D(m)

where m is the dimension of any of the observation vectors \(\boldsymbol{y}_{1},\boldsymbol{y}_{2},\boldsymbol{y}_{3}\) or \(\boldsymbol{y}_{4}\). In the experiments in this section we set m=94.

### PDEModel for the EIT problem

Now we have all the components to define a CUQIpy-FEniCS PDE object.
We first create a PDE form by combining the left-hand-side and right-hand-side forms, defined in section 4.2, in a Python tuple as

PDE_form1 = (form_lhs, form_rhs1)

Since (19) is a steady-state linear PDE, we use the SteadyStateLinearFEniCSPDE class to define this PDE in CUQIpy-FEniCS:

PDE1 = SteadyStateLinearFEniCSPDE(
    PDE_form1,
    mesh,
    solution_space,
    parameter_space,
    zero_bc,
    observation_operator=observation1,
    reuse_assembled=True)

Recall that solution_space is the FEniCS space for the solution \(\boldsymbol{v}_{k}\) in (22), zero_bc is the FEniCS implementation of the homogeneous Dirichlet boundary conditions for (19), and observation1 is the observation Python function defined in section 4.2. The keyword argument reuse_assembled=True informs CUQIpy-FEniCS to store and reuse matrix factors of \(\boldsymbol{M}_{\boldsymbol{\sigma}}\), for a particular \(\boldsymbol{\sigma}\), when solving the system (22). This provides significant computational acceleration. We note that the PDE problems (19) for frequencies \(k=2,3\) and \(4\) differ from the frequency \(k=1\) only in the right-hand-side term and the observation operator. We can exploit this to construct PDE2, for frequency \(k=2\), as

PDE2 = PDE1.with_updated_rhs(form_rhs2)
PDE2.observation_operator = observation2

Similarly, we construct PDE3 and PDE4. Note that the matrix factorization of \(\mathbf{M}_{\mathbf{\sigma}}\) is shared among PDE1, PDE2, PDE3, and PDE4. Now we can create a PDEModel that represents the forward operator \(\mathbf{A}_{1}\) and includes information about the parameterization of \(\mathbf{\sigma}\):

A1 = PDEModel(PDE1, range_geometry=G_cont, domain_geometry=G_Heavi)

Similarly we define A2, A3, and A4 for the input frequencies \(k=2,3\) and \(4\), corresponding to the forward operators defined in (26).

### Bayesian formulation and solution

In this section we formulate the EIT problem in a Bayesian framework. Let \(\mathbf{x}\) be a vector containing the expansion coefficients \(\{x_{i}\}_{i=1}^{n_{\text{KL}}}\) in (14). In the Bayesian formulation of the EIT problem, the posterior distribution of the unknown parameter \(\mathbf{x}\) is the conditional probability distribution of \(\mathbf{x}\) given observed data \(\mathbf{y}_{1}^{\text{obs}},\mathbf{y}_{2}^{\text{obs}},\mathbf{y}_{3}^{\text{obs}}\) and \(\mathbf{y}_{4}^{\text{obs}}\). Here, we assume white Gaussian data noise. This Bayesian problem takes the form
\[\mathbf{x} \sim\text{Gaussian}(\mathbf{0},\mathbf{I}_{n_{\text{KL}}}),\]
\[\mathbf{y}_{k} \sim\text{Gaussian}(\mathbf{A}_{k}(\mathbf{x}),s_{\text{noise}}^{2}\mathbf{I}_{m}),\qquad k=1,2,3,4.\]
Here, \(s_{\text{noise}}\) is the standard deviation of the data distribution. We use CUQIpy to implement these distributions:

x = Gaussian(np.zeros(n_KL), 1, geometry=G_Heavi)
y1 = Gaussian(A1(x), s_noise**2, geometry=G_cont)

and similarly we define y2, y3, and y4 for \(k=2,3\) and \(4\). We pass the argument geometry=G_Heavi when initializing x to specify that samples from this distribution follow the parameterization discussed in section 4.3. We can sample from the prior distribution and plot the samples using the following commands:

prior_sample = x.sample(5)
prior_sample.plot()

Examples of prior samples can be found in fig. 7(a). Note that CUQIpy-FEniCS visualizes these samples as FEniCS functions. To create simulated data for this EIT problem, we consider a conductivity field \(\sigma^{\text{true}}\) comprising three circular inclusions. The coordinates of the centers of the inclusions are \((0.5,0.5)\), \((-0.5,0.6)\) and \((-0.3,-0.3)\), with radii \(0.2\), \(0.1\) and \(0.3\), respectively. We assume a conductivity value of \(\sigma^{+}=10\) inside the inclusions and \(\sigma^{-}=1\) in the background.
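For reference, this phantom can be written as a plain Python function using the centers, radii, and conductivity values stated above; the companion code may construct it differently:

```
def sigma_true(xi1, xi2):
    """Piecewise-constant conductivity: sigma_plus inside an inclusion."""
    inclusions = [((0.5, 0.5), 0.2), ((-0.5, 0.6), 0.1), ((-0.3, -0.3), 0.3)]
    for (c1, c2), r in inclusions:
        if (xi1 - c1)**2 + (xi2 - c2)**2 <= r**2:
            return 10.0  # sigma_plus
    return 1.0           # sigma_minus
```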
We can obtain the FEM expansion coefficients \(\mathbf{\sigma}^{\text{true}}\) by projecting \(\sigma^{\text{true}}\) onto the FEM basis; note that we introduce an approximation in this projection. The true and the projected conductivity phantoms are presented in fig. 6(a). Note that this conductivity field is not sampled from the prior distribution, and thus there is no true parameter vector \(\mathbf{x}^{\rm true}\) that gives rise to the exact conductivity \(\mathbf{\sigma}^{\rm true}\). Data is created by adding Gaussian noise, with standard deviation \(s_{\rm noise}\), to \(\mathbf{y}_{k}^{\rm exact}:=\mathbf{A}_{k}(\mathbf{\sigma}^{\rm true})\), for \(k=1,2,3,4\), at noise levels \(5\%\), \(10\%\) and \(20\%\). The true and the noisy data with \(20\%\) noise level are presented in fig. 6(b). Now we have all the components we need to create a posterior distribution. We first define the joint distribution
\[p(\mathbf{x},\mathbf{y}_{1},\mathbf{y}_{2},\mathbf{y}_{3},\mathbf{y}_{4})=p(\mathbf{x})\,p(\mathbf{y}_{1}|\mathbf{x})\,p(\mathbf{y}_{2}|\mathbf{x})\,p(\mathbf{y}_{3}|\mathbf{x})\,p(\mathbf{y}_{4}|\mathbf{x}), \tag{27}\]
where \(p(\mathbf{x})\) is the prior probability density function (PDF) and \(p(\mathbf{y}_{k}|\mathbf{x})\), for \(k=1,2,3\) and \(4\), are the data distribution PDFs. We obtain the posterior distribution by conditioning the joint distribution on the data \(\mathbf{y}_{k}^{\rm obs}\), for \(k=1,2,3\) and \(4\). In CUQIpy this translates to

joint = JointDistribution(x, y1, y2, y3, y4)
posterior = joint(y1=y1_obs, y2=y2_obs, y3=y3_obs, y4=y4_obs)

In this test case, we use the standard Metropolis-Hastings (MH) algorithm [29, §2] to sample from the posterior. We pass the posterior as an argument in the initialization and then compute \(10^{6}\) samples using this sampler:

sampler = MetropolisHastings(posterior)
posterior_samples = sampler.sample_adapt(1e6)

In the remainder of this section we discuss how to use posterior_samples in CUQIpy-FEniCS to visualize the posterior distribution.

Figure 6: (a) The true (but assumed unknown) conductivity field \(\sigma\) (left) and the projected conductivity field on the FEM space (right). (b) The exact boundary measurement values and the noisy boundary measurement values for frequencies \(k=1,2,3,4\); here we only present the data collected with a \(20\%\) noise level.

### Posterior samples, post-processing and visualization

We analyze and visualize the posterior using CUQIpy equipped with CUQIpy-FEniCS geometries. We first plot some of the posterior samples using the plot method:

posterior_samples.plot()

This command chooses 5 random posterior samples and plots them. We provide these posterior samples of the conductivity field \(\sigma\) in fig. 7(b), fig. 7(c) and fig. 7(d), for noise levels 5%, 10% and 20%, respectively; only two samples are shown for each case for brevity. We see that the reconstruction in the samples degrades, compared to the true conductivity \(\sigma^{\rm true}\), for higher noise levels. In addition, we see that the discrepancy between the samples and \(\sigma^{\rm true}\) occurs near the center of the domain. Next, we visualize the posterior mean as an estimate for the conductivity field \(\sigma\).
We can achieve this using the command

posterior_samples.plot_mean()

Note that posterior_samples is equipped with the G_Heavi geometry. Therefore, plot_mean will apply this geometry, i.e. the parameterization \(\mathbf{G}_{\rm Heavi}\circ\mathbf{G}_{\rm KL}\), to the posterior mean. The mean conductivity field is provided in fig. 8(a). We see that for increased noise levels, the posterior mean resembles the true conductivity field \(\sigma^{\rm true}\) less closely. We use the point-wise variance of the posterior samples as a method for quantifying the uncertainty in the posterior. We can achieve this in CUQIpy-FEniCS by

posterior_samples.funvals.plot_variance()

Here, the module funvals informs CUQIpy-FEniCS that we estimate the variance on the function values, i.e. we first apply the map \(\mathbf{G}_{\text{Heavi}}\circ\mathbf{G}_{\text{KL}}\) and then compute the variance. The point-wise variance is presented in fig. 8(b). We see that the uncertainty in the reconstruction is associated with the boundaries of the inclusions, as well as with the distance from the domain boundary. Furthermore, adding noise increases the level of uncertainty. This is consistent with the findings of [10].

Figure 7: Samples from the prior and posterior distributions of \(\sigma\). First column: prior samples. Second, third and fourth columns: posterior samples of \(\sigma\) with 5%, 10% and 20% noise level, respectively.

Finally, we visualize the posterior for the expansion coefficients \(\mathbf{x}\). We use the plot_ci method to visualize the posterior mean and the 95% CIs associated with the parameters. To indicate that we are visualizing the posterior for the coefficients (parameters) \(\mathbf{x}\), we pass the argument plot_par=True to the plot_ci method:

posterior_samples.plot_ci(95, plot_par=True)

In fig. 9(a), fig. 9(b) and fig. 9(c) we present the CI plots for noise levels 5%, 10% and 20%, respectively. We see that for the case with a 20% noise level, the mean of \(x_{i}\) is close to zero for larger indices \(i\). This suggests that for higher noise levels, the value of \(x_{i}\) follows the prior distribution and the information associated with these coefficients is lost in the data distribution. This is not the case for smaller noise levels, e.g. 5%.

Figure 8: Estimated conductivity field \(\sigma\) with uncertainty estimates, visualized as a Heaviside-mapped KL expansion. (a) Posterior mean. (b) Point-wise variance.

## 5 Photo-acoustic tomography through user-defined PDE models

In many applications of UQ for inverse problems, a well-developed forward problem solver is already available from the user. Therefore, it is of high interest that CUQIpy and CUQIpy-FEniCS can incorporate such black-box forward solvers. In this section we discuss how to use black-box code in CUQIpy and CUQIpy-FEniCS to quantify uncertainties for inverse problems with PDEs. In addition, we discuss how to exploit geometries in CUQIpy-FEniCS, in such cases, to visualize uncertainties without modifying the original black-box software. To demonstrate the user-defined features of CUQIpy and CUQIpy-FEniCS, we consider a 1D photo-acoustic tomography (PAT) problem [35]. In such problems, a short light pulse illuminates an object to create a local initial pressure distribution. This pressure distribution then propagates in the object in the form of ultrasound waves. The PAT problem is then to reconstruct the initial pressure distribution from time-varying ultrasound measurements.
For the 1D variant, we consider a 1D pressure profile with two ultrasound sensors that measure pressure variations.

### Mathematical model of PAT

Let us consider an infinitely long 1D acoustic object with homogeneous acoustic properties (homogeneous wave speed). Assuming that the illumination duration via a light pulse is negligible compared to the speed of wave propagation, we can approximate the propagation of waves in the object by the hyperbolic PDE (linear wave equation)
\[\frac{\partial^{2}u(\tau,\xi)}{\partial\tau^{2}}=\frac{\partial^{2}u(\tau,\xi)}{\partial\xi^{2}}, \qquad 0<\tau\leq 1,\ \xi\in\mathbb{R}, \tag{28a}\]
\[u(0,\xi)=g(\xi), \qquad\xi\in\mathbb{R}, \tag{28b}\]
\[\frac{\partial u(0,\xi)}{\partial\tau}=0, \qquad\xi\in\mathbb{R}. \tag{28c}\]
Here, \(u\) is the pressure distribution, \(g\) the initial pressure distribution, and \(\tau\) the time. We place two pressure sensors at locations \(\xi_{L}=0\) and \(\xi_{R}=1\) that record the pressure over time. We define the continuous measurements
\[y_{L}(\tau):=u(\tau,\xi_{L}),\qquad y_{R}(\tau):=u(\tau,\xi_{R}),\qquad 0<\tau\leq 1. \tag{29}\]
The inverse problem is to find the initial pressure \(g\) from the measurements \(y_{L}\) and \(y_{R}\). We define the parameter-to-solution operator \(\mathcal{S}\) of the PAT problem to be the map that maps \(g(\xi)\) to the solution \(u(\tau,\xi)\) for all \(0<\tau\leq 1\) and \(\xi\in\mathbb{R}\). Furthermore, we define the observation operators \(\mathcal{O}_{L}\) and \(\mathcal{O}_{R}\) to be the mappings that map (slice) the solution \(u(\tau,\xi)\) to the measurements \(y_{L}\) and \(y_{R}\). Note that the forward operators
\[\mathcal{A}_{L}:=\mathcal{O}_{L}\circ\mathcal{S},\qquad\mathcal{A}_{R}:=\mathcal{O}_{R}\circ\mathcal{S}, \tag{30}\]
are linear operators. We are interested in recovering the initial pressure profile \(g(\xi)\) in the domain \(\Gamma=[\xi_{L},\xi_{R}]=[0,1]\) from the full data, where pressure measurements from both \(\xi_{L}\) and \(\xi_{R}\) are available. In addition, we also investigate reconstructing \(g\) from partial data, where the pressure measurement is only available at \(\xi_{L}\).

### CUQIpy implementation of PAT

In this section we assume that a discretization \(\boldsymbol{S}\) of \(\mathcal{S}\) is available that approximately solves the wave equation (28). Furthermore, we assume that the input of \(\boldsymbol{S}\) is a vector \(\boldsymbol{g}\) which represents a discretization of \(g\). Note that \(\boldsymbol{S}\) is a generic discretization and we treat it as a black-box solver; the particular discretization used in this paper is available in the accompanying codes. We consider discretizations \(\boldsymbol{O}_{L}\) and \(\boldsymbol{O}_{R}\) of the observation operators \(\mathcal{O}_{L}\) and \(\mathcal{O}_{R}\). We choose a measurement frequency \(f=250\) and construct discrete measurement vectors \(\boldsymbol{y}_{L}\) and \(\boldsymbol{y}_{R}\) comprising the snapshots \(u(i/f,\xi_{L})\) and \(u(i/f,\xi_{R})\), for \(i=1,\ldots,m\), respectively. We consider a simple Bayesian problem where we assume the components of \(\boldsymbol{g}\) have a Gaussian distribution and the pressure measurements are corrupted by additive white Gaussian noise.

Figure 10: (b) True initial pressure for the PAT problem. (a) and (c) Noisy and noise-free data collected for the PAT problem with sensors at \(\xi_{L}=0\) and \(\xi_{R}=1\), respectively.
We can formulate this problem as

\[\mathbf{g} \sim\mathrm{Gaussian}(\mathbf{0},\mathbf{I}_{n_{g}}),\]
\[\mathbf{y}_{L} \sim\mathrm{Gaussian}(\mathbf{A}_{L}(\mathbf{g}),s_{\mathrm{noise}}^{2}\mathbf{I}_{m}),\]
\[\mathbf{y}_{R} \sim\mathrm{Gaussian}(\mathbf{A}_{R}(\mathbf{g}),s_{\mathrm{noise}}^{2}\mathbf{I}_{m}).\]

Here, \(n_{g}=121\) is the size of \(\mathbf{g}\), and \(s_{\mathrm{noise}}=0.125\) is the standard deviation of the noise. Furthermore, \(\mathbf{A}_{L}\) and \(\mathbf{A}_{R}\) represent the user's discretization of the forward operators \(\mathcal{A}_{L}\) and \(\mathcal{A}_{R}\), and PAT_L and PAT_R are black-box Python functions applying the photo-acoustic forward models \(\mathbf{A}_{L}\) and \(\mathbf{A}_{R}\), respectively. We set up the Bayesian problem with Gaussian prior and data distributions:

```
g = Gaussian(np.zeros(n_g), 1)
yL = Gaussian(PAT_L(g), s_noise**2)
yR = Gaussian(PAT_R(g), s_noise**2)
```

Note that CUQIpy, by default, considers the Continuous1D geometry for the initial pressure. Therefore, we do not explicitly define it. By using their forward solver within CUQIpy, the user has access, for example, to more complex priors for \(\mathbf{g}\), following examples in the companion paper [29]. As an example, when the location of jumps in the initial pressure is the quantity of interest in the inverse problem, we can consider a Markov-random-field-type prior. Using such prior distributions is discussed in [29]. We now consider a known true initial pressure \(g^{\mathrm{true}}\) and its discretization array g_true, from which we can construct noisy measurements \(\mathbf{y}_{L}^{\mathrm{obs}}\) and \(\mathbf{y}_{R}^{\mathrm{obs}}\):

```
yL_obs = yL(g=g_true)
yR_obs = yR(g=g_true)
```

Figure 10 shows the initial pressure distribution and the exact and noisy pressure measurements \(\mathbf{y}_{L}^{\mathrm{obs}}\) and \(\mathbf{y}_{R}^{\mathrm{obs}}\). Instead of constructing the posterior and sampling it, we wish to demonstrate how to formulate this same problem using the CUQIpy-FEniCS plugin. Since sampling is done in the same way, we demonstrate it at the end of the following section.

### CUQIpy-FEniCS implementation of PAT

Here, we assume that PAT_L and PAT_R are Python functions with a FEniCS implementation of the forward operators \(\mathbf{A}_{L}\) and \(\mathbf{A}_{R}\). We also assume these functions take a FEniCS function g as input and compute the boundary pressure measurement vectors yL and yR. Let us first define the FEniCS function space on which \(\mathbf{g}\) is defined as a function:

```
parameter_space = dl.FunctionSpace(mesh, "CG", 1)
```

Here, mesh is a discretization of the real line and parameter_space is a FEniCS function space with first order Lagrangian hat-functions. We parameterize \(\mathbf{g}\) with a KL expansion with a Matérn covariance associated with the map \(\mathbf{G}_{\mathrm{KL}}\) (see Section 3.2 and Section 4.3). We now redefine the forward operators

\[\mathbf{A}_{L}=\mathbf{O}_{L}\circ\mathbf{S}\circ\mathbf{G}_{\mathrm{KL}},\qquad\mathbf{A}_{R}=\mathbf{O}_{R}\circ\mathbf{S}\circ\mathbf{G}_{\mathrm{KL}}. \tag{31}\]

We set up a geometry for the KL expansion with CUQIpy-FEniCS (see Section 3.2). The geometry G_KL is the implementation of \(\mathbf{G}_{\mathrm{KL}}\) with Matérn length scale constant \(\ell=0.1\), regularity constant \(\nu=0.75\), and \(n_{\mathrm{KL}}=100\) terms.
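The code listing for this geometry was not carried over above. A sketch of how it might look is given below, assuming the FEniCSContinuous and MaternKLExpansion geometry classes of the CUQIpy-FEniCS plugin; the class and argument names follow the plugin's examples but are assumptions that should be checked against the installed version.

```
# A sketch, not the paper's listing: wrap the FEniCS function space in a
# CUQIpy-FEniCS geometry and equip it with a Matern KL parameterization.
# Class and keyword names are assumed; verify against your plugin version.
from cuqipy_fenics.geometry import FEniCSContinuous, MaternKLExpansion

G_FEM = FEniCSContinuous(parameter_space)
G_KL = MaternKLExpansion(G_FEM, length_scale=0.1, nu=0.75, num_terms=100)
```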
We construct a continuous 1D geometry for the observations. Next, we create a CUQIpy model to encapsulate the forward operator PAT_L with the parameterization, domain and range geometries, and similarly for PAT_R. Note that in creating these models we are treating PAT_L and PAT_R as black-box functions. Now, CUQIpy-FEniCS can utilize the information about domain and range geometries to allow advanced sampling and visualization. The parameterized Bayesian problem for the PAT now takes the form

\[\mathbf{x} \sim\mathrm{Gaussian}(\mathbf{0},\mathbf{I}_{n_{\mathrm{KL}}}),\]
\[\mathbf{y}_{L} \sim\mathrm{Gaussian}(\mathbf{A}_{L}(\mathbf{x}),s^{2}_{\mathrm{noise}}\mathbf{I}_{m}),\]
\[\mathbf{y}_{R} \sim\mathrm{Gaussian}(\mathbf{A}_{R}(\mathbf{x}),s^{2}_{\mathrm{noise}}\mathbf{I}_{m}),\]

where \(m=250\) is the size of \(\mathbf{y}_{L}^{\mathrm{obs}}\) and \(\mathbf{y}_{R}^{\mathrm{obs}}\). Note that \(\mathbf{A}_{L}\) and \(\mathbf{A}_{R}\) now contain the KL-expansion parameterizations. We can set up this Bayesian problem in the same way as before. In this example we consider the same s_noise, as well as the same noisy data yL_obs and yR_obs, as in Section 5.2. Similar to the previous sections, we now construct the joint and the posterior distributions. Setting up a Bayesian problem for the partial data, e.g. when yR_obs is missing, is similar. To explore the posterior we use the preconditioned Crank-Nicolson (pCN) [9] sampler, which is suited for the KL parameterization.

```
sampler = pCN(posterior)
posterior_samples = sampler.sample_adapt(2e5)
```

We can visualize the posterior for \(\mathbf{g}\), i.e., the initial pressure distribution, by

```
posterior_samples.plot_ci(95, exact=g_true)
```

We can find the mean function for \(\mathbf{g}\) and the CIs associated with this estimate in fig. 11. We see that the mean function, in the case with complete data, is a better estimate of the true initial pressure profile than in the case with partial data. Furthermore, when the data corresponding to the right boundary is missing, the uncertainty in estimating the right boundary increases. Finally, we plot the mean and 95% CI for the expansion coefficients \(\mathbf{x}\):

```
posterior_samples.plot_ci(95, plot_par=True)
```

In fig. 12(a) and fig. 12(b) we present the CI plots for the full data (from both boundaries) and the partial data (only from the left boundary), respectively. Note that we only show the first 25 components, although we estimate all 100 parameters. We see that the uncertainty of the first coefficient significantly increases for the case with partial data.

Figure 11: Estimated initial pressure profile \(g\) together with the uncertainty estimates. The plots correspond to the (a) full data and (b) partial data.

## 6 Conclusion and future work

In this paper we described our general framework for modeling and solving PDE-based Bayesian inverse problems with the CUQIpy Python software package. We showed how to express PDEs and statistical assumptions about unknown parameters natively in CUQIpy, or using a user-provided black-box PDE solver, and conduct Bayesian inference and uncertainty quantification. We also presented our CUQIpy-FEniCS plugin as an example of how to incorporate modelling by third-party PDE libraries such as the finite-element modelling package FEniCS. We showed that CUQIpy and CUQIpy-FEniCS provide a consistent and intuitive interface to model and solve PDE-based Bayesian inverse problems, as well as to analyze and visualize their solutions.
Results were shown for parabolic, elliptic and hyperbolic examples involving the heat and Poisson equations, as well as application case studies in electrical impedance tomography (EIT) and photo-acoustic tomography (PAT). Future work includes expanding support for derivatives across distributions, forward models, and geometries, as well as integrating PyTorch automatic differentiation into CUQIpy through the CUQIpy-PyTorch plugin. This will simplify the use of gradient-based samplers such as NUTS, as in the Poisson example in this paper, to help address the computational challenge of MCMC-based sampling of high-dimensional and complicated posterior distributions arising in large-scale inverse problems. The extensible plugin structure can also be used to integrate more PDE-based modeling libraries. Overall, we believe CUQIpy and its plugins provide a promising platform for solving PDE-based Bayesian inverse problems and have significant potential for further development and expansion in the future.

## Data statement

CUQIpy and plugins are available from [https://cuqi-dtu.github.io/CUQIpy](https://cuqi-dtu.github.io/CUQIpy). The code and data to reproduce the results and figures of the present paper are available from [https://github.com/CUQI-DTU/Paper-CUQIpy-2-PDE](https://github.com/CUQI-DTU/Paper-CUQIpy-2-PDE).

## Acknowledgments

This work was supported by The Villum Foundation (grant no. 25893). J.S.J. would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme "Rich and Nonlinear Tomography - a multidisciplinary approach", when work on this paper was undertaken. This work was supported by EPSRC Grant Number EP/R014604/1. This work was partially supported by a grant from the Simons Foundation (J.S.J.). F.U. has been supported by the Academy of Finland (project number 353095). The authors are grateful to Kim Knudsen for valuable input in regards to PDE-based inversion in medical imaging, to Aksel Rasmussen for a helpful discussion about the EIT problem formulation, and to members of the CUQI project for valuable input that helped shape the design of CUQIpy.

Figure 12: Estimation of the first 25 components of \(\mathbf{x}\), i.e. KL expansion coefficients in (14), and the uncertainty in this estimation. (a) full data and (b) partial data.
2310.12981
Improved Pairwise Measurement-Based Surface Code
We devise a new realization of the surface code on a rectangular lattice of qubits utilizing single-qubit and nearest-neighbor two-qubit Pauli measurements and three auxiliary qubits per plaquette. This realization gains substantial advantages over prior pairwise measurement-based realizations of the surface code. It has a short operation period of 4 steps and our performance analysis for a standard circuit noise model yields a high fault-tolerance threshold of approximately $0.66\% $. The syndrome extraction circuits avoid bidirectional hook errors, so we can achieve full code distance by choosing appropriate boundary conditions. We also construct variants of the syndrome extraction circuits that entirely prevent hook errors, at the cost of larger circuit depth. This achieves full distance regardless of boundary conditions, with only a modest decrease in the threshold. Furthermore, we propose an efficient strategy for dealing with dead components (qubits and measurements) in our surface code realization, which can be adopted more generally for other surface code realizations. This new surface code realization is highly optimized for Majorana-based hardware, accounting for constraints imposed by layouts and the implementation of measurements, making it competitive with the recently proposed Floquet codes.
Linnea Grans-Samuelsson, Ryan V. Mishmash, David Aasen, Christina Knapp, Bela Bauer, Brad Lackey, Marcus P. da Silva, Parsa Bonderson
2023-10-19T17:59:55Z
http://arxiv.org/abs/2310.12981v2
# Improved Pairwise Measurement-Based Surface Code ###### Abstract We devise a new realization of the surface code on a rectangular lattice of qubits utilizing single-qubit and nearest-neighbor two-qubit Pauli measurements and three auxiliary qubits per plaquette. This realization gains substantial advantages over prior pairwise measurement-based realizations of the surface code. It has a short operation period of 4 steps and our performance analysis for a standard circuit noise model yields a high fault-tolerance threshold of approximately 0.66%. The syndrome extraction circuits avoid bidirectional hook errors, so we can achieve full code distance by choosing appropriate boundary conditions. We also construct variants of the syndrome extraction circuits that entirely prevent hook errors, at the cost of larger circuit depth. This achieves full distance regardless of boundary conditions, with only a modest decrease in the threshold. Furthermore, we propose an efficient strategy for dealing with dead components (qubits and measurements) in our surface code realization, which can be adopted more generally for other surface code realizations. This new surface code realization is highly optimized for Majorana-based hardware, accounting for constraints imposed by layouts and the implementation of measurements, making it competitive with the recently proposed Floquet codes. ## I Introduction The inherent fragility of quantum states has presented a formidable challenge in the pursuit of a scalable quantum computer. Quantum error correction will undoubtedly be essential in any practical realization. Due to its high threshold and local connectivity, the surface code [1; 2; 3] is a leading candidate for a scalable quantum error correcting code. Realizing a surface code subject to hardware constraints is a challenge, and different realizations will have varying performances. In particular, designing a syndrome extraction circuit composed of native operations, while maintaining code performance, is essential. In the broader context of the implementation of useful quantum algorithms, the resources required can be greatly impacted by the choice of code and how well matched it is to the hardware constraints [4; 5]. Most of the proposed implementations of the surface code in hardware have followed the CNOT gate-based realization of stabilizer measurement circuits of Ref. [3], or variants thereof. More recent proposals motivated by measurement-based Majorana quantum computing hardware [6] have considered pairwise measurement-based realizations of the surface code [7; 8; 9], which all utilized two auxiliary qubits for each bulk plaquette stabilizer measurement circuit. These pairwise measurement-based proposals each exhibited various significant drawbacks, such as complicated layouts and difficult measurements in Majorana hardware1, relatively long circuits [8], or bidirectional hook errors [9]. The Floquet codes developed in Ref. [5, 10, 11, 12] provided a major advancement for the realization of pairwise measurement-based codes, largely eliminating such drawbacks and achieving better performance with a high fault-tolerance threshold. In this paper, we devise and analyze a new implementation of the surface code using single- and two-qubit Pauli measurements on a rectangular array of qubits. This implementation utilizes three auxiliary qubits per stabilizer in the bulk. 
Moreover, the stabilizer measurement circuits only utilize \(X\), \(Z\), \(XX\), and \(ZZ\) Pauli measurements, with the pairwise measurements being directionally correlated between nearest-neighbor qubits, e.g. \(XX\) measurements are always between horizontal neighbors and \(ZZ\) are always between vertical neighbors. It is worth observing that our stabilizer measurement circuits and those of Refs. [7, 8, 9] are all closely related through circuit equivalences, for example using \(ZX\)-calculus relations (see e.g. Ref. [13] for a review of \(ZX\)-calculus). Despite these close relations, there are important physical differences that translate into implementation and performance advantages for our surface code realization. For example, our stabilizer measurement circuits can be pipelined in a manner that yields a minimal operation period of 4 steps for running the code, in which case no qubit is idle in any step. The stabilizer measurement circuits in our surface code implementation have hook errors, similar to the original CNOT-based realizations discussed in Ref. [3]. Hook errors stem from single physical faults that, for a given circuit, are equivalent to higher-weight (e.g. two-qubit) errors on the data qubits (and not equivalent to single data qubit errors). These can reduce the _fault distance_[14], i.e., the minimum number of gate faults that causes an undetectable logical error; in particular, in the toric and surface codes, where logical operators are associated with a direction, hook errors reduce the fault distance when aligned with a logical operator of the same type. Unidirectionality or bidirectionality of hook errors indicate whether each type occurs in one or two directions with respect to the surface. In our case, the hook errors are unidirectional, with \(X\) and \(Z\) type hook errors in perpendicular directions. As such, our implementation can be utilized for the "rotated" surface code [15] without halving the fault distance by choosing boundary conditions and operation schedules accordingly, similar to the strategy used in Ref. [16]. This is in contrast to realizations with hook errors that cannot be favorably oriented, such as the bidirectional hook errors of Ref. [9]. Additionally, for situations where it is desirable, we can modify the circuit to prevent hook errors from occurring, at the expense of increasing the operation period to 7 steps. Given these properties, our code provides an interesting test bed for analyzing the effects of hook errors on code performance. We can modify the bulk 4-gon (four data qubit) stabilizer measurement circuits to provide circuits for measuring 1-gon, 2-gon, and 3-gon operators that interlock with the bulk 4-gon circuits. These \(n\)-gon measurement circuits utilize measurements from the same operation set, so constitute a minimal modification of the bulk. We can use the \(n\)-gons to implement code patches with any desired boundary conditions and alter the shape of code patches during operation, without disrupting the operation cycle. Moreover, we can utilize the \(n\)-gon measurement circuits to implement a protocol for dealing with dead components (i.e. dead qubits and pairwise measurements). In particular, we present an improved and maximally efficient variant of the protocol of Ref. [17] for dealing with dead components by measuring stabilizers for reduced \(n\)-gons that exclude dead components. Our protocol can be incorporated in a natural manner that requires minimal modification to the code operation, i.e. 
no addition to the set of measurements and no change to the bulk operation cycle. Furthermore, our circuit pipeline effectively alternates between \(Z\)-type and \(X\)-type \(n\)-gon measurements, allowing us to measure the reduced \(n\)-gons ("damaged plaquettes") at the same rate as the original (undamaged) plaquettes. Since a motivation of this work was to develop code implementations for measurement-based Majorana quantum computing hardware, in particular in arrays of "tetron" qubits [6], our surface code realization is highly optimized for such hardware. In this regard, the rectangular lattice used for our code represents the simplest possible layout for such Majorana hardware. Moreover, for this hardware and layout, the measurements utilized in this code are extremely simple and likely to yield the lowest error rates of any set of physical Majorana parity measurements capable of generating a quantum error correcting code. Another consideration for this hardware is whether to use single or double columns of semiconductor "rails," which run between Majorana qubits to enable the measurements. Double-rail semiconductor layouts avoid physical conflicts between simultaneous measurements of adjacent qubits, generally allowing circuits to be implemented more efficiently in time. On the other hand, single-rail semiconductor layouts significantly simplify the requirements on fabrication and control for Majorana-based hardware, as compared to double-rail layouts. Using single-rail semiconductor layouts instead of double-rail generally increases the operation period by up to a factor of two; for example, using single-rail layouts instead of double-rail will double the operation period for the 4.8.8 Floquet code implementation in Majorana hardware described in Ref. [5]. We find that the use of single-rail layouts for our surface code realization can be implemented with a mild one-step increase in the period. \begin{table} \begin{tabular}{c c c c c} protocol & qubit count & depth & threshold & Majorana hardware \\ \hline 3aux & \(O(4d_{\text{f}}^{2})\) & 4 & 0.66\% & simple \\ 3aux, single-rail & \(O(4d_{\text{f}}^{2})\) & 5 & 0.51\% & simple \\ \hline double ancilla & \(O(3d_{\text{f}}^{2})\) & 10 & 0.24\% & complicated \\ windmill & \(O(2d_{\text{f}}^{2})\) & 20 & 0.15\% & complicated \\ pentagonal & \(O(12d_{\text{f}}^{2})\) & 6 & 0.4\% \({}^{*}\) & complicated \\ \hline 4.8.8 & \(O(4d_{\text{f}}^{2})\) & 3 & 1.3\% & simple \\ honeycomb & \(O(6d_{\text{f}}^{2})\) & 3 & 1.3\% & simple \\ 4.8.8, single-rail & \(O(4d_{\text{f}}^{2})\) & 6 & 0.52\% & simple \\ \hline \end{tabular} \end{table} Table 1: Comparison between key properties of pairwise measurement-based codes, including our realization of the surface code (denoted "3aux"), the double ancilla and windmill realizations of Ref. [8], the pentagonal tiling realization of Ref. [9], and the Floquet code on the 4.8.8 and honeycomb lattices [10, 11, 5, 12]. The term "single-rail" indicates variants of the corresponding codes that are needed to make them compatible with Majorana hardware using single-rail semiconductor layouts. Total qubit count for a logical patch is given to leading order as function of fault distance \(d_{\text{f}}\). Circuit depth is given per round of syndrome extraction. Fault tolerance thresholds are computed with respect to the noise model of Ref. [8], except for that of the pentagonal tiling realization from Ref. [9] (indicated by \({}^{*}\)), which uses a slightly different noise model.
We indicate whether the code can be implemented in Majorana hardware using simple layouts and measurements, or if it requires complicated layouts and measurements that are likely prohibitive. Finally, we analyze the performance of our code for various scenarios. Our results indicate significant improvement compared to the previous pairwise measurement-based surface code realizations. In particular, simulating the logical failure rate for the standard circuit noise model, we find the fault-tolerance threshold to be approximately \(0.66\%\), and achieve full distance sub-threshold scaling (when hook errors are appropriately addressed). Examining the effects of hook errors when using different boundary conditions and hook-preventing circuit modifications, we generally see the code performing better and achieving full distance in the deep sub-threshold regime for the scenarios where hook errors are benign or absent. Interestingly, the threshold and near threshold behavior are only modestly reduced by the hook-preventing modifications, in contrast to hook-flagging modifications of CNOT-based circuits [18]. Moreover, we compare our code to the state-of-the-art pairwise measurement-based 4.8.8 Floquet code [5; 10; 11], and find it to be reasonably competitive, especially at lower physical error rates. For variants of these two codes that are compatible with implementation in Majorana hardware with single-rail layouts, their performance becomes even more competitive and their thresholds nearly identical. We provide a summary comparison of key properties of the various pairwise measurement-based codes in Table 1. The structure of this paper is as follows. In Sec. II, we introduce our code, presenting the pairwise measurement-based stabilizer measurement circuits and pipelining to realize the surface code on a rectangular lattice. In Sec. III, we describe the occurrence of hook errors in our circuits and present modifications of our measurement circuits that prevent their occurrence. In Sec. IV, we detail the measurement circuits for all \(n\)-gons, with \(n=1,2,3\), and utilize these for surface code patches with boundaries. In Sec. V, we describe our proposed strategy for dealing with dead components and its application to our surface code realization. In Sec. VI, we discuss the implementation in Majorana hardware and provide a modification of the pipelining to make the code compatible with single-rail semiconductor layouts. In Sec. VII, we describe our numerical simulations of code performance and resource estimation and present the results.

## II The code circuits

We use a rectangular lattice of qubits to implement a pairwise measurement-based realization of the surface code, as shown in Fig. 1. The plaquettes correspond to 4-gon stabilizer measurements, which are arranged in a checkerboard pattern of \(Z\)-type (\(ZZZZ\) stabilizers) and \(X\)-type (\(XXXX\) stabilizers). Each plaquette exclusively utilizes three auxiliary qubits to perform the stabilizer measurement circuit.

Figure 1: A rectangular lattice of qubits used for a pairwise measurement-based realization of the surface code. Data qubits are shown as open dots and auxiliary qubits are shown as solid dots. The blue and red squares will correspond to \(Z\)-type and \(X\)-type plaquettes (4-gons), respectively. Each plaquette exclusively utilizes three auxiliary qubits (labeled \(A\)-\(C\)) to execute its stabilizer measurement circuit.

In Fig.
2, we present a circuit diagram for \(M_{ZZZZ}\), the measurement of \(ZZZZ\) on four data qubits, using three auxiliary qubits. The \(ZZZZ\) stabilizer measurement outcome is given by the product of the measurement outcomes of the six \(Z\)-type measurements, i.e. \(M_{Z_{B}}\) (initial), \(M_{Z_{A}Z_{1}}\), \(M_{Z_{C}Z_{2}}\), \(M_{Z_{A}Z_{3}}\), \(M_{Z_{C}Z_{4}}\), and \(M_{Z_{B}}\) (final). By replacing \(X\leftrightarrow Z\), we obtain the circuit for measuring \(XXXX\) on four data qubits shown in Fig. 3. We note that there are numerous equivalences that can be applied to produce equivalent stabilizer circuits. For example, one could apply basis changes, permute data or auxiliary qubit labels, or modify the operation schedule, e.g. by adding/removing time steps and sliding measurements into different time steps (without sliding past other measurements on the same qubit lines). We will use this flexibility to incorporate certain appealing features in the implementation. One desirable feature is to minimize the operation time. In order to compress the circuit into the shortest possible operation period, we can use the measurement schedules shown for \(M_{ZZZZ}\) and \(M_{XXXX}\) in Figs. 2 and 3. While these circuits are shown with six steps, if we are repeating a stabilizer measurement circuit, the single-qubit measurements on auxiliary qubits can serve as both the final measurement of one round of running the stabilizer measurement circuit and the initial measurement of the subsequent round. Thus, we can avoid repeating the single-qubit measurements on auxiliary qubits \(A\) and \(C\) between every round, removing steps 0 and 5 from all but the very first and last rounds of applying the circuit, respectively. In this way, the number of steps necessary for applying \(r\) rounds of the stabilizer measurement circuit is \(4r+2\), i.e. the circuit can be implemented with a 4 step period. Here, the initial and final rounds correspond to the initial measurement of qubit \(A\) (step 0) and final measurement of qubit \(C\) (step 5), while steps 1-4 are repeated \(r\) times in between. We will find it useful to repeat the single-qubit measurement of auxiliary \(B\), as readout Figure 2: The \(M_{ZZZZ}\) circuit for measuring \(ZZZZ\) on four data qubits using three auxiliary qubits. The data qubits are labeled 1-4 in the order that they are addressed in this circuit. The auxiliary qubits are labeled \(A\)-\(C\). The measurement schedule shown here is useful for repeatedly applying the measurement circuit with a four step period, which can be done by repeating steps 1-4. errors of that measurement would otherwise affect the 4-gon stabilizer readout of the two successive stabilizer measurements. This single-qubit measurement can be repeated without slowing down the circuit by incorporating it in steps 1 and 4. Next, we consider applying these \(M_{ZZZZ}\) and \(M_{XXXX}\) circuits on the rectangular lattice with a checkerboard pattern in order to implement the surface code. (For now, we focus on the bulk plaquettes and will return to the matter of boundaries and smaller \(n\)-gons in Sec. IV.) When designing this implementation, we must ensure that operations in neighboring plaquettes do not conflict with each other. Specifically, the measurement sequences should avoid multiply-addressing any given qubit at any step. (One should also avoid conflicting uses of measurement components in hardware layouts where this may occur; we will consider this for Majorana hardware in Sec. VI.) 
Furthermore, the operation schedule must coordinate between adjacent plaquettes in a manner that correctly builds up the instantaneous stabilizer group to yield the desired \(M_{ZZZZ}\) and \(M_{XXXX}\) measurements needed for a surface code. We can achieve the necessary properties by carefully pipelining the circuits with the data qubits addressed in the manner shown in Fig. 4, with the steps pipelined as \[\cdots,(1Z,3X),(2Z,4X),(3Z,1X),(4Z,2X),\cdots. \tag{1}\] In this way, applying \(r\) rounds of the surface code stabilizer measurements takes \(4r+4\) steps, i.e. has period 4, where we ramp up with the steps \[(0Z,-),(1Z,-),(2Z,0X),(3Z,1X),\cdots \tag{2}\] and ramp down with the steps \[\cdots,(4Z,2X),(5Z,3X),(-,4X),(-,5X). \tag{3}\] Alternatively, we could interchange \(X\) and \(Z\) in the ramp up and down. With the pipelining in Eq. (1), the value of the \(ZZZZ\) stabilizer between time steps \((2Z,4X)\) and \((3Z,1X)\) is measured by Figure 3: The \(M_{XXXX}\) circuit for measuring \(XXXX\) on four data qubits using three auxiliary qubits may be obtained from \(M_{ZZZZ}\) by interchanging \(X\leftrightarrow Z\) for all qubits in the circuit. the \(M_{ZZZZ}\) circuit, and similarly the value of the \(XXXX\) stabilizer between time steps \((4Z,2X)\) and \((1Z,3X)\) is measured by the \(M_{XXXX}\) circuit. The reason for specifying the time steps for the data qubit stabilizers is because, when considering the full code, the data qubit operators at different times will generally not be equivalent due to other plaquettes' stabilizer circuits addressing those data qubits. We note that it is also possible to correctly implement the \(M_{ZZZZ}\) and \(M_{XXXX}\) circuits by interleaving them on a synchronous schedule by appropriately choosing the order in which the circuit addresses the data qubits, the details of which are given in Appendix A. This turns out to have significant disadvantages compared to the pipelined measurement schedule presented above, so we do not focus on it in this paper. ## III Hook errors and their prevention The stabilizer measurement circuits of Figs. 2 and 3 exhibit hook errors, which are errors that stem from a single fault, but that are equivalent to two-qubit errors on the data qubits. In this paper, we distinguish between faults and errors as in Ref. [5]: a fault is a failure of a circuit component (in our case, a measurement or an idling qubit) which results in a set of errors, and an error is represented by a single-qubit Pauli operator applied after the circuit component or a flipped measurement outcome, i.e. a readout error. For example, a fault on a two-qubit measurement can result in a readout error and a Pauli error on either or both qubits involved. Hook errors are concerning because they can reduce the fault distance and harm performance. We use "code distance" \(d\) to mean the minimal number of single data qubit errors that combine to produce a logical error, and "fault distance" \(d_{\rm f}\) to mean the minimal number of faults such that the resulting errors combine to produce a logical error. The fault distance [14] is the effective distance achieved by a particular realization of the code, which depends on the specific details of the circuits. In some situations, the directionality of the hook errors allows them to be oriented perpendicular to the Figure 4: The \(M_{ZZZZ}\) and \(M_{XXXX}\) measurement circuit steps for \(Z\)-type and \(X\)-type plaquettes. 
Labels indicate the operators measured on the respective qubits in the given step, with connecting lines indicating pairwise measurement. These circuits can be applied in parallel with a relative shift in their operation schedules. A 4 step period for repeated application of these surface code stabilizer measurement circuits is obtained using the pipelining: \(\cdots,(1Z,3X),(2Z,4X),(3Z,1X),(4Z,2X),\cdots\). Steps 1-4 of a given circuit are applied repeatedly, whereas steps 0 and 5 are only used in the ramp up and down of the repetition of circuits, as indicated in Eqs. (1)-(3). corresponding logical operators through a judicious choice of boundary conditions, thereby making their effect on the encoded logical state benign [16]. We now discuss how hook errors occur in the stabilizer measurement circuits presented in Sec. II. We find that they are unidirectional and can be made benign, e.g. for a rotated surface code patch, using an appropriate choice of boundary conditions. Moreover, we discuss how to modify these circuits to prevent hook errors altogether. For the stabilizer measurement circuits of Figs. 2 and 3, readout errors and two-qubit errors stemming from faults on the pairwise measurements of auxiliary qubits in our circuits are equivalent to two-qubit errors on data qubits. In more detail for the \(M_{ZZZZ}\) circuit, a Pauli error \(Z_{B}\) between steps 2 and 3 is equivalent to a \(Z_{1}Z_{3}\) or \(Z_{2}Z_{4}\) error on the data qubits, as shown in Fig. 5.2 A readout error stemming from a fault on the \(M_{X_{A}X_{B}}\) or \(M_{X_{B}X_{C}}\) measurement is also equivalent to a \(Z_{1}Z_{3}\) or \(Z_{2}Z_{4}\) error on the data qubits. Similarly, for the \(M_{XXXX}\) circuit, a Pauli error \(X_{B}\) between steps 2 and 3 or a readout error at the \(M_{Z_{A}Z_{B}}\) or \(M_{Z_{B}Z_{C}}\) measurement is equivalent to a \(X_{1}X_{3}\) or \(X_{2}X_{4}\) error on the data qubits. If we do not repeat the single-qubit measurements on auxiliary qubits \(A\) and \(C\), then readout errors stemming from faults on these non-repeated measurements would be equivalent to these same two-qubit errors on data qubits listed above for the respective circuit types. Footnote 2: When the stabilizer circuits are pipelined as in Eq. (1), this hook error is equivalent to a data \(ZZ\) error occurring at that same time interval (between steps 2 and 3). This \(ZZ\) error on data qubits is not necessarily equivalent to a \(ZZ\) error at a different time step, since the neighboring plaquettes’ \(M_{XXXX}\) circuits also address these data qubits. We observe that the hook errors in our circuits for a given plaquette type are unidirectional, with the direction in our code realization correlated with the error type: the \(Z\)-plaquettes' hook errors correspond to \(ZZ\) data qubit errors in the vertical direction and the \(X\)-plaquettes' hook errors correspond to \(XX\) data qubit errors in the horizontal direction. This is a useful property that, for example, allows us to choose boundary conditions for the "rotated" surface code on a planar patch such that the logical operators are aligned perpendicular to the corresponding type of hook errors; that is, the logical qubit's \(Z\)-logical string operators are horizontal and the \(X\)-logical string operators are vertical. This choice prevents the hook errors from reducing the fault distance of the code (at least during logical idle). We will return to this matter in Sec. 
IV, after describing the circuits for boundary stabilizer (3-gon, 2-gon, and 1-gon) measurements. Another way we can address the hook errors is to modify our stabilizer measurement circuits Figure 5: An example of a hook error in the \(M_{ZZZZ}\) measurement circuit is given by a \(Z\) error on auxiliary qubit \(B\) that occurs between steps 2 and 3, as shown in the circuit on the left. This error is equivalent to a \(ZZ\) error on data qubits 1 and 3, as shown in the circuit on the right. (Errors are marked in yellow.) so that they detect and distinguish the occurrence of these problematic faults, and prevent the resulting errors from being equivalent to two data qubit errors. This strategy could be useful in scenarios where the logical qubit operators are not (or cannot be) aligned strictly perpendicular to the direction of their corresponding hook errors, such as when performing certain logical gate operations (see, e.g., Ref. [14]). One way of modifying our circuits to prevent the hook errors is to repeat the measurements with which they are associated. For the pairwise auxiliary qubit measurements, immediately repeating each measurement does not fix the problem, but repeating the pair of measurements (alternating between the \(AB\) and \(BC\) measurements) does. With these modifications, the resulting hook-preventing circuit for \(M_{ZZZZ}\) is shown in Fig. 6. The repeated measurements add new "detectors" to the circuit, which allow us to distinguish the errors stemming from the \(AB\) and \(BC\) measurements. Figure 6: The modified circuit for \(M_{ZZZZ}\) in which there are no hook errors. Steps 3, 4, and 7 shown here are the additional steps that have been inserted into the original circuit from Fig. 2. (The \(M_{Z_{B}}\) auxiliary qubit measurement only needs to occur twice per cycle, but we have shown it occurring three times.) Here, we only show the steps for one round of repeated application of the measurement circuit. In order to ramp up the circuit, one needs to begin with a step 0 that applies a \(M_{X_{A}}\) measurement before step 1; this could be achieved (with additional redundant measurements) by applying step 7. Figure 7: The hook-preventing \(M_{ZZZZ}\) and \(M_{XXXX}\) measurement circuit steps for \(Z\)-type and \(X\)-type plaquettes. These can be pipelined as in Eq. (4) to achieve a 7 step period. from the problematic faults from the previously equivalent two data qubit errors. (A detector is a set of measurements for which the product (or parity) of their outcomes is fixed in the absence of errors [19]; see Sec. VII for further discussion.) The \(M_{XXXX}\) circuit can be obtained from this circuit by replacing \(X\leftrightarrow Z\). Using these hook-preventing stabilizer measurement circuits, the surface code can be implemented with a 7-step period. Using the same order of addressing data qubits as used for the unmodified circuits, as shown in Fig. 
7, we find several options for pipelining the circuits to achieve this periodicity, namely: \[\begin{split} 1:&\cdots,(1Z,6X),(2Z,7X),(3Z,1X),(4Z,2X),(5Z,3X),(6Z,4X),(7Z,5X),\cdots\\ 2:&\cdots,(1Z,5X),(2Z,6X),(3Z,7X),(4Z,1X),(5Z,2X),(6Z,3X),(7Z,4X),\cdots\\ 3:&\cdots,(1Z,4X),(2Z,5X),(3Z,6X),(4Z,7X),(5Z,1X),(6Z,2X),(7Z,3X),\cdots\\ 4:&\cdots,(1Z,3X),(2Z,4X),(3Z,5X),(4Z,6X),(5Z,7X),(6Z,1X),(7Z,2X),\cdots\end{split} \tag{4}\] It is worth mentioning that this hook-preventing method of repeating the problematic measurements can be applied to other measurement-based code implementations, such as the pentagonal tiling realization of the surface code devised in Ref. [9]. We present details for this example in Appendix B.

## IV Smaller \(n\)-gons and Boundaries

In order to operate the code on surfaces with a boundary, we need circuits for measuring the multi-qubit Pauli operators of 1, 2, or 3 data qubits, which we call 1-gons, 2-gons, or 3-gons, respectively. A convenient way to produce stabilizer measurement circuits for these boundary stabilizers is to start with the corresponding 4-gon measurement circuits and remove the appropriate data qubits. In doing so, we may reduce or remove certain measurements from the circuit, where reduction changes a two-qubit measurement involving a removed data qubit into a single-qubit measurement of the same type on the corresponding auxiliary qubit, e.g. \(M_{Z_{A}Z_{1}}\to M_{Z_{A}}\). We may also remove auxiliary qubits from the \(n\)-gon when removing data qubits in this way, depending on which type of \(n\)-gon we are producing. In particular, 2-gons with data qubits addressed by a common auxiliary qubit only require that auxiliary qubit (the other two can be removed), and 1-gons require no auxiliary qubits. The resulting \(n\)-gon measurement circuits are shown in Fig. 8. (The ramp up and down steps 0 and 5 are not shown; we can simply use those of the 4-gon circuits when the step is required.)

Figure 8: \(Z\)-type (left) and \(X\)-type (right) measurement circuits for 1-gons, 2-gons, and 3-gons.

A convenient property of producing \(n\)-gon measurement circuits in this manner is that they interlock with the bulk 4-gon circuits. In other words, for a system that uses these \(n\)-gon measurement circuits, we can operate all \(Z\)-type \(n\)-gons on the same schedule and all \(X\)-type \(n\)-gons on the same schedule, without further modification, e.g. we can use the pipelining in Eq. (1) of \(Z\)- and \(X\)-plaquette circuits for all the \(n\)-gons. With these \(n\)-gon measurement circuits, it is straightforward to define the code on surfaces with boundaries. However, an issue that arises is that certain choices of boundary conditions may be more advantageous than others. For example, when putting the surface code on a square patch of data qubits with the "rotated surface code" boundary conditions [15], there are two options for how to assign boundary types to the edges of the square patch. The good choice for our code realization, shown in Fig. 9(a), corresponds to using \(Z\)-type 2-gons along vertical edges and \(X\)-type 2-gons along horizontal edges. This choice aligns the \(Z\) and \(X\) logical string operators perpendicular to the direction of the \(Z\)- and \(X\)-plaquettes' hook errors, respectively. The bad choice for our surface code realization, shown in Fig. 9(b), corresponds to using \(Z\)-type 2-gons along horizontal edges and \(X\)-type 2-gons along vertical edges.
This choice utilizes more auxiliary qubits at the boundary and, more importantly, aligns the logical string operators with the corresponding hook errors, which halves the fault distance of the code. In contrast, the original (unrotated) surface code boundary conditions [2] are implemented using 3-gons along the boundary edges, with \(Z\)-type and \(X\)-type 3-gons corresponding to "rough" and "smooth" boundaries. In this case, the logical string operators are aligned diagonally across the plaquettes, so the hook errors do not affect the code distance. However, the relation between distance and number of qubits for the original surface code is worse than that of the (good) rotated surface code due to this diagonal alignment of logical string operators with respect to the plaquettes.

Figure 9: Different boundary conditions for a square patch of surface code, all shown for fault distance \(d_{\rm f}=3\). (a) The rotated surface code with a good choice of boundary conditions aligns the logical strings perpendicular to the corresponding hook errors of the same type, so \(d_{\rm f}=d\). The relation between number of physical qubits and fault distance in this case is \(N=4d_{\rm f}^{2}-4d_{\rm f}+1\). (b) The rotated surface code with a bad choice of boundary conditions aligns the logical strings parallel to the corresponding hook errors of the same type, which halves the fault distance, so \(d_{\rm f}=\lceil d/2\rceil\). The resulting relation between number of physical qubits and the fault distance in this case is \(N=16d_{\rm f}^{2}+4d_{\rm f}-5\). (c) The original (unrotated) surface code has the logical strings aligned diagonal to the direction of the hook errors, so \(d_{\rm f}=d\). The relation between number of physical qubits and fault distance in this case is \(N=8d_{\rm f}^{2}-8d_{\rm f}+1\).

## V Fault tolerance in the presence of dead components

An important problem to address when implementing error-correcting codes in physical hardware is maintaining fault tolerance in the presence of physical components that are nonfunctional or exhibit substantially higher error rates than most of the components. We can map the effects of faulty physical components to an effective computational model, where they are specified in terms of qubits, computational gates, and measurements. For our purposes, we will refer to qubits, computational gates, and measurements that are nonfunctional or exhibit atypically high error rates as "dead components." The dead components can potentially be identified during the bring-up and calibration phase of operating the hardware (if dead at the time) or also during the operation of the error-correcting code when error syndromes indicate a qubit or operation is exhibiting a high error rate. Ref. [20] introduced a strategy for dealing with dead data qubits by removing them from the code operation and forming "superplaquette" operators, which are products of the original plaquette operators (of the same type) that exclude the dead qubits.3 Building on this idea, Ref. [17] proposed to generate the superplaquette measurements by measuring the "damaged stabilizers," i.e. the original plaquette operators reduced by removal of the dead data qubits. In general, the set of damaged stabilizers will not commute with all other damaged stabilizers, so they cannot all be simultaneously measured. In Ref.
[17], it was assumed that all undamaged stabilizers were measured (effectively) simultaneously in each round of stabilizer measurements; they proposed to deal with this situation by having successive rounds of stabilizer measurements alternate between measuring damaged stabilizers of \(Z\)-type and \(X\)-type (while both types of undamaged stabilizers are measured every round). In this way, damaged stabilizers would be measured half as often as the undamaged stabilizers. Using this alternation between measuring \(Z\)-type and \(X\)-type damaged stabilizers, the instantaneous stabilizer group after a given round includes the damaged stabilizers of the type from that round, together with the superplaquette stabilizers of the other type formed from the previous round's damaged stabilizers (but not the previous round's individual damaged stabilizers). Moreover, Ref. [17] considered the implementation of plaquette stabilizer measurements using an auxiliary qubit and CNOT gates for each stabilizer and described a strategy for further dealing with dead auxiliary qubits and CNOT gates (which they called syndrome qubit and link fabrication errors, respectively). For this, they identify all the data qubits directly interacting with a dead CNOT gate or auxiliary qubit, i.e. the data qubit acted upon by a given CNOT gate or all data qubits in the plaquette associated with the given auxiliary qubit, respectively. Then, all such data qubits (even though not dead) are removed from the code, together with all the dead components.

Footnote 3: We refer to all the stabilizers of the surface code as plaquette operators, distinguishing them as \(Z\)-type and \(X\)-type, rather than “plaquette” and “star” operators, respectively.

Following the same idea for removing dead data qubits, we propose a different strategy for dealing with the dead auxiliary qubits and connections that avoids unnecessarily removing data qubits that are not dead. Here, "connection" refers to a multi-qubit operation, which may include computational gates (e.g. CNOTs) or measurements (e.g. pairwise Pauli measurements) acting on multiple physical qubits. Our strategy for determining the modification of the code operation to remove dead components can be divided into three steps:

1. For any \(n\)-gon involving \(m\) dead data qubits, reduce it to an \((n-m)\)-gon by removing the dead data qubits.
2. For any \(n\)-gon (possibly the result of a reduction in step 1) involving dead auxiliary qubits, split it up into an \(n_{1}\)-gon, ..., and an \(n_{k}\)-gon, where \(n_{1}+\cdots+n_{k}=n\), such that \(k\) is minimized and none of the resulting measurement circuits utilize the dead auxiliary qubits.
3. For any \(n\)-gon (possibly the result of a reduction and/or splitting in steps 1 and 2) involving dead connections, split it up into an \(n_{1}\)-gon, ..., and an \(n_{k}\)-gon, where \(n_{1}+\cdots+n_{k}=n\), such that \(k\) is minimized and none of the resulting measurement circuits utilize the dead connections.

The result of applying each step is unique, i.e. no choices need to be made (at least for simple enough realizations, including all those of interest examined in this paper). Within each step, the reduction or splitting of an \(n\)-gon can be determined iteratively by assessing one component at a time, until all dead components of that step's type have been removed (see the sketch below). Splitting \(n\)-gons (steps 2 and 3) always preserves the number of data qubits.
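To make the reduction and splitting procedure concrete, the sketch below encodes a single plaquette's measurement circuit as a graph of data qubits, auxiliary qubits, and pairwise connections. It assumes that, for our circuits, the minimal splitting produced by steps 1-3 coincides with grouping the live data qubits by the connected components of the live measurement graph (one \(n_{i}\)-gon per component); the data structures and this connected-component shortcut are illustrative simplifications, not the implementation used in the paper.

```
def reduced_ngons(data, aux, connections, dead_qubits, dead_connections):
    """Return the reduced/split n-gons as sets of live data qubits."""
    live = (set(data) | set(aux)) - set(dead_qubits)
    parent = {q: q for q in live}

    def find(q):                 # union-find with path compression
        while parent[q] != q:
            parent[q] = parent[parent[q]]
            q = parent[q]
        return q

    for edge in connections:     # merge components joined by live edges
        a, b = edge
        if a in live and b in live and edge not in dead_connections:
            parent[find(a)] = find(b)

    groups = {}
    for q in set(data) & live:   # step 1: dead data qubits are dropped
        groups.setdefault(find(q), set()).add(q)
    return list(groups.values())

# The Z-plaquette of Fig. 2: data qubits 1-4 and auxiliaries A, B, C, with
# connections A-1, A-3, C-2, C-4 (data measurements) and A-B, B-C.
conn = [("A", 1), ("A", 3), ("C", 2), ("C", 4), ("A", "B"), ("B", "C")]

# Dead auxiliary qubit B: the 4-gon splits into the 2-gons {1,3} and {2,4}.
print(reduced_ngons([1, 2, 3, 4], "ABC", conn, {"B"}, set()))

# Dead connection A-1: qubit 1 becomes a 1-gon; {2,3,4} form a 3-gon.
print(reduced_ngons([1, 2, 3, 4], "ABC", conn, set(), {("A", 1)}))
```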
On the other hand, the reductions and splittings (all steps) may result in auxiliary qubits or connections that are not dead, but are no longer utilized in the stabilizer measurement circuits; such auxiliary qubits and connections are collaterally removed from the code operation, as they no longer participate. For example, if an auxiliary qubit is dead, then all connections to it are removed; similarly, if all connections to a qubit are dead, that qubit is removed. We will see more examples of this when we consider specific realizations. In examples (not considered explicitly in this paper) where certain auxiliary qubits or connections are shared by multiple plaquettes' stabilizer measurement circuits, they may, in principle, be dead or removed with respect to one plaquette, but not another. For these situations, the functionality of the components may be assessed for each plaquette, which can then be independently modified accordingly. Though it would not change the functioning of the code, if there are regions that become completely disconnected from the main region of the code, e.g. a single data qubit with no connections to other qubits, these can safely be removed as well. This protocol provides the most efficient salvaging of functioning components (without adding components or elements to the set of operations) when removing the dead components - no functioning data qubits are removed from the code operation (unless they are completely disconnected from the rest of the code) and functioning auxiliary qubits and connections are only removed if they are not needed to implement the (minimally split) \(n\)-gons formed from the reducing and splitting process. Moreover, it minimizes the damage to the code in terms of the gaps created in the code, i.e. the number and size of superplaquette operators which effectively reduce the code and fault distances. Since the above strategy is more efficient with respect to removing components and minimizes the damage to the code, we expect it to improve performance with respect to using the code reduction strategy of Ref. [17]. The strategy presented here applies rather generally to different realizations of the surface code (and possibly other codes), assuming they have certain reasonable properties. One such assumption is that there are natural circuits for measuring all the possible reduced and split plaquette stabilizers that only use previously existing components. The application of step 1 only depends on the code, not the detailed realization. In contrast, the application of steps 2 and 3 will depend on the details of the particular realization. We now consider the implementation of this strategy for our measurement-based realization of the surface code in detail. The \(n\)-gon stabilizer measurement circuits from Sec. IV provide all of the circuits that we need in order to implement our dead components strategy. These \(n\)-gon stabilizer circuits share some nice features when used for reducing and splitting \(n\)-gons. The first is that none of them introduce new physical measurements to the set of measurements needed to operate the code; a given measurement is either removed from the circuit or reduced from a pairwise to a single-qubit measurement of the same type. (We note that all single-qubit Pauli measurements are required for operation, even though they may not all occur in the 4-gon circuits.) 
Thus, it is a relatively simple matter to change the operation of a plaquette in a manner that reduces or splits the plaquette into smaller \(n\)-gons. Additionally, all of these \(n\)-gon circuits interlock, so we can operate all plaquettes of the same type on the same schedule, without further modification. We can now consider each of the three steps for determining the modification of the code in the presence of dead components. The modifications for step 1, where plaquettes are modified by removing dead data qubits, are shown in Fig. 10. The modifications for step 2, where plaquettes are modified by removing dead auxiliary qubits, are shown in Fig. 11. The modifications for step 3, where plaquettes are modified by removing dead connections (pairwise measurements), are shown in Fig. 12.

Figure 10: The possible reductions of \(Z\)-type \(n\)-gons for step 1, where dead data qubits (denoted by starbursts) are removed from the code operation. (Reductions related to these by rotations and reflections are not shown separately.) The reductions for \(X\)-type \(n\)-gons may be obtained from these by 90 degree rotations. Even though this step only considers dead data qubits, the auxiliary qubits are displayed to show the collateral removal of live auxiliary qubits that may occur when removing dead data qubits. Ignoring the auxiliary qubits, the same reduction of \(n\)-gons can be used for any realization of the surface code. The full reduction of each \(n\)-gon in this step can be determined through an iterative process where dead data qubits are removed one at a time until no dead ones remain, i.e. working down through this figure.

Figure 11: The possible splittings of \(Z\)-type \(n\)-gons for step 2, where dead auxiliary qubits (denoted by starbursts) are removed from the code operation. (Splittings related to these by rotations and reflections are not shown separately.) The splittings for \(X\)-type \(n\)-gons may be obtained from these by 90 degree rotations. In some cases, live auxiliary qubits will be collaterally removed as a result of removing dead auxiliary qubits. The full splitting of each \(n\)-gon in this step can be determined through an iterative process where dead auxiliary qubits are removed one at a time (together with any collateral loss), until no dead ones remain, i.e. working down through this figure.

In each of these figures, we display the different possible scenarios (up to rotations and reflections) for removing a single dead component at a time. In order to remove all dead components from operation, we check components of a given step one at a time and apply the corresponding reduction or splitting shown in the figures, working down through possible scenarios, and repeat for the next component until all dead components are removed. As a demonstration of the generality of our proposed strategy for surface code realizations, we can apply it to the measurement-based pentagonal tiling realization of Ref. [9] and the CNOT gate-based realization. We show the details for these in Appendix C. In order to demonstrate the advantages of our proposed strategy and, in particular, how steps 2 and 3 make our protocol differ from that of Ref. [17], it is useful to consider some example scenarios of dead components in detail. We show the modifications of plaquettes for several dead component scenarios, as well as the \(Z\)- and \(X\)-measurement rounds' corresponding measured stabilizers (damaged and undamaged) and the superplaquette stabilizers that survive from round to round.
(Recall that when there are no dead components, all plaquette stabilizers survive from round to round.) In Fig. 13, we show the modification of the plaquette stabilizers associated with one dead data qubit. Following the protocol of Ref. [17], this modification would also be used in the case where that data qubit was live, but any of the four connections involving that qubit were dead. With this modification, the code distance is reduced by one. In Fig. 14, we show our modification of the plaquette stabilizers associated with one dead connection between a data qubit and an auxiliary qubit. Comparing to the modification in Fig. 13, we see that the stabilizers only differ from the undamaged stabilizers in the \(Z\)-measurement round and that the code distance is reduced by one for only the \(Z\) logical string operators. Figure 12: The possible splittings of \(Z\)-type \(n\)-gons for step 3, where dead connections (denoted by starbursts) are removed from the code operation. (Splittings related to these by rotations and reflections are not shown separately.) The splittings for \(X\)-type \(n\)-gons may be obtained from these by 90 degree rotations. In some cases, live auxiliary qubits will be collaterally removed as a result of removing dead connections. The full splitting of each \(n\)-gon in this step can be determined through an iterative process where dead connections are removed one at a time (together with any collateral loss), until no dead ones remain, i.e working down through this figure. In Fig. 15, we show the modification of the plaquette stabilizers associated with one dead plaquette, i.e. four dead data qubits. Following the protocol of Ref. [17], this modification would also be used in the case where the data qubits were live, but any of the auxiliary qubits of the plaquette were dead, or each of those data qubits had a dead connection. With this modification, the code distance is reduced by two. In Fig. 16, we show our modification of the plaquette stabilizers associated with a plaquette for which the \(A\) and \(C\) auxiliary qubits are dead, or the four connections between the data qubits and the auxiliary qubits of that plaquette are dead. We again see that the stabilizers only differ from the undamaged stabilizers in the \(Z\)-measurement round and that the code distance is reduced by two for only the \(Z\) logical string operators. In Fig. 17, we show our modification of the plaquette stabilizers associated with a plaquette for which only the \(B\) auxiliary qubit is dead, or for which either of the connections involving the \(B\) qubit is dead. Again, the stabilizers only differ from the undamaged stabilizers in the \(Z\)-measurement round and, in this case, the distance is reduced by two only for \(Z\) logical string operators in the vertical direction. This may not affect the code distance if the logical qubit is encoded such that the logical \(Z\) string operators are aligned in the horizontal direction. Finally, we observe that we gain another advantage from the manner in which our \(n\)-gon measurement circuits are pipelined: for our measurement-based surface code realization, we can measure damaged plaquette operators at the same rate as undamaged ones. In particular, the pipelining we use, e.g. in Eq. (1), effectively operates as alternation between \(Z\)-type and \(X\)-type \(n\)-gon measurements, rather than a true interleaving of their circuits. 
This can be seen by tracking when the data qubits of neighboring plaquettes are addressed by the circuits for the different plaquette types, which allows one to isolate each plaquette's circuit in spacetime without sliding measurements of different type past each other on the qubit worldlines. This natural alternation induced by the pipelining fits perfectly with the previously described need to avoid simultaneously measuring \(Z\)-type and \(X\)-type \(n\)-gons that do not commute with each other, which results from the reductions and splittings of the dead component protocol. In other words, we can operate the damaged plaquettes on the same schedule as the undamaged ones, and thereby measure the damaged and undamaged operators at the same rate. Since we avoid halving the rate at which damaged plaquette operators are measured, this should yield better performance. Returning to the situation where all undamaged stabilizers are measured in each measurement round, but only one type of the damaged stabilizers are measured, Ref. [21] found improved performance for larger regions of dead components by modifying the protocol of Ref. [17] to alternate between \(l\) repeated rounds of damaged stabilizer measurements of each type, where \(l\) is the linear size of the dead region. For our measurement-based surface code realization, we could follow a similar protocol of repeating damaged stabilizer measurements of each type \(l\) times before alternating, but this would require halving the rate at which damaged stabilizers are measured. It is not obvious whether the trade-off for employing this strategy would improve performance in our case. Figure 13: Configuration for one dead data qubit. Figure 14: Configuration for one dead connection to a data qubit. Figure 15: Configuration for a dead plaquette. Figure 16: Configuration for a plaquette with dead auxiliary qubits \(A\) and \(C\), or dead connections between data qubits and auxiliary qubits \(A\) and \(C\). Figure 17: Configuration for a plaquette with dead auxiliary qubit \(B\) or a dead connection to this qubit. ## VI Implementation in Majorana Hardware We now consider the implementation of our measurement-based realization of the surface code in Majorana hardware [6]. As this was the original motivation for generating our new surface code realization, it should not be surprising that the layout and measurements involved are very simple for this hardware. We consider rectangular arrays of Majorana tetron qubits, which are formed from two Majorana wires connected by a superconducting spine. Each tetron possesses four Majorana zero modes (MZMs), one at each endpoint of each Majorana wire. A pair of MZMs combine into a fermionic mode, i.e. the joint fermionic parity of two MZMs corresponds to a two-level system. However, since each tetron is a floating superconducting island with a charging energy, there is an overall parity constraint of the four MZMs of a tetron. In this way, the tetrons form a qubit, where the joint parity operators associated with different MZM pairs correspond to the different Pauli operators of the qubit. In particular, when the \(j\)th MZM of a tetron has corresponding Majorana operator \(\gamma_{j}\), we use the convention where the joint fermionic parity operator \(P_{jk}=i\gamma_{j}\gamma_{k}\) of MZMs \(j\) and \(k\) maps to Pauli operators according to \[X=P_{23}=P_{14}, \tag{5}\] \[Y=P_{13}=-P_{24}, \tag{6}\] \[Z=P_{12}=P_{34}. \tag{7}\]
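As a sanity check of this dictionary, one can verify Eqs. (5)-(7) numerically in a concrete matrix representation. The sketch below is our own illustration: the particular Majorana representation (a Jordan-Wigner-type choice on two fermionic modes) and the selected parity sector are assumptions made for the example, not conventions taken from the hardware.

```python
# Numerical check of Eqs. (5)-(7) in one assumed Majorana representation;
# the representation and parity-sector choice are illustrative assumptions.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Four Majorana operators on two fermionic modes (Jordan-Wigner-type choice).
g = {1: np.kron(X, I2), 2: np.kron(Y, I2), 3: np.kron(Z, X), 4: np.kron(Z, Y)}
P = lambda j, k: 1j * g[j] @ g[k]   # joint parity P_jk = i gamma_j gamma_k

# Fix the tetron's overall parity sector: the +1 eigenspace of -g1 g2 g3 g4,
# which in this representation equals Z(x)Z, i.e. span{|00>, |11>}.
V = np.zeros((4, 2), dtype=complex)
V[0, 0] = V[3, 1] = 1.0             # isometry onto the two-dimensional sector
restrict = lambda op: V.conj().T @ op @ V

assert np.allclose(restrict(P(2, 3)), restrict(P(1, 4)))    # X = P23 = P14
assert np.allclose(restrict(P(1, 3)), restrict(-P(2, 4)))   # Y = P13 = -P24
assert np.allclose(restrict(P(1, 2)), restrict(P(3, 4)))    # Z = P12 = P34
# The encoded operators obey the Pauli algebra on the sector, e.g. XY = iZ:
assert np.allclose(restrict(P(2, 3)) @ restrict(P(1, 3)),
                   1j * restrict(P(1, 2)))
```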
See Ref. [6] for more details. In order to facilitate measurements of these operators, one needs additional components around the tetrons that can couple to MZMs as desired to form interference loops that enable measurement of the joint parity of all the MZMs in the interference loop. (Such measurements always involve exactly two MZMs from each tetron qubit involved in the measurement.) The components utilized for this include semiconductor rails running along the short direction of the tetrons (between columns of tetrons in a rectangular array), which enable (electrostatic) gate-defined quantum dots with gate-controlled couplings to the MZMs. There are also "coherent links," which are single floating Majorana wires (with a MZM on each of its two endpoints), which are located between tetrons running along the long direction. These facilitate the creation of interference loops involving MZMs on the opposite sides of tetrons in the long direction. We display schematic drawings of this hardware and some of the measurement loops in Fig. 18. The measurement loops shown serve to define our basis conventions with respect to the hardware, and the same choice is used for all tetrons. Once the interference loops are turned on, the joint parity of all the MZMs included in the loop can be measured, for example by probing the quantum capacitance of the coupled tetron-quantum dot system (which will exhibit parity dependence) using microwave resonators. Figure 18: Measurement loops in Majorana hardware corresponding to the measurements needed for our code. MZMs (red circles) are located at the end points of topological wires (gray lines). Two topological wires connected by a trivial superconducting spine (dark gray) form a tetron. Tetrons can be connected to each other or coherent links through semiconductor segments (tan). We expect that the fidelities of such MZM parity measurements will generally decrease with increasing length of semiconductor, number of MZMs, and number of coherent links utilized in the corresponding measurement loops [7], that is \[f_{M_{X}}>f_{M_{Z}}>f_{M_{Y}}>f_{M_{XX:\text{horz}}}>f_{M_{ZZ:\text{vert}}}>f_{M_{YZ:\text{vert}}}>f_{M_{YY:\text{vert}}}>\cdots. \tag{8}\] Here, we have indicated the horizontal or vertical direction on pairwise measurements, because it makes a significant difference in the difficulty of the measurement, e.g. \(M_{ZZ:\text{horz}}\) would require the additional use of two coherent links as compared with \(M_{ZZ:\text{vert}}\). This provides a rough guide for optimizing codes or computations with respect to the measurements utilized. We now examine the implementation of our measurement-based realization of the surface code in a rectangular array of tetrons. We notice that our code only uses \(M_{X}\), \(M_{Z}\), \(M_{XX:\text{horz}}\), and \(M_{ZZ:\text{vert}}\) measurements (for logical memory). These are the two simplest single-qubit measurements and the two simplest two-qubit measurements for this hardware. Thus, our measurement-based realization of the surface code is highly optimized with respect to the set of measurements in Majorana hardware. One significant aspect of device design for Majorana hardware is whether a single rail or double rail of semiconductor is used between the columns of tetrons. The semiconductor rails are where most of the operational activity is concentrated in this hardware, e.g. the quantum dots, coupling to MZMs, and measurements.
As such, utilizing double-rail semiconductors constitutes an increase in fabrication and control difficulties, which may also translate into higher error rates. However, the positive aspect of the trade-off is that using double-rails allows independent measurements on adjacent tetron columns to be performed without conflict. For single-rail layouts, the interference loops for certain configurations of measurements would overlap and hence could not be performed at the same time. (For double-rail layouts, there may, in practice, be unwanted cross-talk between such measurements when performed simultaneously, since their interference loops would necessarily be in close proximity to each other.) With this in mind, it is useful to consider the implementation of codes in both single-rail and double-rail layouts to appreciate how this difference affects performance in Majorana hardware. We can implement our code in Majorana hardware with the operation schedule of Eq. (1) using rectangular arrays with double-rail semiconductors, as shown in Fig. 19. Figure 19: Interference loops of measurements for the implementation of our measurement-based surface code realization in Majorana hardware with double-rail semiconductor layouts. This layout avoids loop conflicts, allowing for the (minimal) period 4 operation schedule. Examining the interference loops in the steps, we see that attempting to implement this operation schedule on an array with single-rail semiconductors would result in loop conflicts in steps \((1Z,3X)\) and \((4Z,2X)\). A naive resolution of this would be to split each of the steps with conflicts into two steps, distributing the measurements between them so that no step contains a conflict. However, a more efficient resolution is possible, which we show in Fig. 20. In particular, we split step \((4Z,2X)\) into two steps, \((4Z,-)\) and \((-,2X)\), and shift one of the conflicting measurements from step \((1Z,3X)\) into the second of these split steps. Denoting step \(1Z\) without the \(M_{Z_{B}}\) measurement as \(1Z^{\prime}\) and the \(M_{Z_{B}}\) measurement as \(1Z^{\prime\prime}\), the resulting pipelining for single-rail layouts is \[\cdots,(1Z^{\prime},3X),(2Z,4X),(3Z,1X),(4Z,-),(1Z^{\prime\prime},2X),\cdots. \tag{9}\] This operation schedule for single-rail semiconductor layouts has a five-step period, which is a relatively mild slowdown. (We note that there are other possibilities for redistributing the measurements among a five-step cycle that will be compatible with single-rail layouts.) Figure 20: The implementation of our measurement-based surface code realization in Majorana hardware with single-rail semiconductor layouts requires resolution of loop conflicts in two steps. A resolution with period 5 operation schedule is shown here. For the hook-preventing circuits described in Sec. III, we can implement the code using any of the operation schedules shown in Fig. 7 with double-rail layouts. For single-rail semiconductor layouts, we can resolve interference loop conflicts while only increasing the operation period by one step (from seven steps to eight steps), e.g. by modifying the pipelining option 2 given in Eq. (4) in a similar manner as described above. It is worth comparing the implementation of our surface code realization in Majorana hardware to that of the Floquet code on the 4.8.8 lattice, as considered in Ref. [5].
The Floquet code can also be implemented on a rectangular array of qubits, where it utilizes measurements that we denote as \(M_{XX:\text{horz}}\), \(M_{ZZ:\text{vert}}\), and \(M_{YY:\text{vert}}\) (in equal proportions). For double-rail semiconductor layouts, the 4.8.8 Floquet code could operate on a six-step period, in which no qubits are idle. However, in order to implement this code on single-rail semiconductor layouts, it appears that interference loop conflicts can only be resolved by splitting each step into two steps, doubling the operation period and introducing the possibility of faults on idle qubits for half the qubits at each step. As such, the comparative performance of our code vs. the 4.8.8 Hastings-Haah code is likely to improve when the more realistic hardware conditions are taken into consideration. ## VII Performance In this section, we assess the performance of the codes presented in Secs. II, III, and VI, using the circuit noise model presented in Ref. [8]. In this noise model, each measurement fails independently with probability \(p_{\text{physical}}\). When a measurement fails, it acts as an ideal measurement followed by an error drawn uniformly from the set of nontrivial errors supported on the qubits involved in the measurement. For a single-qubit measurement fault, the error is drawn from the set \(\{(P_{1},F)\}-\{(I,0)\}\). For a two-qubit measurement fault, the error is drawn from the set \(\{(P_{1}\otimes P_{2},F)\}-\{(I\otimes I,0)\}\). Here, \(P_{1},P_{2}\in\{I,X,Y,Z\}\) are Pauli errors acting on the support of the measurement and \(F\in\{0,1\}\) corresponds to a readout error, i.e. a bit flip of the measurement outcome. Some of the code variants we consider include steps in which qubits are idle. The noise model also includes faults on idle qubits, with errors drawn from the set \(\{X,Y,Z\}\). At each time step that a qubit idles, we assign the same error rate \(p_{\text{physical}}\) for an idling error to occur. However, since this is likely to overestimate the relative error rate of idling compared to measurements for our hardware of interest, it is useful to compare code performance with and without faults on idle qubits. We will specify when our analysis does not include faults on idle qubits. The distinction will be significant for the hook-preventing circuit and when analyzing implementations in Majorana hardware with single-rail semiconductor layouts. ### Decoding graph construction We first provide an overview of how we construct a decoding graph, on which we use PyMatching v2 to efficiently perform minimum weight perfect matching via the "sparse blossom" algorithm [22]. We use the spacetime circuit formalism developed in Ref. [23], and the splitting decoder of Ref. [24]. We start with a Clifford circuit consisting of single- and two-qubit measurements. In the absence of errors, the measurements within this circuit satisfy nontrivial correlations, called detectors. More formally, a detector consists of a set of measurements for which the joint parity of their outcomes is fixed in the absence of errors [19]. Let \(\{m_{\alpha}\}\) be the set of measurement outcomes for the circuit, where \(\alpha\) runs over all spacetime coordinates. The set of detectors form a classical parity check code over the measurement outcomes. We define the matrix \(D\) to be such that each row forms a check of the corresponding classical parity check code.
Here, we assume \(D_{j\alpha}\) and \(m_{\alpha}\) take values in \(\mathbb{F}_{2}\) with \(\cdot\) the usual product and addition mod 2. Then \(\sum_{\alpha}D_{j\alpha}\cdot m_{\alpha}\) takes on a fixed value in the absence of errors, and forms the \(j\)th detector. We can find all detectors using the fault propagation as described in Ref. [23] [see Algorithm 1 in this reference, and note that they refer to detectors as "checks" (as is also the case in Ref. [14])]. A spacetime error chain corresponds to a collection of Pauli errors and readout errors on the circuit. An error chain is detectable if it triggers one or more detectors. In practice, we choose a basis of detectors and use those to form a decoding graph or hypergraph. We label each detector by an integer \(j\), and refer to the set of detectors \(\{j,k,...\}\) that are triggered by an error chain (meaning that the sum of the measurement outcomes mod 2 differs from its value in the absence of errors) as the error chain's syndrome. Our ability to perform error correction depends on the ability to distinguish between equivalence classes of spacetime error chains, based on their syndromes. Each vertex in the decoding graph can (in our chosen basis) be associated to a \(Z\) or \(X\) plaquette, as well as a time coordinate. We note that for a repeated plaquette stabilizer measurement, using the measurement circuits defined in Fig. 2 (original circuit) or Fig. 6 (hook-preventing circuit), there are several low-weight detectors associated with each plaquette (meaning detectors with a small number of contributing measurement outcomes), along with the high-weight detector that corresponds to the repeated measurement of the surface code stabilizer associated to the plaquette. The low-weight detectors are formed by repeated single auxiliary qubit measurements--such as the repeated \(M_{X_{B}}\)--or (in the hook-preventing circuit) by the alternating repeated pairwise auxiliary qubit measurements. Given a basis of detectors together with a set of generative fault configurations \(\mathcal{F}=\{f_{1},f_{2},\cdots\}\), which we take to be degree one faults in the circuit noise model, a decoding hypergraph \(H=(V,E)\) is defined as follows. Each vertex \(v_{j}\in V\) corresponds to a detector \(j\). A set of vertices \(\{v_{j},v_{k},...\}\) are connected by a hyperedge \(e\in E\) if there exists a fault \(f\in\mathcal{F}\) with the corresponding syndrome \(\{j,k,...\}\). The weight of the hyperedge is defined from the probability of the fault occurring. This construction generally does not result in a decoding _graph_, as certain faults in \(\mathcal{F}\) may trigger three or more detectors and give rise to hyperedges. While matching on a graph can be performed in polynomial time, matching on a hypergraph is generally an NP-hard problem [25]. As described in Ref. [24], it is possible for some decoding hypergraphs to split hyperedges into edges in a consistent way that allows for successful decoding. This is done through the construction of a _split noise model_. We define _primitive faults_ as faults that either trigger one detector (1-faults), or that trigger two detectors without being decomposable into 1-faults (2-faults). By construction, a set of generative fault configurations containing only primitive faults will only give rise to edges.
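To make the edge construction concrete, the following toy sketch builds a small matching graph with PyMatching v2 (the library named above). The detector indices and fault probabilities are invented for the example; a 1-fault is encoded as a boundary ("dangling") edge.

```python
# Toy decoding graph from primitive faults: 2-faults become edges between
# detector vertices, 1-faults become boundary edges. All numbers invented.
import math
import pymatching

primitive_faults = [
    ({0, 1}, 1e-3),   # a 2-fault: an edge between detectors 0 and 1
    ({1, 2}, 1e-3),   # a 2-fault: an edge between detectors 1 and 2
    ({2}, 5e-4),      # a 1-fault: a boundary edge at detector 2
]

m = pymatching.Matching()
for fault_id, (dets, p) in enumerate(primitive_faults):
    w = math.log((1 - p) / p)       # standard log-likelihood edge weight
    if len(dets) == 2:
        u, v = sorted(dets)
        m.add_edge(u, v, fault_ids={fault_id}, weight=w, error_probability=p)
    else:
        (u,) = dets
        m.add_boundary_edge(u, fault_ids={fault_id}, weight=w,
                            error_probability=p)

# Decode a syndrome in which detectors 0 and 2 are triggered; the output is
# an indicator vector over fault_ids for the matched set of faults.
print(m.decode([1, 0, 1]))
```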
To define the split noise model, any non-primitive fault in \(\mathcal{F}\) is decomposed into primitive faults, such that they together trigger the same set of detectors as the original non-primitive fault. Each of these new faults is assigned the same error rate as the original non-primitive fault, and is added to a new set of generative fault configurations, \(\tilde{\mathcal{F}}\), together with the primitive faults in \(\mathcal{F}\). \(\tilde{\mathcal{F}}\) approximates \(\mathcal{F}\) while containing only primitive faults, and is used to define a decoding graph on which minimum weight matching is performed. The matching decoder that uses this graph is referred to as a splitting decoder. ### Dynamic re-weighting of the decoder graph edges While the splitting decoder works straightforwardly for the original circuit defined in Fig. 2, the hook-preventing circuit defined in Fig. 6 contains additional detectors, which complicate the use of a splitting decoder. These increase the weight of the syndromes for some circuit noise errors in \(\mathcal{F}\), which in turn affects the performance of the splitting decoder such that the fault distance is effectively halved. To avoid this issue, we dynamically change the weights of the edges in the decoding graph in the presence of certain syndromes, as described below. This dynamic re-weighting is inspired by Ref. [26], where soft information is used to dynamically determine edge weights (rather than using syndromes, as in the present context). With the dynamic re-weighting added for the hook-preventing circuit, we find that the splitting decoder is successful in all cases considered here. We illustrate the need for re-weighting with an example. We recall the measurement sequence in the hook-preventing circuit for a \(Z\)-plaquette, shown in Fig. 6. The repeated pairwise auxiliary measurements \(M_{X_{A}X_{B}}\) and \(M_{X_{B}X_{C}}\) in steps 2-5 give rise to two low-weight detectors. Using superscripts to indicate the time steps of measurements, the first low-weight detector is given by the sum of the measurement outcomes from \(M_{X_{A}X_{B}}^{\text{2Z}}\) and \(M_{X_{A}X_{B}}^{\text{4Z}}\), and the second by the sum of the measurement outcomes from \(M_{X_{B}X_{C}}^{\text{3Z}}\) and \(M_{X_{B}X_{C}}^{\text{5Z}}\). In our chosen basis of detectors, a single readout error on the second repetition of the measurements in these pairs, e.g. \(M_{X_{A}X_{B}}^{\text{4Z}}\), will trigger not only the low-weight detector, but also two high-weight detectors associated to neighboring \(X\)-plaquettes. As such, a fault that results in such a readout error is a non-primitive fault. It is decomposed into a 1-fault resulting in a readout error on \(M_{X_{A}X_{B}}^{\text{2Z}}\), which in the chosen basis triggers only the low-weight detector, and two 2-faults that together result in \(Z\)-errors on two data qubits.4 The spatial distribution of the errors is indicated in the following figure, where we denote the faults that result in readout errors on the first and second \(M_{X_{A}X_{B}}\) by \(f_{1}\) and \(f_{2}\): Footnote 4: These can be reversed by an equally natural basis choice, so that a readout error in the first repetition triggers three detectors, and a readout error in the second repetition only triggers one detector. The primitive \(Z\) 2-faults above correspond to edges between the decoding graph vertices that represent the high-weight detectors associated to the neighboring \(X\)-plaquettes.
The primitive \(f_{1}\) 1-fault corresponds to a "dangling edge" in the graph, as it only connects to one single vertex, i.e. the low-weight detector. We represent such dangling edges as dots in the pictures below. In terms of edges, the three primitive faults are represented as follows: In the presence of multiple non-primitive faults of this type, the matching can fail. Consider the following edges representing two \(f_{2}\) faults on a 6 by 6 torus (left), and another, logically inequivalent edge configuration (right) with the same syndrome: As non-primitive \(f_{2}\) faults appear on _all_ plaquettes in the set of generative fault configurations \(\mathcal{F}\), all non-dangling edges in the above picture by symmetry come with the same weight. Therefore, the minimum weight matching prioritizes the path with fewer edges, and the decoding induces a logical error. To get around this mismatch, we temporarily assign weight zero to all non-dangling edges that correspond to a split \(f_{2}\) fault (meaning that they come for free) whenever the dangling edge is "lit up" by a fault configuration, i.e. whenever the corresponding low-weight detector is triggered. With this temporary re-weighting, the matching will not penalize the longer path and the decoding succeeds. Other faults need to be treated in the same fashion. When constructing the decoding graph for the hook-preventing circuits, we first identify the faults whose splitting will require dynamical re-weighting. These correspond to faults of the form \((I,1)\), i.e. pure readout errors, that light up three detectors. These faults are split into three primitive faults. One of these primitive faults in the splitting is a 1-fault that triggers only a single detector \(v_{j}\). The other two primitive faults in the splitting are 2-faults, and during decoding the corresponding two edges in the decoding graph are assigned weight zero whenever \(j\) is present in the observed syndrome. ### Results In this section, we analyze the performance of our code and compare it directly with that of the 4.8.8 Floquet code, which represents the state of the art for pairwise measurement-based codes optimized with respect to Majorana hardware. We estimate the logical failure rates by running Monte Carlo simulations with up to \(10^{8}\) trials for a series of increasing code sizes and a fixed set of physical error rates \(10^{-4}\leq p_{\text{physical}}\leq 1.5\times 10^{-2}\). At sufficiently large code size and low \(p_{\text{physical}}\), we observe no failures in the \(10^{8}\) trials and, thus, do not include the corresponding point in the plots. The shaded regions in performance plots [Figs. 21, 23, 25, and 26] represent 95% credible intervals of a posterior beta distribution given the observed number of logical failures and completed trials, assuming a uniform prior distribution for the logical error rate; the points represent the median of this posterior distribution. In this paper, we extract threshold values by finding the intersection of the two largest simulated code sizes via a linear interpolation _in log-log space_ of the obtained \((p_{\text{physical}},p_{\text{logical}})\) data. We ignore sampling uncertainties in these threshold estimates, since sampling error is generally negligible (e.g., occurring in the 4th significant figure in the \(p_{\text{logical}}\) estimates) at the relatively high error rates encountered near threshold. There are, however, systematic uncertainties due to finite code size and our employed interpolation procedure.
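Concretely, the intersection step just described can be implemented as a short sketch; this is our own illustration, and the data points below are invented placeholders rather than simulation results from this work.

```python
# Threshold extraction sketch: linearly interpolate log(p_logical) vs
# log(p_physical) for the two largest code sizes and report the crossing.
import numpy as np

def crossing(p_phys, p_log_a, p_log_b):
    """Return p_physical at which two log-log-interpolated curves intersect."""
    x = np.log(p_phys)
    d = np.log(p_log_a) - np.log(p_log_b)   # sign change marks the crossing
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    t = d[i] / (d[i] - d[i + 1])            # linear interpolation in log space
    return float(np.exp(x[i] + t * (x[i + 1] - x[i])))

p_phys = np.array([4e-3, 5e-3, 6e-3, 7e-3])
p_log_d9 = np.array([1e-2, 3e-2, 8e-2, 1.8e-1])     # hypothetical d_f = 9 data
p_log_d11 = np.array([5e-3, 2.2e-2, 9e-2, 2.4e-1])  # hypothetical d_f = 11 data
print(crossing(p_phys, p_log_d9, p_log_d11))        # estimated threshold
```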
We did not attempt to estimate error bars for the threshold values, since the difference between the noise model used and a realistic noise model undoubtedly results in more significant deviations. Note that we expect the finite size effects to cause _underestimation_ of the threshold. For a more rigorous threshold estimation procedure, see, e.g., Ref. [14]. We extract "pseudo-thresholds" for a particular system size by locating the error rate where \(p_{\text{logical}}=p_{\text{physical}}\), again obtained via linear interpolation in log-log space. We first compare the performance for our original circuits and pipelining (as described in Sec. II) on a rotated surface code patch using boundary conditions that make the hook errors benign (as explained in Sec. IV) with the performance of the 4.8.8 Floquet code on a planar patch with rectangular boundary conditions (as in Ref. [5]). For implementation in Majorana hardware, these both correspond to realizations that would require double-rail semiconductor layouts. The performance results for these two cases are shown in Fig. 21. Notably, we find a fault-tolerance threshold of \(0.66\%\) for our code and \(1.3\%\) for the 4.8.8 Floquet code.5 For the smallest code size (\(d_{\text{f}}=3\)), the pseudo-thresholds are comparable for the two codes at approximately \(0.096\%\) for the surface code and \(0.16\%\) for the 4.8.8 Floquet code. Footnote 5: The improvement of the threshold value for the 4.8.8 Floquet code compared to the previous simulations of Ref. [5] is likely due to some combination of (a) our use of an improved decoder, (b) larger considered code sizes, and (c) improved sampling statistics. When evaluating code performance, it is helpful to go beyond the threshold and consider the resource requirements for obtaining a given target logical error rate \(p_{\text{logical}}^{\text{target}}\) for the simulated memory experiment. In Fig. 22, we plot the qubit count, circuit depth, and spacetime footprint (i.e., qubit count multiplied by circuit depth) required to achieve logical error rates of \(p_{\text{logical}}^{\text{target}}=10^{-8}\), \(10^{-12}\), and \(10^{-15}\) for physical error rates in the range \(10^{-6}\leq p_{\text{physical}}\leq 10^{-3}\). In Table 2, we list the corresponding fault distances \(d_{\rm f}\) required to reach these target logical error rates for physical error rates \(p_{\rm physical}=10^{-6}\), \(10^{-5}\), \(10^{-4}\), and \(10^{-3}\). Figure 21: (left) Performance results for our pairwise measurement-based surface code realization described in Sec. II on a rotated surface code patch using boundary conditions that make the hook errors benign (see Sec. IV). (right) Performance results for the 4.8.8 Floquet code on a planar patch with rectangular boundary conditions (as in Ref. [5]). For implementation in Majorana hardware, these require double-rail semiconductor layouts. Fault-tolerance thresholds are found to be \(0.66\%\) for our code and \(1.3\%\) for the 4.8.8 Floquet code. Within the context of this simplified error model, we see that our surface code is competitive in terms of these resource requirements, especially at error rates \(p_{\rm physical}\lesssim 10^{-4}\), and it has substantially narrowed the gap from the original requirements of the pairwise measurement-based surface codes in Ref.
[8] at higher error rates \(p_{\rm physical}\approx 10^{-3}\), where the Floquet code previously enjoyed a nearly two full orders of magnitude reduction in spacetime footprint at \(p_{\rm logical}^{\rm target}=10^{-12}\) [see Fig. 9 of Ref. [5]]. Figure 22: Resource requirements to reach target logical error rates of \(p_{\rm logical}^{\rm target}=10^{-8}\), \(10^{-12}\), and \(10^{-15}\), comparing our realization of the surface code and the 4.8.8 planar Floquet code. The respective target logical error rates correspond to columns from left to right, and the resource quantities being considered correspond to rows, from top to bottom: qubit count, circuit depth, and spacetime footprint (i.e., qubit count times circuit depth). To obtain these resource estimates, we have taken the following approach. For each code and code size, we expect the low \(p_{\rm physical}\) behavior to be dominated by circuit noise faults of weight \((d_{\rm f}+1)/2\) [27]. Rather than assuming that this form persists all the way to \(p_{\rm physical}\) on the order of the threshold, we select, for each code size, a characteristic reference point \((p_{\rm physical}^{\rm ref},p_{\rm logical}^{\rm ref})\) in the sub-threshold regime of the empirical data and assume that for \(p_{\rm physical}\leq p_{\rm physical}^{\rm ref}\), the logical error rate is governed by the form \[p_{\rm logical}=p_{\rm logical}^{\rm ref}\left(\frac{p_{\rm physical}}{p_{\rm physical}^{\rm ref}}\right)^{(d_{\rm f}+1)/2}. \tag{10}\] The colored dashed lines in the performance plots represent these chosen sub-threshold estimates of the logical error rate.6 To estimate the fault distance required to hit a prescribed \(p_{\rm logical}^{\rm target}\) at a given \(p_{\rm physical}\), we first fit the values of these obtained scaling forms for the sub-threshold logical error rate for all simulated code sizes \(d_{\rm f}>3\) to the exponential form \[p_{\rm logical}(p_{\rm physical},d_{\rm f})=\alpha(p_{\rm physical})e^{-\beta(p_{\rm physical})d_{\rm f}}. \tag{11}\] Footnote 6: In the plots, a given dashed line terminates at the chosen reference point \((p_{\rm physical}^{\rm ref},p_{\rm logical}^{\rm ref})\). For \(p_{\rm logical}^{\rm ref}\), we use the median of the posterior beta distribution obtained for \(p_{\rm logical}\). The fits thereby obtained are of very high quality in the considered window \(10^{-6}\leq p_{\rm physical}\leq 10^{-3}\), justifying the approach _a posteriori_. Finally, we determine the smallest (odd) \(d_{\rm f}=d_{\rm f}^{\rm target}\) necessary for the fitted exponential form to predict \(p_{\rm logical}\leq p_{\rm logical}^{\rm target}\). The actual footprints can then be read off from the circuit corresponding to the required fault distance \(d_{\rm f}\) as follows. The physical qubit count for our code on a rotated surface code patch with hook-benign boundary conditions (i.e. \(d_{\rm f}=d\)) is \(N=4d^{2}-4d+1\) and the circuit depth is counted as \(4d\). For the 4.8.8 Floquet code on a patch with rectangular boundary conditions [5], the qubit count is \(N=4d_{\rm f}^{2}+8(d_{\rm f}-1)\) and the circuit depth is counted as \(6\lceil d_{\rm f}/2\rceil\). As we are interested in implementation of these codes in Majorana hardware, where single-rail semiconductor layouts may be strongly preferable to double-rail layouts, we repeat the above analysis for single-rail variants of the two codes.
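Before presenting the single-rail results, the footprint bookkeeping just described can be condensed into a short sketch. Only the qubit-count and depth formulas are taken from the text; the fitted parameters \(\alpha\) and \(\beta\) below are hypothetical placeholders, and in the actual analysis each code has its own fit at each \(p_{\rm physical}\).

```python
# Footprint estimation sketch: invert the exponential fit of Eq. (11) for the
# smallest odd d_f, then apply the qubit-count and depth formulas from the text.
import math

def required_df(p_target, alpha, beta):
    """Smallest odd d_f with alpha * exp(-beta * d_f) <= p_target (Eq. (11))."""
    df = math.ceil((math.log(alpha) - math.log(p_target)) / beta)
    return df if df % 2 == 1 else df + 1

def surface_code_footprint(d):   # N = 4d^2 - 4d + 1, depth 4d (from the text)
    return (4 * d**2 - 4 * d + 1) * (4 * d)

def floquet_488_footprint(df):   # N = 4df^2 + 8(df - 1), depth 6*ceil(df/2)
    return (4 * df**2 + 8 * (df - 1)) * (6 * math.ceil(df / 2))

# Hypothetical fitted (alpha, beta) at some fixed p_physical for each code.
df_sc = required_df(1e-12, alpha=0.1, beta=1.2)
df_fl = required_df(1e-12, alpha=0.2, beta=1.5)
print(df_sc, surface_code_footprint(df_sc))   # our code's spacetime footprint
print(df_fl, floquet_488_footprint(df_fl))    # 4.8.8 Floquet code's footprint
```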
For our measurement-based surface code realization, we use a modification of the circuits that is compatible with single-rail layouts, as described in Sec. VI. \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-8}\)} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-12}\)} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-15}\)} \\ \cline{2-7} \(p_{\rm physical}\) & SC & 4.8.8 & SC & 4.8.8 & SC & 4.8.8 \\ \hline \(10^{-6}\) & 3 & 3 & 5 & 5 & 7 & 7 \\ \(10^{-5}\) & 5 & 5 & 7 & 7 & 9 & 9 \\ \(10^{-4}\) & 7 & 7 & 13 & 9 & 15 & 13 \\ \(10^{-3}\) & 17 & 11 & 27 & 17 & 33 & 23 \\ \hline \end{tabular} \end{table} Table 2: Fault distance \(d_{\rm f}\) required to reach a target logical error rate \(p_{\rm logical}^{\rm target}\) of \(10^{-8}\), \(10^{-12}\) and \(10^{-15}\), respectively, comparing our realization of the surface code and the 4.8.8 planar Floquet code. The determination of the required \(d_{\rm f}\) is the same as in Fig. 22. For the 4.8.8 Floquet code, we must split each step of the original circuits into two steps in such a way as to avoid conflicts between measurement loops. There are different ways to do this, but a convenient choice is to distribute half of the measurements of each type from each step into the resulting two steps. We will not show further detail of the circuit for the single-rail variant of the 4.8.8 Floquet code, as the main point is simply that there are twice as many steps in the period and half the qubits are idle in each step. Since the single-rail variants of both of these codes have steps with idle qubits, we now repeat the above analysis for these single-rail variants. We note that the noise model we use potentially overestimates the relative error rates of idle faults as compared to measurement faults, so one can view the original results and the single-rail results as assessments at the endpoints of a range of possible idle noise rates. (If we consider the single-rail code variants with the idle noise set to zero, we would obtain the original performance results.) Comparing the single-rail variants of these two codes, we find that our code becomes even more competitive. From the performance results in Fig. 23, we find much closer fault-tolerance thresholds of 0.51% and 0.52% for our surface code and the 4.8.8 planar Floquet code, respectively. We note that these decreases from the original thresholds are, respectively, in rough agreement with a 4/5 and 1/2 decrease that one might naively anticipate from the syndrome extraction period increases of four to five steps for the surface code and three to six for the 4.8.8 Floquet code, with extra idle noise introduced for each additional step. For the smallest code size (\(d_{\rm f}=3\)), the pseudo-thresholds are again comparable, though now favoring the surface code at approximately 0.03% for the surface code and 0.02% for the 4.8.8 Floquet code. Figure 23: (left) Performance results for the single-rail variant of our pairwise measurement-based surface code realization described in Sec. VI on a rotated surface code patch using boundary conditions that make the hook errors benign. (right) Performance results for the single-rail variant of the 4.8.8 Floquet code on a planar patch with rectangular boundary conditions (see description in text). These variants can be implemented in Majorana hardware with single-rail semiconductor layouts.
Fault-tolerance thresholds for these single-rail variants are found to be 0.51% for our code and 0.52% for the 4.8.8 Floquet code. In Fig. 24, we plot the qubit count, circuit depth, and spacetime footprint resource estimates required to achieve logical error rates of \(p_{\text{logical}}^{\text{target}}=10^{-8}\), \(10^{-12}\), and \(10^{-15}\) for physical error rates in the range \(10^{-6}\leq p_{\text{physical}}\leq 10^{-3}\). In Table 3, we list the corresponding fault distances \(d_{\text{f}}\) required to reach these target logical error rates for physical error rates \(p_{\text{physical}}=10^{-6}\), \(10^{-5}\), \(10^{-4}\), and \(10^{-3}\). It is worth noting that we anticipate the comparison of our code's performance to that of the 4.8.8 Floquet code to improve when we use a noise model that better reflects the physical errors affecting Majorana hardware. This is due to natural assumptions, such as two-qubit measurements having higher fault rates than single-qubit measurements, and measurements having higher fault rates than idling error rates; moreover, our code does not utilize \(M_{YY:\text{vert}}\) measurements, which are used in the 4.8.8 Floquet code and can be expected to have higher fault rates than the \(M_{XX:\text{horz}}\) and \(M_{ZZ:\text{vert}}\) measurements. Figure 24: Resource requirements to reach target logical error rates of \(p_{\text{logical}}^{\text{target}}=10^{-8}\), \(10^{-12}\), and \(10^{-15}\), comparing single-rail variants of our realization of the surface code and the 4.8.8 planar Floquet code. As our code provides an interesting test bed for exploring the effect of hook errors, we now investigate this matter in simulation. In Fig. 25, we present the performance for our code (using the original circuits and pipelining of Sec. II) on a rotated surface code patch with boundary conditions intentionally chosen to align the hook errors with the corresponding logical operators (left) and on a torus (right). The hook errors are malignant for both of these systems. These can be compared to the code performance on a patch with boundary conditions chosen to make hook errors benign, as shown in Fig. 21(left). Figure 25: (left) Performance results for our pairwise measurement-based surface code realization on a rotated surface code patch using boundary conditions that make the hook errors malignant (see Sec. IV). We note that the \(d=3\) curve is not expected to intersect with the other curves at (or near) threshold, as the code is not error-correcting at \(d=3\), because the fault distance is \(d_{\rm f}=2\). (right) Performance results for our code on a torus. Fault-tolerance thresholds are found to be \(0.65\%\) for the planar patch and \(0.70\%\) for the torus. \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-8}\)} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-12}\)} & \multicolumn{2}{c}{\(p_{\rm logical}^{\rm target}=10^{-15}\)} \\ \cline{2-7} \(p_{\rm physical}\) & SC & 4.8.8 & SC & 4.8.8 & SC & 4.8.8 \\ \hline \(10^{-6}\) & 3 & 3 & 7 & 5 & 7 & 7 \\ \(10^{-5}\) & 5 & 5 & 9 & 7 & 11 & 9 \\ \(10^{-4}\) & 9 & 7 & 13 & 11 & 17 & 15 \\ \(10^{-3}\) & 21 & 15 & 31 & 23 & 41 & 29 \\ \hline \end{tabular} \end{table} Table 3: Fault distance \(d_{\rm f}\) required to reach a target logical error rate \(p_{\rm logical}^{\rm target}\) of \(10^{-8}\), \(10^{-12}\) and \(10^{-15}\), respectively, comparing single-rail variants of our realization of the surface code and the 4.8.8 planar Floquet code.
The choice of boundary conditions should not affect the fault-tolerance threshold, as the circuits implement the same bulk operations. Indeed, the thresholds for the hook-malignant systems in Fig. 25 are estimated to be approximately \(0.65\%\) for the planar patch and \(0.70\%\) for the torus. The discrepancy between planar and torus is likely due to finite-size effects, and is perhaps not extremely surprising at these sizes. On the other hand, when hook errors are malignant in the code, it should impact the scaling of the logical failure rate curves in the deep sub-threshold (low \(p_{\text{physical}}\)) regime, due to the fault distance \(d_{\text{f}}\) being _halved_ as compared to the code distance \(d\). One interesting consequence of this distance-halving is that, in the low-error regime, we expect the curves to "pair up" in terms of slope (when plotted on a log-log scale): the code can correct up to \(\lfloor\frac{d-1}{2}\rfloor\) faults, with \(d_{\text{f}}=\lceil\frac{d}{2}\rceil\), meaning that \(d\) must increase by four for the slope to increase by one. Indeed, we observe all of this expected behavior in Fig. 25, where the colored dashed lines represent \(p_{\text{logical}}\sim p_{\text{physical}}^{\lfloor(d_{\text{f}}+1)/2\rfloor}\) scaling, chosen to intercept a reference empirical data point, as previously discussed in the context of Figs. 21 and 23 [see Eq. (10)--although here, the dashed lines are only drawn for visual reference and not used for any subsequent calculations]. More rigorously investigating the deep sub-threshold scaling of these hook-malignant codes with extensive, large-scale _stratified sampling_ [5] targeting only the dominant subpopulations expected to contribute to \(p_{\text{logical}}\) at \(p_{\text{physical}}\ll 1\) is an interesting topic for future work. Initial such studies on the hook-malignant rotated surface code patch in Fig. 25(left) indicate that we can indeed at least find fault configurations contributing to \(p_{\rm logical}\) at the expected powers in \(p_{\rm physical}\). For example, at \(d=11\), we can see logical failures for subpopulations contributing to \(p_{\rm logical}\) at \(O(p_{\rm physical}^{3})\), where \(3=\lfloor\frac{\lceil 11/2\rceil+1}{2}\rfloor\) is the slope of the dashed curve for \(d=11\) in Fig. 25(left). We also remark that hook errors of course have the most dramatic consequence at the smallest \(d=3\), where the code is now no longer even error-correcting in the circuit noise model, and thus we expect \(p_{\rm logical}\propto p_{\rm physical}\), as observed in the data in Fig. 25(left). Figure 26: Performance results for the hook-preventing variant of our pairwise measurement-based surface code realization described in Sec. III on a torus for the noise model without (left) and with (right) idle noise. The full code distance is recovered and performance is improved with respect to the code using the original circuits when hook errors are malignant (see Fig. 25). Fault-tolerance thresholds are found to be \(0.61\%\) where there are no idle errors and \(0.43\%\) when idle errors are included. Finally, we evaluate the performance of our code variant utilizing the hook-preventing circuits and pipelining described in Sec. III.
For this, we have performed simulations for the code on a torus for the noise model without and with idle noise. The performance results in Fig. 26 demonstrate the expected recovery of the full distance, i.e. \(d_{\rm f}=d\), in the deep sub-threshold regime. Moreover, the performance is overall better than that of the original circuits when hook errors are malignant. Since the hook-preventing measurement circuits are different from the original circuits, we no longer expect the thresholds to be the same as before. For these hook-preventing variants of our surface code realization on the torus, we find fault-tolerance thresholds of approximately \(0.61\%\) when there is no idle noise and \(0.43\%\) when there is idle noise. This represents a modest decrease from the threshold value of the code using the original measurement circuits. ###### Acknowledgements. We are very grateful to A. Paetznick for many useful discussions and help with the decoder implementation and performance assessment. We also thank N. Delfosse for helpful discussions about the decoder construction, M. Beverland for suggesting dynamic weight assignment of decoding graphs for the hook-preventing circuits, and J. Weston for assistance setting up the large-scale simulation pipeline on Azure. We thank J. Haah, M. Hastings, C. Nayak, and K. Svore for helpful feedback. ## Appendix A Interleaved Stabilizer Measurement Circuits We can interleave \(M_{ZZZZ}\) and \(M_{XXXX}\) circuits while running them on the same schedule, i.e. \((0Z,0X),(1Z,1X),\ldots\), by choosing the order in which data qubits are addressed as shown in Fig. 27. This has the slight benefit of requiring \(4r+2\) steps for \(r\) rounds of stabilizer measurement, but it turns out to have significant disadvantages when compared to the pipelined measurement schedule presented in Sec. II. Notably, the pipelined measurement schedule effectively alternates between \(M_{ZZZZ}\) and \(M_{XXXX}\) measurements, while the interleaving effectively performs the measurements simultaneously. This means the pipelined circuit is more efficient when used in the dead component protocols of Sec. V, since it naturally implements the necessary alternation between non-commuting \(Z\)-type and \(X\)-type \(n\)-gons, without reducing the measurement rate. In contrast, using the interleaved measurement schedule would require us to perform only half of the non-commuting \(n\)-gon measurement circuits in a given circuit round. Furthermore, considering the measurement loops in Majorana hardware (discussed in Sec. VI) for the interleaved measurement schedule, we find that using single-rail semiconductor layouts would require each step to be split into two steps in order to avoid physically conflicting measurements. This would double the measurement period to 8 steps, in contrast with the pipelined circuit which could be implemented in single-rail layouts with a 5 step period. In terms of code performance, for the best case scenario, i.e. ignoring these dead component and single-rail issues, we find that the interleaved and pipelined scheduling yield very similar performance data. ## Appendix B Hook Preventing Modifications for the Pentagonal Tiling Realization of the Surface Code The pentagonal tiling surface code realization of Ref. [9] utilizes two auxiliary qubits for each 4-gon stabilizer measurement, the circuit of which is shown in Fig. 28. In contrast to our realization, circuit noise for the pentagonal tiling circuits results in bidirectional hook errors, as discussed in Ref. [9].
In particular, for the \(M_{ZZZZ}\) circuit, a readout error at the \(M_{X_{A}X_{B}}\) measurement is equivalent to a \(Z_{1}Z_{3}\) or \(Z_{2}Z_{4}\) error on the data qubits, while a \(Z_{A}Z_{B}\) error at the same measurement is equivalent to a \(Z_{1}Z_{2}\) or \(Z_{3}Z_{4}\) error on the data qubits. This bidirectionality makes hook errors more problematic for the pentagonal tiling realization. For example, one cannot align these hook errors to be perpendicular to the direction of the corresponding logical operators. Figure 27: The stabilizer measurement circuits can be interleaved and run on a synchronous schedule, i.e. \((0Z,0X),(1Z,1X),\ldots\), by addressing the data qubits in a different order than the pipelined measurement schedule presented in Sec. II. This interleaved measurement schedule has significant disadvantages compared to the pipelined schedule. Applying our hook-preventing idea to the pentagonal tiling circuit by repeating the pairwise auxiliary qubit measurement, as shown in Fig. 29, will eliminate the hook errors corresponding to the readout errors, though not the two-qubit Pauli errors. The remaining hook errors of our hook-preventing pentagonal tiling circuits are unidirectional with the direction correlated with the error type (though oppositely correlated with our non-hook-preventing circuits): the \(Z\)-plaquettes' hook errors correspond to \(ZZ\) data qubit errors in the horizontal direction and the \(X\)-plaquettes' hook errors correspond to \(XX\) data qubit errors in the vertical direction. We note that this is the opposite directionality of hook errors that we found for our stabilizer measurement circuits in Sec. III. Again, one way of addressing these remaining unidirectional hook errors is to choose logical operators to be aligned perpendicular to the corresponding type of hook errors. There is another, somewhat more drastic modification one can make to the stabilizer measurement circuits of Ref. [9] that prevents their hook errors in the other direction. Comparing our \(M_{ZZZZ}\) circuit in Fig. 2 with the \(M_{ZZZZ}\) circuit in Fig. 28, we can retrospectively view our circuit as a modification of the pentagonal tiling \(M_{ZZZZ}\) circuit by introducing an additional auxiliary qubit and appropriate measurements to obtain an equivalent circuit. This modification has the effect of trading the horizontal hook error due to a \(Z_{A}Z_{B}\) error at the pairwise auxiliary qubit measurement in the pentagonal tiling circuit for a vertical hook error due to a \(Z_{B}\) error between the two pairwise auxiliary qubit measurements in our circuit, again leaving only unidirectional hook errors. Figure 28: The \(M_{ZZZZ}\) circuit from Ref. [9] for the pentagonal tiling realization of the surface code. ## Appendix C Dead Components in Other Surface Code Realizations In this appendix, we demonstrate our dead components strategy for the measurement-based pentagonal tiling surface code realization of Ref. [9] and the CNOT gate-based realization of the surface code. Step 1 is the same for all realizations, so we can use the modifications shown in Fig. 10 for removing dead data qubits, with the understanding that the array of auxiliary qubits and their collateral removals should be replaced with that of the given realization. For the measurement-based pentagonal tiling realization, the splittings of plaquettes for steps 2 and 3 are shown in Figs. 30 and 31, respectively.
For the CNOT gate-based realization, the splittings of plaquettes for steps 2 and 3 are shown in Fig. 32. We note that our strategy may not constitute a desirable trade-off for the CNOT gate-based realization in hardware where measurements are a prohibitively costly resource, as it would increase the number of measurements performed for each splitting. We note that the pentagonal tiling realization of Ref. [9] is pipelined in a manner that exhibits a natural alternation between \(Z\)-type and \(X\)-type plaquette measurements, similar to our surface code realization. As such, it also has the advantage of not needing to halve the rate at which the non-commuting \(n\)-gons are measured, as compared to the measurement rate of undamaged plaquettes. It is worth mentioning that a similar advantage could potentially be obtained for the CNOT gate-based realization of the surface code by using an appropriate pipelining of the \(Z\)-type and \(X\)-type circuits. In particular, by offsetting the \(Z\)-type and \(X\)-type \(n\)-gon measurement circuits by three steps (and carefully choosing the order that data qubits are addressed in a circuit), we can obtain the same operating period as the interleaved circuits used in Ref. [17], except now the \(n\)-gon measurements effectively alternate between \(Z\)-type and \(X\)-type. This advantage may be undone for hardware in which the measurement time is substantially longer than the CNOT gate time, as measurements will occur during four steps, rather than two steps per cycle with such pipelining. Figure 29: A modification of the \(M_{ZZZZ}\) circuit from Ref. [9] that prevents the problematic hook errors associated with readout error at the \(M_{X_{A}X_{B}}\) measurement. Incorporating this and a similar modification of the \(M_{XXXX}\) circuit reduces the problem of bidirectional hook errors to unidirectional hook errors in the pentagonal tiling realization of the surface code. Figure 30: The possible splittings of \(Z\)-type \(n\)-gons for step 2, where dead auxiliary qubits are removed from the code operation for the measurement-based pentagonal tiling realization of the surface code. (Splittings related to these by rotations and reflections are not shown separately.) The splittings for \(X\)-type \(n\)-gons may be obtained from these by 90 degree rotations. Figure 31: The possible splittings of \(Z\)-type \(n\)-gons for step 3, where dead connections are removed from the code operation for the measurement-based pentagonal tiling realization of the surface code. (Splittings related to these by rotations and reflections are not shown separately.) The splittings for \(X\)-type \(n\)-gons may be obtained from these by 90 degree rotations. Figure 32: The splittings of \(Z\)-type or \(X\)-type plaquettes for steps 2 and 3, where dead auxiliary qubits and connections are removed from the code operation for the CNOT gate-based realization of the surface code. Not all possible \(n\)-gon splittings are shown because they all follow the same pattern: for step 2, a dead auxiliary qubit splits the \(n\)-gon into \(n\) 1-gons; for step 3, a dead connection splits the \(n\)-gon into an \((n-1)\)-gon and a 1-gon, according to which connection is dead.
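The uniform splitting pattern stated in the caption of Fig. 32 is simple enough to encode compactly; the sketch below is our own toy illustration, with plaquettes represented as tuples of data qubits.

```python
# Toy encoding of the Fig. 32 splitting rules for the CNOT gate-based
# realization (our own representation, invented for illustration).

def split_dead_auxiliary(ngon):
    """Step 2: the auxiliary is dead, so each data qubit becomes a 1-gon."""
    return [(q,) for q in ngon]

def split_dead_connection(ngon, dead_qubit):
    """Step 3: the connection to one data qubit is dead; split it off alone."""
    rest = tuple(q for q in ngon if q != dead_qubit)
    return [rest, (dead_qubit,)]

plaquette = (0, 1, 2, 3)
print(split_dead_auxiliary(plaquette))        # [(0,), (1,), (2,), (3,)]
print(split_dead_connection(plaquette, 2))    # [(0, 1, 3), (2,)]
```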
2302.02983
Finslerian wormhole solution in the framework of modified gravity
This article investigates the properties of a wormhole model in a specific gravity theory, namely $f(Ric, T)=Ric+2\lambda T$. The wormhole solution is analyzed using an exponential shape function. The study examines various parameters, such as density, radial pressure, transverse pressure, equation-of-state parameters, and energy conditions, within the framework of deformed gravity. The research emphasizes the influence of the parameter $\lambda$ on energy condition violations and the equilibrium state of the Finslerian wormhole solution. These effects are attributed to anisotropic and hydrostatic forces present in modified gravity. The study demonstrates that the gravity model effectively captures the characteristics of wormholes within the Finslerian space-time. Additionally, the identified features of the wormhole are utilized to visualize its structure by creating a three-dimensional representation of the embedded surface. In summary, this research contributes to understanding wormholes in modified gravity theories, highlighting the importance of the parameter $\lambda$ in determining their behavior and properties.
Manjunath Malligawad, S. K. Narasimhamurthy, Z. Nekouee, Mallikarjun Y. Kumbar
2023-02-06T18:18:23Z
http://arxiv.org/abs/2302.02983v2
# The Finslerian Wormhole model with \(f(R,T)\) gravity ###### Abstract In this article, based on Finsler geometry, we study the wormhole model in the \(f(R,T)=R+2\lambda T\) gravity theory with an exponential shape function. We derive the gravitational field equations for this wormhole solution. Using the exponential shape function, we discuss the character of parameters such as density, radial pressure, transverse pressure, equation-of-state parameters, and energy conditions in \(f(R,T)\) gravity. We study the significant role of the parameter \(\lambda\) in the violation of energy conditions and also in the equilibrium state of the Finslerian wormhole solution, which is caused by the anisotropic force and hydrostatic force in \(f(R,T)\) gravity. Further, we observe in the framework of Finslerian space-time that the \(f(R,T)\) gravity model successfully captures the features of wormholes. Using these features, we plot and visualize the 3-D wormhole structure in Fig. 2. ## I Introduction The theme of the wormhole was first discussed by the German mathematician H. Weyl [1]. He explained that the structure of a wormhole is an asymptotically flat tube-like structure: a tunnel with two entrances at two distinct points of different space-times, or a bridge joining two ends of the same space-time. The issue of wormholes has therefore attracted the attention of many geometers and physicists. The Schwarzschild wormhole is the first type of wormhole solution. The wormhole is the most well-known and extensively researched idea in General Relativity (GR) and modified theories of gravity, and wormholes arise as solutions of GR. These hypothetical shortcuts, generally known as Einstein-Rosen bridges, were mathematically proven to exist for the first time by [2]. But [3] showed, in a paper published in 1962, that this type of wormhole is unstable: although it joins two regions of the same universe, it pinches off too quickly for any object moving slower than the speed of light to travel from one exterior region to the other. There are two kinds of wormholes: static and dynamic wormholes. The idea of the traversable wormhole was proposed by [4] with the intention of using a wormhole for time travel or interstellar travel. It is worth noting that the idea of the traversable wormhole is distinct from the Einstein-Rosen bridge theory. To maintain the wormhole geometry, a key constituent is the existence of exotic matter in its throat, which pushes the wormhole walls apart and prevents them from collapsing; this keeps the throat from contracting, allowing it to be traversed. The rigidity of the traversable wormhole has been examined in order to reduce the violation of the null energy condition. The most frequently researched issue in the wormhole concept is the violation of energy conditions. [5] proved that all traversable wormholes, both static and dynamic, violate the null energy condition. The energy conditions are related as DEC \(\subset\) WEC \(\subset\) NEC and SEC \(\subset\) NEC, so violating the NEC involves violating all of the energy conditions. In reality, traversable wormholes cannot be realized in GR even under conditions weaker than the NEC [6], according to the topological censorship theorem. In \(f(R)\) gravity [7], traversable wormholes were examined by [8]. They supported the wormhole constructions and identified the causes of the violation of the null energy condition.
\(f(R)\) and \(f(R,T)\) theories have received a lot of attention recently as a way to explain several elements of the universe. By adding the stress-energy tensor term, [9] generalized the \(f(R)\) gravitational theory and developed the \(f(R,T)\) MGT. Gravity models depend on the source term since matter and gravity are connected. Using \(f(R,T)\) gravity, [10] investigated the cosmic evolution of the deceleration and the equation of state (EoS) parameters in the context of the FLRW model. In the study of static wormholes in the \(f(R,T)\) theory of gravity, [11] provided some wormhole models with various matter content assumptions. [12] investigated traversable wormholes with one extra space-like compact dimension. In this respect, they considered how it affected the energy density, scale factor, and shape function. In \(f(R,T)\) modified gravity theory (MGT), [13] describe the geometry of the wormhole by using an exponential shape function. The exponential \(f(R,T)\) MGT solution is provided by [14], and the concept of exponential \(f(R,T)\) MGT is considered original in the literature. Finsler geometry has caught the attention of many physicists recently because it can explain a variety of issues that Einstein's gravity is unable to explain. Riemannian geometry is one of the special cases of Finsler geometry. Modern standard high-energy theories and Finsler-like gravity theories are consistent with experimental results. Without invoking the dark matter hypothesis, Finsler geometry provides a better method for addressing the experimentally observed flat rotation curves of spiral galaxies [15]. A traversable wormhole in Finsler geometry was created by [16]. They discovered exact solutions for various types of shape functions, red-shift functions, and EoS. They also talked about the features of wormhole models. Wormhole geometry has also been investigated thoroughly using modified theories, studying whether stable wormhole solutions exist without the need for exotic matter; MGTs allow us to discuss the issue of exotic matter [17]. Within the context of Finsler geometry, [18] investigated the wormhole model in \(f(R)\) gravity with an exponential shape function. They also examined the gravitational field equation for finding wormhole solutions in the Finslerian framework by taking into account the anisotropic energy-momentum tensor. Our inspiration comes from this exponential \(f(R)\) gravity model: we solve the wormhole field equations using the Finslerian \(f(R,T)\) MGT formalism and apply the same form of shape function to the \(f(R,T)\) MGT. Specifically, we focus on the family of \(f(R,T)\) MGT given by \(f(R,T)=R+f(T)\), where \(f(T)=2\lambda T\), \(R\) represents the Ricci scalar, \(T\) represents the trace of the energy-momentum tensor, and \(\lambda\) is a parameter. In this article, we explore the Finslerian wormhole model under the \(f(R,T)\) MGT with an exponential shape function and examine the wormhole solutions, showing for various parameter values the violation of the energy conditions. The following planning guides the structure of the paper. In section (II), we briefly review the ideas of Finsler geometry. In section (III), we describe the Finsler geometry formalization in \(f(R,T)\) MGT and discuss the energy conditions. In section (IV), we discuss and analyze the obtained outcomes. The study concludes with section (V), where we provide some concluding observations.
## II Preliminaries and notations of Finsler geometry A Finsler structure on a manifold \(M\) is defined as a function \(F:TM\rightarrow[0,\infty)\) which satisfies the properties below: 1. Regularity: \(F\) is a smooth function on \(TM\backslash\{0\}\). 2. Positive Homogeneity: \(F(x,cy)=cF(x,y)\) for all \(c>0\). 3. Strong Convexity: The \(n\times n\) Hessian matrix \[g_{\mu\nu}=\frac{\partial^{2}(\frac{1}{2}F^{2})}{\partial y^{\mu}\partial y^{\nu}}=\frac{1}{2}\dot{\partial}_{\mu}\dot{\partial}_{\nu}F^{2}, \tag{1}\] is positive definite on \(TM\backslash\{0\}\); it is a function of \((x^{i},y^{i})\). The pair \((M,F)\) is called a Finsler space. A Finslerian metric is referred to as Riemannian if \(F^{2}\) is quadratic in \(y\). For the Finsler manifold, the geodesic equation is as follows: \[\frac{d^{2}x^{\mu}}{d\tau^{2}}+2G^{\mu}=0, \tag{2}\] where \[G^{\mu}=\frac{1}{4}g^{\mu\nu}\left(\frac{\partial^{2}F^{2}}{\partial x^{\lambda}\partial y^{\nu}}y^{\lambda}-\frac{\partial F^{2}}{\partial x^{\nu}}\right), \tag{3}\] are called the geodesic spray coefficients. We observe that along a geodesic the Finslerian structure \(F(x,y)\) is constant. The Finslerian modified Ricci tensor suggested by [19] is as follows, \[Ric_{\mu\nu}=\frac{\partial^{2}(\frac{1}{4}F^{2}Ric)}{\partial y^{\mu}\partial y^{\nu}}, \tag{4}\] where \(Ric\) stands for the Ricci scalar. \(Ric\) is a geometric invariant with the following expression, \[Ric=g^{\mu\nu}R_{\mu\nu}. \tag{5}\] Eq. (5) holds on any basis. Moreover, in Eq. (5), \(R_{\mu\nu}\) is derived from the flag curvature. It may be stated as \[R^{\mu}_{\ \nu}=\frac{1}{F^{2}}\left(2\frac{\partial G^{\mu}}{\partial x^{\nu}}-y^{\lambda}\frac{\partial^{2}G^{\mu}}{\partial x^{\lambda}\partial y^{\nu}}+2G^{\lambda}\frac{\partial^{2}G^{\mu}}{\partial y^{\lambda}\partial y^{\nu}}-\frac{\partial G^{\mu}}{\partial y^{\lambda}}\frac{\partial G^{\lambda}}{\partial y^{\nu}}\right). \tag{6}\] Thus, \(Ric\) is as follows, \[Ric\equiv R^{\mu}_{\ \mu},\] \[R^{\mu}_{\ \mu}=\frac{1}{F^{2}}\left(2\frac{\partial G^{\mu}}{\partial x^{\mu}}-y^{\lambda}\frac{\partial^{2}G^{\mu}}{\partial x^{\lambda}\partial y^{\mu}}+2G^{\lambda}\frac{\partial^{2}G^{\mu}}{\partial y^{\lambda}\partial y^{\mu}}-\frac{\partial G^{\mu}}{\partial y^{\lambda}}\frac{\partial G^{\lambda}}{\partial y^{\mu}}\right). \tag{7}\] The Finslerian modified formula for the scalar curvature is \[S=g^{\mu\nu}Ric_{\mu\nu}, \tag{8}\] and the Einstein tensor formula is \[G_{\mu\nu}=Ric_{\mu\nu}-\frac{1}{2}g_{\mu\nu}S. \tag{9}\] Since it is derived from \(Ric\), the Finslerian modified Einstein tensor does not depend on the connections; it relies solely on the Finslerian structure. As a result, the Finslerian gravitational field equations are also insensitive to the connections. To search for wormhole structure we choose the wormhole metric of [4] and consider the Finsler structure to be of the following form, \[F^{2}=e^{2a(r)}y^{t}y^{t}-\left(1-\frac{b(r)}{r}\right)^{-1}y^{r}y^{r}-r^{2}\bar{F}^{2}(\theta,\phi,y^{\theta},y^{\phi}), \tag{10}\] where \(\bar{F}\) is a Finsler structure in two dimensions, and we assume that \(\bar{F}\) has the form \[\bar{F}^{2}=y^{\theta}y^{\theta}+A(\theta,\phi)y^{\phi}y^{\phi}. \tag{11}\] Within the Finsler framework in Eq. (10), \(a(r)\) stands for the redshift function, while \(b(r)\) is the shape function [20]. \(a(r)\) describes the redshift effect and tidal force in the wormhole space-time. The shape function \(b(r)\) specifies how the wormhole is shaped.
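As an illustration of Eq. (1), the diagonal wormhole metric implied by Eqs. (10)-(11) can be recovered by differentiating \(F^{2}\) twice in the \(y\) variables; a minimal SymPy sketch (our own, not the authors' code):

```python
# SymPy sketch (ours) applying Eq. (1) to the wormhole structure (10)-(11):
# half the Hessian of F^2 in the y variables returns the diagonal metric.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
yt, yr, yth, yph = sp.symbols('y_t y_r y_theta y_phi')
a = sp.Function('a')(r)          # redshift function
b = sp.Function('b')(r)          # shape function
A = sp.Function('A')(theta, phi)

F2 = (sp.exp(2*a)*yt**2 - (1 - b/r)**-1*yr**2
      - r**2*(yth**2 + A*yph**2))                 # Eqs. (10)-(11)

y = (yt, yr, yth, yph)
g = sp.Matrix(4, 4, lambda i, j: sp.diff(F2, y[i], y[j])/2)
print(sp.simplify(g))   # diag(e^{2a}, -(1-b/r)^{-1}, -r^2, -r^2 A)
```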
The wormhole throat is positioned at a minimal surface radius \(b(r_{0})=r_{0}\), and the nonmonotonic radial coordinate \(r\) falls from infinity to \(r_{0}\) before increasing from \(r_{0}\) back to infinity. To prevent the appearance of an event horizon, \(a(r)\) must be finite everywhere. For the traversable wormhole metric, no singularity exists at the throat. \(b(r)\) needs to meet the requirements [4] below to produce wormhole solutions: * The radial coordinate \(r\) has a range of values \(r_{0}\leq r\leq\infty\), where \(r_{0}\) is the throat radius. * At the throat \(b(r)\) fulfills the requirement \(b(r_{0})=r_{0}\), while out of the throat, i.e., for \(r>r_{0}\), \(0<1-\frac{b(r)}{r}\). * The shape function \(b(r)\) at the throat must satisfy the flaring-out requirement, i.e., \(b^{\prime}(r_{0})<1\), where the derivative is with respect to \(r\). * The minimum requirement for the space-time geometry to be asymptotically flat is \(\frac{b(r)}{r}\to 0\) as \(|r|\to\infty\). The geometry of the wormhole space-time is described by the requirements listed above; a quick numerical check of these requirements is sketched below. For the two-dimensional Finsler structure \(\bar{F}\) of Eq. (11) one can obtain the Finslerian metric as \[\bar{g}_{ij}=diag(1,A(\theta,\phi)), \tag{12}\] \[\bar{g}^{ij}=diag\left(1,\frac{1}{A(\theta,\phi)}\right), \tag{13}\] where \((i,j=\theta,\phi)\). The \(\bar{R}ic\) of the Finslerian structure \(\bar{F}\) can be found by applying the formula from the reference [16], \[\bar{R}ic=\frac{1}{2A}\left[-\frac{\partial^{2}A}{\partial\theta^{2}}+\frac{1}{2A}\left(\frac{\partial A}{\partial\theta}\right)^{2}\right]. \tag{14}\] For the Finsler structure \(\bar{F}\), we can better understand wormholes in the Finslerian theory by setting \(\bar{R}ic=\eta\), where \(\eta\) is assumed to be constant. We find the solution of the differential equation (14) in three separate circumstances (\(\eta>0,\ \eta=0,\ \eta<0\)). Consequently, the Finsler structure \(\bar{F}\) of Eq. (11) changes to \[\bar{F}^{2}=y^{\theta}y^{\theta}+C\sin^{2}(\sqrt{\eta}\theta)y^{\phi}y^{\phi}\qquad\text{for}\ \ (\eta>0), \tag{15}\] \[\bar{F}^{2}=y^{\theta}y^{\theta}+C\theta^{2}y^{\phi}y^{\phi}\qquad\qquad\qquad\text{for}\ \ (\eta=0), \tag{16}\] \[\bar{F}^{2}=y^{\theta}y^{\theta}+C\sinh^{2}(\sqrt{-\eta}\theta)y^{\phi}y^{\phi}\quad\text{for}\ \ (\eta<0). \tag{17}\] Now let us use \(C=1\). For the solution \(\eta>0\) the Finslerian wormhole structure Eq. (10) can be written as \[F^{2}=e^{2a(r)}y^{t}y^{t}-\left(1-\frac{b(r)}{r}\right)^{-1}y^{r}y^{r}-r^{2}\bigg{(}y^{\theta}y^{\theta}+\sin^{2}(\sqrt{\eta}\theta)y^{\phi}y^{\phi}\bigg{)}. \tag{18}\] The Riemannian metric is denoted by \(\alpha\), and the corresponding Riemannian wormhole structure is provided by \[\alpha^{2}=e^{2a(r)}y^{t}y^{t}-\left(1-\frac{b(r)}{r}\right)^{-1}y^{r}y^{r}-r^{2}\bigg{(}y^{\theta}y^{\theta}+\sin^{2}(\theta)y^{\phi}y^{\phi}\bigg{)}. \tag{19}\] Using Eq. (19), we can rewrite Eq. (18) as \[F^{2}=\alpha^{2}+r^{2}(\sin^{2}\theta-\sin^{2}\sqrt{\eta}\theta)y^{\phi}y^{\phi}. \tag{20}\] If we take \(r^{2}(\sin^{2}\theta-\sin^{2}\sqrt{\eta}\theta)y^{\phi}y^{\phi}=\beta^{2}\) into consideration, then Eq. (20) yields \[F^{2}=\alpha^{2}(1+s^{2}), \tag{21}\] where \[s=\frac{\beta}{\alpha}=\frac{b_{\phi}y^{\phi}}{\alpha}.\] We consider \(b_{\phi}=r\sqrt{\sin^{2}\theta-\sin^{2}\sqrt{\eta}\theta}\). Additionally, \(\beta=b_{\mu}y^{\mu}\) denotes the differential 1-form where \(b_{\mu}=(0,0,0,b_{\phi})\).
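As an aside, the four shape-function requirements listed above are easy to check numerically for a candidate \(b(r)\); a minimal sketch (ours), assuming the exponential shape function that is adopted later in Eq. (52) with throat radius \(r_{0}=0.5\):

```python
# Quick numerical check (ours) of the four shape-function requirements,
# assuming b(r) = r0*exp(1 - r/r0) as in Eq. (52) and r0 = 0.5.
import numpy as np

r0 = 0.5
r = np.linspace(r0, 50.0, 100000)
b = r0*np.exp(1.0 - r/r0)
db = -np.exp(1.0 - r/r0)                   # analytic b'(r)

print("throat,        b(r0) = r0:", np.isclose(b[0], r0))
print("out of throat, 1 - b/r > 0:", np.all(1.0 - b[1:]/r[1:] > 0))
print("flaring out,   b'(r0) < 1:", db[0] < 1.0)
print("asymptotic flatness, b/r -> 0:", b[-1]/r[-1] < 1e-6)
```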
Thus, Eq. (21) yields \[F=\alpha\varphi(s), \tag{22}\] where \(\varphi(s)=\sqrt{1+s^{2}}\). The Finslerian wormhole structure \(F\) indicated by Eq. (22) represents a Finsler space with an \((\alpha,\beta)\)-metric. With respect to the Riemannian metric \(\alpha\), the symbol "\(|\)" signifies the covariant derivative. The equations for \(K_{V}(\alpha)\) and \(K_{V}(\beta)\) are then provided by \[K_{V}(\alpha)=\frac{1}{2\alpha}(V_{\mu|\nu}+V_{\nu|\mu})y^{\mu}y^{\nu},\] \[K_{V}(\beta)=\bigg{(}V^{\mu}\frac{\partial b_{\nu}}{\partial x^{\mu}}+b_{\mu}\frac{\partial V^{\mu}}{\partial x^{\nu}}\bigg{)}y^{\nu}.\] The Finsler space's Killing equation, \(K_{V}(F)=0\), is derived from isometric transformations [21], \[\bigg{(}\varphi(s)-s\frac{\partial\varphi(s)}{\partial s}\bigg{)}K_{V}(\alpha)+\frac{\partial\varphi(s)}{\partial s}K_{V}(\beta)=0. \tag{23}\] From Eq. (22), the Killing equation (23) becomes \[\alpha K_{V}(\alpha)+\beta K_{V}(\beta)=0,\] which yields \[K_{V}(\alpha)=0\ \ \text{and}\ \ K_{V}(\beta)=0, \tag{24}\] and as a result we have \[V_{\mu|\nu}+V_{\nu|\mu}=0, \tag{25}\] \[V^{\mu}\frac{\partial b_{\nu}}{\partial x^{\mu}}+b_{\mu}\frac{\partial V^{\mu}}{\partial x^{\nu}}=0. \tag{26}\] Eq. (25) is equivalent to the Killing equation of Riemannian space-time, and Eq. (26) restricts Eq. (25). The Lie derivatives \(L_{V}(\alpha)=0\) and \(L_{V}(\beta)=0\) are comparable to the Killing equations Eq. (25) and Eq. (26), respectively. In addition, we have \(\alpha^{2}=h_{\mu\nu}y^{\mu}y^{\nu}\), and we can write the Riemannian wormhole metric as \[h_{\mu\nu}=diag\left(e^{2a(r)},-\left(1-\frac{b(r)}{r}\right)^{-1},-r^{2},-r^{2}\sin^{2}\theta\right).\] The Riemannian space-time isometric symmetry is broken as a result of the Killing equation (26). The redshift function should be positive for every \(r>r_{0}\) if there are no horizons or singularities. In our current study, the redshift function \(a(r)\) is taken to be constant; as a result, \(a^{\prime}(r)=0\). One of the crucial properties of traversable wormholes is that the tidal gravitational forces that a traveler experiences must be small, i.e., \(a^{\prime}(r)=0\). The zero-tidal-force solutions have been examined by [4], who also took into account the vanishing redshift function. As a result, the Finslerian wormhole structure Eq. (10) becomes \[F^{2}=y^{t}y^{t}-\left(1-\frac{b(r)}{r}\right)^{-1}y^{r}y^{r}-r^{2}\left(y^{\theta}y^{\theta}+\sin^{2}(\sqrt{\eta}\theta)y^{\phi}y^{\phi}\right). \tag{27}\] One can derive the Finsler metric as \[g_{\mu\nu}=diag\left(1,-\left(1-\frac{b(r)}{r}\right)^{-1},-r^{2},-r^{2}\sin^{2}(\sqrt{\eta}\theta)\right), \tag{28}\] \[g^{\mu\nu}=diag\left(1,-\left(1-\frac{b(r)}{r}\right),\frac{-1}{r^{2}},\frac{-1}{r^{2}\sin^{2}(\sqrt{\eta}\theta)}\right). \tag{29}\] When looking at the Finslerian wormhole, a crucial role is played by the Ricci scalar \(\bar{R}ic=\eta\) of the two-dimensional Finsler structure \(\bar{F}\). We conclude that the structure \(F\) has a constant flag curvature because \(\eta=\) constant. The geodesic spray coefficients can be found by putting the Finslerian wormhole structure Eq. (27) into Eq. (3).
\[G^{t}=0, \tag{30}\] \[G^{r}=\frac{rb^{\prime}-b}{4r(r-b)}y^{r}y^{r}-\frac{r-b}{2}y^{\theta}y^{\theta}-\frac{r-b}{2}\sin^{2}(\sqrt{\eta}\theta)y^{\phi}y^{\phi}, \tag{31}\] \[G^{\theta}=\frac{1}{r}y^{r}y^{\theta}-\frac{\sqrt{\eta}}{2}\sin(\sqrt{\eta}\theta)\cos(\sqrt{\eta}\theta)y^{\phi}y^{\phi}, \tag{32}\] \[G^{\phi}=\frac{1}{r}y^{r}y^{\phi}+\sqrt{\eta}\cot(\sqrt{\eta}\theta)y^{\theta}y^{\phi}. \tag{33}\] By substituting the expressions from Eqs. (30-33) in Eq. (7), we obtain \[F^{2}Ric=\frac{rb^{\prime}-b}{r^{2}(r-b)}y^{r}y^{r}+\left(\eta-1+\frac{b}{2r}+\frac{b^{\prime}}{2}\right)y^{\theta}y^{\theta}+\left(\eta-1+\frac{b}{2r}+\frac{b^{\prime}}{2}\right)\sin^{2}(\sqrt{\eta}\theta)y^{\phi}y^{\phi}. \tag{34}\] From Eq. (4), we deduce the Finslerian modified Ricci tensor components as \[Ric_{tt}=0, \tag{35}\] \[Ric_{rr}=\frac{rb^{\prime}-b}{r^{2}(r-b)}, \tag{36}\] \[Ric_{\theta\theta}=\left(\eta-1+\frac{b}{2r}+\frac{b^{\prime}}{2}\right), \tag{37}\] \[Ric_{\phi\phi}=\left(\eta-1+\frac{b}{2r}+\frac{b^{\prime}}{2}\right)\sin^{2}(\sqrt{\eta}\theta). \tag{38}\] By substituting the expressions Eqs. (35-38) and Eq. (29) in Eq. (8), we can determine the scalar curvature for the Finslerian wormhole space-time as \[S=-\frac{2}{r^{2}}(b^{\prime}+\eta-1). \tag{39}\] Using the components of the Finslerian modified Ricci tensor and the scalar curvature, from Eq. (9) one can quickly determine the components of the Finslerian modified Einstein tensor as \[G^{t}_{t}=\frac{1}{r^{2}}(b^{\prime}+\eta-1), \tag{40}\] \[G^{r}_{r}=\frac{b}{r^{3}}+\frac{1}{r^{2}}(\eta-1), \tag{41}\] \[G^{\theta}_{\theta}=G^{\phi}_{\phi}=\frac{rb^{\prime}-b}{2r^{3}}. \tag{42}\] The components of the Finslerian modified Einstein tensor, Eqs. (40-42), depend only on \(r\) and are direction-independent.
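These curvature expressions can be verified symbolically; a minimal SymPy sketch (ours, not the authors' code) confirming that Eqs. (39)-(42) follow from the Ricci components (35)-(38) and the metric (28)-(29) through Eqs. (8)-(9):

```python
# SymPy check (ours) that the scalar curvature (39) and the Einstein tensor
# components (40)-(42) follow from Eqs. (35)-(38), (28)-(29) via Eqs. (8)-(9).
import sympy as sp

r, eta, theta = sp.symbols('r eta theta', positive=True)
b = sp.Function('b')(r)
bp = sp.diff(b, r)

sin2 = sp.sin(sp.sqrt(eta)*theta)**2
g_dn = sp.diag(1, -(1 - b/r)**-1, -r**2, -r**2*sin2)     # Eq. (28)
g_up = g_dn.inv()                                        # Eq. (29)

Ric = sp.diag(0,
              (r*bp - b)/(r**2*(r - b)),
              eta - 1 + b/(2*r) + bp/2,
              (eta - 1 + b/(2*r) + bp/2)*sin2)           # Eqs. (35)-(38)

S = sum(g_up[i, i]*Ric[i, i] for i in range(4))          # Eq. (8)
G = g_up*(Ric - sp.Rational(1, 2)*g_dn*S)                # Eq. (9), mixed form

print(sp.simplify(S + 2*(bp + eta - 1)/r**2))                 # 0 -> Eq. (39)
print(sp.simplify(G[0, 0] - (bp + eta - 1)/r**2))             # 0 -> Eq. (40)
print(sp.simplify(G[1, 1] - (b/r**3 + (eta - 1)/r**2)))       # 0 -> Eq. (41)
print(sp.simplify(G[2, 2] - (r*bp - b)/(2*r**3)))             # 0 -> Eq. (42)
```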
With the help of the formula, the Chern connection is determined by \[\Gamma^{\alpha}_{\nu\upsilon}=\frac{\partial^{2}G^{\alpha}}{\partial y^{\nu}\partial y^{\upsilon}}-A^{\alpha}_{\nu\upsilon|\beta}\frac{y^{\beta}}{F}. \tag{43}\] The term \(A_{\mu\nu\upsilon}=g_{\mu\alpha}A^{\alpha}_{\nu\upsilon}\) in the above equation, which represents the Cartan connection, is defined as \[A_{\mu\nu\upsilon}=\frac{F}{4}\frac{\partial^{3}F^{2}}{\partial y^{\mu}\partial y^{\nu}\partial y^{\upsilon}}. \tag{44}\] For all indices \(\mu,\nu\) and \(\upsilon\) we obtain \(A_{\mu\nu\upsilon}=0\) for the Finslerian wormhole structure Eq. (27). From this, we obtain that the Cartan connections for the Finslerian wormhole space-time vanish. As a result, we can calculate the Chern connection coefficients using Eq. (43). So we have \[\Gamma^{t}_{\mu\nu}=0\quad\ (\text{for all indices }\mu,\nu),\qquad\Gamma^{\theta}_{r\theta}=\Gamma^{\phi}_{r\phi}=\frac{1}{r},\] \[\Gamma^{r}_{rr}=\frac{rb^{\prime}-b}{2r(r-b)},\qquad\Gamma^{r}_{rt}=\Gamma^{r}_{r\theta}=\Gamma^{r}_{r\phi}=0,\] \[\Gamma^{r}_{\theta\theta}=-(r-b),\qquad\Gamma^{r}_{\phi\phi}=-(r-b)\sin^{2}(\sqrt{\eta}\theta),\] \[\Gamma^{\theta}_{\theta\theta}=\Gamma^{\theta}_{\theta t}=\Gamma^{\theta}_{\theta\phi}=0,\] \[\Gamma^{\phi}_{\phi t}=\Gamma^{\phi}_{\phi\phi}=0,\qquad\Gamma^{\phi}_{\phi\theta}=\sqrt{\eta}\cot(\sqrt{\eta}\theta).\] Now we define the derivative \(\frac{\delta}{\delta x^{\mu}}\) as \[\frac{\delta}{\delta x^{\mu}}=\frac{\partial}{\partial x^{\mu}}-\frac{\partial G^{\alpha}}{\partial y^{\mu}}\frac{\partial}{\partial y^{\alpha}}.\] We can determine the covariant derivative of \(G^{\mu}_{\nu}\) by the following equation, \[G^{\mu}_{\nu|\mu}=\frac{\delta G^{\mu}_{\nu}}{\delta x^{\mu}}+\Gamma^{\mu}_{\mu\alpha}G^{\alpha}_{\nu}-\Gamma^{\alpha}_{\mu\nu}G^{\mu}_{\alpha}. \tag{46}\] From Eq. (46) all the covariant derivative components become \(G^{\mu}_{t|\mu}=G^{\mu}_{r|\mu}=G^{\mu}_{\theta|\mu}=G^{\mu}_{\phi|\mu}=0\). This leads us to the conclusion that the Finslerian modified Einstein tensor is covariantly conserved. ## III Formalism of Finsler geometry in \(f(R,T)\) gravity \(f(R,T)\) MGT has received a lot of attention in recent research and seems to be a good fit for addressing wormhole construction problems (since it does not exhibit any pathological features). For understanding the observed universe, \(f(R,T)\) MGT, which combines the Ricci scalar \(R\) and the trace \(T\) of the energy-momentum tensor, is frequently studied in the literature [9]. Since the distribution of the matter needed to create wormholes remains a difficult problem for physicists, we take the general anisotropic energy-momentum tensor [22], \[T^{\mu}_{\nu}=(\rho+p_{t})u^{\mu}u_{\nu}+(p_{r}-p_{t})x^{\mu}x_{\nu}-p_{t}\delta^{\mu}_{\nu}. \tag{47}\] Here the energy density is represented by \(\rho\), \(u^{\mu}\) is the four-velocity, such that \(u^{\mu}u_{\mu}=1\), and \(x^{\mu}\) is the space-like unit vector taken in the radial direction, such that \(x^{\mu}x_{\mu}=-1\). \(p_{r}\) is the radial pressure, measured along the direction of the space-like unit vector \(x^{\mu}\), while \(p_{t}\) represents the transverse pressure, measured perpendicular to \(x^{\mu}\). Now let us consider the anisotropic factor \(\Delta=p_{t}-p_{r}\), which measures the anisotropy when \(p_{t}\neq p_{r}\). We have \(T=\rho-p_{r}-2p_{t}\), which gives the trace of the anisotropic energy-momentum tensor. We can determine the geometry of wormholes based on the anisotropic factor. If \(\Delta<0\) (negative), the geometry of the wormhole is attractive, so the anisotropic force is directed inward. If \(\Delta>0\) (positive), the geometry of the wormhole is repulsive, and the anisotropic force is directed outward. The matter distribution of the wormhole exhibits isotropic pressure if \(\Delta=0\). The achievement of the \(f(R)\) theory in cosmology inspired us to investigate wormhole solutions, because \(f(R,T)\) MGT reduces to \(f(R)\) gravity in the limit \(\lambda\to 0\). In this section, we look at the functional form \(f(R,T)=R+f(T)\), where \(f(T)=2\lambda T\) is a function of the trace of the stress-energy tensor of matter.
As a result, it is crucial to study how \(\lambda\) affects wormhole solutions with a given shape function when matter is present. The equation for the Finslerian gravitational field is provided by \[G^{\mu}_{\nu}=(8\pi_{F}+2\lambda)T^{\mu}_{\nu}+\lambda(2\rho+T)g^{\mu}_{\nu}, \tag{48}\] where \(4\pi_{F}\) stands for the volume of the Finsler structure \(\bar{F}\) in two dimensions. If we take \(\bar{F}\) as the Finslerian sphere [7], then \(\pi_{F}\) becomes equal to \(\pi\). By combining the components of the Finslerian modified Einstein tensor Eqs. (40-42) and the energy-momentum tensor Eq. (47), and taking relativistic units (\(G=c=1\)), we arrive at the gravitational field equations from Eq. (48) for the Finslerian wormhole space-time Eq. (27) as follows, \[G^{t}_{t}=(8\pi_{F}+2\lambda)T^{t}_{t}+\lambda(2\rho+T)g^{t}_{t},\] \[\frac{1}{r^{2}}(b^{\prime}+\eta-1)=(8\pi_{F}+2\lambda)\rho+\lambda(p_{r}+2p_{t}), \tag{49}\] \[G^{r}_{r}=(8\pi_{F}+2\lambda)T^{r}_{r}+\lambda(2\rho+T)g^{r}_{r},\] \[-\frac{b}{r^{3}}-\frac{1}{r^{2}}(\eta-1)=(8\pi_{F}+3\lambda)p_{r}+\lambda(\rho+2p_{t}),\] (50) \[G^{\mu}_{\mu}=(8\pi_{F}+2\lambda)T^{\mu}_{\mu}+\lambda(2\rho+T)g^{\mu}_{\mu}\ \ (\mu=\theta,\phi),\] \[\frac{b-b^{\prime}r}{2r^{3}}=(8\pi_{F}+4\lambda)p_{t}+\lambda(\rho+2p_{r}). \tag{51}\] If we know the distribution of the matter, the gravitational field equations enable us to identify the space-time geometry, and vice versa. Additionally, they have the interesting property that one can analyze both the distribution of matter and the whole structure of space-time if one knows some of the energy-stress tensor's components and some of the space-time geometry. In the context of \(f(R,T)=R+2\lambda T\) gravity, an exponential shape function [13], \[b(r)=r_{0}e^{1-\frac{r}{r_{0}}}, \tag{52}\] is taken into consideration in the Finslerian wormhole model to investigate the energy conditions. The shape function Eq. (52) meets every requirement for supporting wormholes mentioned above (Sec. II), i.e., it fulfills \(b(r_{0})=r_{0}\) and \(1-\frac{b(r)}{r}>0\) for \(r>r_{0}\), where \(r_{0}\) is the wormhole's throat radius, as well as the flaring-out condition and the asymptotic flatness condition. Fig. 1(a) depicts the properties of the exponential shape function Eq. (52) with \(r_{0}=0.5\) and demonstrates that the shape function meets all important requirements. We constructed an embedded 2-D diagram in Fig. 1(b) and a 3-D diagram in Fig. 2 for the shape function Eq. (52) for better visualization of the wormhole; see the Appendix. By inserting Eq. (52) into Eqs. (49-51), the energy density \(\rho\), radial pressure \(p_{r}\), and transverse pressure \(p_{t}\) are obtained respectively as follows, \[\rho=\frac{-e^{1-\frac{r}{r_{0}}}+\eta-1}{r^{2}(8\pi_{F}+2\lambda)}, \tag{53}\] \[p_{r}=-\frac{r_{0}e^{1-\frac{r}{r_{0}}}+r(\eta-1)}{r^{3}(8\pi_{F}+2\lambda)}, \tag{54}\] \[p_{t}=\frac{e^{1-\frac{r}{r_{0}}}(r+r_{0})}{2r^{3}(8\pi_{F}+2\lambda)}. \tag{55}\] Fig. 3, Fig. 4, and Fig. 5 depict the plots of \(p_{r}\), \(p_{t}\), and \(\rho\) respectively as functions of \(r\) and \(\eta\) for the Finslerian wormhole solution (27) at different values of \(\lambda\); since we require \(8\pi_{F}\neq-2\lambda\) (\(\lambda\neq-12.568\)), we take \(\lambda\geq-12.5\) (considering \(\lambda=-12.5\)) and \(\lambda\leq-12.6\) (considering \(\lambda=-12.6\)), with the wormhole throat placed at \(r_{0}=0.5\). Figure 1: (a) Behavior of the exponential shape function Eq. (52) at \(r_{0}=0.5\). (b) Embedded 2-D plot of the wormhole defined by Eq. (52). Figure 2: Embedded 3-D plot of the wormhole defined by Eq. (52). In addition, by combining Eqs. (53-55), we obtain the following stress-energy components, \[\rho+p_{r}=-\frac{e^{1-\frac{r}{r_{0}}}(r+r_{0})}{r^{3}(8\pi_{F}+2\lambda)}, \tag{56}\] \[\rho+p_{t}=\frac{e^{1-\frac{r}{r_{0}}}(r_{0}-r)+2r(\eta-1)}{2r^{3}(8\pi_{F}+2\lambda)}, \tag{57}\] \[\rho+p_{r}+2p_{t}=0, \tag{58}\] \[\rho-|p_{r}|=\frac{-e^{1-\frac{r}{r_{0}}}+\eta-1-r^{2}|\frac{r_{0}e^{1-\frac{r}{r_{0}}}+(\eta-1)r}{r^{3}}|}{r^{2}(8\pi_{F}+2\lambda)}, \tag{59}\] \[\rho-|p_{t}|=\frac{-2e^{1-\frac{r}{r_{0}}}+2\eta-2-|\frac{r_{0}+r}{r^{3}}|r^{2}e^{1-\frac{r}{r_{0}}}}{2r^{2}(8\pi_{F}+2\lambda)}, \tag{60}\] \[p_{t}-p_{r}=\frac{e^{1-\frac{r}{r_{0}}}(3r_{0}+r)+2r(\eta-1)}{2r^{3}(8\pi_{F}+2\lambda)}, \tag{61}\] and \[\rho-p_{r}-2p_{t}=\frac{-e^{1-\frac{r}{r_{0}}}+\eta-1}{r^{2}(4\pi_{F}+\lambda)}. \tag{62}\] The energy conditions are particularly effective tools for predicting the behavior of strong gravitational fields and the wormhole's geometry. In GR there are seven distinct kinds of energy conditions [24], but we concentrate on just four of them: the null energy condition (NEC), weak energy condition (WEC), strong energy condition (SEC), and dominant energy condition (DEC). These energy conditions are characterized in terms of \(\rho\), \(p_{r}\), and \(p_{t}\). * According to the WEC, any time-like observer's measurement of the energy density must always be positive, i.e., \(\rho\geq 0\), and for all \(l\), \(\rho+p_{l}\geq 0\). * According to the SEC, gravity should always be attractive. In terms of the components of the energy-momentum tensor, for all \(l\), \(\rho+p_{l}\geq 0\) and \(\rho+\sum_{l}p_{l}\geq 0\). * The DEC states that any observer's measurement of the energy flux is null or time-like and should be non-negative, i.e., \(\rho\geq 0\), which results in \(\rho\geq|p_{l}|\). * The basic requirement underlying both the SEC and the WEC is the NEC, i.e., for all \(l\), \(\rho+p_{l}\geq 0\). A violation of the NEC implies that all of the above-mentioned energy conditions are invalid. The NEC follows from the SEC, but the reverse need not be true. Moreover, the SEC need not imply the WEC. The DEC implies both the NEC and the WEC, while the reverse does not need to hold in each case. Furthermore, the DEC need not imply the SEC. The SEC [25] is a result of gravity's attractive character, and its form originates directly from an analysis of a spherically symmetric metric in general relativity. To describe the geometry of wormholes, violation of the SEC is essential. Although the violation of energy requirements is quite acceptable for some quantum fields, the violation of the NEC (the matter responsible is commonly referred to as exotic matter) is a basic aspect of static traversable wormholes in the context of GR. Due to this NEC violation, all of the standard energy conditions are also violated. In Figs. (5-12), we plot the four energy conditions and the anisotropic factor for different values of \(\lambda\) as functions of \(r\) and \(\eta\), with the wormhole throat at \(r_{0}=0.5\), for the current Finslerian wormhole model in \(f(R,T)\) MGT (a numerical sketch of these checks is given below). We will need a barotropic EoS for the radial and transverse pressures; we express it by \[p_{r}=\omega_{r}\rho,\hskip 28.452756ptp_{t}=\omega_{t}\rho, \tag{63}\] where \(\omega_{r}\) and \(\omega_{t}\) stand for the radial and transverse EoS parameters, respectively. The SEC combination is specified by Eq. (58), which the inhomogeneous and anisotropic matter components fulfill identically.
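The solution components and the energy-condition terms above are easy to probe numerically; a NumPy sketch (our own; the \(\eta\) value and radial grid are our choices, mirroring Figs. 5-9):

```python
# NumPy sanity check (ours) of Eqs. (53)-(55), the identity (58), and the
# sign of the combinations (56)-(60); pi_F = pi assumes the Finslerian
# sphere, and (r, eta, lambda) mirror the values used in the figures.
import numpy as np

def components(lam, eta, r, r0=0.5, piF=np.pi):
    D = 8*piF + 2*lam
    E = np.exp(1 - r/r0)
    rho = (-E + eta - 1)/(r**2*D)              # Eq. (53)
    p_r = -(r0*E + r*(eta - 1))/(r**3*D)       # Eq. (54)
    p_t = E*(r + r0)/(2*r**3*D)                # Eq. (55)
    return rho, p_r, p_t

r = np.linspace(0.51, 5.0, 400)
for lam in (-12.5, -12.6):
    rho, p_r, p_t = components(lam, 0.6, r)
    assert np.max(np.abs(rho + p_r + 2*p_t)) < 1e-10   # Eq. (58)
    checks = {"WEC: rho": rho,
              "NEC: rho+p_r": rho + p_r, "NEC: rho+p_t": rho + p_t,
              "DEC: rho-|p_r|": rho - np.abs(p_r),
              "DEC: rho-|p_t|": rho - np.abs(p_t)}
    print(lam, {k: "violated" if v.min() < 0 else "satisfied"
                for k, v in checks.items()})
```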
In this specific instance, \(-1\leq\omega_{r}\leq 1\) is accompanied by \(-1\leq\omega_{t}\leq 0\), while for \(\omega_{r}-1\geq 0\) or \(\omega_{r}+1\leq 0\) one would instead have \(\omega_{t}\leq-1\) or \(\omega_{t}\geq 0\), respectively. From Eq. (63), \(\omega_{r}\) and \(\omega_{t}\) can be written by using Eqs. (53-55) as follows, \[\omega_{r}=\frac{r_{0}e^{1-\frac{r}{r_{0}}}+(\eta-1)r}{(e^{1-\frac{r}{r_{0}}}-\eta+1)r}, \tag{64}\] \[\omega_{t}=\frac{e^{1-\frac{r}{r_{0}}}(r_{0}+r)}{(-e^{1-\frac{r}{r_{0}}}+\eta-1)2r}. \tag{65}\] The following expression is satisfied by the barotropic EoS parameters \(\omega_{r}\) and \(\omega_{t}\): \[\omega_{r}+2\omega_{t}+1=0. \tag{66}\] With reference to the radial EoS parameter \(\omega_{r}\), the transverse pressure \(p_{t}\) can be expressed as follows, \[p_{t}=-\frac{1}{2}(1+\omega_{r})\rho. \tag{67}\] It is evident that \(1+\omega_{r}\geq 0\) from Fig. 13(a). As a result, Eq. (67) suggests that \(p_{t}\geq 0\); accordingly, Fig. 4 makes this point clear. In accordance with the indicated range of the radial coordinate \(r\) and of \(\eta\), with the wormhole throat at \(r_{0}=0.5\), the radial EoS parameter \(\omega_{r}\) and the transverse EoS parameter \(\omega_{t}\) are shown in Fig. 13. Now we let \(F_{a}\) stand for the anisotropic wormhole force, \(F_{h}\) for the hydrostatic force, and \(F_{g}\) for the gravity force. The formulas for \(F_{a},F_{h}\) and \(F_{g}\) are provided by \[F_{a}=\frac{2(p_{t}-p_{r})}{r}, \tag{68}\] \[F_{h}=-p_{r}^{\prime}, \tag{69}\] \[F_{g}=-a^{\prime}(r)(\rho+p_{r}). \tag{70}\] The conservation equation is \[F_{a}+F_{h}+F_{g}=0. \tag{71}\] The expressions for \(F_{a},F_{h}\), and \(F_{g}\) for the Finslerian wormhole space-time (27) are obtained from Eqs. (53-55), used in Eqs. (68-70): \[F_{a}=\frac{e^{1-\frac{r}{r_{0}}}(3r_{0}+r)+(\eta-1)2r}{r^{4}(8\pi_{F}+2\lambda)}, \tag{72}\] \[F_{h}=-\frac{e^{1-\frac{r}{r_{0}}}(3r_{0}+r)+(\eta-1)2r}{r^{4}(8\pi_{F}+2\lambda)}, \tag{73}\] \[F_{g}=a^{\prime}(r)\left(\frac{e^{1-\frac{r}{r_{0}}}(r+r_{0})}{r^{3}(8\pi_{F}+2\lambda)}\right)=0. \tag{74}\] Since we are studying tideless wormholes, we take \(a(r)=\) constant, which implies \(a^{\prime}(r)=0\). For this condition the force due to the contribution of gravity, Eq. (74), becomes zero in our study model. The tidal gravitational forces that a traveler experiences must be minimal for the wormhole to be traversable, so the condition \(a^{\prime}(r)=0\) supports the features of a traversable wormhole. Because of this, the conservation equation (71) becomes \[F_{a}=-F_{h}. \tag{75}\] We deduce that the Finslerian wormhole solution is in equilibrium, because our observation in Eq. (75) shows that the opposing effects of the hydrostatic and anisotropic forces cancel each other.
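Both the EoS identity (66) and the force balance (75) can be confirmed symbolically; a SymPy sketch (our own illustration):

```python
# SymPy confirmation (ours) of the EoS identity (66) and the force balance
# (75) for the tideless case a'(r) = 0.
import sympy as sp

r, r0 = sp.symbols('r r_0', positive=True)
eta, lam, piF = sp.symbols('eta lambda pi_F')
E = sp.exp(1 - r/r0)
D = 8*piF + 2*lam

rho = (-E + eta - 1)/(r**2*D)                 # Eq. (53)
p_r = -(r0*E + r*(eta - 1))/(r**3*D)          # Eq. (54)
p_t = E*(r + r0)/(2*r**3*D)                   # Eq. (55)

w_r, w_t = p_r/rho, p_t/rho                   # Eqs. (64)-(65)
print(sp.simplify(w_r + 2*w_t + 1))           # 0 -> Eq. (66)

F_a = 2*(p_t - p_r)/r                         # Eq. (68)
F_h = -sp.diff(p_r, r)                        # Eq. (69)
print(sp.simplify(F_a + F_h))                 # 0 -> Eq. (75)
```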
## IV Discussion and results Modified theories have the flexibility to experiment with the effective stress-energy tensor to find an alternative approach for solving the exotic matter problem while still adhering to all energy conditions. \(f(R,T)\) gravity is one of the modified theories of gravity that has received a lot of interest. With an exponential shape function, within the family \(f(R,T)\) of extended theories of gravity, we have developed a Finslerian wormhole model in our present study. We focused our studies on the case \(f(R,T)=R+f(T)\), where \(f(T)=2\lambda T\) and \(\lambda\) is a constant parameter. The behavior of \(f(R,T)\) gravity wormholes is sensitive to the choice of parameters. Due to this, several scholars have been working on \(f(R,T)\) MGT; among them, [26] discovered that the parameter \(\lambda\) has a significant impact in determining the composition of matter in wormholes. In Ref. [18], the authors investigate the wormhole model using an exponential shape function within the context of Finsler geometry by taking \(\lambda=0\). The equilibrium configurations of white dwarfs are investigated by [27] using an MGT that depends on the parameter \(\lambda\). As a result, numerous physicists and geometers find this generalized geometry interesting. Consequently, the Finslerian space-time manifold is of significance to us when we study the wormhole configuration in \(f(R,T)\) MGT. In general, "exotic matter" violates the weak/null energy conditions and has the unique capability to keep the wormhole tunnel open. The violation of the NEC is thus an artifact of general relativity that supports a wormhole: a key requirement of static traversable wormholes, according to GR, is the violation of the various energy conditions by what is known as exotic matter. In the current work, Fig. 1 shows that the exponential shape function Eq. (52) meets all essential geometric conditions, and as a result we obtain the structure of the wormhole. We plot the WEC, NEC, DEC, and SEC energy conditions in Figs. (5-12). We plot the WEC in Fig. 5 and observe in Fig. 5(a) that the energy density is negative and tends to zero as the radial coordinate \(r\) varies with \(\eta\) at \(r_{0}=0.5\), for the value \(\lambda\geq-12.5\), while in Fig. 5(b) the energy density is positive and also tends to zero as \(r\) varies with \(\eta\) at \(r_{0}=0.5\), for the value \(\lambda\leq-12.6\). Therefore, we conclude that the WEC is violated in the Finslerian wormhole \(f(R,T)\) gravity model for the value \(\lambda\geq-12.5\), within the specified range of \(r\) and \(\eta\). In Figs. (6-9) we plot the energy condition terms, using the specified range of \(r\) and \(\eta\) at \(r_{0}=0.5\), and find \(\rho+p_{r}\) in Fig. 6, \(\rho+p_{t}\) in Fig. 7, \(\rho-|p_{r}|\) in Fig. 8, and \(\rho-|p_{t}|\) in Fig. 9 to be negative for the value of \(\lambda\geq-12.5\) and positive for the value of \(\lambda\leq-12.6\). With Eq. (56), we determined that \(\eta\) is not a factor in the NEC term \(\rho+p_{r}\), which is a function of \(r\) only. But we have demonstrated the term \(\rho+p_{r}\) graphically in a 3-D plot in Fig. 6 and discovered that the result is negative for the value \(\lambda\geq-12.5\) in Fig. 6(a), and positive for the value \(\lambda\leq-12.6\) in Fig. 6(b). From Fig. 7(a) we observe that \(\rho+p_{t}\) is negative for \(\lambda\geq-12.5\), and positive for \(\lambda\leq-12.6\). As a result, we conclude that the NEC is violated in Finslerian \(f(R,T)\) gravity at \(\lambda\geq-12.5\); hence the Finslerian wormhole is traversable. Also, we observe in Fig. 9(a) that the DEC is violated for the value \(\lambda\geq-12.5\). From these findings, we draw the following conclusion: in the framework of Finslerian \(f(R,T)\) gravity the wormholes must be filled with exotic matter. From Eq. (58), in Fig. 10, \(\rho+p_{r}+2p_{t}\) is equal to zero for the given range of \(r\) and \(\eta\), and we observe the violation of the SEC. The parameter \(\eta\) determines whether the Finslerian wormhole is attractive or repulsive. According to Fig.
12(a), the Finslerian wormhole is repulsive for \(1\geq\eta\geq 0.8\); it is initially repulsive and then becomes attractive after crossing a certain value of \(r\) for \(0.2\leq\eta<0.8\) when \(\lambda\geq-12.5\), and the behavior is opposite when \(\lambda\leq-12.6\), as in Fig. 12(b). We observe from Fig. 13(a) that the radial EoS parameter \(\omega_{r}\) takes values between -1 and 1 for the specified values of \(r\) and \(\eta\). Further, if we replace \(\bar{F}\) with the Finslerian sphere, then \(\omega_{r}\) is obtained to be non-negative with values less than 1. The transverse EoS parameter \(\omega_{t}\) lies between -1 and 0 in Fig. 13(b) [28], because of Eq. (58) and \(-1\leq\omega_{r}\leq 1\). The wormhole matter distribution is affected by three forces because of the anisotropic pressure. The Finslerian wormhole solution is in equilibrium as a result of the combined effect of the anisotropic force \(F_{a}\) and the hydrostatic force \(F_{h}\), shown in Fig. 14(a) and Fig. 15(a) respectively for \(\lambda\geq-12.5\). In the beginning, the anisotropic force and the hydrostatic force have positive values for the range \(1\geq\eta\geq 0.6\) with variation of \(r\) at the value \(\lambda\geq-12.5\), and negative values for \(0.2\leq\eta<0.6\), respectively; as a result, they are repulsive and attractive, and their nature changes as \(r\) reaches a certain value. The energy density gradient \(\frac{d\rho}{dr}\) is positive in Fig. 16(a) for \(\lambda\geq-12.5\) and negative in Fig. 16(b) for \(\lambda\leq-12.6\), as \(r\) changes for each value of \(\eta\) at \(r_{0}=0.5\). We can observe from Fig. 17(a) that the radial pressure gradient \(\frac{dp_{r}}{dr}\) depends on the value of \(\eta\): \(\frac{dp_{r}}{dr}\) is positive for \(\eta=0.6,0.8\) and 1 for the distinct values of \(r\), but for \(\eta=0.2\) it is positive only until \(r=0.8\), and for \(\eta=0.4\) until \(r=0.9\), after which it turns negative with the change in \(r\) at \(\lambda\geq-12.5\); the behavior is different when \(\lambda\leq-12.6\), as in Fig. 17(b). We also observe that the transverse pressure gradient \(\frac{dp_{t}}{dr}\), which is independent of \(\eta\), is negative for \(\lambda\geq-12.5\) and positive for \(\lambda\leq-12.6\) at \(r_{0}=0.5\) for the specified values of \(r\). ## V Conclusion In the present paper, we explore an extended gravity model, \(f(R,T)\), which admits an arbitrary coupling between geometry and matter in the Finslerian framework, represented by the trace of the stress-energy tensor. Therefore, the \(f(R,T)\) gravity model's assumptions could result in some significant deviations from those of standard general relativity or other generalized gravity models. With this model, the gravitational field equations have been derived. We constructed an exponential Finslerian wormhole within \(f(R,T)=R+f(T)\), where \(f(T)=2\lambda T\), of the extended theories of gravity. Here, the parameter \(\lambda\) plays a very important role in the construction of the exponential Finslerian wormhole. Many physicists and geometers work on \(f(R,T)\) gravity and emphasize the importance of the parameter \(\lambda\) in the construction of a wormhole. The energy conditions WEC, NEC, and DEC are violated in our \(f(R,T)\) gravity model when \(\lambda\geq-12.5\), while the SEC combination is equal to zero (see Table (1)). This violation provides strong evidence for the presence of wormholes and exotic matter in the Finslerian world.
The wormholes are discovered to be phantom when \(\omega_{r}<-1\) [29], but our \(f(R,T)\) gravity wormhole model is not phantom, because we obtain the radial EoS parameter \(\omega_{r}>-1\). Therefore, the background theory is crucial for understanding the geometry, determining the type of matter distribution, and determining the type of filled fluid. The overall conclusion is that our exponential Finslerian wormhole model in \(f(R,T)\) MGT is physically viable, and the \(f(R,T)\) gravity model accurately captures the characteristics of wormholes in the context of Finslerian space-time, which can be visualized in the 3-D wormhole model. ## Acknowledgment The author Manjunath Malligawad acknowledges the financial support for this study provided by the Govt. of Karnataka, India, Backward Classes Welfare Department (BCWD) (Application No: 2021PHD9521296). We also want to express our gratitude to the editor and the honorable referee for their insightful remarks and suggestions, which allowed us to significantly improve the article. ## Appendix We constructed embedded 2-D and 3-D diagrams for the shape function Eq. (52) for better visualization of the wormhole. We used the equatorial plane \(\theta=\frac{\pi}{2}\) at a fixed time, i.e., \(t=\) constant, and \(\eta=1\); under these conditions Eq. (27) reduces to the form \[F^{2}=-\left(1-\frac{b(r)}{r}\right)^{-1}dr^{2}-r^{2}d\phi^{2}. \tag{81}\] The above equation is represented in cylindrical coordinates as \[F^{2}=-dz^{2}-dr^{2}-r^{2}d\phi^{2}. \tag{82}\] In Euclidean three-dimensional space, \(z=z(r)\) represents the embedded surface, and we can rewrite Eq. (82) as \[F^{2}=-\left(1+\left(\frac{dz}{dr}\right)^{2}\right)dr^{2}-r^{2}d\phi^{2}. \tag{83}\] Comparing Eq. (81) and Eq. (83), \[\frac{dz}{dr}=\pm\sqrt{\left(1-\frac{b(r)}{r}\right)^{-1}-1}. \tag{84}\] Using Eq. (84) we plotted the embedded surface of the wormhole.
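For completeness, a short sketch (ours) of how the embedding surface follows from Eq. (84); the throat radius and integration grid are our own choices:

```python
# Sketch (ours) of the embedding computation: integrate Eq. (84) for the
# exponential shape function to obtain z(r); revolving (r, z(r)) about the
# z-axis gives the 3-D surface of Fig. 2.
import numpy as np
from scipy.integrate import cumulative_trapezoid

r0 = 0.5
r = np.linspace(r0*(1 + 1e-6), 5.0, 4000)   # start just outside the throat
b = r0*np.exp(1 - r/r0)
dzdr = np.sqrt((1 - b/r)**-1 - 1)           # Eq. (84), upper sign
z = cumulative_trapezoid(dzdr, r, initial=0.0)
```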
2305.09871
An electrically-driven Carbon nanotube-based plasmonic laser on Silicon
Photonic signal processing requires efficient on-chip light sources with higher modulation bandwidths. Today's conventional fastest semiconductor diode lasers exhibit modulation speeds only on the order of a few tens of GHz due to gain compression effects and parasitic electrical capacitances. Here we theoretically show an electrically-driven Carbon nanotube (CNT)-based laser utilizing strong light-matter interaction via monolithic integration into Silicon photonic crystal nanobeam (PCNB) cavities. The laser is formed by single-walled CNTs inside a combo-cavity consisting of a plasmonic metal-oxide-semiconductor hybrid mode embedded in the one-dimensional PCNB cavity. The emission originates from interband recombinations of electrostatically-doped nanotubes depending on the tube's chirality towards matching the C-band. Our simulation results show that the laser operates at telecom frequencies resulting in a power output > 3 (100) uW and > 100 (1000) GHz modulation speed at 1x (10x) threshold. Such monolithic integration schemes provide an alternative promising approach for light sources in future photonic integrated circuits.
Ke Liu, Behrouz Movahhed Nouri, Elham Heidari, Hamed Dalir, Volker J. Sorger
2023-05-17T00:58:56Z
http://arxiv.org/abs/2305.09871v1
# An electrically-driven Carbon nanotube-based plasmonic laser on Silicon ###### Abstract Photonic signal processing requires efficient on-chip light sources with higher modulation bandwidths. Today's conventional fastest semiconductor diode lasers exhibit modulation speeds only on the order of a few tens of GHz due to gain compression effects and parasitic electrical capacitances. Here we theoretically show an electrically-driven Carbon nanotube (CNT)-based laser utilizing strong light-matter interaction via monolithic integration into Silicon photonic crystal nanobeam (PCNB) cavities. The laser is formed by single-walled CNTs inside a combo-cavity consisting of a plasmonic metal-oxide-semiconductor hybrid mode embedded in the one-dimensional PCNB cavity. The emission originates from interband recombinations of electrostatically-doped nanotubes depending on the tubes' chirality towards matching the C-band. Our simulation results show that the laser operates at telecom frequencies resulting in a power output \(>\) 3 (100) \(\upmu\)W and \(>\) 100 (1000) GHz modulation speed at 1\(\times\) (10\(\times\)) threshold. Such monolithic integration schemes provide an alternative promising approach for light sources in future photonic integrated circuits.
2306.10870
Exploring the free-energy landscape of a rotating superfluid
The equilibrium state of a superfluid in a rotating cylindrical vessel is a vortex crystal -- an array of vortex lines which is stationary in the rotating frame. Experimental realisations of this behaviour typically show a sequence of transient states before the free-energy minimising configuration is reached. Motivated by these observations, we construct a new method for a systematic exploration of the free-energy landscape via gradient-based optimisation of a scalar loss function. Our approach is inspired by the pioneering numerical work of Campbell & Ziff (Phys. Rev. B 20, 1979), and makes use of automatic differentiation (AD) which crucially allows us to include entire solution trajectories in the loss. We first use the method to converge thousands of low-free-energy relative equilibria for vortex numbers in the range $10 \leq N \leq 30$, which reveals an extremely dense set of mostly saddle-like solutions. As part of this search, we discover new continuous families of relative equilibria (in the unbounded domain) which are often global minimisers of the free energy. These continuous families all consist of crystals arranged in a double-ring configuration, and we assess which state from the family is most likely to be observed experimentally by computing energy-minimising pathways from nearby local minima -- identifying a common entry point into the family. Finally, we develop an approach to compute homoclinic orbits and use it to examine the dynamics in the vicinity of the minimising state by converging connections for low-energy saddles.
Andrew Cleary, Jacob Page
2023-06-19T11:45:31Z
http://arxiv.org/abs/2306.10870v2
# Exploring the free-energy landscape of a rotating superfluid ###### Abstract The equilibrium state of a superfluid in a rotating cylindrical vessel is a vortex crystal - an array of vortex lines which is stationary in the rotating frame. Experimental realisations of this behaviour typically show a sequence of transient states before the free-energy minimising configuration is reached. Motivated by these observations, we construct a new method for a systematic exploration of the free-energy landscape via gradient-based optimisation of a scalar loss function. Our approach is inspired by the pioneering numerical work of Campbell & Ziff (_Phys. Rev. B._**20**, 1979), and makes use of automatic differentiation (AD) which crucially allows us to include entire solution trajectories in the loss. We first use the method to converge thousands of low-free-energy relative equilibria for vortex numbers in the range \(10\leq N\leq 30\), which reveals an extremely dense set of mostly saddle-like solutions. As part of this search, we discover new continuous families of relative equilibria (in the unbounded domain) which are often global minimisers of the free energy. These continuous families all consist of crystals arranged in a double-ring configuration, and we assess which state from the family is most likely to be observed experimentally by computing energy-minimising pathways from nearby local minima - identifying a common entry point into the family. Finally, we develop an approach to compute homoclinic orbits and use it to examine the dynamics in the vicinity of the minimising state by converging connections for low-energy saddles. ## I Introduction Helium remains liquid at temperatures lower than any other substance, where it displays quantum mechanical behaviour on a macroscopic scale. Liquid \({}^{4}\)He undergoes a so-called \(\lambda\)-transition to a 'superfluid' phase as the temperature drops below 2.2K, at which point it exhibits a range of highly counter-intuitive properties including a complete absence of internal friction [1]. The irrotational nature of a superfluid results in the formation of vortex lines in a rotating container, each with quantised circulation in units of \(h/m\) (where \(h\) is Planck's constant and \(m\) is the mass of a \({}^{4}\)He atom), rather than a solid body rotation, such that the rotation rate of the vortex crystal matches \(\Omega\), the imposed angular velocity [2; 3]. The vortex crystal is an exact equilibrium solution of the governing equations in a rotating frame, with the state observed at long time being a global minimiser of the free energy. Initial theoretical predictions of this phenomenon were verified experimentally [4], with more recent experiments in other configurations (isomorphic to the two dimensional Euler equations) allowing for an increasingly detailed picture [5] where a relaxation onto the minimising state is preceded by the transient appearance of other (presumably unstable) vortex crystals [6]. Campbell and Ziff [7] performed an extensive numerical study to search for the free-energy minimising state of a collection of identical point vortices for various numbers of vortices, \(1\leq N\leq 30\) (and some special cases with larger \(N\)). In their approach, Campbell and Ziff converged vortex crystals (relative equilibria of the equations of motion) by initialising point vortices on a series of concentric circles and performing gradient descent on the free energy to iteratively update the vortex positions until a minimum was reached. 
This numerical approach should be contrasted with other analytical approaches to find vortex crystals, which often rely on assuming certain symmetries or geometric constraints [8; 9; 10] and hence have often been restricted to fairly modest values of \(N\). Recent reviews [11; 12] have highlighted the need for a systematic study of the low-energy equilibria that emerge as the number of vortices increases. We tackle this problem here by combining numerical methods typically employed in the dynamical systems approach to turbulence [13], and newer optimisation techniques that have emerged in the field of deep learning [14]. The dynamical systems picture of turbulence envisions fluid motion as a trajectory in a high dimensional state space, pin-balling between simple invariant solutions [15; 16]. These ideas gained significant traction following the discovery of a myriad of unstable travelling waves on the laminar-turbulent boundary in a pipe [17; 18] and with the discovery of an unstable periodic orbit embedded in the turbulent attractor of a minimal Couette flow [19]. Finding these exact solutions relies on both (1) a sensible method for generating initial guesses for simple invariant sets and (2) a Newton-Raphson approach to converge them [13; 20]. Point (1) here has been a significant constraint on applying these ideas at high Reynolds numbers, while point (2) restricts the search to (relative) equilibria and periodic orbits -- the method is not appropriate for computing connecting orbits for example. Recent work in dynamical systems for fluids has benefited from the rapid advances in data driven methods and machine learning [21; 22]. This has included new methods for generating plausible guesses for simple invariant solutions [23], including via deep learning architectures [24]. However, it is perhaps the development of automatic differentiation (AD) which offers the most compelling opportunities when combined with more traditional numerical simulation techniques. Time integration routines for systems of differential equations are essentially a sequence of function compositions involving elementary operations on an input (state) vector, and AD exploits this fact to compute exact derivatives via repeated application of the chain rule [14; 25]. AD is therefore a powerful tool in optimisation problems, and differentiable numerical solvers can be straightforwardly combined with neural network architectures. The JAX library [25] in particular has gained significant popularity, leading to exciting advances in molecular dynamics [26] and numerical methods for computational fluid dynamics [27; 28]. Gradient based optimisation has already shown great promise as a tool for generating robust guesses for unstable periodic orbits [29]. In this paper we develop a new AD-based solver for point vortex evolution in a range of geometries, which we combine with a traditional Newton-GMRES-Hookstep algorithm to comprehensively tackle the problems outlined in the recent reviews [11; 12]. We are able to assemble an extremely large (tens of thousands) collection of vortex crystals near the free-energy minima for a range of numbers of vortices \(7\leq N\leq 50\), and discover new continuous families of equilibria which were previously unknown. Moreover, the flexibility of AD makes it straightforward to compute both dynamical (constant free-energy orbits) and non-dynamical pathways between nearby states to allow for detailed exploration of the free-energy landscape. The rest of this work is structured as follows. 
In §II, we formalise the two dimensional point vortex model in a rotating disk, and describe our approach to search for low energy vortex crystals. In §III, we assemble large numbers of vortex crystals for numbers of vortices \(7\leq N\leq 50\) and discuss the new continuous families of double-ringed configurations which emerge. In §IV, we outline and apply an approach to compute energy-minimising pathways between locally stable configurations. In §V, we compute homoclinic orbits for saddles close to the energy minimising states and examine their transient dynamics. We conclude in §VI with a summary of our results. ## II Computational Setup ### Physical problem We consider two-dimensional, irrotational flow in a disk of radius \(R\), in which we place \(N\) point vortices of equal circulation, \(\Gamma\). The disk rotates at constant angular velocity, \(\Omega\). Each point vortex moves with the velocity induced by (1) all of the others and (2) the image vortices. If vortex \(\alpha\) is located at a dimensional complex position \(z^{*}=z_{\alpha}^{*}=x_{\alpha}^{*}+iy_{\alpha}^{*}\), then its image sits at \(z^{*}=R^{2}/\overline{z}_{\alpha}^{*}\), where the overbar represents the complex conjugate. Lengths are non-dimensionalised with the disk radius, \(R\), and time with a reference timescale \(R^{2}/\Gamma\), so that the (dimensionless) evolution equation for each vortex in a frame rotating with the disk is then: \[\dot{\overline{z}}_{\alpha}=-\frac{i}{2\pi}\left(\sum_{\beta=1}^{N}{}^{\prime}\frac{1}{z_{\alpha}-z_{\beta}}-\sum_{\beta=1}^{N}\frac{1}{z_{\alpha}-1/\overline{z}_{\beta}}\right)+i\omega\overline{z}_{\alpha}, \tag{1}\] where the prime on the summation indicates that \(\beta=\alpha\) is excluded and \(\omega:=R^{2}\Omega/\Gamma\) is the non-dimensional disk rotation rate. Equation (1) is equivariant under rotations about the origin of the disk, \(\mathscr{R}^{\theta}:(z_{1},\dots,z_{N})\rightarrow(\exp(i\theta)z_{1},\dots\exp(i\theta)z_{N})\). There is also symmetry under permutation of the vortices, which we must carefully account for when labelling unique configurations or searching for connecting orbits. If \(\mathbf{f}^{t}(\mathbf{z})\) is the time forward map of equation (1), then equilibria are solutions for which \(\mathbf{f}^{t}(\mathbf{z}^{*})=\mathbf{z}^{*}\ \forall t\). These are relative equilibria (REQ) in the lab frame, rotating with the disk at angular velocity \(\omega\). These REQ, or 'vortex crystals', are stationary points of the free energy, \[\mathcal{F}:=-\sum_{\alpha<\beta}\log|z_{\alpha}-z_{\beta}|^{2}+\frac{1}{2}\sum_{\alpha,\beta}\log|1-z_{\alpha}\overline{z}_{\beta}|^{2}-2\pi\omega\sum_{\alpha}(1-|z_{\alpha}|^{2}), \tag{2}\] which plays the role of a Hamiltonian in the rotating frame. We have scaled the free energy (2) to match the definition in Campbell and Ziff [7] (note that their 'rotation rate' variable is related to ours via \(\omega_{CZ}\equiv 2\pi\omega\)). The equations of motion follow from \(\dot{x}_{\alpha}=(1/4\pi)\partial_{y_{\alpha}}\mathcal{F}\), \(\dot{y}_{\alpha}=-(1/4\pi)\partial_{x_{\alpha}}\mathcal{F}\). The experimentally observed state in a superfluid is expected to be the global minimiser of \(\mathcal{F}\)[3; 7]. At this point, searching for free-energy minimising solutions for a given \(N\) also requires the selection of a disk angular velocity, \(\omega\), and hence documenting the solutions for a range of \(\omega\) becomes a daunting task.
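This Hamiltonian structure can be made concrete in a few lines; a minimal JAX sketch (ours, not the authors' jax-pv solver) obtaining the rotating-frame vortex velocities directly from the free energy (2) by automatic differentiation:

```python
# JAX sketch (ours): velocities from the free energy (2) via autodiff;
# the nested jnp.where guards the log on the (excluded) diagonal.
import jax
import jax.numpy as jnp

def free_energy(x, y, omega):
    z = x + 1j*y                                # positions, disk radius R = 1
    dz2 = jnp.abs(z[:, None] - z[None, :])**2
    off = ~jnp.eye(z.size, dtype=bool)
    pair = -0.5*jnp.sum(jnp.where(off, jnp.log(jnp.where(off, dz2, 1.0)), 0.0))
    images = 0.5*jnp.sum(jnp.log(jnp.abs(1 - z[:, None]*jnp.conj(z)[None, :])**2))
    return pair + images - 2*jnp.pi*omega*jnp.sum(1 - jnp.abs(z)**2)

def velocities(x, y, omega):
    # x_dot = (1/4 pi) dF/dy,  y_dot = -(1/4 pi) dF/dx
    dFdx, dFdy = jax.grad(free_energy, argnums=(0, 1))(x, y, omega)
    return dFdy/(4*jnp.pi), -dFdx/(4*jnp.pi)
```

Since the rotating-frame velocities vanish at a vortex crystal, `velocities` also provides a cheap residual test for candidate REQ.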
Campbell and Ziff [7] have shown that the problem is substantially simplified if the image vortices are neglected, which can be justified for 'moderate' rotation rates \(\omega\) (e.g. at some point beyond the critical \(\omega_{c}\) at which the equilibrium of interest can be maintained in the disk). The reasoning is that in order to match an increasing disk angular velocity, the vortices must increasingly bunch together near the origin so that the streamlines of the induced velocity are approximately circular at the boundaries and the images play only a small perturbative role enforcing the no-penetration boundary condition. In the absence of images, an equilibrium at one value of \(\omega\) can then be trivially rescaled to another value via the simple relation \(\omega|z_{i}|^{2}=\text{constant}\). Without images, the free energy is simply, \[\mathcal{F}_{0}=-\sum_{\alpha<\beta}\log|z_{\alpha}-z_{\beta}|^{2}-2\pi\omega\sum_{\alpha}(1-|z_{\alpha}|^{2}), \tag{3}\] where the first term is the free-space Hamiltonian, \(\mathcal{H}_{0}:=-\sum_{\alpha<\beta}\log|z_{\alpha}-z_{\beta}|^{2}\). While \(\omega\) can now be set arbitrarily, it is helpful to use an \(\omega\)-independent label to identify and order the relative equilibria. A suitable observable is proposed in Campbell and Ziff [7], who subtract the free energy of a continuum model, \(\mathcal{F}_{c}\), evaluated at the same \(\omega\) to find \[\Delta f:=\mathcal{F}_{0}(N,\omega)-\mathcal{F}_{c}(N,\omega)=-\sum_{\alpha<\beta}\log\omega|z_{\alpha}-z_{\beta}|^{2}-\frac{N^{2}}{4}+\frac{N^{2}}{2}\log N+N(b-\frac{1}{2}), \tag{4}\] where the constant \(b=0.74875\). Equation (4) is independent of \(\omega\) at equilibrium as \(\omega|z_{i}|^{2}=\) constant. We focus on the image-less system for the remainder of this paper, apart from §III.2 where boundary effects are re-introduced to examine their impact on a certain class of free-space vortex crystals.
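In practice this label is a one-liner; a helper (our sketch) for a crystal stored as complex positions:

```python
# Helper (ours) evaluating the omega-independent label of Eq. (4)
# for an image-less crystal given as complex positions z.
import numpy as np

def delta_f(z, omega, b=0.74875):
    N = len(z)
    i, j = np.triu_indices(N, k=1)
    pair = -np.sum(np.log(omega*np.abs(z[i] - z[j])**2))
    return pair - N**2/4 + (N**2/2)*np.log(N) + N*(b - 0.5)
```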
### Relative equilibria guess generation With the removal of images, it is convenient to work in the stationary lab frame, where the vortex velocities can now be simply expressed in terms of the free-space Hamiltonian \(\mathcal{H}_{0}\): \[\dot{x}_{\alpha}=\frac{1}{4\pi}\frac{\partial\mathcal{H}_{0}}{\partial y_{\alpha}},\quad\dot{y}_{\alpha}=-\frac{1}{4\pi}\frac{\partial\mathcal{H}_{0}}{\partial x_{\alpha}}. \tag{5}\] We solve equation (5) using our fully differentiable point vortex solver jax-pv. Time integration is performed with the symplectic second order Runge-Kutta scheme at a fixed time step of \(\delta t=10^{-3}\). The numerical solver is built on the JAX library [25], which allows for efficient computation of the gradient of the time-forward map \(\mathbf{f}^{t}(\mathbf{x})\), where \(\mathbf{x}:=(x_{1},y_{1},\dots,x_{N},y_{N})\), with respect to the initial positions of the vortices. The JAX framework allows us to also leverage efficiency benefits such as just-in-time compilation and auto-vectorisation. The ability to minimise loss functions which implicitly depend on the time-forward map is central to this work, and we use it to construct a wide range of REQ candidates which are then converged with a Newton-GMRES-Hookstep solver. Our fully differentiable formulation allows for explicit 'targeting' of REQ with specific physical properties, and we initially seek candidates by minimising a balance of two loss functions. The first contribution acts to minimise the free energy, \[\mathcal{L}_{\mathcal{F}}\coloneqq\mathcal{F}_{0}, \tag{6}\] while the second demands that the inter-vortex distances \(\ell_{\alpha\beta}(t)=\|\mathbf{x}_{\alpha}(t)-\mathbf{x}_{\beta}(t)\|\) are unchanged after a suitably long simulation: \[\mathcal{L}_{I}^{T}\coloneqq\frac{\sum_{\alpha<\beta}|\ell_{\alpha\beta}(T)-\ell_{\alpha\beta}(0)|^{2}}{\sum_{\alpha<\beta}|\ell_{\alpha\beta}(0)|^{2}}. \tag{7}\] We combine the loss functions in the following manner: \[\mathcal{L}\coloneqq\kappa\mathcal{L}_{\mathcal{F}}+(1-\kappa)\left(\mathcal{L}_{I}^{T}+\mathcal{L}_{I}^{T/2}\right), \tag{8}\] where the contribution from the inter-vortex distances is evaluated at two points along the orbit to encourage fixed inter-vortex distances throughout the computation (constraints at additional times were also tried with minimal impact on performance) and the integration time is fixed at \(T=10\). Prior to optimisation each guess is initialised from a uniform distribution \(x_{\alpha},y_{\alpha}\sim U[0,\sqrt{N}]\). We re-centre the configuration such that the centre of vorticity coincides with the origin, and we rescale the configuration throughout optimisation such that its average rotation rate is fixed at \(\langle\omega^{\prime}(t)\rangle=\pi/2T\). The continual updating of \(\omega^{\prime}\) ensures that the pattern traces out a quarter rotation regardless of how the inter-vortex distances are adjusted in gradient descent. Converged crystals have \(\omega|z_{i}|^{2}=\) constant so can be rescaled arbitrarily and compared to known results using the \(\omega\)-independent observable \(\Delta f\) (4). During optimisation the gradient \(\mathbf{\nabla}_{\mathbf{x}(0)}\mathcal{L}\) is passed to an Adam optimiser [30; 31] with initial learning rate \(10^{-2}\). We then take guesses for which \(\mathcal{L}_{I}^{T}\leq 10^{-3}\) within a maximum \(N_{\text{opt}}=2500\) optimiser steps and attempt to converge them with a Newton-GMRES-Hookstep solver for relative equilibria [13]. Over the course of the optimisation procedure we anneal \(\kappa\) from 1 to 0 in increments of \(1/N_{\text{opt}}\). In general, the 'good' guess criterion \(\mathcal{L}_{I}^{T}\leq 10^{-3}\) is achieved while \(\kappa\) is still close to 1 (typically after approximately 150-300 optimiser steps), where the loss function (8) is dominated by the free energy. However, the gradual and slow introduction of the \(\mathcal{L}_{I}\) terms helps to reduce the number of optimiser steps required, especially at higher values of \(N\), and yields a much richer family of REQ than the \(\mathcal{L}_{\mathcal{F}}\) term alone, which tends to produce the global minimiser. This can be thought of as an exploration-exploitation trade-off, as in the Reinforcement Learning literature [32].
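A condensed, self-contained sketch of this pipeline (ours, not jax-pv itself): we substitute an explicit midpoint step for the paper's symplectic RK2, use the free-space Hamiltonian \(\mathcal{H}_{0}\) in place of \(\mathcal{F}_{0}\) (the rotation-rate rescaling is omitted), and let optax supply the Adam optimiser.

```python
import jax
import jax.numpy as jnp
import optax

def H0(xy):
    # free-space Hamiltonian; xy = (x_1..x_N, y_1..y_N)
    N = xy.size // 2
    z = xy[:N] + 1j*xy[N:]
    i, j = jnp.triu_indices(N, k=1)
    return -jnp.sum(jnp.log(jnp.abs(z[i] - z[j])**2))

def rhs(xy):
    # equations of motion (5) via automatic differentiation of H0
    g = jax.grad(H0)(xy)
    N = xy.size // 2
    return jnp.concatenate([g[N:], -g[:N]])/(4*jnp.pi)

def evolve(xy, T, dt=1e-3):
    # explicit midpoint stepping, a stand-in for the symplectic RK2
    def step(s, _):
        mid = s + 0.5*dt*rhs(s)
        return s + dt*rhs(mid), None
    out, _ = jax.lax.scan(step, xy, None, length=int(T/dt))
    return out

def ell(xy):
    # inter-vortex distances entering Eq. (7)
    N = xy.size // 2
    z = xy[:N] + 1j*xy[N:]
    i, j = jnp.triu_indices(N, k=1)
    return jnp.abs(z[i] - z[j])

def loss(xy, kappa, T=10.0):
    # Eq. (8), with H0 standing in for the free energy term
    l0 = ell(xy)
    LI = lambda t: jnp.sum((ell(evolve(xy, t)) - l0)**2)/jnp.sum(l0**2)
    return kappa*H0(xy) + (1.0 - kappa)*(LI(T) + LI(T/2))

N = 10
xy = jax.random.uniform(jax.random.PRNGKey(0), (2*N,))*jnp.sqrt(N)
opt = optax.adam(1e-2)
state = opt.init(xy)
for k in range(2500):                         # anneal kappa from 1 to 0
    grads = jax.grad(loss)(xy, 1.0 - k/2500)
    updates, state = opt.update(grads, state)
    xy = optax.apply_updates(xy, updates)
```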
This is done by scaling one of the configurations \(\mathbf{x}\rightarrow\gamma\mathbf{x}\), with \[\gamma=\exp{\left(\frac{\Delta\mathcal{H}_{0}}{N(N-1)}\right)}, \tag{10}\] where \(\Delta\mathcal{H}_{0}\) is the difference in Hamiltonian between the two configurations. Then, all \(N(N-1)/2\) inter-vortex distances \(\{\ell_{ij}^{k}\}_{i=1,\ldots,N,j=i+1,\ldots,N}\) are computed and sorted for each of the two configurations, where \(N\) is the number of vortices in each configuration and \(k=1,2\) is a label for the first and second configuration, respectively. The configurations are classed as symmetric copies if \[\frac{|\ell_{ij}^{1}-\ell_{ij}^{2}|}{|\ell_{ij}^{1}|}<10^{-5},\quad\forall i,j. \tag{11}\] This uniqueness condition is different from the 'ring labelling' condition of Campbell and Ziff [7], as we do not restrict ourselves to REQ based on concentric rings.
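In code, the comparison (10)-(11) is a few lines; this is our illustrative reconstruction, reusing `free_space_hamiltonian` and `pair_distances` from the sketches above.

```python
def rescale_to_hamiltonian(z, H_target):
    # Since H0(gamma * z) = H0(z) - N(N-1) log(gamma), equation (10) gives the
    # scaling that matches H0 of this configuration to H_target.
    N = z.shape[0]
    gamma = jnp.exp((free_space_hamiltonian(z) - H_target) / (N * (N - 1)))
    return gamma * z

def same_crystal(z1, z2, tol=1e-5):
    # Equation (11): compare sorted inter-vortex distances after matching H0.
    z2 = rescale_to_hamiltonian(z2, free_space_hamiltonian(z1))
    l1 = jnp.sort(pair_distances(z1))
    l2 = jnp.sort(pair_distances(z2))
    return bool(jnp.all(jnp.abs(l1 - l2) / l1 < tol))
```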
### Targeted search

As \(N\) increases we anticipate the number of REQ will grow dramatically, hence we augment the random search method described above with another approach to search for 'missing' low energy states starting from a library of converged solutions. The appropriate loss function here is \[\mathcal{L}_{S}\coloneqq\mathcal{L}_{I}^{T}+\nu\sigma\left(\frac{\mathcal{F}-\mathcal{F}^{*}}{\delta}\right), \tag{12}\] which seeks to converge onto REQ below some target free energy, \(\mathcal{F}<\mathcal{F}^{*}\), via the sigmoidal contribution \(\sigma(\bullet)\), which (approximately) 'fires' if \(\mathcal{F}>\mathcal{F}^{*}\). We set the hyperparameter \(\nu=100\), the large value favouring lower energy states. We apply a recursive approach to optimise (12), beginning from a converged solution, \(\mathbf{x}^{*}\), at \(\mathcal{F}=\mathcal{F}^{*}\), where we select \(\mathcal{F}^{*}\) to coincide with the free energy of a previously converged state. Typically we do this for all states with a corresponding \(\Delta f\) below some threshold value \(\Delta f_{\text{max}}\). We take the converged solution and perturb each vortex position according to \((x_{\alpha},y_{\alpha})\to(x_{\alpha}+\varepsilon x_{\alpha}^{\prime},y_{\alpha}+\varepsilon y_{\alpha}^{\prime})\), where \(\varepsilon=0.1\) and \(x_{\alpha}^{\prime},y_{\alpha}^{\prime}\sim U[0,1]\). We do this 20 times to generate 20 new candidate equilibria, before performing gradient descent on (12) with an initial \(\delta=10^{-1}\). For this REQ data set augmentation we find that the simpler AdaGrad optimiser [33] is particularly effective. Our initial learning rate here is \(\eta=0.1\). Since we begin this optimisation process with a small perturbation to a converged equilibrium, the initial value of the inter-vortex loss (our usual convergence metric) is typically \(\mathcal{L}_{I}^{T}\lesssim 10^{-5}\). To ensure that we generate new candidates and not simply repeats of our starting solution, we require that the optimiser performs at least 25 steps. During this process the loss component \(\mathcal{L}_{I}^{T}\) increases while the vortex positions adjust to reduce the large sigmoidal contribution to \(\mathcal{L}_{S}\). We pass solutions for which \(\mathcal{L}_{I}^{T}\leq 10^{-4}\) to a Newton solver. Finally, new unique solutions with \(\Delta f<\Delta f_{\text{max}}\) are identified via (11).

## III Vortex crystals in the image-less system

### Crystal distributions

We begin with an initial test case at \(N=7\), where it has been rigorously established that there are exactly 12 REQ [34]. In this case we seek to converge all known solutions and hence do not wish to restrict ourselves to low free energies, so we use a version of loss function (8) with \(\kappa=0\) throughout optimisation. The 'random search' approach yields 11 of the 12 known solutions, with only the high-\(\mathcal{F}\) co-linear state requiring special attention - we find this solution by initialising our vortices on a straight line - presumably because its basin of attraction in the optimisation does not favour the random vortex initialisations. The REQ solutions and corresponding free-energy labels are reported in figure 1. To generate the first 11 solutions we started with 1000 random initialisations, from which we converged 188 states in total. The most frequently computed REQ was the energy minimising crystal at \(\Delta f=0.10749\) (54 times), while the REQ at \(\Delta f=4.96133\) was only computed once. Setting \(\kappa=0\) is an important modification when attempting to compute REQs at high energies: if \(\kappa\) is annealed as previously described, then we are unable to find the crystal at \(\Delta f=3.49227\), even after 8000 random initialisations. This is likely due to the very similar structure at the slightly lower value of \(\Delta f=3.49226\), which is preferred by the \(\mathcal{L}_{\mathcal{F}}\) term in (8).

Figure 1: All relative equilibria for the point vortex system with \(N=7\), in order of increasing \(\Delta f\).

We now consider large numbers of guesses for \(N=10,11,\ldots,20\), as well as \(N=30\) and \(N=50\). A summary of our initial random search using loss function (8) is reported in table 1. We searched extensively at \(N\in\{10,12,16,20,30,50\}\), while other vortex numbers were considered to explore the possible appearance of degenerate families of vortex crystals that we discuss later in §III.2. Notably, we are able to generate very large numbers of viable guesses via automatic differentiation (success rates at this stage are almost all \(\gtrsim 0.98\)). As is often found in turbulence [20; 35], the success rate of the Newton algorithm is a bottleneck, particularly at higher \(N\), although even at \(N=50\) we are still able to converge around a quarter of our AD guesses. The number of unique convergences identified (as classified by equation (11)) is also described in table 1. These numbers should be interpreted in the context of the raw number of samples considered for each \(N\). For example, the large numbers of guesses considered at \(N=10\) or \(N=12\) and the very low 'success rate' in the unique solutions column indicate that we may have found most (potentially all) REQ in those cases, while the high success rates at \(N=30\) and \(N=50\) indicate that we have found only a small subset of all available crystals. We examine the converged solutions further in figure 2, where we plot both the number of Newton convergences and the number of unique convergences below \(\Delta f=1\) against the number of samples over the course of the computation for \(N\in\{10,20,30\}\). At both \(N=10\) and \(N=20\) the number of unique convergences is found to level off while the number of Newton convergences remains roughly constant. The plateauing of the number of unique solutions is observed for roughly an order of magnitude more samples when \(N=20\) compared to the \(N=10\) case. At higher \(N=30\) we continue to observe roughly constant linear growth in the number of unique solutions even at \(O(10^{5})\) guesses.
We also explore the distribution of states as a function of \(\Delta f\) for \(N\in\{10,20,30\}\) in the lower panels of figure 2. Clearly the density of states increases enormously with increasing \(N\), and by \(N=30\) there is some indication that there is an exponential increase in the number of REQ with increasing free energy. Note that the explicit search for low free-energy solutions in our loss (8) is likely the reason for the sudden drop in numbers of convergences beyond \(\Delta f\approx 1\). Due to the indications in table 1 and the top-right panel of figure 2 that we are far from identifying all REQ at \(N=30\), we supplement our random search there with the targeted sweep described in §II.3. We set \(\Delta f_{\max}=0.5\), below which we initially had 19 converged solutions (see inset in figure 2). Each of these 19 solutions is used to define a value of \(\mathcal{F}^{*}\) in loss function (12), and we generate 20 perturbed states for each, or 680 new guesses in total. Of these 680 guesses, 190 unique vortex crystals were converged, of which 17 were new solutions for which \(\Delta f<\Delta f_{\max}\). Additional solutions found this way are indicated in red on the histogram in figure 2; the low-energy sweep was necessary to find the lowest free-energy state, which differs from that reported previously [7]. Note that the Newton solver also produces a large number of states for which the converged \(\Delta f>\Delta f_{\max}\) - see the red detail in figure 2. We report the four energy minimising states for \(N\in\{10,12,14,16,18,20,30\}\) in figure 3. This figure includes some states identified by Campbell and Ziff [7], together with many additional solutions. In figure 3 there are new states at \(N\in\{10,14,18,20,30\}\) (identified in detail in the figure caption). Most notably, _all_ four solutions at \(N=30\) in the figure are new, including a new global minimum of the free energy. A handful of solutions shown in figure 3 are stable centres (including, as expected, the energy minimising states), though the majority are saddles. The stability of all states is determined by computing the eigendecomposition of the Jacobian of the vortex velocities (the right hand side of equation (1) without images), in a frame rotating at angular velocity \(\omega\) with the crystal. Most of the saddle solutions in the vicinity of the free-energy minimum have low dimensional unstable manifolds. Stability properties of the 10 lowest free-energy solutions for all \(N\) considered are reported in appendix A, along with additional details of the states themselves. All states possess a neutral eigenvector (eigenvalue \(\lambda_{0}=0\)) corresponding to a rotation about the origin. For some values of \(N\) we observe a large number of 'unique' states (as identified by the symmetry-invariant observable (11)) with identical values of \(\Delta f\). These distinct states all possess an additional neutral eigenvector, and appear to be smoothly connected to one another - i.e. given any two crystals with equal \(\Delta f\) which are 'close', we can always interpolate to find another intermediate equilibrium between them. Vortex crystals exhibiting this behaviour are shown in red in figure 3, where we are actually reporting only one of a large number of converged solutions at that particular \(\Delta f\). To our knowledge these continuous families of vortex crystals have not been documented before, and we now explore them in more detail.
### Degenerate double ring solutions

We examine both 'neutral' eigenvectors (\(\lambda_{0}^{(1,2)}=0\)) for the energy minimising state at \(N=16\) in figure 4, which is the lowest \(N\) at which we found a continuous family of solutions. In the top left panel of that figure we see that one of these eigenvectors corresponds to the rotational symmetry in the system (red arrows) as expected. The second eigenvector, which is identified with blue arrows, is markedly different: in the co-rotating frame it appears that the inner ring rotates while the outer ring wobbles in a non-axisymmetric manner. The dependence of the free energy on the logarithm of the relative distances between vortices does in principle allow for more complex behaviour than uniform rotation/translation: consider a continuous family of solutions parameterised by some arc-length \(s\) (formally, a group orbit). Since \(\Delta f\) depends on the configuration only through \(-\sum_{i<j}\log\omega\ell_{ij}^{2}\), the condition that \(\partial_{s}\Delta f=0\) is equivalent to \[\sum_{i<j}\frac{1}{\ell_{ij}(s)}\frac{d\ell_{ij}(s)}{ds}=0. \tag{13}\] Clearly, this condition holds for pure rotation as \(d_{s}\ell_{ij}=0\ \forall i,j\), but non-trivial combinations of vortex displacements are also allowed. As a further confirmation of a continuous family of crystals, we also verified that the second directional derivative of the free energy along the relevant neutral eigendirection was zero. In the free-energy landscape, we can think of the family of solutions as a smooth, continuous valley along which the crystal structure varies in a non-trivial manner (in contrast to the rotational symmetry).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \hline N & \# samples & \# AD candidates (success rate) & \# Newton convergences (success rate) & \# unique REQ (success rate) \\ \hline 10 & 9000 & 8845 (0.983) & 8796 (0.994) & 28 (0.003) \\ 11 & 2550 & 2504 (0.982) & 2245 (0.897) & 30 (0.013) \\ 12 & 50000 & 41425 (0.828) & 27611 (0.667) & 49 (0.002) \\ 13 & 2550 & 2500 (0.98) & 1517 (0.607) & 65 (0.043) \\ 14 & 2550 & 2510 (0.984) & 1343 (0.535) & 83 (0.062) \\ 15 & 2550 & 2503 (0.982) & 1996 (0.797) & 138 (0.069) \\ 16 & 31400 & 30840 (0.982) & 19733 (0.64) & 265 (0.013) \\ 17 & 2550 & 2501 (0.981) & 2059 (0.823) & 256 (0.124) \\ 18 & 5050 & 4954 (0.981) & 3326 (0.671) & 409 (0.123) \\ 19 & 15000 & 14708 (0.981) & 10427 (0.709) & 851 (0.082) \\ 20 & 37200 & 36472 (0.98) & 14237 (0.39) & 1728 (0.121) \\ 30 & 102600 & 100474 (0.979) & 44258 (0.44) & 38577 (0.872) \\ 50 & 65700 & 64052 (0.975) & 16150 (0.252) & 15521 (0.961) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of computations for various numbers of vortices. Note that the success rate shown in brackets in the columns is defined relative to the raw number in the previous column. The success rate in the final column should be interpreted differently to those preceding it – a low success rate here for a large number of samples suggests that most, or potentially all, unique states may have been obtained.

Figure 2: (Top) Unique convergences below \(\Delta f=1\) (red) and the number of raw convergences including repeats (blue) as a function of the number of samples. (Bottom) Histograms of the distribution of states as a function of the free energy with bin width \(\Delta f=0.08\). Results are shown for numbers of vortices \(N=10\) (left), \(N=20\) (centre) and \(N=30\) (right). The red contributions to the distribution at \(N=30\) indicate extra solutions found by applying the targeted search for low energy REQs using equation (12).
Computationally, we are able to take any member of a continuous family of vortex crystals and move smoothly through the valley by perturbing it along the relevant neutral eigendirection, \(\mathbf{x}^{*}\to\mathbf{x}^{*}+\varepsilon\mathbf{x}_{0}^{\prime}\). We set \(\varepsilon=5\times 10^{-4}\) and construct a sequence of connected states \(\{\mathcal{P}\mathbf{x}_{i}^{*}\}\) by re-converging the perturbed solution with a Newton solver before repeating the process, performing a pullback [36; 37; 38] \(\mathcal{P}\) along the rotational symmetry direction (see appendix B.1) to account for any drift in the rotational-symmetry valley introduced by the Newton solver. The connected sequence of states is parameterised by an arclength, \(s_{n}:=s_{n-1}+\|\mathcal{P}\mathbf{x}_{n}^{*}-\mathcal{P}\mathbf{x}_{n-1}^{*}\|_{2}\), and we stop the calculation once we reach a symmetrically equivalent copy of the starting solution. The family of solutions obtained via arclength continuation at \(N=16\) is reported in the remaining panels of figure 4. In the top-right panel of the figure we measure the sum of the differences in inter-vortex distances, relative to the initial state, identifying a 'period' of \(\Delta s\approx 0.5\). Note that we start this continuation from a state with mirror symmetry (see red markers in lower panels of figure 4). The vortex configurations that make up one arclength-period of the continuous family are visualised in the lower panels of figure 4, where we choose a reference frame in which the angle of one of the inner-ring vortices is fixed. We observe that the outer ring vortices move smoothly relative to the inner ring until a permuted copy of the starting solution is reached. The permutations are shown in the final panel of the figure: the vortex labels of the inner ring are rotated through one 'click' while the outer ring vortices are permuted by two 'clicks'. Note that the vortices on the inner ring are slightly perturbed as we move through the family. If the arclength process is continued through many cycles, the outer ring of vortices is observed to move through a smoothed approximation to a pentagon - matching the number of vortices on the inner ring - which is shown with the thick grey line in the bottom left panel of figure 4. We searched our library of solutions for other examples of continuous families of crystals, and all configurations exhibiting this behaviour are summarised in figure 5. The states are always 'double ring' configurations, and there may or may not be a central vortex. In all cases, following the continuous valley of solutions results in the outer ring moving relative to the inner on a smooth approximation to a geometric shape specified by the number of vortices on the inner ring. For example, if there are six vortices on the inner ring, the outer vortices approximately move around a hexagon.

Figure 3: The four energy minimising states for \(N=\{10,12,14,16,18,20,30\}\) returned from the random search method detailed in §II.2 (with additional states found at \(N=30\) via the targeted search described in §II.3). The solutions plotted in red denote members of continuous solution families (see §III.2). The states are labelled by their number of vortices, with a subscript denoting their ordering by \(\Delta f\), while superscript ‘c’ is used to identify centres. Solutions \(10_{3}\), \(14_{2}\), \(18_{3}\), \(20_{3}\), \(20_{4}\) and _all_ crystals at \(N=30\) shown here are new (not listed in Campbell and Ziff [7]), including a new global minimiser \(30^{c}_{1}\).
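The continuation loop described above can be summarised schematically as follows. This is a sketch only: `newton_converge`, `pullback` and `neutral_eigenvector` are hypothetical helper names standing in for the Newton-GMRES-Hookstep solver, the slice reduction of appendix B.1 and an eigensolver for the non-rotational neutral direction, none of which are spelled out in the text.

```python
# Schematic arclength continuation through a solution valley; helper names are
# hypothetical stand-ins (see lead-in).  eps matches the value quoted in the text.
def continue_family(x_star, eps=5e-4, n_steps=1000):
    states, s = [pullback(newton_converge(x_star))], [0.0]
    for _ in range(n_steps):  # in practice: stop at a symmetric copy of states[0]
        v = neutral_eigenvector(states[-1])            # non-rotational neutral direction
        x_new = pullback(newton_converge(states[-1] + eps * v))
        s.append(s[-1] + float(jnp.linalg.norm(x_new - states[-1])))  # arclength s_n
        states.append(x_new)
    return states, s
```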
The continuous families of solutions identified for certain double ring configurations exist for the unbounded problem (no images). It is natural to query what happens when the images are included, particularly at lower rotation rates, where the proximity of the image vortices will have a non-negligible impact on the problem. To examine the importance of the images we apply the random search algorithm from §II.2 to target strict equilibria in the non-inertial frame. As the vortices are now constrained to a unit radius disk, the guesses are initialised from a uniform distribution \(x_{\alpha},y_{\alpha}\sim U[-0.7,0.7]\). As \(\omega\) is increased, the vortices in a free-energy minimising equilibrium are brought closer to the axis of rotation so that the induced velocities are sufficiently high to match the imposed rotation rate. To compensate for these larger induced velocities, and to be consistent with our scaling in the unbounded domain, we set the integration time \(T=2\pi/\omega\). We also fixed the annealing parameter \(\kappa=1\) as we were only interested in computing the energy minimiser in the rotating disk domain. Otherwise, the random search algorithm and optimisation parameters are unchanged from §II.2.

Figure 4: Continuous family of vortex crystals at \(N=16\) (the free-energy minimiser). (Top left) The two neutral eigenvectors overlaid on one state from the solution family. Red arrows correspond to the continuous rotational symmetry while the blue arrows correspond to the new non-trivial direction. (Top right) Sum of the difference of intervortex distances between a crystal in the family and a starting state as a function of arclength, \(s\). The valley has a periodicity of \(\Delta s\approx 0.5\) and we start from a mirror-symmetric state (see red markers in bottom panels). (Bottom left) The movement of the vortices along one cycle (\(\Delta s\approx 0.5\)) through the family is shown in colour. For ease of visualisation we fix the location of one of the inner ring vortices as we move through the family. Note the small motion of the inner ring highlighted in the inset. The grey outline in the background shows a continuation of the family over many cycles, and traces out a pentagon shape to match the vortices on the inner ring. (Bottom right) Arrows show the permutation of vortices that has occurred after one cycle.

Figure 5: Continuous energy minimising valleys for \(N\in\{16,17,18,20,21\}\). These were the only states exhibiting this behaviour in our solution library. Here we show in red a mirror symmetric state in the family, along with the states reached via arclength continuation in grey. Note the tendency of the outer rings to approximately follow a geometric shape set by the number of vortices on the inner ring. In all examples we use a reference frame in which the location of one of the inner ring vortices is fixed.

In the full problem with images, we find that the continuous families are replaced by a discrete set of states with equal free energy. For example at \(N=16\) with \(\omega_{CZ}=20\) (the critical rotation rate satisfies \(19<\omega_{CZ}^{c}<20\) [7]) we converged 64 energy minimising solutions from 1000 samples, of which 9 were unique solutions. Increasing slightly to \(\omega_{CZ}=22\), we converged 36 energy minimising equilibria from the same number of samples, of which 12 were unique solutions.
The very low rate of _unique_ convergences suggests that we may have found all unique equilibria at these rotation rates. At a much higher value \(\omega_{CZ}=40\), the images have a much smaller impact on the problem and we find a very high rate of unique convergences, generating 2020 energy minimising equilibria from 4250 samples, of which 1128 are unique solutions. The states found at \(\omega_{CZ}=20\) and \(\omega_{CZ}=40\) are summarised in figure 6. At the lower rotation rate the outer ring vortices for the various configurations appear to be approximately uniformly spaced and arranged in a series of five 'bands' while some locations (relative to the inner ring) are completely inaccessible. At the higher rotation rate we obtain a very close approximation to the continuous family reported in figure 4.

## IV Energy-minimising pathways between relative equilibria

In a number of cases we have found that the free-energy minimising solution could actually be any one of a continuous family of states - e.g. this is true for all \(N\) considered in figure 5 apart from the 17 vortex crystal. Experimental realisations of these configurations [4; 5; 6] typically show a sequence of unstable states en route to the minimising solution, or even transient behaviour at long time [6]. It is natural to question whether there is a 'preferred' state within the continuous family which is more likely to be observed in an experiment. More generally, it would be useful to map out and visualise the free-energy landscape around local minima, as global minima with a particularly shallow free-energy barrier may not be robust to small perturbations to the system, while a local minimum with high barrier energy might be observed in practice. To explore the free-energy landscape around a particular vortex crystal, we adapt a form of the doubly nudged elastic band (DNEB) method [39] to find non-dynamical, free-energy minimising pathways between REQ. This method has been shown to work successfully in multiple settings, such as structural self assembly [40] and rearrangements of the Lennard-Jones cluster [39]. The DNEB methodology should be contrasted with the simple approach adopted by Campbell and Ziff [7], who searched for minimising pathways with the constraint that the pathway from one state to the other coincides with a monotonic outward radial motion of one vortex from an inner ring to an outer ring, which we will see is often not the case. To approximate the energy-minimising pathway between two REQs, \(\mathbf{x}_{a}^{*}\) and \(\mathbf{x}_{b}^{*}\), the DNEB method seeks an energy-minimising chain of \(N_{I}\) intermediate states \(\{\mathbf{x}_{i}\}_{i=1,\ldots,N_{I}}\). For the point vortex system in the unbounded domain, we define a 'potential energy' as the sum of the free energies along the chain: \[V:=\sum_{i=1}^{N_{I}}\mathcal{F}(\mathbf{x}_{i}). \tag{14}\] Minimising \(V\) alone would likely result in states 'sliding down' to one of the two minima [39], and to counter this effect we add a fictitious elastic band potential which inserts high-dimensional springs between adjacent states: \[\tilde{V}:=\frac{1}{2}k\sum_{i=1}^{N_{I}+1}\mathcal{L}_{G}(\mathbf{x}_{i},\mathbf{x}_{i-1}), \tag{15}\] where \(\mathbf{x}_{a}^{*}\equiv\mathbf{x}_{0}\) and \(\mathbf{x}_{b}^{*}\equiv\mathbf{x}_{N_{I}+1}\).
For standard Hookean springs, we would write \(\mathcal{L}_{G}(\mathbf{x}_{i},\mathbf{x}_{i-1})=\|\mathbf{x}_{i}-\mathbf{x}_{i-1}\|^{2}\); in the vortex pathway problem it is necessary to modify this term to account for both the rotational and permutation symmetries in the system, which we describe below. Energy minimising pathways are then determined by minimising an overall loss function \[\mathcal{L}_{P}:=V+\tilde{V}. \tag{16}\]

Figure 6: Examples of the discrete, unique energy minimising equilibria at \(\omega_{CZ}=20\) (left) and \(\omega_{CZ}=40\) (right). The contraction of the vortices towards the axis of rotation for the higher rate of rotation is evident when comparing the two plots. All 9 unique equilibria at \(\omega_{CZ}=20\) are shown in various colours. These 9 equilibria are arranged in 5 ‘bands’; any equilibria in the same band are very close to overlapping. All 1128 unique equilibria computed at \(\omega_{CZ}=40\) are shown in grey, illustrating the approach to the unbounded limit as \(\omega_{CZ}\) is increased.

To account for the permutation symmetry we define a permutation-invariant observable by centring a two-dimensional Gaussian field over each vortex. For an \(N\)-vortex configuration \(\mathbf{x}^{*}\), this observable is defined as \[G(x,y;\mathbf{x}^{*})=\frac{1}{2\pi\sigma^{2}}\sum_{\alpha=1}^{N}\exp\left(\frac{-(x-x_{\alpha}^{*})^{2}-(y-y_{\alpha}^{*})^{2}}{2\sigma^{2}}\right). \tag{17}\] The hyperparameter \(\sigma\) is set as \(\sigma=10^{-2}A/N\), where \(A=L^{2}\) is the area of the computational grid on which \(G\) is evaluated; typically we set \(L=1.25\max|\mathbf{x}^{*}|\) and evaluate \(G\) on a square grid with \(N_{x}=N_{y}=64\) grid points. An example of observable (17) is reported in figure 7 for the \(N=16\) system. The vortex configuration is scaled so that it rotates at \(\omega=\pi/2T\) with \(T=10\) to match our convention from §II.2. The Gaussian observable is then used to define the spring term appearing in equation (16): \[\begin{split}\mathcal{L}_{G}&(\theta,\mathbf{x}_{i},\mathbf{x}_{i-1}):=\\ &\frac{1}{L_{x}L_{y}}\iint\left(G(x,y;\mathbf{x}_{i})-G(x,y;\mathscr{E}^{\theta}\mathbf{x}_{i-1})\right)^{2}dxdy,\end{split} \tag{18}\] where the state \(\mathbf{x}_{i-1}\) has been rotated through some angle \(\theta\). The angle used between each pair of states on the chain is determined simply via \[\theta_{i}^{*}=\operatorname*{arg\,min}_{\theta}\mathcal{L}_{G}(\theta,\mathbf{x}_{i},\mathbf{x}_{i-1}). \tag{19}\] Putting this all together, the overall loss function (16) for finding minimum energy pathways between states reads \[\mathcal{L}_{P}(\{\mathbf{x}_{i}\}_{i=1}^{N_{I}},\{\theta_{i}^{*}\}_{i=1}^{N_{I}})=\sum_{i=1}^{N_{I}}\mathcal{F}(\mathbf{x}_{i})+\frac{1}{2}k\sum_{i=1}^{N_{I}+1}\mathcal{L}_{G}(\theta_{i}^{*},\mathbf{x}_{i},\mathbf{x}_{i-1}). \tag{20}\] The two states \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) on either side of the chain are kept fixed throughout the optimisation process. We solve the outer optimisation problem (20) using an AdaGrad optimiser with a fairly aggressive initial learning rate \(\eta=0.5\), and classify our chain as converged when the relative difference between sequential iterations of the optimiser is less than \(10^{-6}\). At each outer optimisation step we must also solve the inner optimisation problem (19) for the crystal angles, which is done in \(O(10)\) steps with an Adam optimiser (learning rate \(\eta=10^{-2}\)).
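The observable (17) and the angle-matched spring term (18) translate directly into JAX. The sketch below is our illustrative reconstruction; the grid extent convention \([-L,L]\) is an assumption, and the integral in (18) is approximated by a grid mean.

```python
def gaussian_field(z, L, n_grid=64, sigma=None):
    # Equation (17): permutation-invariant Gaussian smearing of configuration z (N, 2).
    if sigma is None:
        sigma = 1e-2 * L ** 2 / z.shape[0]    # sigma = 1e-2 * A / N with A = L^2
    xs = jnp.linspace(-L, L, n_grid)          # assumed grid extent
    X, Y = jnp.meshgrid(xs, xs)
    r2 = (X[None] - z[:, 0, None, None]) ** 2 + (Y[None] - z[:, 1, None, None]) ** 2
    return jnp.sum(jnp.exp(-r2 / (2 * sigma ** 2)), axis=0) / (2 * jnp.pi * sigma ** 2)

def rotate(z, theta):
    c, s = jnp.cos(theta), jnp.sin(theta)
    return z @ jnp.array([[c, s], [-s, c]])   # counter-clockwise rotation of each vortex

def spring_term(theta, z_i, z_im1, L):
    # Equation (18), with the domain integral replaced by a mean over the grid.
    diff = gaussian_field(z_i, L) - gaussian_field(rotate(z_im1, theta), L)
    return jnp.mean(diff ** 2)

# Equation (19): the optimal angle theta_i* follows from a few gradient steps on
# jax.grad(spring_term)(theta, z_i, z_im1, L), as described in the text.
```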
Throughout we set the spring constant to \(k=10^{-2}\) - larger values caused the intermediate states close to the REQs to clump together - and the number of interpolating states to \(N_{I}=200\). Further details of the optimisation, particularly how gradients are used in the DNEB method, are provided in appendix B.2. To initialise the chain we linearly interpolate between the two vortex crystals \(\mathbf{x}_{a}^{*}\) and \(\mathbf{x}_{b}^{*}\), where we rotate \(\mathbf{x}_{b}^{*}\) such that it maximally overlaps with \(\mathbf{x}_{a}^{*}\). This is done as a pre-processing step rather than as part of the chain optimisation, similar to the approach in Goodrich _et al._ [40]. We now apply the methodology outlined above to find minimising pathways into REQ \(16^{c}_{1}\) in the \(N=16\) case, looking for pathways from the two next lowest energy centres (see figure 3). These pathways are reported in figure 8. This case was also considered in Campbell and Ziff [7] but their pathways featured discontinuities in the free energy due to the constraint on how the optimisation was performed, i.e. radial motion of a vortex between rings. No such discontinuities are found here, and the smooth routes from the nearby local minima \(16^{c}_{2}\) and \(16^{c}_{4}\) both pass through unstable REQ (these states on the chain were converged in a couple of Newton steps). The barrier energy from \(16^{c}_{2}\) is considerably higher than that from \(16^{c}_{4}\) (note the correspondence of the pattern \(16^{c}_{2}\) with that seen in self assembly experiments of Grzybowski, Stone, and Whitesides [6]). The state at \(16^{c}_{1}\) is the simplest example of a degenerate continuous family of solutions discussed in §III.2. While one example from the family is considered in figure 8, we can use our methodology to determine whether there is a preferred state when computing energy minimising chains from nearby local minima. To do this, we initialise 50 states uniformly along one period of the valley of solutions (see discussion in §III.2) and compute pathways from the local minimum \(16_{4}\) to these 50 states. The 50 'final' states from the solution valley are shown in grey in figure 9. We then assess whether the pathway has entered the solution valley before continuing along the constant free-energy surface to the final state. We determine an 'entry point' by selecting the last interpolation state on the pathway which is within a relative difference of \(10^{-5}\) of \(\Delta f(16^{c}_{1})\), using the Newton solver to then converge this state to a REQ. The 50 entry points determined this way are shown in colour for all pathways considered in figure 9, the tight clustering of points clearly indicating a preferred state. If no preferred entry state existed, we would expect the coloured entry states to be spread evenly along the grey final states.

Figure 7: Permutation-invariant Gaussian smearing of the energy minimiser at \(N=16\), scaled to rotate at \(\omega=\pi/2T\) with \(T=10\).
For example, we consider a pathway between centres in the \(N=30\) system in figure 10, which as expected passes through an unstable saddle before a very slow descent into \(30^{\circ}_{1}\). We can adapt the methodology here to explore the dynamics connected to the weakly unstable saddles that deform the free-energy landscape around the local minimiser; modifying the Gaussian-based loss (16) to find homoclinic orbits, exploiting the ability of the AD implementation to efficiently differentiate through trajectories. ## V Homoclinic connections Unlike convergence of many simple invariant solutions (equilibria, REQ, periodic orbits) there is no generally adopted methodology for computing heteroclinic or homoclinic orbits. So-called shooting methods have found some connections in plane Couette flow [41], while a recently-proposed variational method has been used to compute connections for the one-dimensional Kuramoto-Sivashinsky equation [42]. Here, we outline an optimisation method exploiting a fully differentiable solver. ### Candidate orbit generation Generally speaking, we find only a single REQ at any specific energy level (see table 2 in appendix A), hence we restrict our search to homoclinic orbits. To generate candidate trajectories we perform a simple recurrent flow analysis [20] to identify returns to a starting equilibrium \(\mathbf{x}^{*}\). Given the large number of vortices we anticipate that connections are likely to be between permuted copies of \(\mathbf{x}^{*}\), hence we use the permutation invariant observable (equation 17) and modify the associated loss so that \[R(\mathbf{x}_{0},t):=\min_{\theta}\mathcal{L}_{G}(\theta,\mathbf{f}^{t}( \mathbf{x}_{0}),\mathbf{x}^{*}) \tag{21}\] is our measure of similarity used to flag possible connecting orbits. We seed the recurrent flow analysis calculations with initial conditions \(\mathbf{x}_{0}=\mathbf{x}^{*}+\varepsilon\mathbf{x}^{\prime}\), where \(\mathbf{x}^{\prime}\in E^{u}(\mathbf{x}^{*})\) (the unstable subspace of the REQ being considered) and we write \[\mathbf{x}^{\prime}(\mathbf{c})=\sum_{i=1}^{N_{U}}c_{i}\mathbf{v}_{i}, \tag{22}\] where \(\{\mathbf{v}_{i}\}\) are unstable eigenvectors of \(\mathbf{x}^{*}\), before normalising \(|\mathbf{x}^{\prime}|=1\). We set the constant \(\varepsilon=10^{-5}\). A coarse initial search with the constants \(c_{i}\in\{0,\pm 1\}\) is performed, and we use the same discretisation grid for \(G\) as described in SSIV. We searched over a time horizon \(t\in[0,1000]\), computing \(R(\mathbf{x}_{0},t)\) every time unit. Cases where \(R\leq 0.1\) are saved as candidates for connecting orbits. Figure 8: Energy minimising pathways from local minima in the \(N=16\) system into the global minimum. Two pathways are shown here on the left and right of the figure respectively, where the the centres correspond to solutions reported in figure 3. Both pathways move through a saddle point which is also a state identified in the random search (see details of these solutions in appendix A). ### Convergence Suitable guesses are converged using gradient-based optimisation. First, a modified Jonker-Volgenant variant[43] of the Hungarian algorithm[44] is used to compute vortex permutations at the end of the connecting orbit. The Hungarian algorithm is a combinatorial optimisation algorithm that solves the linear assignment problem in \(O(N^{3})\). 
### Convergence

Suitable guesses are converged using gradient-based optimisation. First, a modified Jonker-Volgenant variant [43] of the Hungarian algorithm [44] is used to compute vortex permutations at the end of the connecting orbit. The Hungarian algorithm is a combinatorial optimisation algorithm that solves the linear assignment problem in \(O(N^{3})\). The linear assignment problem in this setting is \[\tilde{\mathbf{P}}=\operatorname*{arg\,min}_{\mathbf{P}}\operatorname{trace}(\mathbf{PC}), \tag{23}\] where \(\mathbf{P}\) is a permutation matrix which permutes the rows of a matrix \(\mathbf{C}\) whose elements \(C_{ij}\) are the Euclidean distances between the \(i^{th}\) vortex of the initial REQ state \(\mathbf{x}^{*}\) and the \(j^{th}\) vortex in the final state on the connecting orbit candidate \(\mathbf{f}^{\mathsf{t}}(\mathbf{x}_{0})\). Then, for the suitable candidates, \(\mathbf{f}^{\mathsf{t}}(\mathbf{x}_{0})\approx\tilde{\mathbf{P}}\mathbf{x}^{*}\).
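scipy's linear-sum-assignment routine implements a modified Jonker-Volgenant algorithm, so the permutation-matching step (23) can be sketched as follows (our illustrative reconstruction):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_permutation(x_star, x_final):
    # C[i, j]: distance between vortex i of the REQ x_star and vortex j of the
    # candidate's final state; linear_sum_assignment solves the assignment
    # problem in O(N^3), as in equation (23).
    C = np.linalg.norm(np.asarray(x_star)[:, None, :] - np.asarray(x_final)[None, :, :],
                       axis=-1)
    row, col = linear_sum_assignment(C)    # optimal pairing (i = row[k], j = col[k])
    perm = np.empty_like(col)
    perm[col] = row                        # vortex j of x_final pairs with x_star[perm[j]]
    return perm                            # so that x_final is approximately x_star[perm]
```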
Once we have identified a candidate connection and determined the required vortex permutations, the second stage is to converge the connecting orbit. As the permutation symmetry is now respected, we minimise the Euclidean loss function: \[\mathcal{L}_{E}^{t}\left(\mathbf{c},\theta\right):=\left\|\mathscr{E}^{\theta}\mathbf{f}^{\mathsf{t}}(\mathbf{x}_{0}(\mathbf{c}))-\tilde{\mathbf{P}}\mathbf{x}^{*}\right\|_{2}^{2}. \tag{24}\] The idea here is that we must remain in the unstable subspace of \(\mathbf{x}^{*}\) by fixing the perturbation size \(\varepsilon\), but the specific direction is unknown. We minimise (24) at fixed integration time \(t\) (discussed further below) using an Adam optimiser with an initial learning rate of \(\eta_{\mathrm{c}}=10^{-3}\) to determine the vector of constants \(\mathbf{c}\), and an Adam optimiser with an initial learning rate of \(\eta_{\theta}=10^{-2}\) to determine the optimal \(\theta\). The initial integration time \(t\) is set by the criterion \(R\leq 0.1\) from the candidate orbit generation step. We then carefully increase \(t\) throughout the convergence process. Each time \(t\) is increased, the connecting orbit is converged to a smaller tolerance level. In practice, we increase the integration time by \(1\%\) a total of 10 times, while logarithmically decreasing the tolerance from \(10^{-3}\) to \(10^{-5}\). We allow a maximum number of 1000 optimiser steps every time we converge the connection to a smaller tolerance. When \(\mathcal{L}_{E}^{t}<10^{-5}\), we consider the connection converged, which we confirm by verifying that the dynamics are linear in the vicinity of \(\mathbf{x}^{*}\). As a proof of concept, we successfully computed the only connection in the integrable 3-vortex system [45], which is a homoclinic connection of the collinear state to a permuted copy of itself. We then found a homoclinic connection in the non-integrable 4-vortex system, also between two permuted collinear states (with a two-dimensional unstable subspace). This connection is reported in figure 11, where we show both the initial evolution of \(\mathcal{L}_{G}\) that flagged the guess and the 'converged' \(\mathcal{L}_{G}\). The vortex evolution is shown in a co-rotating frame, the two (initially) inner-most vortices moving outwards along straight lines as the two outer vortices loop back to replace them.

Figure 10: Energy minimising pathway at \(N=30\) from the next lowest energy centre REQ \(30^{c}_{5}\) to the minimiser \(30^{c}_{1}\). A saddle point from our library of solutions is again found along the pathway (see details of these solutions in appendix A).

Figure 9: Common entry point (colours) into the continuous family of solutions (grey), defined via a tolerance of \(10^{-5}\) on the relative difference between \(\Delta f\) of the entry point and \(\Delta f(16^{c}_{1})\). The grey crosses identify 50 equally-spaced equilibria from within the continuous family of solutions that defines REQ \(16^{c}_{1}\). An energy-minimising pathway was computed from REQ \(16_{4}\) to each. The non-dynamical pathway always approaches the configuration identified in red (which is a mirror symmetric state) before moving along the valley to the respective final state in grey. The clustering of the entry points for each pathway (coloured from blue to red) is so tight that only the orange- and red-coloured states are visible.

Many of the unstable saddles reported in §II.2 (see also appendix A) have low dimensional unstable manifolds, and often feature along free-energy minimising pathways between minima, as observed in §IV. The method outlined above for homoclinic orbits gives us a further tool with which to explore the range of dynamical events we may expect to see in a dissipative relaxation onto the free-energy minimising state. We report one such example in figure 12, where we find a homoclinic orbit for a weakly unstable saddle (REQ \(30_{4}\)) in the \(N=30\) system, with a 2-dimensional unstable subspace.

## VI Conclusion

In this paper we have introduced a new computational approach to efficiently identify large numbers of relative equilibria in point vortex dynamics, focusing on free-energy minimising states in a rotating cylinder. Using a combination of modern optimisation techniques with a fully differentiable solver, we were able to assemble tens of thousands of low-energy vortex crystals for a wide range of vortex numbers \(N\), with some evidence that we may have found _all_ low-energy states for lower values \(N\lesssim 20\). As part of the search for energy minimisers we discovered a new set of double-ringed continuous families of vortex crystals in unbounded domains, in which the outer ring can be smoothly deformed relative to the inner. These crystals are often the energy minimiser when they occur. The introduction of a solid wall breaks up the continuum of solutions into a set of discrete states with identical free energies, though the number of such states increases dramatically as the disk rotation rate is increased and the role of the image vortices diminishes. Finally, we developed computational methods to explore the free-energy landscape in the vicinity of the global minimiser, finding both non-dynamical, minimum-energy pathways and dynamical homoclinic orbits for unstable saddles. When the energy minimiser is a continuous family of states, the non-dynamical pathways from nearby local minima indicate a common entry point into the family, and are suggestive of a preferred state experimentally. Future work could exploit the differentiability of the solver here to directly connect simple invariant solutions for point vortices to more complex data, e.g. to assess the role of vortex crystals in the decay of two-dimensional turbulence [46], or adapt the connecting orbit search to more complex dissipative systems like the Navier-Stokes equations.

Figure 11: (Top) The evolution of \(\min_{\theta}\mathcal{L}_{\mathcal{G}}\) over approximately 23 time units for the 4 vortex system, for both the candidate connection (blue) and the converged homoclinic connection (red). (Bottom) Snapshots at various stages of the converged homoclinic connection in a co-rotating frame. The result of the connection is that the inner vortices are swapped with the outer vortices.
Figure 12: A homoclinic connection for REQ \(30_{4}\) is achieved via a permutation of vortices, and is shown here in a rotational symmetry reduced frame. The connection evolves (grey outline) from REQ \(30_{4}\) (red crosses) to its permuted copy (blue crosses).

## Appendix A Vortex crystal details

In table 2 we report details of the 10 lowest energy states converged for all \(N\) considered (not including the \(N=7\) test case or the \(N=50\) calculations). Eigenvalues are normalised by the rotation rate of the configuration. In order to provide a useful means of describing and identifying the patterns in table 2, ring numbers are given for each state. We follow the convention in Campbell and Ziff [7] that two vortices are said to fall in the same ring if their radii agree within 2%.

## Appendix B Further formulation details

### Pullback

The method of moving frames, or the pullback method, is a robust way to reduce a continuous symmetry in a dynamical system [37], which was formalised by Cartan [47]. The pullback operator for states \(\mathbf{x}\in\mathcal{M}\), \[\mathcal{P}:\mathcal{M}\rightarrow\bar{\mathcal{M}}, \tag{10}\] sends every point on our manifold of possible point vortex configurations \(\mathcal{M}\) to a representative point on a slice of the manifold \(\bar{\mathcal{M}}\subset\mathcal{M}\). The \(N\) vortex system has a \(2N\) dimensional manifold and a continuous symmetry group \(G\) (rotation). The slice is then a \(2N-1\) dimensional submanifold which intersects the group orbits of \(G\) transversally and at most once. The slice is defined as the hyperplane normal to the tangent of \(G\) at the slice fixing point \(\bar{\mathbf{x}}^{\prime}\): \[(\bar{\mathbf{x}}-\bar{\mathbf{x}}^{\prime})\cdot\boldsymbol{\tau}^{\prime}=0. \tag{11}\] The group tangent at \(\bar{\mathbf{x}}^{\prime}\) is defined as \(\boldsymbol{\tau}^{\prime}=\mathbf{T}\bar{\mathbf{x}}^{\prime}\), where \(\mathbf{T}\) is the generator of infinitesimal rotations for our system, such that \(\mathbf{G}(\theta)=\exp\left(\theta\mathbf{T}\right)\) is an element of \(G\). \(\mathbf{T}\) is the \(2N\times 2N\) matrix \[\mathbf{T}=\begin{bmatrix}0&-1&0&0&\dots&0&0\\ 1&0&0&0&\dots&0&0\\ 0&0&0&-1&0&\dots&0\\ 0&0&1&0&0&\dots&0\\ &&&\vdots&&&\\ 0&&\dots&&0&-1\\ 0&&\dots&&1&0\end{bmatrix},\] and the state vector \(\mathbf{x}\) is the \(2N\) dimensional vector \((x_{1},y_{1},x_{2},y_{2},\dots,x_{N},y_{N})\). Expanding the exponential of the above generator, one obtains the rotation matrix which rotates each vortex \((x_{i},y_{i})\) counter-clockwise by \(\theta\): \[\mathbf{G}(\theta)=\begin{bmatrix}\cos\theta&-\sin\theta&0&0&\dots&0&0\\ \sin\theta&\cos\theta&0&0&\dots&0&0\\ 0&0&\cos\theta&-\sin\theta&0&\dots&0\\ 0&0&\sin\theta&\cos\theta&0&\dots&0\\ &&&\vdots&&&\\ 0&&&\dots&\cos\theta&-\sin\theta\\ 0&&&\dots&\sin\theta&\cos\theta\end{bmatrix}.\] The pullback operator \(\mathcal{P}\) is defined by a group element \(\mathbf{G}(\theta)\), and it remains to find an appropriate expression for \(\theta(\mathbf{x})\). The slice condition in equation (11) can be simplified to \[(\mathbf{G}(\theta)\mathbf{x})^{T}\mathbf{T}\bar{\mathbf{x}}^{\prime}=0, \tag{12}\] as \(\mathbf{T}^{T}=-\mathbf{T}\) is antisymmetric so that \(\bar{\mathbf{x}}^{\prime T}\mathbf{T}\bar{\mathbf{x}}^{\prime}=0\). We now use equation (12) to find an expression for \(\theta(\mathbf{x})\).
Without loss of generality we pick the slice fixing point \(\bar{\mathbf{x}}^{\prime}=(0,\bar{y}^{\prime}_{1},\bar{x}^{\prime}_{2},\bar{y}^{\prime}_{2},\dots,\bar{x}^{\prime}_{N},\bar{y}^{\prime}_{N})\), and in practice, the simplest slice fixing point is \(\bar{\mathbf{x}}^{\prime}=(0,1,0,0,\dots,0,0)\). Then the expression for \(\theta\) follows from (12) as \(\arctan\left(x_{1}/y_{1}\right)\) (respecting the quadrants in the \((x_{1},y_{1})\) plane). This pullback operation rotates a system so that the vortex at index 1 has the same \(x\)-component as the centre of vorticity of the system.

### DNEB method

Gradients of the loss in (20) are 'nudged' to counteract two common issues: 'sliding-down' and 'corner-cutting'. 'Sliding-down' occurs when the intermediate points move towards the minima. The nudging approach counteracts this by only considering the portion of the gradient of \(V\) orthogonal to the direction of the pathway. 'Corner-cutting' occurs when the path is highly curved, and the spring potential causes the intermediate states to cut out this highly curved bend. The nudging approach counteracts this by only considering the portion of the gradient of \(\tilde{V}\) parallel to the direction of the pathway. The discussion here follows that given in Trygubenko and Wales [39] and the supplementary information for Goodrich _et al._ [40]. Formalising this, let \(\hat{\boldsymbol{\tau}}_{i}\) be the unit tangent vector to the chain at \(\mathbf{x}_{i}\): \[\hat{\boldsymbol{\tau}}_{i}=\begin{cases}\dfrac{(j-i)(\mathbf{x}_{j}-\mathbf{x}_{i})}{\|\mathbf{x}_{j}-\mathbf{x}_{i}\|}&\text{if exactly one of the neighbours $j$ satisfies }\mathcal{F}(\mathbf{x}_{j})>\mathcal{F}(\mathbf{x}_{i})\\ \dfrac{\mathbf{x}_{i+1}-\mathbf{x}_{i-1}}{\|\mathbf{x}_{i+1}-\mathbf{x}_{i-1}\|}&\text{otherwise (i.e. $\mathbf{x}_{i}$ is a local extremum)}\end{cases}.
\tag{13}\] \begin{table} \begin{tabular}{c c c c c c|c c c c c} \hline \hline N & \(\Delta f\) & Rings & \(N_{U}\) & \(\lambda_{u}\) & \(\sum\lambda_{u}\) & N & \(\Delta f\) & Rings & \(N_{U}\) & \(\lambda_{u}\) & \(\sum\lambda_{u}\) \\ \hline \(10_{1}\) & 0.224329 & (2, 4, 4) & - & - & - & \(16_{1}\) & 0.219593 & (5, 11) & - & - & - \\ \(10_{2}\) & 0.22578 & (2, 2, 4, 2) & 1 & 0.00456 & 0.00456 & \(16_{2}\) & 0.364912 & (1, 5, 10) & - & - & - \\ \(10_{3}\) & 0.383211 & (1, 2, 2, 5) & 1 & 0.00521 & 0.00521 & \(16_{3}\) & 0.386468 & (1, 5, 5, 5) & 1 & 0.00696 & 0.00696 \\ \(10_{4}\) & 0.417052 & (1, 9) & - & - & - & \(16_{4}\) & 0.398422 & (4, 4, 8) & - & - & - \\ \(10_{5}\) & 0.418514 & (1, 1, 2, 2, 2, 2) & 1 & 0.00798 & 0.00798 & \(16_{5}\) & 0.399251 & (1, 2, 2, 1, 10) & 1 & 0.01111 & 0.01111 \\ \(10_{6}\) & 0.459785 & (1, 2, 2, 2, 3) & 2 & 0.01592 & 0.02423 & \(16_{6}\) & 0.402959 & (2, 2, 1, 3, 6, 2) & 1 & 0.00937 & 0.00937 \\ \(10_{7}\) & 0.461326 & (1, 2, 2, 1, 4) & 2 & 0.01702 & 0.02828 & \(16_{7}\) & 0.405144 & (4, 8, 4) & 1 & 0.00621 & 0.00621 \\ \(10_{8}\) & 0.495497 & (1, 2, 1, 2, 4) & 2 & 0.01530 & 0.02812 & \(16_{8}\) & 0.407486 & (1, 1, 2, 1, 2, 2, 7) & 1 & 0.00527 & 0.00527 \\ \(10_{9}\) & 0.648659 & (1, 2, 2, 1, 2, 2) & 2 & 0.01962 & 0.03688 & \(16_{9}\) & 0.409302 & (1, 2, 1, 2, 1, 2, 7) & 1 & 0.00741 & 0.00741 \\ \(10_{10}\) & 0.77648 & (4, 6) & 3 & 0.02101 & 0.04161 & \(16_{10}\) & 0.412404 & (1, 1, 2, 2, 1, 2, 7) & 1 & 0.00585 & 0.00585 \\ \(11_{1}\) & 0.24918 & (3, 3, 2, 3) & - & - & - & \(17_{1}\) & 0.283943 & (1, 5, 11) & - & - & - \\ \(11_{2}\) & 0.249185 & (3, 2, 2, 4) & 1 & 0.00048 & 0.00048 & \(17_{2}\) & 0.283943 & (1, 5, 11) & 1 & 0.00003 & 0.00003 \\ \(11_{3}\) & 0.288585 & (2, 3, 2, 4) & - & - & - & \(17_{3}\) & 0.294121 & (5, 12) & - & - & - \\ \(11_{4}\) & 0.288585 & (2, 2, 2, 2, 3) & 1 & 0.00012 & 0.00012 & \(17_{4}\) & 0.34841 & (2, 1, 2, 1, 7, 4) & - & - \\ \(11_{5}\) & 0.323309 & (2, 1, 2, 2, 2, 2) & 1 & 0.01219 & 0.01219 & \(17_{5}\) & 0.349039 & (1, 2, 1, 2, 8, 3) & 1 & 0.00140 & 0.00140 \\ \(11_{6}\) & 0.567528 & (1, 2, 1, 7) & 2 & 0.01087 & 0.01326 & \(17_{6}\) & 0.355255 & (6, 11) & 1 & 0.00732 & 0.00732 \\ \(11_{7}\) & 0.577026 & (1, 1, 2, 2, 1, 2, 2) & 2 & 0.01869 & 0.03243 & \(17_{7}\) & 0.355255 & (6, 11) & 2 & 0.00732 & 0.00733 \\ \(11_{8}\) & 0.58481 & (1, 2, 4, 4) & 2 & 0.01317 & 0.02477 & \(17_{8}\) & 0.357567 & (1, 2, 1, 1, 1, 10) & 1 & 0.00837 & 0.00837 \\ \(11_{9}\) & 0.590779 & (2, 2, 1, 6) & 2 & 0.01838 & 0.03037 & \(17_{9}\) & 0.358299 & (1, 2, 1, 2, 1, 8, 2) & 2 & 0.00858 & 0.01160 \\ \(11_{10}\) & 0.624168 & (1, 3, 2, 5) & 2 & 0.01499 & 0.02894 & \(17_{10}\) & 0.363937 & (1, 3, 2, 2, 7, 2) & 2 & 0.01043 & 0.01635 \\ \(12_{12}\) & 0.193438 & (3, 3, 6) & - & - & - & \(18_{1}\) & 0.252412 & (1, 6, 11) & - & - & - \\ \(12_{2}\) & 0.198368 & (3, 6, 3) & 1 & 0.00585 & 0.00585 & \(1_{8}\) & 0.283157 & (1, 5, 12) & 1 & 0.00003 & 0.00003 \\ \(12_{3}\) & 0.350225 & (4, 8) & - & - & - & \(18_{3}\) & 0.328686 & (1, 3, 2, 1, 4, 7) & 1 & 0.01066 & 0.01066 \\ \(12_{4}\) & 0.357584 & (2, 1, 1, 2, 6) & 1 & 0.00877 & 0.00877 & \(18_{4}\) & 0.351105 & (3, 3, 9, 3) & - & - & - \\ \(12_{5}\) & 0.37359 & (4, 4, 4) & 1 & 0.00885 & 0.00885 & \(18_{5}\) & 0.352151 & (6, 6, 6) & 1 & 0.00481 & 0.00481 \\ \(12_{6}\) & 0.468791 & (2, 3, 7) & 2 & 0.00873 & 0.01596 & \(18_{6}\) & 0.356264 & (6, 12) & 2 & 0.00649 & 0.010105 \\ \(12_{7}\) & 0.47031 & (2, 4, 4, 2) & 2 & 0.00570 & 0.01122 & \(18_{7}\) & 0.36608 & (1, 2, 1, 2, 1, 9, 2) & 1 & 0.01061 & 0.01061 \\ 
\(12_{8}\) & 0.561931 & (2, 2, 2, 4, 2) & 2 & 0.01887 & 0.03314 & \(18_{8}\) & 0.3679 & (1, 3, 2, 2, 8, 2) & 2 & 0.00950 & 0.01319 \\ \(12_{9}\) & 0.629558 & & & & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Details of the 10 lowest free-energy states for each \(N\) considered: columns list the free-energy label \(\Delta f\), the ring structure, the number of unstable directions \(N_{U}\), the leading unstable eigenvalue \(\lambda_{u}\) and the sum of the unstable eigenvalues \(\sum\lambda_{u}\).

Let \(\mathbf{\nabla}_{i}\) be the gradient with respect to the state vector of \(\mathbf{x}_{i}\): \[\mathbf{g}_{i}=\mathbf{\nabla}_{i}V=\mathbf{g}_{i}^{\parallel}+\mathbf{g}_{i}^{\perp}, \tag{11}\] \[\tilde{\mathbf{g}}_{i}=\mathbf{\nabla}_{i}\tilde{V}=\tilde{\mathbf{g}}_{i}^{\parallel}+\tilde{\mathbf{g}}_{i}^{\perp}, \tag{12}\] where \[\mathbf{v}^{\parallel}=(\mathbf{v}\cdot\hat{\mathbf{\tau}}_{i})\,\hat{\mathbf{\tau}}_{i}, \tag{13}\] \[\mathbf{v}^{\perp}=\mathbf{v}-\mathbf{v}^{\parallel}. \tag{14}\] The nudged elastic band (NEB) method uses \[\mathbf{g}_{i}=\mathbf{g}_{i}^{\perp}+\tilde{\mathbf{g}}_{i}^{\parallel} \tag{15}\] to optimise \(\mathcal{L}_{P}\). The doubly-nudged elastic band (DNEB) method only projects out some of the \(\tilde{\mathbf{g}}_{i}^{\perp}\) term, so that \[\mathbf{g}_{i}=\mathbf{g}_{i}^{\perp}+\tilde{\mathbf{g}}_{i}^{\parallel}+(\tilde{\mathbf{g}}_{i}^{\perp}-(\tilde{\mathbf{g}}_{i}^{\perp}\cdot\hat{\mathbf{g}}_{i}^{\perp})\hat{\mathbf{g}}_{i}^{\perp}) \tag{16}\] is instead used to optimise \(\mathcal{L}_{P}\).

## Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

###### Acknowledgements.

This research has been supported by the UK Engineering and Physical Sciences Research Council through the MAC-MIGS Centre for Doctoral Training (EP/S023291/1). Computations were performed on the Cirrus UK National Tier-2 HPC Service at EPCC ([http://www.cirrus.ac.uk](http://www.cirrus.ac.uk)).
2301.05564
Abelian surfaces with supersingular good reduction and non semisimple Tate module
We show the existence of abelian surfaces $A$ over $\mathbb{Q}_p$ having good reduction with supersingular special fibre whose associated $p$-adic Galois module $V_p(A)$ is not semisimple.
Maja Volkov
2023-01-13T14:10:49Z
http://arxiv.org/abs/2301.05564v1
# Abelian surfaces with supersingular good reduction and non semisimple Tate module ###### Abstract. We show the existence of abelian surfaces \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) having good reduction with supersingular special fibre whose associated \(p\)-adic Galois module \(V_{p}(\mathcal{A})\) is not semisimple. 2000 _Mathematics Subject Classification_: 11G10, 14K15, 14G20. _Keywords_: Abelian varieties, local fields, Galois representations. ###### Contents * 1 The general method * 2 A lift of the twofold product of a supersingular elliptic curve * 3 A lift of a simple supersingular abelian surface ## Introduction Fix a prime number \(p\) and an algebraic closure \(\overline{\mathbb{Q}}_{p}\) of \(\mathbb{Q}_{p}\). Write \(G=\operatorname{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\) for the absolute Galois group of \(\mathbb{Q}_{p}\). For a \(d\)-dimensional abelian variety \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) let \(\mathcal{A}[p^{n}]\) be the group of \(p^{n}\)-torsion points with values in \(\overline{\mathbb{Q}}_{p}\) and \[V_{p}(\mathcal{A})\underset{\text{\rm def}}{=}\mathbb{Q}_{p}\otimes_{\mathbb{Z}_{p}}\underset{n\geq 1}{\varprojlim}\mathcal{A}[p^{n}].\] This is a \(2d\)-dimensional \(\mathbb{Q}_{p}\)-vector space on which \(G\) acts linearly and continuously. We want to consider the following problem: find abelian varieties \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) having good reduction with supersingular special fibre and such that the Galois module \(V_{p}(\mathcal{A})\) is _not_ semisimple. In this paper we show the existence of two such varieties with nonisogenous special fibres for the least dimension possible, namely for \(d=2\). In fact our procedure easily generalises to any \(d\geq 2\); however, we stick to surfaces as they furnish low-dimensional, hence simple to describe, representations. The existence of such surfaces follows from the characterisation of \(p\)-adic representations of \(G\) arising from abelian varieties with (tame) potential good reduction obtained in [Vo], and indeed provides an example of application of this result. In order to explicitly describe our objects we use Fontaine's contravariant functor establishing an equivalence between crystalline \(p\)-adic representations of \(G\) and admissible filtered \(\varphi\)-modules. In section 1 we briefly review this theory as well as the characterisation in [Vo] (Theorem 1.2), and outline the general strategy. In sections 2 and 3 we construct two filtered \(\varphi\)-modules arising from abelian surfaces over \(\mathbb{Q}_{p}\) with good reduction that enjoy the required properties (Propositions 2.1 and 3.1). ## 1. The general method Recall from [10] that the objects \(D\) in the category \(\mathbf{MF}_{\mathbb{Q}_{p}}(\varphi)\) of filtered \(\varphi\)-modules are finite dimensional \(\mathbb{Q}_{p}\)-vector spaces together with a Frobenius map \(\varphi\in\operatorname{Aut}_{\mathbb{Q}_{p}}(D)\) and a decreasing filtration \(\operatorname{Fil}=(\operatorname{Fil}^{i}D)_{i\in\mathbb{Z}}\) on \(D\) by subspaces such that \(\operatorname{Fil}^{i}D=D\) for \(i\ll 0\) and \(\operatorname{Fil}^{i}D=0\) for \(i\gg 0\), and the morphisms are \(\mathbb{Q}_{p}\)-linear maps commuting with \(\varphi\) and preserving the filtration. The dual of \((D,\operatorname{Fil})\) is the \(\mathbb{Q}_{p}\)-linear dual \(D^{*}\) with \(\varphi_{D^{*}}=\varphi^{*\,-1}\), where \(\operatorname{Fil}^{i}D^{*}\) consists of the linear forms on \(D\) vanishing on \(\operatorname{Fil}^{j}D\) for all \(j>-i\).
The Tate twist \(D\{-1\}\) of \((D,\operatorname{Fil})\) is \(D\) as a \(\mathbb{Q}_{p}\)-vector space with \(\varphi_{D\{-1\}}=p\varphi\) and \(\operatorname{Fil}^{i}D\{-1\}=\operatorname{Fil}^{i-1}D\). The filtration \(\operatorname{Fil}\) has Hodge-Tate type \((0,1)\) if \(\operatorname{Fil}^{i}D=D\) for \(i\leq 0\), \(\operatorname{Fil}^{i}D=0\) for \(i\geq 2\), and \(\operatorname{Fil}^{1}D\) is a nontrivial subspace. The full subcategory \(\mathbf{MF}_{\mathbb{Q}_{p}}^{\operatorname{ad}}(\varphi)\) of \(\mathbf{MF}_{\mathbb{Q}_{p}}(\varphi)\) consists of objects \((D,\operatorname{Fil})\) satisfying a property relating the Frobenius with the filtration, called admissibility and defined as follows. For a \(\varphi\)-stable sub-\(\mathbb{Q}_{p}\)-vector space \(D^{\prime}\) of \(D\) consider the Hodge and Newton invariants \[t_{H}(D^{\prime})\underset{\operatorname{def}}{=}\sum_{i\in\mathbb{Z}}i\dim_{\mathbb{Q}_{p}}\bigl{(}D^{\prime}\cap\operatorname{Fil}^{i}D\bigr{/}D^{\prime}\cap\operatorname{Fil}^{i+1}D\bigr{)}\quad\text{ and }\quad t_{N}(D^{\prime})\underset{\operatorname{def}}{=}v_{p}(\det\varphi_{|D^{\prime}})\] where \(v_{p}\) is the normalised \(p\)-adic valuation on \(\mathbb{Q}_{p}\). Then \((D,\operatorname{Fil})\) is admissible if

1. \(t_{H}(D)=t_{N}(D)\)
2. \(t_{H}(D^{\prime})\leq t_{N}(D^{\prime})\) for any sub-\(\mathbb{Q}_{p}[\varphi]\)-module \(D^{\prime}\) of \(D\).

A sub-\(\mathbb{Q}_{p}[\varphi]\)-module \(D^{\prime}\) endowed with the induced filtration \(\operatorname{Fil}^{i}D^{\prime}=D^{\prime}\cap\operatorname{Fil}^{i}D\) is a subobject of \((D,\operatorname{Fil})\) in \(\mathbf{MF}^{\operatorname{ad}}_{\mathbb{Q}_{p}}(\varphi)\) if and only if \(t_{H}(D^{\prime})=t_{N}(D^{\prime})\). Let \(B_{\operatorname{cris}}\) be the ring of \(p\)-adic periods constructed in [10] and for a \(p\)-adic representation \(V\) of \(G\) put \[\mathbf{D}_{\operatorname{cris}}^{*}(V)\underset{\operatorname{def}}{=}\operatorname{Hom}_{\mathbb{Q}_{p}[G]}(V,B_{\operatorname{cris}}).\] We always have \(\dim_{\mathbb{Q}_{p}}\mathbf{D}_{\operatorname{cris}}^{*}(V)\leq\dim_{\mathbb{Q}_{p}}V\), and \(V\) is said to be crystalline when equality holds. The functor \(V\mapsto\mathbf{D}_{\operatorname{cris}}^{*}(V)\) establishes an anti-equivalence between the category of crystalline \(p\)-adic representations of \(G\) and \(\mathbf{MF}_{\mathbb{Q}_{p}}^{\operatorname{ad}}(\varphi)\), a quasi-inverse being \(\mathbf{V}_{\operatorname{cris}}^{*}(D,\operatorname{Fil})=\operatorname{Hom}_{\varphi,\operatorname{Fil}}(D,B_{\operatorname{cris}})\) ([Co-Fo]). These categories are well-suited to our problem since for an abelian variety \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) the \(G\)-module \(V_{p}(\mathcal{A})\) is crystalline if and only if \(\mathcal{A}\) has good reduction ([Co-Io] Thm.4.7 or [Br] Cor.5.3.4). A \(p\)-Weil number is an algebraic integer such that all its conjugates have absolute value \(\sqrt{p}\) in \(\mathbb{C}\). Call a monic polynomial in \(\mathbb{Z}[X]\) a \(p\)-Weil polynomial if all its roots in \(\overline{\mathbb{Q}}\) are \(p\)-Weil numbers and its valuation at \(X^{2}-p\) is even. Consider the following conditions on a filtered \(\varphi\)-module \((D,\operatorname{Fil})\) over \(\mathbb{Q}_{p}\):

1. \(\varphi\) acts semisimply and \(\operatorname{P}_{\operatorname{char}}(\varphi)\) is a \(p\)-Weil polynomial
2. the filtration has Hodge-Tate type \((0,1)\)
3. there exists a nondegenerate skew form on \(D\) under which \(\varphi\) is a \(p\)-similitude and \(\operatorname{Fil}^{1}D\) is totally isotropic.

Recall that \(\varphi\) is a \(p\)-similitude under a bilinear form \(\beta\) if \(\beta(\varphi x,\varphi y)=p\beta(x,y)\) for all \(x,y\in D\), and that \(\operatorname{Fil}^{1}D\) is totally isotropic if \(\beta(x,y)=0\) for all \(x,y\in\operatorname{Fil}^{1}D\).
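As a small worked illustration of these notions (our example, though it reappears as the building block of section 2 below): take \(D=\mathbb{Q}_{p}e_{1}\oplus\mathbb{Q}_{p}e_{2}\) with \(\varphi(e_{1})=e_{2}\), \(\varphi(e_{2})=-pe_{1}\) and \(\operatorname{Fil}^{1}D=\mathbb{Q}_{p}e_{1}\). Then \(\operatorname{P}_{\operatorname{char}}(\varphi)(X)=X^{2}+p\), whose roots \(\pm\sqrt{-p}\) have absolute value \(\sqrt{p}\), so (1) holds; the filtration clearly has Hodge-Tate type \((0,1)\), so (2) holds; and the skew form with \(\beta(e_{1},e_{2})=1\) satisfies \(\beta(\varphi e_{1},\varphi e_{2})=\beta(e_{2},-pe_{1})=p\beta(e_{1},e_{2})\), the line \(\operatorname{Fil}^{1}D\) being automatically totally isotropic, so (3) holds. Moreover \(t_{H}(D)=1=v_{p}(\det\varphi)=t_{N}(D)\) and, since \(X^{2}+p\) is irreducible over \(\mathbb{Q}_{p}\), there is no nonzero proper \(\varphi\)-stable subspace, so \((D,\operatorname{Fil})\) is admissible. By Theorem 1.2 below it arises as \(\mathbf{D}_{\operatorname{cris}}^{*}(V_{p}(E))\) for an elliptic curve \(E\) over \(\mathbb{Q}_{p}\) with supersingular good reduction.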
there exists a nondegenerate skew form on \(D\) under which \(\varphi\) is a \(p\)-similitude and \(\operatorname{Fil}^{1}D\) is totally isotropic. Recall that \(\varphi\) is a \(p\)-similitude under a bilinear form \(\beta\) if \(\beta(\varphi x,\varphi y)=p\beta(x,y)\) for all \(x,y\in D\) and \(\operatorname{Fil}^{1}D\) is totally isotropic if \(\beta(x,y)=0\) for all \(x,y\in\operatorname{Fil}^{1}D\). The map sending \(\delta\in\operatorname{Isom}_{\mathbb{Q}_{p}}(D^{*},D)\) to \(\beta:(x,y)\mapsto\delta^{-1}(x)(y)\) identifies the antisymmetric isomorphisms of filtered \(\varphi\)-modules from \(D^{*}\{-1\}\) to \(D\) with the forms satisfying (3). A \(\mathbb{Q}_{p}\)-linear map \(\delta:D^{*}\to D\) is an antisymmetric morphism in \(\mathbf{MF}_{\mathbb{Q}_{p}}(\varphi)\) if \(\delta^{*}=-\delta\) (under the canonical isomorphism \(D^{**}\simeq D\)), \(\varphi\delta=p\delta{\varphi^{*}}^{-1}\), and \(\delta(\operatorname{Fil}^{1}D)^{\perp}\subseteq\operatorname{Fil}^{1}D\). **Remark 1.1**.: Let \(\operatorname{Hom}_{\varphi}^{\mathrm{a}}(D^{*}\{-1\},D)\) be the \(\mathbb{Q}_{p}\)-vector space of antisymmetric \(\varphi\)-module morphisms from \(D^{*}\{-1\}\) to \(D\) and pick any \(\delta\in\operatorname{Isom}_{\varphi}^{\mathrm{a}}(D^{*}\{-1\},D)\). Then \(\alpha^{\dagger}=\delta\alpha^{*}\delta^{-1}\) defines an involution \(\dagger\) on \(\operatorname{End}_{\varphi}(D)\) and the map \(\alpha\mapsto\alpha\delta\) establishes an isomorphism \(\operatorname{End}_{\varphi}(D)^{\dagger}\stackrel{{\sim}}{{ \rightarrow}}\operatorname{Hom}_{\varphi}^{\mathrm{a}}(D^{*}\{-1\},D)\) where \(\operatorname{End}_{\varphi}(D)^{\dagger}\) is the subspace of elements fixed by \(\dagger\). **Theorem 1.2** ([Vo] Corollary 5.9).: _Let \(V\) be a crystalline \(p\)-adic representation of \(G\). The following are equivalent:_ 1. _there is an abelian variety_ \(\mathcal{A}\) _over_ \(\mathbb{Q}_{p}\) _such that_ \(V\simeq V_{p}(\mathcal{A})\)__ 2. \(\mathbf{D}_{\mathrm{cris}}^{*}(V)\) _satisfies conditions (_1_), (_2_) and (_3_)._ Note that the restriction \(p\neq 2\) in [Vo] Theorem 5.7 and its Corollary 5.9 is unnecessary, as Kisin shows that a crystalline representation with Hodge-Tate weights in \(\{0,1\}\) arises from a \(p\)-divisible group with no restriction on the prime \(p\) ([Ki] Thm.0.3). Let \(\mathcal{A}\) be an abelian variety over \(\mathbb{Q}_{p}\) having good reduction and \((D,\operatorname{Fil})=\mathbf{D}_{\mathrm{cris}}^{*}(V_{p}(\mathcal{A}))\). The \(\varphi\)-module \(D\) satisfies (1) by the Weil conjectures for abelian varieties over \(\mathbb{F}_{p}\). Tate's theorem on endomorphisms of the latter (see [Wa-Mi] II) shows that the isomorphism class of the \(\varphi\)-module \(D\), given (by semisimplicity) by \(\operatorname{P}_{\mathrm{char}}(\varphi)\), determines the isogeny class of the special fibre of \(\mathcal{A}\) over \(\mathbb{F}_{p}\). Any polarisation on \(\mathcal{A}\) induces a form on \(D\) satisfying (3), and the filtration satisfies (2) by the Hodge decomposition for \(p\)-divisible groups and (3). Conversely let \(V\) be a crystalline \(p\)-adic representation of \(G\) such that \(\mathbf{D}_{\mathrm{cris}}^{*}(V)\) satisfies (1), (2), (3). From (1) the Honda-Tate theory ([Ho-Ta]) furnishes an abelian variety \(A\) over \(\mathbb{F}_{p}\) with the right Frobenius. From (2) Kisin's result [Ki] furnishes a \(p\)-divisible group over \(\mathbb{Z}_{p}\) lifting \(A(p)\). 
The Serre-Tate theory of liftings then produces a formal abelian scheme \(\mathcal{A}\) over \(\mathbb{Z}_{p}\) with special fibre isogenous to \(A\). Finally (3) furnishes a polarisation on \(\mathcal{A}\) which ensures by Grothendieck's theorem on algebraisation of formal schemes ([Gr] 5.4.5) that \(\mathcal{A}\) is a true abelian scheme. The proof of Theorem 5.7 in [Vo] details this construction. Thus we want to construct an admissible filtered \(\varphi\)-module \((D,\operatorname{Fil})\) over \(\mathbb{Q}_{p}\) satisfying conditions (1), (2), (3) of theorem 1.2 and such that (a) \(\operatorname{P}_{\mathrm{char}}(\varphi)\) is a supersingular \(p\)-Weil polynomial, and (b) \((D,\operatorname{Fil})\) is not semisimple. Recall that a \(p\)-Weil polynomial is supersingular if its roots are of the form \(\zeta\sqrt{p}\) with \(\zeta\in\overline{\mathbb{Q}}\) a root of unity, and that an abelian variety \(A\) over \(\mathbb{F}_{p}\) is supersingular if and only if the characteristic polynomial of its Frobenius is supersingular. Regarding (a), in section 2 we take \(\operatorname{P}_{\mathrm{char}}(\varphi)(X)=(X^{2}+p)^{2}\), which is the characteristic polynomial of the Frobenius of the product of a supersingular elliptic curve \(E\) over \(\mathbb{F}_{p}\) with itself. In section 3 we take \(\operatorname{P}_{\mathrm{char}}(\varphi)(X)=X^{4}+pX^{2}+p^{2}\), which is the characteristic polynomial of the Frobenius of a simple supersingular abelian surface over \(\mathbb{F}_{p}\). Regarding (b) we assume \(p\equiv 1\bmod 3\mathbb{Z}\) in section 3. In each of the two cases in (a) we find a subobject \(D_{1}\) of \((D,\operatorname{Fil})\) in \(\mathbf{MF}^{\operatorname{ad}}_{\mathbb{Q}_{p}}(\varphi)\) and a quotient object \(D_{2}\) (endowed with the quotient filtration \(\operatorname{Fil}^{i}D_{2}=\operatorname{Fil}^{i}D\bmod D_{1}\)) such that the sequence \[0\longrightarrow D_{1}\longrightarrow D\longrightarrow D_{2}\longrightarrow 0\qquad(\mathrm{s})\] is exact and \(D_{2}\) is _not_ a subobject. Thus (s) does not split and therefore \((D,\operatorname{Fil})\) is not semisimple. Of course when \((D,\operatorname{Fil})\simeq\mathbf{D}^{*}_{\operatorname{cris}}(V_{p}( \mathcal{A}))\) this means that there is a nonsplit short exact sequence of \(G\)-modules \[0\longrightarrow V_{2}\longrightarrow V_{p}(\mathcal{A})\longrightarrow V_{1}\longrightarrow 0\] with \(V_{i}\simeq\mathbf{V}^{*}_{\operatorname{cris}}(D_{i})\) for \(i=1,2\), and it follows that \(V_{p}(\mathcal{A})\) is not a semisimple \(G\)-module. ## 2. A lift of the twofold product of a supersingular elliptic curve Consider the filtered \(\varphi\)-module \((D,\operatorname{Fil})\) over \(\mathbb{Q}_{p}\) defined as follows. There is a \(\mathbb{Q}_{p}\)-basis \(\mathcal{B}=(x_{1},y_{1},x_{2},y_{2})\) for \(D\) so that \[D=\mathbb{Q}_{p}x_{1}\oplus\mathbb{Q}_{p}y_{1}\oplus\mathbb{Q}_{p}x_{2}\oplus \mathbb{Q}_{p}y_{2}\] is a \(4\)-dimensional \(\mathbb{Q}_{p}\)-vector space. The matrix of \(\varphi\) over \(\mathcal{B}\) is \[\operatorname{Mat}_{\mathcal{B}}(\varphi)=\begin{pmatrix}0&-p&0&0\\ 1&0&0&0\\ 0&0&0&-p\\ 0&0&1&0\end{pmatrix}\in\,GL_{4}(\mathbb{Q}_{p})\] and the filtration is given by \[\operatorname{Fil}^{0}D=D,\quad\operatorname{Fil}^{1}D=\,\mathbb{Q}_{p}x_{1} \oplus\mathbb{Q}_{p}(y_{1}+x_{2}),\quad\operatorname{Fil}^{2}D=0.\] **Proposition 2.1**.: _There is an abelian surface \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) such that \((D,\operatorname{Fil})\simeq\mathbf{D}^{*}_{\operatorname{cris}}(V_{p}( \mathcal{A}))\). 
Further_ * \(\mathcal{A}\) _has good reduction with special fibre isogenous to the product of two supersingular elliptic curves over_ \(\mathbb{F}_{p}\)__ * _the_ \(G\)_-module_ \(V_{p}(\mathcal{A})\) _is not semisimple._ Proof.: The filtration has Hodge-Tate type \((0,1)\) with \(\dim\operatorname{Fil}^{1}D=2\) and \(\det\varphi=p^{2}\), hence \(t_{H}(D)=2=t_{N}(D)\). Since \(\operatorname{P}_{\operatorname{char}}(\varphi)(X)=(X^{2}+p)^{2}\) the nontrivial \(\varphi\)-stable subspaces of \(D\) are the \(D_{i}=\mathbb{Q}_{p}x_{i}\oplus\mathbb{Q}_{p}y_{i}\) for \(i=1,2\), both having Newton invariant \(t_{N}(D_{i})=1\). However \(D_{1}\cap\operatorname{Fil}^{1}D=\mathbb{Q}_{p}x_{1}\) whereas \(D_{2}\cap\operatorname{Fil}^{1}D=0\), so \(t_{H}(D_{1})=1\) and \(t_{H}(D_{2})=0\). Therefore \((D,\operatorname{Fil})\) is admissible, \(D_{1}\) is a subobject, \(D_{2}\) is a quotient that is not a subobject, the short exact sequence (s) does not split and \((D,\operatorname{Fil})\) is not semisimple. The action of \(\varphi\) is semisimple and \(\operatorname{P}_{\operatorname{char}}(\varphi)=\operatorname{P}_{ \operatorname{char}}(\operatorname{Fr}_{E})^{2}\) where \(E\) is a supersingular elliptic curve over \(\mathbb{F}_{p}\) with \(\operatorname{P}_{\operatorname{char}}(\operatorname{Fr}_{E})(X)=X^{2}+p\). Thus \((D,\operatorname{Fil})\) satisfies condition (1) of theorem 1.2 as well as condition (a) of section 1, and it obviously satisfies (2). It remains to check condition (3), that is, to find a \(\delta\in\operatorname{Isom}_{\mathbb{Q}_{p}}(D^{*},D)\) satisfying \(\delta^{*}=-\delta\), \(\varphi\delta=p\delta\varphi^{*-1}\), and \(\delta(\operatorname{Fil}^{1}D)^{\perp}=\operatorname{Fil}^{1}D\). Let \(\mathcal{B}^{*}=(x_{1}^{*},y_{1}^{*},x_{2}^{*},y_{2}^{*})\) be the dual basis of \(\mathcal{B}\) for \(D^{*}\), where for \(z\in\mathcal{B}\) the form \(z^{*}\) sends \(z\) to \(1\) and vanishes on the other vectors of \(\mathcal{B}\). The matrix of \(p\varphi^{*\,-1}\) over \(\mathcal{B}^{*}\) is \[p\operatorname{Mat}_{\mathcal{B}}(\varphi^{-1})^{t}=\,\begin{pmatrix}0&-1&0&0\\ p&0&0&0\\ 0&0&0&-1\\ 0&0&p&0\end{pmatrix}\] where \(M^{t}\) is the transpose of \(M\), and \[(\operatorname{Fil}^{1}D)^{\perp}=\,\mathbb{Q}_{p}y_{2}^{*}\oplus\mathbb{Q}_{ p}(y_{1}^{*}-x_{2}^{*}).\] Let \(\delta:D^{*}\to D\) be the \(\mathbb{Q}_{p}\)-linear morphism with matrix over the bases \(\mathcal{B}^{*}\) and \(\mathcal{B}\) \[\operatorname{Mat}_{\mathcal{B}^{*}\mathcal{B}}(\delta)=\,\begin{pmatrix}0&0& 0&-1\\ 0&0&1&0\\ 0&-1&0&0\\ 1&0&0&0\end{pmatrix}.\] Then \(\delta\) is invertible and satisfies the relations \(\delta^{*}=-\delta\) and \(\varphi\delta=p\delta\varphi^{*\,-1}\). Further \(\delta(\operatorname{Fil}^{1}D)^{\perp}=\delta\big{(}\mathbb{Q}_{p}y_{2}^{*} \oplus\mathbb{Q}_{p}(y_{1}^{*}-x_{2}^{*})\big{)}=\mathbb{Q}_{p}x_{1}\oplus \mathbb{Q}_{p}(y_{1}+x_{2})=\operatorname{Fil}^{1}D\). **Remark 2.2**.: Any \(2\)-dimensional object satisfying conditions (1) and (2) of theorem 1.2 also satisfies condition (3). Hence theorem 1.2 applied to the admissible filtered \(\varphi\)-modules \((D_{1},\operatorname{Fil}^{i}D\cap D_{1})\) and \((D_{2},\operatorname{Fil}^{i}D\bmod D_{1})\) shows the existence of elliptic schemes \(\mathcal{E}_{i}\) over \(\mathbb{Z}_{p}\) such that \(D_{i}\simeq\mathbf{D}_{\operatorname{cris}}^{*}(V_{p}(\mathcal{E}_{i}))\) for \(i=1,2\). The special fibres of the \(\mathcal{E}_{i}\) are \(\mathbb{F}_{p}\)-isogenous to \(E\). 
Thus we obtain a nonsplit exact sequence of \(G\)-modules \[0\longrightarrow V_{p}(\mathcal{E}_{2})\longrightarrow V_{p}(\mathcal{A})\longrightarrow V_{p}(\mathcal{E}_{1})\longrightarrow 0\,.\] By Tate's full faithfulness theorem [Ta] the \(G\)-module \(V_{p}(\mathcal{A})\) determines the \(p\)-divisible group \(\mathcal{A}(p)\) over \(\mathbb{Z}_{p}\) up to isogeny, therefore \(\mathcal{A}(p)\) is not \(\mathbb{Z}_{p}\)-isogenous to \(\mathcal{E}_{1}(p)\times\mathcal{E}_{2}(p)\). **Remark 2.3**.: The same construction works starting with the square of any supersingular \(p\)-Weil polynomial of degree two (when \(p\geq 5\) there is only \(X^{2}+p\), but when \(p=2\) or \(3\) there are also the \(X^{2}\pm pX+p\)). However it fails when dealing with the product of two distinct such polynomials. Indeed let \(\alpha_{1}\neq\alpha_{2}\in p\mathbb{Z}_{p}\) and \(D\) be a semisimple \(4\)-dimensional \(\varphi\)-module with \(\operatorname{P}_{\operatorname{char}}(\varphi)(X)=(X^{2}+\alpha_{1}X+p)(X^{2} +\alpha_{2}X+p)\). Then \(D=D_{1}\oplus D_{2}\) with \(D_{i}=\operatorname{Ker}(\varphi^{2}+\alpha_{i}\varphi+p)\), which are the nontrivial \(\varphi\)-stable subspaces of \(D\), and \(t_{N}(D_{i})=1\). Since \(\alpha_{1}\neq\alpha_{2}\) one checks that any \(\mathbb{Q}_{p}\)-linear \(\delta:D^{*}\to D\) satisfying \(\delta^{*}=-\delta\) and \(\varphi\delta=p\delta\varphi^{*\,-1}\) sends \(D_{2}^{\perp}\) into \(D_{1}\) and \(D_{1}^{\perp}\) into \(D_{2}\). Endowing \(D\) with an admissible Hodge-Tate \((0,1)\) filtration such that (s) does not split amounts to picking a \(2\)-dimensional subspace \(\operatorname{Fil}^{1}D\) such that \(\dim D_{1}\cap\operatorname{Fil}^{1}D=1\) and \(\dim D_{2}\cap\operatorname{Fil}^{1}D=0\) (or vice versa); then \(\dim D_{1}\cap\delta(\operatorname{Fil}^{1}D)^{\perp}=0\) and \(\dim D_{2}\cap\delta(\operatorname{Fil}^{1}D)^{\perp}=1\), therefore \(\delta(\operatorname{Fil}^{1}D)^{\perp}\neq\operatorname{Fil}^{1}D\). This shows that the \(p\)-adic Tate modules of abelian schemes over \(\mathbb{Z}_{p}\) with special fibre \(\mathbb{F}_{p}\)-isogenous to the product of two nonisogenous supersingular elliptic curves are semisimple. **Remark 2.4**.: One constructs in a similar fashion, for each integer \(n\geq 2\), a lift of the \(n\)-fold product of a supersingular elliptic curve over \(\mathbb{F}_{p}\) with nonsemisimple \(p\)-adic Tate module. ## 3. A lift of a simple supersingular abelian surface In this section we assume \(p\equiv 1\bmod 3\mathbb{Z}\), which is equivalent to \(\zeta_{3}\in\mathbb{Q}_{p}\), where \(\zeta_{3}\) is a primitive 3rd root of unity. Consider the filtered \(\varphi\)-module \((D,\operatorname{Fil})\) defined as follows. There is a \(\mathbb{Q}_{p}\)-basis \(\mathcal{B}=(x_{1},y_{1},x_{2},y_{2})\) for \(D\) so that \[D=\mathbb{Q}_{p}x_{1}\oplus\mathbb{Q}_{p}y_{1}\oplus\mathbb{Q}_{p}x_{2}\oplus \mathbb{Q}_{p}y_{2}\] is a 4-dimensional \(\mathbb{Q}_{p}\)-vector space. The matrix of \(\varphi\) over \(\mathcal{B}\) is \[\operatorname{Mat}_{\mathcal{B}}(\varphi)=\begin{pmatrix}0&\zeta_{3}p&0&0\\ 1&0&0&0\\ 0&0&0&\zeta_{3}^{-1}p\\ 0&0&1&0\end{pmatrix}\in\,GL_{4}(\mathbb{Q}_{p})\] and the filtration is given by \[\operatorname{Fil}^{0}D=D,\quad\operatorname{Fil}^{1}D=\mathbb{Q}_{p}x_{1} \oplus\mathbb{Q}_{p}(y_{1}+x_{2}),\quad\operatorname{Fil}^{2}D=0.\] **Proposition 3.1**.: _There is an abelian surface \(\mathcal{A}\) over \(\mathbb{Q}_{p}\) such that \((D,\operatorname{Fil})\simeq\mathbf{D}^{*}_{\operatorname{cris}}(V_{p}( \mathcal{A}))\). Further_ 1. \(\mathcal{A}\) _has good reduction with special fibre isogenous to a supersingular simple abelian surface over_ \(\mathbb{F}_{p}\)__ 2. 
_the_ \(G\)_-module_ \(V_{p}(\mathcal{A})\) _is not semisimple._ Proof.: Just as in the proof of proposition 2.1 we have \(t_{H}(D)=2=t_{N}(D)\). Since \[\operatorname{P}_{\operatorname{char}}(\varphi)(X)=X^{4}+pX^{2}+p^{2}=(X^{2}- \zeta_{3}p)(X^{2}-\zeta_{3}^{-1}p)\] the nontrivial sub-\(\mathbb{Q}_{p}[\varphi]\)-modules of \(D\) are the \(D_{i}=\mathbb{Q}_{p}x_{i}\oplus\mathbb{Q}_{p}y_{i}\) for \(i=1,2\), both having Newton invariant \(t_{N}(D_{i})=1\), and Hodge invariants \(t_{H}(D_{1})=1\), \(t_{H}(D_{2})=0\). Again we obtain a nonsplit exact sequence (s) in \(\mathbf{MF}^{\operatorname{ad}}_{\mathbb{Q}_{p}}(\varphi)\) and \((D,\operatorname{Fil})\) is not semisimple. The action of \(\varphi\) is semisimple and \(\operatorname{P}_{\operatorname{char}}(\varphi)=\operatorname{P}_{ \operatorname{char}}(\operatorname{Fr}_{A})\) where \(A\) is a supersingular simple abelian surface over \(\mathbb{F}_{p}\) with \(\operatorname{P}_{\operatorname{char}}(\operatorname{Fr}_{A})(X)=X^{4}+pX^{2} +p^{2}\). Thus \((D,\operatorname{Fil})\) satisfies condition (1) of theorem 1.2 as well as condition (a) of section 1. It obviously satisfies (2) and it remains to check (3). Let \(\mathcal{B}^{*}=(x_{1}^{*},y_{1}^{*},x_{2}^{*},y_{2}^{*})\) be the dual basis of \(\mathcal{B}\) for \(D^{*}\). Again \((\operatorname{Fil}^{1}D)^{\perp}=\mathbb{Q}_{p}y_{2}^{*}\oplus\mathbb{Q}_{p}( y_{1}^{*}-x_{2}^{*})\) and the matrix of \(p\varphi^{*\,-1}\) over \(\mathcal{B}^{*}\) is \[p\operatorname{Mat}_{\mathcal{B}}(\varphi^{-1})^{t}=\begin{pmatrix}0&\zeta_{3}^ {-1}&0&0\\ p&0&0&0\\ 0&0&0&\zeta_{3}\\ 0&0&p&0\end{pmatrix}.\] Let \(\delta:D^{*}\to D\) be the \(\mathbb{Q}_{p}\)-linear morphism with matrix over the bases \(\mathcal{B}^{*}\) and \(\mathcal{B}\) \[\operatorname{Mat}_{\mathcal{B}^{*}\mathcal{B}}(\delta)=\begin{pmatrix}0&0&0& \zeta_{3}\\ 0&0&1&0\\ 0&-1&0&0\\ -\zeta_{3}&0&0&0\end{pmatrix}.\] As in the proof of proposition 2.1 one checks that \(\delta\) is invertible, satisfies \(\delta^{*}=-\delta\) and \(\varphi\delta=p\delta\varphi^{*\,-1}\), and that \(\delta(\operatorname{Fil}^{1}D)^{\perp}=\operatorname{Fil}^{1}D\). **Remark 3.2**.: The objects \((D_{1},\operatorname{Fil}^{i}D\cap D_{1})\) and \((D_{2},\operatorname{Fil}^{i}D\bmod D_{1})\) in \(\mathbf{MF}^{\operatorname{ad}}_{\mathbb{Q}_{p}}(\varphi)\) do not arise from elliptic schemes over \(\mathbb{Z}_{p}\); however [Ki] Thm.0.3 shows the existence of \(p\)-divisible groups \(\mathcal{G}_{i}\) over \(\mathbb{Z}_{p}\) such that \(D_{i}\simeq\mathbf{D}^{*}_{\operatorname{cris}}(V_{p}(\mathcal{G}_{i}))\). The special fibre of \(\mathcal{A}(p)\) is \(\mathbb{F}_{p}\)-isogenous to the product of the special fibres of the \(\mathcal{G}_{i}\), which are themselves nonisogenous. Thus we obtain a nonsplit exact sequence of \(G\)-modules \[0\longrightarrow V_{p}(\mathcal{G}_{2})\longrightarrow V_{p}(\mathcal{A})\longrightarrow V_{p}(\mathcal{G}_{1})\longrightarrow 0\,,\] and Tate's full faithfulness theorem shows that \(\mathcal{A}(p)\) is not \(\mathbb{Z}_{p}\)-isogenous to \(\mathcal{G}_{1}\times\mathcal{G}_{2}\). **Remark 3.3**.: Starting with \(X^{4}-pX^{2}+p^{2}\) when \(p\equiv 1\bmod 3\mathbb{Z}\) and \(X^{4}+p^{2}\) when \(p\equiv 1\bmod 4\mathbb{Z}\) one obtains similar nonsemisimple \(4\)-dimensional supersingular representations (just replace \(\zeta_{3}\) by \(\zeta_{6}\) or \(\zeta_{4}\)). 
More generally, the polynomials \[p^{d}\Phi_{n}\Big{(}\frac{X^{2}}{p}\Big{)}=\prod_{i\in(\mathbb{Z}/n\mathbb{Z} )^{\times}}(X^{2}-\zeta_{n}^{i}p)\qquad\text{ with }\;d=\#\big{(}\mathbb{Z}/n\mathbb{Z}\big{)}^{ \times}\geq 2,\] where \(\Phi_{n}\) is the \(n\)th cyclotomic polynomial, are supersingular \(p\)-Weil polynomials, leading, when \(p\equiv 1\bmod n\mathbb{Z}\), to similar higher-dimensional constructions.
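As an independent sanity check, and not part of the arguments above, the identities \(\delta^{*}=-\delta\) and \(\varphi\delta=p\delta\varphi^{*\,-1}\) (equivalently, \(\varphi\delta\varphi^{*}=p\delta\)) for the matrices in the proofs of Propositions 2.1 and 3.1 can be verified symbolically; the sketch below treats \(\zeta_{3}\) as a formal invertible parameter, so the check does not even use \(\zeta_{3}^{3}=1\).

```python
# A sanity check, not part of the original proofs: verify delta* = -delta and
# phi delta phi* = p delta for the matrices in Propositions 2.1 and 3.1.
import sympy as sp

p, z = sp.symbols('p zeta3', nonzero=True)  # zeta3 treated as a formal unit

def check(Phi, Delta):
    assert Delta.T == -Delta                                          # delta* = -delta
    assert sp.simplify(Phi * Delta * Phi.T - p * Delta) == sp.zeros(4, 4)

# Proposition 2.1:
check(sp.Matrix([[0, -p, 0, 0], [1, 0, 0, 0], [0, 0, 0, -p], [0, 0, 1, 0]]),
      sp.Matrix([[0, 0, 0, -1], [0, 0, 1, 0], [0, -1, 0, 0], [1, 0, 0, 0]]))

# Proposition 3.1:
check(sp.Matrix([[0, z*p, 0, 0], [1, 0, 0, 0], [0, 0, 0, p/z], [0, 0, 1, 0]]),
      sp.Matrix([[0, 0, 0, z], [0, 0, 1, 0], [0, -1, 0, 0], [-z, 0, 0, 0]]))
```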
2308.09655
Oscillatory networks: Insights from piecewise-linear modeling
There is enormous interest -- both mathematically and in diverse applications -- in understanding the dynamics of coupled oscillator networks. The real-world motivation of such networks arises from studies of the brain, the heart, ecology, and more. It is common to describe the rich emergent behavior in these systems in terms of complex patterns of network activity that reflect both the connectivity and the nonlinear dynamics of the network components. Such behavior is often organized around phase-locked periodic states and their instabilities. However, the explicit calculation of periodic orbits in nonlinear systems (even in low dimensions) is notoriously hard, so network-level insights often require the numerical construction of some underlying periodic component. In this paper, we review powerful techniques for studying coupled oscillator networks. We discuss phase reductions, phase-amplitude reductions, and the master stability function for smooth dynamical systems. We then focus in particular on the augmentation of these methods to analyze piecewise-linear systems, for which one can readily construct periodic orbits. This yields useful insights into network behavior, but the cost is that one needs to study nonsmooth dynamical systems. The study of nonsmooth systems is well-developed when focusing on the interacting units (i.e., at the node level) of a system, and we give a detailed presentation of how to use \textit{saltation operators}, which can treat the propagation of perturbations through switching manifolds, to understand dynamics and bifurcations at the network level. We illustrate this merger of tools and techniques from network science and nonsmooth dynamical systems with applications to neural systems, cardiac systems, networks of electro-mechanical oscillators, and cooperation in cattle herds.
Stephen Coombes, Mustafa Sayli, Rüdiger Thul, Rachel Nicks, Mason A Porter, Yi Ming Lai
2023-08-18T16:18:28Z
http://arxiv.org/abs/2308.09655v1
# Oscillatory networks: insights from piecewise-linear modeling ###### Abstract There is enormous interest -- both mathematically and in diverse applications -- in understanding the dynamics of coupled oscillator networks. The real-world motivation of such networks arises from studies of the brain, the heart, ecology, and more. It is common to describe the rich emergent behavior in these systems in terms of complex patterns of network activity that reflect both the connectivity and the nonlinear dynamics of the network components. Such behavior is often organized around phase-locked periodic states and their instabilities. However, the explicit calculation of periodic orbits in nonlinear systems (even in low dimensions) is notoriously hard, so network-level insights often require the numerical construction of some underlying periodic component. In this paper, we review powerful techniques for studying coupled oscillator networks. We discuss phase reductions, phase-amplitude reductions, and the master stability function for smooth dynamical systems. We then focus in particular on the augmentation of these methods to analyze piecewise-linear systems, for which one can readily construct periodic orbits. This yields useful insights into network behavior, but the cost is that one needs to study nonsmooth dynamical systems. The study of nonsmooth systems is well-developed when focusing on the interacting units (i.e., at the node level) of a system, and we give a detailed presentation of how to use _saltation operators_, which can treat the propagation of perturbations through switching manifolds, to understand dynamics and bifurcations at the network level. We illustrate this merger of tools and techniques from network science and nonsmooth dynamical systems with applications to neural systems, cardiac systems, networks of electro-mechanical oscillators, and cooperation in cattle herds. ###### Contents * 1 Introduction * 2 Piecewise-linear oscillators * 3 Isochrons and isostables * 4 Phase-oscillator networks * 4.1 An application to the structure-function relationship in large-scale brain dynamics * 4.2 Dead zones in networks with refractoriness * 5 Phase-amplitude networks * 6 Strongly coupled oscillator networks * 6.1 The master stability function for nonsmooth systems * 6.2 MSF versus weakly-coupled-oscillator theory for systems of \(N=2\) oscillators * 6.3 A brief note about graph spectra * 6.4 Network symmetries and cluster states * 6.5 An application to synaptically coupled, spiking neural networks * 6.6 An application to neural-mass networks * 6.7 An application to cardiac alternans * 6.8 An application to Franklin-bell networks * 6.9 An application to coordination in cow herds * 7 Discussion * Appendix A Piecewise-linear models * Appendix B Saltation operator * Appendix C The nontrivial Floquet exponent for planar PWL systems * Appendix D Derivation of the jump condition in \(\mathcal{B}(t)\) * Appendix E Interaction functions * Acknowledgements **Dedication.** We dedicate this paper to the memory of our dear friend and colleague Yi Ming Lai. Although he began with us on the journey to write this paper, which in part reviews some of his research activity in recent years, sadly he did not end that journey with us. RIP Yi Ming Lai 1988-2022. ## 1 Introduction Real-world networks -- such as those in the brain, the heart, and ecological systems -- exhibit rich emergent behavior. The observed complex patterns of network activity reflect both the connectivity and the nonlinear dynamics of the network components [127]. 
The science of networks [105] has proven especially fruitful in probing the role of connectivity, as exemplified by [130]. However, overly focusing on network connectivity can downplay the crucial role of dynamics, and even the investigation of dynamical processes on networks has often focused on a few types of situations [128], such as the spread of infectious diseases [116] and synchronization in coupled oscillators [5]. This is perhaps not too surprising, given the significant challenges of understanding even low-dimensional dynamical systems. However, for some time, there has been an appreciation in the applied sciences of the benefits of studying complex systems in the form of networks of piecewise-linear (PWL) and possibly discontinuous dynamical systems. There is a long history of PWL modeling throughout engineering -- particularly in electrical engineering [1] and mechanical engineering [41] -- that has now begun to pervade other disciplines, including the social sciences, finance, and biology [24, 40]. In neuroscience, the McKean model is a classical example [99] of a PWL system. In the McKean model, one replaces the cubic nullcline of the FitzHugh-Nagumo model [74] for action-potential (i.e., nerve-impulse) generation with a PWL function that preserves the original shape, allowing explicit calculations that one cannot perform with the original smooth system. At its heart, PWL modeling allows one to obtain analytical insight into a nonlinear model by (1) breaking down its phase space into regions in which trajectories obey linear dynamical systems and (2) patching these together across the boundaries between the regions. The approach can also handle discontinuous dynamical systems, such as those that arise naturally when modeling impacting mechanical oscillators, integrate-and-fire (IF) models of spiking neurons, and cardiac oscillators with both state-dependent and time-dependent switching [153]. Although PWL modeling is a beautifully simple modeling perspective, the loss of smoothness precludes the use of many results from the standard toolkit of smooth dynamical systems [40], and one must be careful to correctly determine conditions for the existence, uniqueness, and stability of solutions. An important perspective in the applied dynamical-systems community is that the piecewise nature of models is a much more generally applicable feature for many modern applications in science than the smooth dynamical-systems approach that has dominated to date [57]. We refer to the switches and discontinuities in such models as _threshold elements_. The explicit analysis of PWL models at the network level builds on results at the level of individual nodes (e.g., individual oscillatory units), in disciplines ranging from engineering to biology, to reap benefits for understanding network states. This approach opens up a new frontier in network science to address the role of node dynamics in the interrelationships between the structure and function of real-world networks [70]. Throughout the present review, we illustrate the above modeling approach with applications to biological networks in neuroscience and cardiology. We also illustrate these ideas with explorations of other systems, including Franklin Bells and coordinated behavior in cow herds. 
We consider networks of \(N\) identical oscillators of the general form \[\dot{x}_{i}\equiv\frac{\mathrm{d}}{\mathrm{d}t}x_{i}=f(x_{i})+g_{i}(x_{1},x_{2}, \ldots,x_{N})\,,\quad i\in\{1,\ldots,N\}\,,\quad x_{i}\in\mathbb{R}^{m} \tag{1.1}\] and show how to gain insight into emergent network dynamics when the vector field \(f\) (i.e., the local dynamics) is PWL and the interactions are pairwise. Each oscillator is associated with a node of a structural network (which, most traditionally, takes the form of a graph [105]), and each interaction is associated with an edge of that network. With only pairwise interactions, the coupling function is \[g_{i}(x_{1},x_{2},\ldots,x_{N})=\sigma\sum_{j=1}^{N}w_{ij}G(x_{i},x_{j})\,, \tag{1.2}\] where \(G(x_{i},x_{j})\) is the dynamics that expresses the coupling between nodes \(i\) and \(j\), the relative strength of this interaction is \(w_{ij}\), and \(\sigma\) sets the overall network coupling strength. One achieves insight into network behavior by merging techniques that have been developed for nonsmooth systems (see, e.g., [95]), as exemplified for low-dimensional systems with discontinuous behavior in the books of di Bernardo _et al._ [40], Acary _et al._ [1], and Jeffrey [76], with network-science tools (especially weakly-coupled-oscillator theory [72] and the master stability function [118]) that have been developed to describe phase-locked states (i.e., states in which all pairs of oscillators are frequency-locked with a constant phase lag between each pair) and their bifurcations. Our paper proceeds as follows. In section 2, we present the types of PWL models -- including PWL continuous, Filippov, and impacting systems -- that we use as nodes of a network. We partition the phase space of these PWL models using _switching manifolds_. We give a method to construct periodic orbits, and we describe and employ an extension of Floquet theory to nonsmooth systems to determine a criterion for the stability of a periodic orbit. We use _saltation operators_ to describe the propagation of perturbations through the switching manifolds. In section 3, we present a reduction technique that allows one to describe a limit-cycle oscillator in terms of a scalar phase variable and additional variables that encode directed distances. By again exploiting saltation operators, we show how to calculate the infinitesimal phase and amplitude responses for PWL models. We illustrate this approach for some PWL neuron models. We first examine weakly coupled systems. In section 4, we consider phase-only network descriptions (i.e., dropping the amplitude coordinates) and we also describe the relevant phase-interaction function. We highlight the usefulness of a phase-oscillator network description using a combination of theory (specifically, about the stability of phase-locked network states) and numerical simulations, with a focus on neural networks.1 In section 5, we examine phase-amplitude networks, for which one needs more functions to fully specify all of the interactions between units. We use a simple two-node network to highlight the dangers of an overreliance on only phase information and emphasize the benefits of using phase-amplitude coordinates to correctly predict phase-locked behavior for moderate values of the network coupling strength \(\sigma\). 
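As a concrete illustration of the network form (1.1)-(1.2), the following minimal sketch (a toy example of our own, with an illustrative continuous PWL node and linear diffusive coupling \(G(x_{i},x_{j})=x_{j}-x_{i}\), rather than any model from the cited literature) simulates a ring of \(N\) planar PWL oscillators:

```python
# A minimal sketch of the network form (1.1)-(1.2); the node dynamics and all
# parameter values are illustrative choices, not taken from the cited models.
import numpy as np
from scipy.integrate import solve_ivp

N, m, sigma = 6, 2, 0.05
W = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)  # ring w_ij

def f(x):
    # Toy continuous PWL node: the coefficient of v switches sign across v = 0,
    # but the vector field itself remains continuous there.
    v, w = x
    mu = 0.2 if v > 0 else -1.0
    return np.array([mu * v - w, v - 0.5])

def rhs(t, y):
    X = y.reshape(N, m)
    dX = np.array([f(xi) for xi in X])
    dX += sigma * (W @ X - W.sum(axis=1)[:, None] * X)  # sigma * sum_j w_ij (x_j - x_i)
    return dX.ravel()

y0 = np.random.default_rng(0).standard_normal(N * m)
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.01)  # small steps: rhs is nonsmooth
```

A careful implementation would locate switching events explicitly (e.g., via the events argument of solve_ivp) rather than relying on small time steps.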
We then consider strongly coupled systems, for which we do not expect to obtain good predictions of system behavior from approximations of the network dynamics through either phase-only reductions or phase-amplitude network reductions. In section 6, we develop a theory of phase-locked states in networks of identical PWL oscillators without recourse to any approximation. In essence, this theory is based on an extension of the master stability function to nonsmooth systems. We use saltation operators to develop this extension. We apply this theory to a variety of distinct systems, with a focus on synchronous network states and solutions that can arise when a synchronous state loses stability. Finally, in section 7, we summarize our paper and then briefly discuss extensions and further applications of this methodology for analyzing the dynamics of coupled-oscillator networks. Footnote 1: When we write “neural networks”, we are referring to networks in neuroscience, as opposed to the use of the term “neural networks” in contexts such as machine learning. ## 2 Piecewise-linear oscillators Planar PWL systems [43, 55, 138] have dynamics on two regions (i.e., "zones"), with a line of discontinuity between those regions. The dynamics of planar PWL systems can be complicated, but they are tractable to study. Therefore, we start by considering them. We describe the dynamics in the two zones by the variable \(x=(v,w)^{\top}\in\mathbb{R}^{2}\), which satisfies \[\frac{\mathrm{d}x}{\mathrm{d}t}=\left\{\begin{array}{ll}f_{1}\equiv A_{1}x+b _{1}&\text{if }x\in R_{1}\\ f_{2}\equiv A_{2}x+b_{2}&\text{if }x\in R_{2}\,,\end{array}\right. \tag{2.1}\] where \(A_{1,2}\in\mathbb{R}^{2\times 2}\) are constant matrices and \(b_{1,2}\in\mathbb{R}^{2}\) are constant vectors. The regions \(R_{1}\) and \(R_{2}\) are \[R_{1}=\{x\in\mathbb{R}^{2}|\ h(x)>0\}\quad\text{and}\quad R_{2}=\{x\in\mathbb{ R}^{2}|\ h(x)<0\}\,, \tag{2.2}\] where the _indicator function_ \(h:\mathbb{R}^{2}\to\mathbb{R}\) is \[h(x)=v-a\,. \tag{2.3}\] Switching events occur when \(h(x)=0\), which holds on the _switching manifold_ \(\Sigma=\{x\in\mathbb{R}^{2}|\ v=a\}\). The condition \(h(x(t_{i}))=0\) implicitly yields the event times \(t=t_{i}\), with \(i\in\mathbb{Z}\). If an equilibrium point exists in the region \(R_{\mu}\), one determines its stability by the eigenvalues of \(A_{\mu}\), with \(\mu\in\{1,2\}\). When relevant, it is simple to partition phase space into more regions and to thereby incorporate further switching manifolds, so we describe only the simplest situation of two regions of phase space. However, later in this section, we give an example of a system with three switching manifolds. Planar PWL systems of the form (2.1) have been studied for many years and can have rich dynamics. For example, Freire _et al._ [54] considered continuous systems with two zones and proposed a canonical form that captures many interesting oscillatory behaviors, and Llibre _et al._ [93] studied the existence and maximum number of limit cycles in systems with a discontinuity. Planar PWL systems can have almost all types of dynamics that occur in smooth nonlinear dynamical systems, and they can also support bifurcations that are not possible in smooth systems [40, 41]. However, in comparison to smooth systems, the knowledge of bifurcations in PWL systems is largely limited to specific examples [27]. 
Nevertheless, we can start to develop a picture of the theory of bifurcations in PWL systems by gathering results from the differential inclusions of Filippov [50], the "C bifurcations" of Feigin [42, 49], and the nonsmooth equilibrium bifurcations of Andronov _et al._ [3]. Examples of well-known bifurcations that arise from discontinuities include grazing bifurcations, sliding bifurcations, and discontinuous saddle-node bifurcations [40, 69]. One of the key advantages of PWL modeling is that it allows one to derive closed-form expressions for periodic orbits2 [125]. However, the analysis of such dynamics is not trivial because one needs to match the solution pieces from separate linear regimes. Deriving conditions for matching dynamics from different regions typically necessitates the explicit knowledge of the _times-of-flight_ (i.e., the time that is spent by the flow in a zone of phase space before reaching the switching manifold) in each region. Essentially, we solve the system (2.1) in each of its linear zones using matrix exponentials and demand continuity of solutions to construct orbits of the full nonlinear flow. To clarify how to implement this procedure, we denote a trajectory in zone \(R_{\mu}\) by \(x^{\mu}\) and solve (2.1) to obtain \(x^{\mu}(t,t_{0})=x(t,t_{0};A_{\mu},b_{\mu})\) using the solution form Footnote 2: Every periodic orbit that we consider in this paper is also a limit cycle, so we use the terms “periodic orbit” and “limit cycle” interchangeably. \[x(t,t_{0};A,b)=G(t-t_{0};A)x(t_{0})+K(t-t_{0};A)b\,, \tag{2.4}\] where \(t_{0}\) is the initial time, \(t>t_{0}\), and \[G(t;A)=\mathrm{e}^{At}\,,\quad K(t;A)=\int_{0}^{t}G(s;A)\mathrm{d}s=A^{-1}[G(t ;A)-I_{2}]\,, \tag{2.5}\] where \(I_{m}\) is the \(m\times m\) identity matrix. One can construct a closed orbit (i.e., a periodic orbit) by connecting two trajectories, one in each zone, starting from initial data \(x(0)=(a,w(0))^{\top}\) that lies on the switching manifold. One then writes \[x(t)=\begin{cases}x^{1}(t,0)&\text{if }t\in[0,T_{1}]\\ x^{2}(t,T_{1})&\text{if }t\in(T_{1},T]\,,\end{cases} \tag{2.6}\] for some \(T>T_{1}>0\). We obtain a periodic orbit by requiring that \(x\) have period \(T\) (i.e., be \(T\)-periodic). The times \(T_{i}\), with \(i\in\{1,2\}\) and \(T_{2}=T-T_{1}\), give the times-of-flight between switching events. To complete the procedure, we must determine the unknowns \((T_{1},T_{2},w^{1}(0))\) by simultaneously solving a system of three equations: \(a=v^{1}(T_{1})\), \(a=v^{2}(T_{2})\), and \(w^{2}(T_{2})=w^{1}(0)\). This is easy to do using a numerical method for root finding, such as fsolve in Matlab, along with a method to compute matrix exponentials (e.g., expm in Matlab). Alternatively, one can readily perform explicit calculations of \(G(t;A)\) and \(K(t;A)\) [29]. One can classify PWL systems into three different types, depending on their degree of discontinuity [40, 92]. These three types of PWL systems are as follows. **Continuous PWL systems.**: These systems have continuous states and continuous vector fields (i.e., \(f_{1}(x)=f_{2}(x)\)) but discontinuities in the first derivative or higher derivatives of the right-hand-side functions (i.e., \(\partial^{n}f_{1}/\partial x^{n}\neq\partial^{n}f_{2}/\partial x^{n}\) for some integer \(n\geq 1\)) across the switching manifold. These systems have a degree of smoothness of 2 or more, but their Jacobian matrices are different on different sides of a switching manifold (i.e., \(\mathrm{D}f_{1}(x)\neq\mathrm{D}f_{2}(x)\)). 
**Filippov systems [50].**: These systems have continuous states but vector fields that are different on different sides of a switching manifold (i.e., \(f_{1}(x)\neq f_{2}(x)\)). These systems have a degree of smoothness of 1. The vector field of the system (2.1) is not defined on the switching manifold \(\Sigma=\{x\in\mathbb{R}^{2}|\ h(x)=0\}\). One completes the description of the dynamics on the switching manifold with a set-valued extension \(f(x)\). The extended dynamical system is \[\frac{\mathrm{d}x}{\mathrm{d}t}\in f(x)=\begin{cases}f_{1}(x)&\text{if}\quad x \in R_{1}\\ \overline{\mathrm{co}}\,\{f_{1}(x),f_{2}(x)\}&\text{if}\quad x\in\Sigma\\ f_{2}(x)&\text{if}\quad x\in R_{2}\,,\end{cases} \tag{2.7}\] where \(\overline{\mathrm{co}}(\mathcal{A})\) denotes the smallest closed convex set that contains \(\mathcal{A}\). In (2.7), we have \[\overline{\mathrm{co}}\,\{f_{1}(x),f_{2}(x)\}=\{\varsigma f_{1}(x)+(1- \varsigma)f_{2}(x)\,\text{ for all }\,\varsigma\in[0,1]\}\, \tag{2.8}\] where \(\varsigma\) (which has no physical meaning) is a parameter that defines the convex combination. The extension (i.e., convexification) of the discontinuous system (2.1) into a convex differential inclusion (2.7) is known as the _Filippov convex method_ [50]. If \(\langle\nabla h,f_{1}\rangle\langle\nabla h,f_{2}\rangle<0\), a Filippov system can have _sliding_ motion [75, 78], with \(\dot{h}=\nabla h\cdot f=0\), along a switching manifold.3 We then have Footnote 3: We use \(\langle\cdot,\cdot\rangle\) and \(\cdot\) interchangeably to denote the standard vector inner product. \[\varsigma=\frac{\nabla h\cdot f_{2}}{\nabla h\cdot(f_{2}-f_{1})}\,. \tag{2.9}\] **Impacting systems (i.e., impulsive systems).**: These systems have instantaneous discontinuities (i.e., "jumps") in a solution at the switching boundary \(\Sigma\) that are governed by a smooth jump operator (i.e., a "switch rule") \(x(t^{+})=\mathcal{J}(x(t^{-}))\), where \(t^{-}\) denotes the time immediately before the impact and \(t^{+}\) denotes the time immediately after the impact. These systems have a degree of smoothness of \(0\). The jump operator \(\mathcal{J}\) is often called an _impact rule_ (or an _impact law_), and the discontinuity boundary \(\Sigma\) is often called an _impact surface_. Depending on the properties of \(\mathcal{J}\), many different types of dynamics can occur. To further understand the behavior of impacting systems, see [20, 21, 40, 41]. To illustrate this classification, we now briefly introduce five different models, each of which has oscillatory behavior and can be written in the form (2.1). We defer the detailed form of these models to Appendix A. In the present section, we emphasize the qualitative aspects of each model with plots of their nullclines and typical periodic orbits (which we construct using the method that we described in the present section). **Absolute model (continuous) [see Figure 1(a)]**: The vector field is continuous across the switching boundary, although its Jacobian is not. The equilibrium point in zone \(R_{1}\) is an unstable focus, and the equilibrium point in zone \(R_{2}\) is a stable focus. A nonsmooth Andronov-Hopf bifurcation [77, 141, 143] occurs when an equilibrium crosses from \(R_{1}\) to \(R_{2}\) and the eigenvalues of the Jacobian jump across the imaginary axis. **PWL homoclinic model (continuous) [see Figure 1(b)]**: There is a saddle point for \(x\in R_{2}\) and an unstable focus for \(x\in R_{1}\), with a vector field that crosses the switching boundary in a continuous manner. 
There is a homoclinic orbit that tangentially touches the unstable and stable eigendirections of the saddle point in \(R_{2}\). This orbit encloses the unstable focus in \(R_{1}\). See [168] for a detailed discussion of the conditions that ensure existence of a limit cycle or a homoclinic orbit. **PWL Morris-Lecar model (continuous) [see Figure 2]**: The Morris-Lecar model is a planar conductance-based single-neuron model that captures many important features (such as low firing rates) of neuronal firing [103]. One can then simplify it to obtain a PWL system with four zones and three switching manifolds [29]. (By contrast, our other examples have two zones and one switching manifold.) For full details, see Appendix A. **McKean model (Filippov) [see Figure 1(c)]**: The McKean model is a well-known planar PWL model for action-potential generation [99]. There are two varieties of McKean model. One of them has a PWL approximation of a cubic nonlinearity (to capture the behavior of the FitzHugh-Nagumo model), with the associated nullcline broken into three pieces. In the other variety, the PWL approximation of the cubic nonlinearity has two pieces [154]. We discuss the latter, which requires a set-valued extension on the switching manifold [see (2.7)-(2.8)]. For some parameter values, a stable periodic orbit coexists with a stable equilibrium point (i.e., an attracting focus). In all of those situations, they are separated by an unstable sliding periodic orbit. **Planar IF model (impacting) [see Figure 1(d)]**: In the planar IF model, which is a single-neuron model, whenever the voltage variable \(v\) reaches a firing threshold \(v_{\text{th}}\), the system resets according to \(x(t^{+})=\mathcal{J}(x(t^{-}))\equiv(v_{\text{r}},w(t^{-})+\kappa/\tau)\). Namely, the voltage \(v\) _resets_ to \(v_{\text{r}}\) and the recovery variable \(w\) is _kicked_ by the amount \(\kappa/\tau\), where \(\kappa\) is the kick strength and \(\tau\) is the time scale of the recovery variable.

Figure 1: Nullclines and periodic orbits in a variety of planar PWL models. The region \(R_{1}\) (respectively, \(R_{2}\)) is the zone with \(v>a\) (respectively, \(v<a\)). We show the stable (respectively, unstable) periodic orbits with solid (respectively, dotted) black curves. We show the \(v\)-nullcline (i.e., the curve \(\dot{v}=0\)) with a dotted gray curve and the \(w\)-nullcline (i.e., the curve \(\dot{w}=0\)) with a dashed–dotted gray curve. We indicate the switching manifold (\(v=a\)) with a solid gray line. (a) Absolute model. The unstable equilibrium point, which we indicate with an unfilled circle, is in the zone \(R_{1}\). The parameter values are \(a=0\), \(\overline{w}=-0.1\), \(\overline{v}=0.1\), and \(d=0.5\). (b) PWL homoclinic model. The repelling focus, which we indicate with an unfilled circle, is in zone \(R_{1}\). The saddle point, which we indicate with a half-filled circle, is in zone \(R_{2}\). The parameter values are \(a=0\), \(\delta_{1}=2\), \(\delta_{2}=-0.3667\), \(\tau_{1}=0.5\), and \(\tau_{2}=-0.6333\). (c) McKean model. The unstable periodic orbit is of sliding type. The stable equilibrium point (a focus), which we indicate by a filled black circle, is in the zone \(R_{2}\). The parameter values are \(a=0.3\), \(b=2\), \(\gamma=1\), and \(I=3\). (d) Planar IF model. We indicate the firing threshold with a dashed–dotted black line and indicate the reset value with a dashed gray line. The parameter values are \(v_{\rm th}=1\), \(v_{r}=0.2\), \(a_{w}=0\), \(b_{w}=-1\), \(a_{1}=1\), \(a_{2}=-1\), and \(I=0.1\). For further details about these models, see section 2 and Appendix A.

### Floquet theory for nonsmooth systems

Floquet theory [122] is a popular and well-developed technique to study the stability and bifurcations of periodic orbits of smooth dynamical systems \({\rm d}x/{\rm d}t=f(x)\), where \(x\in\mathbb{R}^{m}\) and \(f(x)\) is a continuously differentiable function. If we write a \(T\)-periodic solution in the form \(x^{\gamma}(t)\),
For a planar system, for which \(x\in\mathbb{R}^{2}\), one of the Floquet multipliers is equal to \(1\) (corresponding to perturbations that are tangent to the periodic orbit) and the other is \(\lambda_{\mathrm{smooth}}=\exp(\kappa_{\mathrm{smooth}}T)\), where \[\kappa_{\mathrm{smooth}}=\frac{1}{T}\int_{0}^{T}\mathrm{Tr}\left(\mathrm{D}f (x^{\gamma}(t))\right)\mathrm{d}t\,. \tag{2.11}\] One determines the stability of periodic orbits from the sign of \(\kappa_{\mathrm{smooth}}\). An orbit is linearly stable if \(\kappa_{\mathrm{smooth}}<0\) and unstable if \(\kappa_{\mathrm{smooth}}>0\). For dynamical systems with nonsmooth or even discontinuous vector fields, one cannot directly use standard Floquet theory [79, 80]. It is also necessary to carefully evolve a perturbation across the switching boundaries. We revisit the adaptation of standard Floquet theory (for non-sliding periodic orbits) to PWL systems [31, 40] of Figure 2: Phase plane of the piecewise-linear Morris–Lecar model, with a stable periodic orbit in black. The periodic orbit has four pieces, with the first and third pieces in \(R_{2}\), the second piece in \(R_{1}\), and the fourth piece in \(R_{3}\). We show the \(v\)-nullcline with a dotted gray line, the \(w\)-nullcline with a dashed–dotted gray line, and the switch manifolds \(\Sigma_{1}\), \(\Sigma_{2}\), and \(\Sigma_{3}\) with solid gray lines. The nullclines are piecewise-linear approximations of those of the original smooth Morris–Lecar model. The open black circle indicates an unstable equilibrium point, the half-filled black circle indicates a saddle point, and the filled black circle indicates a stable equilibrium point (which is in zone \(R_{4}=\{x\in\mathbb{R}^{2}|\ v<a/2\}\)). The parameter values are \(C=0.825\), \(I=0.1\), \(a=0.25\ b=0.5\), \(b^{*}=0.2\), \(\gamma_{1}=2\), and \(\gamma_{2}=0.25\). For further details this model, see section 2 and Appendix A. the form \(\dot{x}=A_{\mu}x(t)+b_{\mu}\), where \(A_{\mu}\in\mathbb{R}^{m\times m}\), \(b_{\mu}\in\mathbb{R}^{m}\), and the phase space has \(P\) distinct regions \(R_{\mu}\) (with \(\mu\in\{1,\dots,P\}\)). Switching events have associated indicator functions \(h_{\mu}(x)\). They occur when \(h_{\mu}(x(t_{i}))=0\) and have switching times \(t_{i}\), with \(i\in\mathbb{Z}\). The state of the system immediately after the switch event is \(x(t_{i}^{+})=\mathcal{J}_{\mu}(x(t_{i}^{-}))\), where \(\mathcal{J}_{\mu}:\mathbb{R}^{m}\to\mathbb{R}^{m}\) is the switch rule, \(t_{i}^{\pm}=\lim_{\epsilon\to 0^{+}}(t_{i}\pm\epsilon)\), and \(x(t_{i}^{-})\) denotes the state immediately before the switch event. We construct a periodic orbit \(x^{\gamma}(t)\) is by patching solutions (built from matrix exponentials) across the boundaries of the regions \(R_{\mu}\). Away from switching events, the variational equation for a periodic orbit is \[\frac{\mathrm{d}}{\mathrm{d}t}\delta x=A_{\mu}\,\delta x\quad\text{for}\ \ x^{\gamma}(t)+\delta x(t)\in R_{\mu}\,, \tag{12}\] where \(\delta x(t)\) is a perturbation of the periodic orbit. The evolution of perturbations in each region is governed by the matrix exponential form \(\delta x(t)=G(t-t_{0};A_{\mu})\delta x(t-t_{0})\), where \(t>t_{0}\) and \(t_{0}\) denotes the time at which the trajectory crosses into region \(R_{\mu}\). To map perturbations across a switching manifold, we use a _saltation operator_[53, 104]. This allows us to evaluate perturbations during the boundary crossing in which either the solution or the vector field (or both) has a discontinuity. 
Muller [104] used saltation operators to calculate Lyapunov exponents of discontinuous systems and Fredriksson and Nordmark [53] used them in a normal-form derivation for impact oscillators. See [81] for a recent review of saltation operators and their use in engineering. In our context, saltation operators admit an explicit matrix construction of the form \[S_{\mu}(t_{i})=\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{i}^{-}))+\frac{[\dot {x}^{\gamma}(t_{i}^{+})-\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{i}^{-})) \dot{x}^{\gamma}(t_{i}^{-})][\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))]^{\top} }{\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))\cdot\dot{x}^{\gamma}(t_{i}^{-})}\,. \tag{2.13}\] We derive (2.13) in Appendix B. Equation (2.13) allows us to write \[\delta x(t_{i}^{+})=S_{\mu}(t_{i})\delta x(t_{i}^{-})\,,\quad x^{\gamma}(t_{i }^{-})+\delta x(t_{i}^{-})\in R_{\mu} \tag{2.14}\] to describe how perturbations are mapped across a switching manifold at the boundary of region \(R_{\mu}\). Combining (2.12) and (2.14) allows us to evaluate \(\delta x(t)\) over one oscillation period \(T\) using \(M\) separate times-of-flight. We thus write \(T=\sum_{i=1}^{M}T_{i}\), with \(\delta x(T)=\Psi\delta x(0)\), where \(\Psi\) is the product \[\Psi=S(t_{M})G(T_{M})S(t_{M-1})G(T_{M-1})\times\dots\times S(t_{2})G(T_{2})S( t_{1})G(T_{1})\,, \tag{2.15}\] where \(G(T_{i})=G(T_{i};A_{\mu(i)})\) and \(S(t_{i})=S_{\mu(i)}(t_{i})\). The index \(\mu(i)\in\{1,\dots,P\}\) indicates the region that the periodic orbit is in at time \(t_{i}^{-}\). The periodic orbit is linearly stable if all nontrivial eigenvalues (i.e., Floquet multipliers) of the matrix \(\Psi\) have moduli less than \(1\), and equivalently if the corresponding Floquet exponents (\(\kappa_{k}=\ln(\lambda_{k})/T\)) all have negative real parts. One (trivial) eigenvalue of \(\Psi\) is equal to \(1\), corresponding to perturbations that are tangential to the periodic orbit. For planar systems, one calculates the lone nontrivial Floquet exponent using the formula \[\kappa=\frac{1}{T}\sum_{i=1}^{M}\left[T_{i}\operatorname{Tr}A_{\mu(i)}+\ln| \det S(t_{i})|\right]\,. \tag{2.16}\] The logarithmic term in (2.16) reflects the contribution of discontinuous switching to the stability of an orbit. If \(S=I_{2}\) (i.e., there is no saltation), the logarithmic term vanishes and we recover the formula (2.11) for a smooth system. In Appendix C, we derive the Floquet-exponent formula (2.16) for planar PWL systems. We use this formula to compute the stability of periodic orbits in all numerical studies of single-oscillator PWL models. ## 3 Isochrons and isostables We now examine networks of interacting PWL oscillators. We start by generalizing results from the theory of weakly coupled systems of smooth oscillators. The theory of weakly coupled oscillators allows us to obtain insights into the phase relationships between the nodes of a network [72]. Historically, the theory of weakly coupled oscillators has focused on phase-reduction techniques using the notion of _isochrons_, which extend the phase variable for a limit-cycle attractor to its basin of attraction [64, 167]. More recent research has emphasized the importance of distance from a limit cycle using _isostable_ coordinates (which we call "isostables" as shorthand terminology) [98, 97, 163]. Employing isochrons and isostables yields reductions to phase networks and phase-amplitude networks, respectively, although the theory for the latter is far less developed than the theory for the former. To introduce the concepts of an isochron and an isostable, it is sufficient to consider the dynamical system \(\dot{x}=f(x)+g(t)\), with \(x\in\mathbb{R}^{m}\). 
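Since saltation matrices reappear throughout what follows, it is worth seeing (2.13) in code once. The following sketch (our illustration, with made-up pre- and post-event vector-field values) assembles \(S\) for the planar IF reset rule \(\mathcal{J}(v,w)=(v_{\text{r}},w+\kappa/\tau)\) and shows how the same matrix later propagates response functions through an event via the jump conditions (3.8)-(3.9) derived below:

```python
# A sketch of the saltation matrix (2.13); all numerical values are illustrative.
import numpy as np

def saltation(DJ, f_minus, f_plus, grad_h):
    """DJ: Jacobian of the switch rule J at x(t_i^-);
    f_minus, f_plus: vector field just before and just after the event;
    grad_h: gradient of the indicator function h at x(t_i^-)."""
    denom = grad_h @ f_minus              # requires a transversal crossing (nonzero)
    return DJ + np.outer(f_plus - DJ @ f_minus, grad_h) / denom

# Planar IF reset J(v, w) = (v_r, w + kappa/tau) at the threshold h(x) = v - v_th,
# so DJ = [[0, 0], [0, 1]] and grad_h = (1, 0):
DJ = np.array([[0.0, 0.0], [0.0, 1.0]])
f_minus = np.array([1.5, -0.2])           # illustrative xdot(t_i^-)
f_plus = np.array([0.8, 0.1])             # illustrative xdot(t_i^+)
S = saltation(DJ, f_minus, f_plus, np.array([1.0, 0.0]))

# The same matrix propagates response functions through the event, cf. (3.8)-(3.9):
Z_minus = np.array([0.3, -0.4])           # illustrative iPRC just before the event
Z_plus = np.linalg.solve(S.T, Z_minus)    # Z^+ = (S^T)^{-1} Z^-
```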
To introduce the concepts of an isochron and an isostable, it is sufficient to consider the dynamical system \(\dot{x}=f(x)+g(t)\), with \(x\in\mathbb{R}^{m}\). ### Phase response and amplitude response Consider a \(T\)-periodic hyperbolic limit cycle for the case \(g(t)=0\). Following Perez-Cervera _et al._[121], we parametrize the limit cycle and its \((m-1)\)-dimensional stable invariant manifold by writing \[\frac{\mathrm{d}}{\mathrm{d}t}\theta=\omega\,,\quad\frac{\mathrm{d}}{\mathrm{d }t}\psi_{k}=\kappa_{k}\psi_{k}\,,\quad k\in\{1,\ldots,m-1\}\,, \tag{17}\] where \(\omega=2\pi/T\) and \(\kappa_{k}\) is the \(k\)th Floquet exponent of the limit cycle. The dynamics for \(\theta\) is uniform rotation, and the dynamics for \(\psi_{k}\) is contraction at a rate of \(\kappa_{k}\). There exists an analytic map \(K:\mathbb{T}\times\mathbb{R}^{m-1}\to\mathbb{R}^{m}\) such that \(x=K(\theta,\psi_{1},\ldots,\psi_{m-1})\)[23]. From the map \(K\), we define a scalar function \(\Theta(x)\) that assigns a phase to any point in a neighborhood \(\Omega\) of the limit cycle. The function \(\Theta(x)=\theta\) if there exists \(\psi_{k}\in\mathbb{R}\) such that \(x=K(\theta,\psi_{1},\ldots,\psi_{m-1})\). This function satisfies \(\Theta(x(t))=\Theta(x(t_{0}))+\omega(t-t_{0})\), and the isochrons are the level curves of \(\Theta(x)\). An isochron extends the notion of a phase (which occurs on a cycle) to the neighborhood \(\Omega\). Similarly, we define a set of functions \(\Sigma_{k}(x)\) that assign a value of the amplitude variable to a point \(x\in\Omega\) by setting \(\Sigma_{k}(x)=\psi_{k}\) if there exists \(\theta\in\mathbb{T}\) such that \(x=K(\theta,\psi_{1},\ldots,\psi_{m-1})\). This function satisfies \(\Sigma_{k}(x(t))=\Sigma_{k}(x(t_{0}))\mathrm{e}^{\kappa_{k}(t-t_{0})}\), and the isostables are the level curves of \(\Sigma_{k}(x)\). Intuitively, one can consider each \(\psi_{k}\) coordinate to be a signed distance from the limit cycle in a direction that is specified by \(v_{k}\), which is the right eigenvector of \(\Phi(T)\) with corresponding eigenvalue \(\lambda_{k}\). See [85, 164] for more details. As an illustration, we show a limit cycle of the absolute model in Figure 3 along with some isochrons and isostables in its neighborhood. Knowledge of isochrons and isostables allows us to compute corresponding changes in phase and amplitude under a small perturbation of \(x\) to \(x+\Delta x\). The change in phase is \(\Delta\Theta(x)=\Theta(x+\Delta x)-\Theta(x)\approx\nabla_{x}\Theta(x)\cdot \Delta x\), and the change in amplitude is \(\Delta\Sigma_{k}(x)=\Sigma_{k}(x+\Delta x)-\Sigma_{k}(x)\approx\nabla_{x} \Sigma_{k}(x)\cdot\Delta x\). It is challenging to determine the map \(K\), although it is not necessary to know it to compute the (\(m\)-dimensional) _infinitesimal_ phase response \(\mathcal{Z}\) and amplitude response \(\mathcal{I}_{k}\), which are \[\mathcal{Z}\equiv\nabla_{x^{\gamma}}\Theta(x)\,,\quad\mathcal{I}_{k}\equiv \nabla_{x^{\gamma}}\Sigma_{k}(x)\,. \tag{18}\] We obtain the infinitesimal phase response (iPRC4) \(\mathcal{Z}\) as the \(T\)-periodic solution of the adjoint equation Footnote 4: The “C” in iPRC (and iIRC) is a historical hangover from the phrase “infinitesimal phase response _curve_”, even though the phase response and amplitude response are vector-valued functions. 
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{Z}=-\mathrm{D}f(x^{\gamma}(t))^{\top} \mathcal{Z}\,, \tag{3.3}\] with the normalization condition \(\mathcal{Z}(0)\cdot f(x^{\gamma}(0))=\omega\)[46, 47, 72]. Similarly, the infinitesimal isostable responses (iIRC4) \(\mathcal{I}_{k}\) satisfy the adjoint equation \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{k}=\left(\kappa_{k}I_{m}-\mathrm{D }f\left(x^{\gamma}(t)\right)^{\top}\right)\mathcal{I}_{k}\,, \tag{3.4}\] with the normalization condition \(\mathcal{I}_{k}(0)\cdot v_{k}=1\), where \(v_{k}\) is the right eigenvector that is associated with the \(k\)th Floquet exponent of the monodromy matrix [161, 102, 163]. Figure 3: Isochrons and isostables for a stable periodic orbit \(x^{\gamma}(t)\) (thick black curve), with Floquet exponent \(\kappa\approx-0.1534\), of the absolute model. We calculate the isochrons, which we show as thin black curves, using the numerical technique in [96]. We compute the isostables, \(\psi=0.04\) (gray dotted curve) and \(\psi=-0.04\) (gray dashed–dotted curves), in the neighborhood of the limit cycle using the method that we describe in subsection 3.2. This yields \(x(t)=x^{\gamma}(t)+\psi p(t)\), where \(p(t)\) is the Floquet mode. The parameter values are the same as those in Figure 1(a). For a nonsmooth system, one needs to augment the above adjoint equations for \(\mathcal{Z}\) (see (3.3)) and \(\mathcal{I}_{k}\) (see (3.4)) to examine the behavior at any event time. For example, Coombes _et al._[32] determined the discontinuous iPRC for the planar PWL integrate-and-fire (IF) model by enforcing normalization conditions on both sides of a switching manifold. Additionally, for piecewise-smooth systems, Park _et al._[112] and Wilson [160] developed a jump operator to map the iPRC through an event by using the above normalization condition and certain linear matching conditions. This jump operator is equal to the inverse transpose of the saltation matrix, and related studies [33, Chapter 5] have also made this observation. Using a similar approach, Chartrand _et al._[25] constructed a discontinuous iPRC for the resonate-and-fire model, and Shirasaka _et al._[139] showed how to analyze "hybrid dynamical systems", which include both continuous and discrete state variables [2]. Ermentrout _et al._[44] computed the iPRC of the Izhikevich neuron using a mixture of a jump operator and numerical computations. Wang _et al._[158] determined the iPRC for several planar nonsmooth systems for a limit cycle with sliding dynamics. To do this, they used a modified saltation matrix and then related it to the jump operator at the point where a sliding motion begins and terminates. Wang _et al._ subsequently applied their approach to neuromechanical control problems [159]. Suppose that one has a matrix representation of the iPRC's jump operator of the form \(\mathcal{R}^{\top}\mathcal{Z}^{+}=\mathcal{Z}^{-}\), where \(\mathcal{Z}^{-}\) denotes the iPRC immediately before an event and \(\mathcal{Z}^{+}\) denotes the iPRC immediately after it. It is then perhaps simplest to construct the jump operator by enforcing normalization across the switching manifold. 
This balancing of normalization conditions at an event time \(t_{i}\) requires \(\langle\mathcal{Z}^{+},\dot{x}^{\gamma}(t_{i}^{+})\rangle=\langle\mathcal{Z}^ {-},\dot{x}^{\gamma}(t_{i}^{-})\rangle\), so \[\langle\mathcal{Z}^{+},\dot{x}^{\gamma}(t_{i}^{+})\rangle=\langle\mathcal{R}^{\top}\mathcal{Z}^{+},\dot{x}^{\gamma}(t_{i}^{-})\rangle=\langle \mathcal{Z}^{+},\mathcal{R}\dot{x}^{\gamma}(t_{i}^{-})\rangle\,, \tag{3.5}\] which yields \[\langle\mathcal{Z}^{+},\dot{x}^{\gamma}(t_{i}^{+})-\mathcal{R}\dot{x}^{ \gamma}(t_{i}^{-})\rangle=0\,. \tag{3.6}\] Equation (3.6) holds for any \(\mathcal{Z}^{+}\). Therefore, \(\dot{x}^{\gamma}(t_{i}^{+})=\mathcal{R}\dot{x}^{\gamma}(t_{i}^{-})\). Additionally, the action of the saltation matrix on \(\dot{x}^{\gamma}(t_{i}^{-})\) satisfies \(\dot{x}^{\gamma}(t_{i}^{+})=S(t_{i})\dot{x}^{\gamma}(t_{i}^{-})\). To see this, we multiply equation (2.13) on the right by \(\dot{x}^{\gamma}(t_{i}^{-})\) to obtain \[S(t_{i})\dot{x}^{\gamma}(t_{i}^{-}) =\mathrm{D}\mathcal{J}_{\mu(i)}(x^{\gamma}(t_{i}^{-}))\dot{x}^{ \gamma}(t_{i}^{-})\] \[\quad+\frac{[\dot{x}^{\gamma}(t_{i}^{+})-\mathrm{D}\mathcal{J}_{ \mu(i)}(x^{\gamma}(t_{i}^{-}))\dot{x}^{\gamma}(t_{i}^{-})][\nabla_{x}h_{\mu(i)} (x^{\gamma}(t_{i}^{-}))]^{\top}\dot{x}^{\gamma}(t_{i}^{-})}{\nabla_{x}h_{\mu(i )}(x^{\gamma}(t_{i}^{-}))\cdot\dot{x}^{\gamma}(t_{i}^{-})}\] \[=\mathrm{D}\mathcal{J}_{\mu(i)}(x^{\gamma}(t_{i}^{-}))\dot{x}^{ \gamma}(t_{i}^{-})+\dot{x}^{\gamma}(t_{i}^{+})-\mathrm{D}\mathcal{J}_{\mu(i)} (x^{\gamma}(t_{i}^{-}))\dot{x}^{\gamma}(t_{i}^{-})\] \[=\dot{x}^{\gamma}(t_{i}^{+})\,. \tag{3.7}\] This implies that \(\mathcal{R}=S\), which in turn yields \[\mathcal{Z}^{+}=(S^{\top}(t_{i}))^{-1}\mathcal{Z}^{-}\,. \tag{3.8}\] An analogous argument for the iIRC gives \[\mathcal{I}_{k}^{+}=(S^{\top}(t_{i}))^{-1}\mathcal{I}_{k}^{-}\,. \tag{3.9}\] All that remains is to determine \(\mathcal{Z}\) and \(\mathcal{I}_{k}\) between events. As usual, the PWL nature of (3.3) and (3.4) implies that one can use matrix exponentials to obtain closed-form solutions. For example, the iPRC \(\mathcal{Z}\) and iIRC \(\mathcal{I}_{k}\) of the McKean, absolute, and homoclinic models are \[\mathcal{Z}(t)=\begin{cases}G(t;-A_{1}^{\top})\mathcal{Z}(0)\,,&0\leq t<T_{1} \\ G(t-T_{1};-A_{2}^{\top})(S_{1}^{\top})^{-1}G(T_{1};-A_{1}^{\top})\mathcal{Z}(0 )\,,&T_{1}\leq t<T\end{cases} \tag{3.10}\] and \[\mathcal{I}(t)=\begin{cases}G(t;Q_{1})\mathcal{I}(0)\,,&0\leq t<T_{1}\\ G(t-T_{1};Q_{2})(S_{1}^{\top})^{-1}G(T_{1};Q_{1})\mathcal{I}(0)\,,&T_{1}\leq t <T\,,\end{cases} \tag{3.11}\] where \(Q_{\mu}=(\kappa I_{2}-A_{\mu}^{\top})\). One still needs to determine the initial data \(\mathcal{Z}(0)\) and \(\mathcal{I}(0)\). To do this, one satisfies the normalization condition and the requirement that responses are periodic. For example, for (3.10), one needs \[\mathcal{Z}(0)=(S_{2}^{\top})^{-1}G(T_{2};-A_{2}^{\top})(S_{1}^{\top})^{-1}G(T_ {1};-A_{1}^{\top})\mathcal{Z}(0)\quad\text{and}\quad\mathcal{Z}_{1}(0)\dot{v}^{\gamma}(0)+ \mathcal{Z}_{2}(0)\dot{w}^{\gamma}(0)=\omega\,.\] One then solves this pair of simultaneous linear equations (e.g., using Cramer's rule, as was done in [29]) to determine the initial data \(\mathcal{Z}(0)=(\mathcal{Z}_{1}(0),\mathcal{Z}_{2}(0))\). One analogously determines \(\mathcal{I}(0)\) using \(\mathcal{I}(0)=(S_{2}^{\top})^{-1}G(T_{2};Q_{2})(S_{1}^{\top})^{-1}G(T_{1};Q_ {1})\mathcal{I}(0)\) and \(\mathcal{I}_{k}(0)\cdot v_{k}=1\). 
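The periodicity-plus-normalization step lends itself to a short numerical routine. The sketch below uses placeholder matrices rather than the McKean or absolute models, and it extracts \(\mathcal{Z}(0)\) as the eigenvector of the period map with eigenvalue (approximately) \(1\), scaled to satisfy \(\mathcal{Z}(0)\cdot\dot{x}^{\gamma}(0)=\omega\); this is a linear-algebra alternative to the Cramer's-rule calculation in [29].

```python
# A minimal sketch of fixing the iPRC initial data for a two-zone planar
# model: enforce periodicity of Z under the period map M and then scale to
# meet the normalization Z(0) . xdot(0) = omega. All inputs are placeholders;
# for a genuine orbit, M has an eigenvalue equal to 1.
import numpy as np
from scipy.linalg import expm

def iprc_initial_data(A1, A2, S1, S2, T1, T2, xdot0, omega):
    M = (np.linalg.inv(S2.T) @ expm(-A2.T * T2)
         @ np.linalg.inv(S1.T) @ expm(-A1.T * T1))
    evals, evecs = np.linalg.eig(M)
    z = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])  # eigenvalue ~ 1
    return omega * z / (z @ xdot0)                          # normalization

A1 = np.array([[0.5, -1.0], [1.0, -1.0]])
A2 = np.array([[-1.0, -1.0], [1.0, -1.0]])
Z0 = iprc_initial_data(A1, A2, np.eye(2), np.eye(2), 0.8, 1.4,
                       xdot0=np.array([1.0, 0.2]), omega=2 * np.pi / 2.2)
```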
One can follow the same procedure for models with as many regions as desired (e.g., for the PWL Morris-Lecar model, which has four regions). In Figure 4 and Figure 5, we show plots of iPRCs and iIRCs, respectively, that we construct using this method for several PWL models. Sayli _et al._[135] used direct numerical computations to confirm the shapes of these responses. The similarity between the shapes of some iPRCs and iIRCs, such as that between Figure 4(d) and Figure 5(d) for the PWL Morris-Lecar model, was seen previously in studies of certain smooth models [62]. Indeed, comparing the responses that we have constructed with those for smooth models [62, 101] illustrates that a PWL approach can successfully capture the qualitative response features of their smooth counterparts. Figure 4: The \(v\)-component of the iPRC (solid black curve) and underlying shape of the periodic \(v\)-component (gray dashed curve) for (a) the absolute model with the same parameters as in Figure 1(a), (b) the PWL homoclinic model with the same parameters as in Figure 1(b), (c) the McKean model with the same parameters as in Figure 1(c), and (d) the PWL Morris–Lecar model with the same parameters as in Figure 2. ### 3.2 Phase-amplitude dynamics With the results from subsection 3.1, we are in a position to construct the phase dynamics and amplitude dynamics for weak forcing with \(g\neq 0\). In the neighborhood of a stable limit cycle, we expand the gradients of \(\Theta(x)\) and \(\Sigma_{k}(x)\) and write \[\nabla_{(x^{\gamma}+\Delta x)}\Theta(x) =\mathcal{Z}(\theta)+H_{\Theta,x^{\gamma}}\Delta x+\mathcal{O} \left(\|\Delta x\|^{2}\right)\,, \tag{3.12}\] \[\nabla_{(x^{\gamma}+\Delta x)}\Sigma_{k}(x) =\mathcal{I}_{k}(\theta)+H_{\Sigma_{k},x^{\gamma}}\Delta x+ \mathcal{O}\left(\|\Delta x\|^{2}\right)\,, \tag{3.13}\] where \(H_{\Theta,x^{\gamma}}\) and \(H_{\Sigma_{k},x^{\gamma}}\) are the Hessian matrices of second derivatives of \(\Theta\) and \(\Sigma_{k}\), respectively, evaluated at the limit cycle \(x^{\gamma}\). Close to a periodic orbit, we use Floquet theory [122] to write \[\Delta x\left(\theta,\psi_{1},\ldots,\psi_{m-1}\right)=\sum_{k=1}^{m-1}\psi_{k} p_{k}(\theta/\omega)\,, \tag{3.14}\] where \(p_{k}(t)=\mathrm{e}^{-\kappa_{k}t}\Phi(t)v_{k}\). Using the chain rule, we see that \(\dot{\theta}=\nabla_{(x^{\gamma}+\Delta x)}\Theta(x)\cdot\dot{x}\) and \(\dot{\psi}_{k}=\nabla_{(x^{\gamma}+\Delta x)}\Sigma_{k}(x)\cdot\dot{x}\) in the neighborhood of the limit cycle. Therefore, equations (3.12), (3.13), and (3.14) yield a phase-amplitude approximation of the full dynamics that is accurate to second order. This approximation is \[\frac{\mathrm{d}\theta}{\mathrm{d}t} =\omega+\left(\mathcal{Z}(t)+\sum_{k=1}^{m-1}\left[\mathcal{B}^{ k}(t)\psi_{k}\right]\right)\cdot g(t)\,, \tag{3.15}\] \[\frac{\mathrm{d}\psi_{k}}{\mathrm{d}t} =\kappa_{k}\psi_{k}+\left(\mathcal{I}_{k}(t)+\sum_{l=1}^{m-1} \left[\mathcal{C}^{l}_{k}(t)\psi_{l}\right]\right)\cdot g(t)\,, \tag{3.16}\] where we define the notation \(\mathcal{B}^{k}(t)\equiv H_{\Theta,x^{\gamma}}p_{k}(t)\) and \(\mathcal{C}^{l}_{k}(t)\equiv H_{\Sigma_{k},x^{\gamma}}p_{l}(t)\) and we enforce the conditions \[-\mathcal{Z}(\theta(t))^{\top}\mathrm{D}f\left(x^{\gamma}(t) \right)p_{k}(t) =f\left(x^{\gamma}(t)\right)^{\top}\mathcal{B}^{k}(t)\,, \tag{3.17}\] \[\mathcal{I}_{k}(\theta(t))^{\top}\left(\kappa_{k}I_{m}-\mathrm{D }f\left(x^{\gamma}(t)\right)\right)p_{l}(t) =f\left(x^{\gamma}(t)\right)^{\top}\mathcal{C}^{l}_{k}(t)\,. \tag{3.18}\] 
Figure 5: The \(v\)-component of the iIRC (solid black curve) and underlying shape of the periodic \(v\)-component (dashed gray curve) for (a) the absolute model with the same parameters as in Figure 1(a), (b) the PWL homoclinic model with the same parameters as in Figure 1(b), (c) the McKean model with the same parameters as in Figure 1(c), and (d) the PWL Morris–Lecar model with the same parameters as in Figure 2. Following Wilson and Ermentrout [160], one can show that \(\mathcal{B}^{k}\) and \(\mathcal{C}^{l}_{k}\) satisfy \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{B}^{k} =-\left(\mathrm{D}f^{\top}\left(x^{\gamma}(t)\right)+\kappa_{k}I_{m }\right)\mathcal{B}^{k}\,, \tag{3.19}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{C}^{l}_{k} =-\left(\mathrm{D}f^{\top}\left(x^{\gamma}(t)\right)+\left(\kappa _{l}-\kappa_{k}\right)I_{m}\right)\mathcal{C}^{l}_{k}\,, \tag{3.20}\] where we have used the fact that the Hessian of the vector field vanishes for PWL dynamical systems. Importantly, because the system is PWL, we again use matrix exponentials to construct explicit formulas for \(p_{k}(t)\) (by first solving the variational equation (2.10) for \(\Phi(t)\)), \(\mathcal{B}^{k}(t)\), and \(\mathcal{C}^{l}_{k}(t)\) (which are all \(T\)-periodic), being mindful to incorporate appropriate jump conditions. As we show in Appendix D, the jump condition on \(\mathcal{B}\) for the transition across a switching manifold is \[\mathcal{B}^{+}=(S^{\top}(t_{i}))^{-1}\mathcal{B}^{-}+C^{-1}(t_{i})\eta(t_{i})\,, \tag{3.21}\] where we have suppressed the \(k\) indices and where, for a planar system, \[C(t_{i})=\begin{bmatrix}\dot{v}^{\gamma}(t_{i}^{+})&\dot{w}^{\gamma}(t_{i}^{+} )\\ 0&1\end{bmatrix}\,,\quad\eta(t_{i})=\begin{bmatrix}\mathcal{Z}^{-}\cdot(A_{\mu( i)}p(t_{i}^{-}))-\mathcal{Z}^{+}\cdot(A_{\mu(i+1)}p(t_{i}^{+}))\\ \frac{p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}(A_{\mu(i)}^{\top}\mathcal{ Z}^{-}-A_{\mu(i+1)}^{\top}\mathcal{Z}^{+})\cdot(0,1)\end{bmatrix}\,,\] with \(p^{v}\) denoting the \(v\)-component of \(p\). Similarly, the jump condition for \(\mathcal{C}\) for the transition across a switching boundary is \[\mathcal{C}^{+}=(S^{\top}(t_{i}))^{-1}\mathcal{C}^{-}+C^{-1}(t_{i})\zeta(t_{i })\,, \tag{3.22}\] where we have again suppressed the \(k\) indices and \[\zeta(t_{i})=\begin{bmatrix}\mathcal{I}^{+}\cdot[(\kappa I_{2}-A_{\mu(i+1)})p (t_{i}^{+})]-\mathcal{I}^{-}\cdot[(\kappa I_{2}-A_{\mu(i)})p(t_{i}^{-})]\\ \frac{p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}\left[(A_{\mu(i)}^{\top}- \kappa I_{2})\mathcal{I}^{-}-(A_{\mu(i+1)}^{\top}-\kappa I_{2})\mathcal{I}^{+ }\right]\cdot(0,1)\end{bmatrix}\,. \tag{3.23}\] 
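In code, these event-time updates are single linear solves. The following is a minimal sketch of (3.8), (3.9), (3.21), and (3.22) with placeholder inputs; the matrices \(S\), \(C(t_{i})\), and the vectors \(\eta\), \(\zeta\) must come from the underlying periodic orbit.

```python
# A minimal sketch of the event-time updates (3.8), (3.9), (3.21), and
# (3.22): map the responses across a switching event with linear solves
# instead of explicit matrix inverses. All inputs are placeholders.
import numpy as np

def jump_response(S, R_minus):
    """(S^T)^{-1} R^-  -- the update for Z and I in (3.8) and (3.9)."""
    return np.linalg.solve(S.T, R_minus)

def jump_B(S, C_mat, eta, B_minus):
    """(S^T)^{-1} B^- + C^{-1} eta  -- the update (3.21); (3.22) is analogous."""
    return np.linalg.solve(S.T, B_minus) + np.linalg.solve(C_mat, eta)
```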
For example, for PWL models with two zones (such as the McKean model), the above method yields the following explicit formulas: \[p(t)=\begin{cases}\mathrm{e}^{-\kappa t}G(t;A_{1})\widetilde{v}\,,&0\leq t<T_ {1}\\ \mathrm{e}^{-\kappa t}G(t-T_{1};A_{2})S(t_{1})G(T_{1};A_{1})\widetilde{v}\,,&T _{1}\leq t<T\,,\end{cases} \tag{3.24}\] where \(\widetilde{v}\) is the right eigenvector that is associated with the nontrivial Floquet exponent, and \[\mathcal{B}(t)=\begin{cases}G(t;K_{1})\mathcal{B}(0)\,,&0\leq t<T_{1}\\ G(t-T_{1};K_{2})[(S^{\top}(t_{1}))^{-1}G(T_{1};K_{1})\mathcal{B}(0)+C^{-1 }(t_{1})\eta(t_{1})]\,,&T_{1}\leq t<T\,,\end{cases} \tag{3.25}\] where \(K_{\mu}=-(A_{\mu}^{\top}+\kappa I_{2})\), and \[\mathcal{C}(t)=\begin{cases}G(t;-A_{1}^{\top})\mathcal{C}(0)\,,&0\leq t<T_{1} \\ G(t-T_{1};-A_{2}^{\top})[(S^{\top}(t_{1}))^{-1}G(T_{1};-A_{1}^{\top})\mathcal{C }(0)+C^{-1}(t_{1})\zeta(t_{1})]\,,&T_{1}\leq t<T\,.\end{cases}\] For \(\mathcal{B}(t)\) and \(\mathcal{C}(t)\), one can determine initial data in an analogous fashion as for equation (3.10) by simultaneously enforcing the periodicity constraints and the conditions (3.17) and (3.18). For further details, see [135]. In Figure 6 and Figure 7, we show example plots of \(\mathcal{B}(t)\) and \(\mathcal{C}(t)\) that we obtain with the above approach. Figure 6: A plot of \(\mathcal{B}(t)\), with the \(v\) and \(w\) components on the left and right vertical axes, respectively. (a) The McKean model with the same parameters as in Figure 1(c). (b) The PWL Morris–Lecar model with the same parameters as in Figure 2. Figure 7: A plot of \(\mathcal{C}(t)\), with the \(v\) and \(w\) components on the left and right vertical axes, respectively. (a) The absolute model with the same parameters as in Figure 1(a). (b) The McKean model with the same parameters as in Figure 1(c). We are now ready to examine how to use the phase and amplitude to describe the dynamics of networks of the form (1.1). ## 4 Phase-oscillator networks We first consider the case of strong attraction to a limit cycle. Therefore, to leading order, we do not need to consider amplitude coordinates. Using (3.15), we take a leading-order approximation of (1.1) with (1.2) and \(|\sigma|\ll 1\) to obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\theta_{i}=\omega+\sigma\mathcal{Z}(\theta_{i}/ \omega)\cdot\sum_{j=1}^{N}w_{ij}G(x^{\gamma}(\theta_{i}/\omega),x^{\gamma}( \theta_{j}/\omega))\,,\quad i\in\{1,\ldots,N\}\,, \tag{4.1}\] with \(\theta_{i}\in[0,2\pi)\). This reduced dynamical system evolves on \(\mathbb{T}^{N}\), whereas the original dynamical system evolves on \(\mathbb{R}^{N\times m}\). We obtain a further (and pragmatic) reduction to a model in terms of phase differences (rather than individual phases) after averaging over one oscillation period. See, e.g., [46] and the review [8]. We obtain the Kuramoto-like model [83] \[\frac{\mathrm{d}}{\mathrm{d}t}\theta_{i}=\omega+\sigma\sum_{j=1}^{N}w_{ij}H( \theta_{j}-\theta_{i})\,,\quad H(\theta)=\frac{1}{T}\int_{0}^{T}\mathcal{Z}(t )\cdot G(x^{\gamma}(t),x^{\gamma}(t+\theta/\omega))\,\mathrm{d}t\,, \tag{4.2}\] where the _phase-interaction function_ \(H\) is \(2\pi\)-periodic. We write it as a Fourier series \(H(\theta)=\sum_{n\in\mathbb{Z}}H_{n}\mathrm{e}^{\mathrm{i}n\theta}\), where the complex Fourier coefficients take the form \(H_{n}=\mathcal{Z}_{n}\cdot G_{-n}\), with \(\mathcal{Z}_{n}\) and \(G_{n}\) the corresponding vector Fourier coefficients of \(\mathcal{Z}\) and \(G\), respectively. 
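One can also evaluate the averaging integral in (4.2) directly by quadrature. The sketch below samples a (placeholder) orbit and iPRC on a uniform grid over one period and averages a circularly shifted integrand; the linear diffusive coupling \(G(x_{i},x_{j})=x_{j}-x_{i}\) is chosen only for illustration.

```python
# A minimal sketch of evaluating the phase-interaction function H in (4.2)
# by direct quadrature over one period. The orbit and iPRC samples here are
# toy placeholders; in practice they come from the PWL constructions above.
import numpy as np

def phase_interaction(Z, X, coupling):
    """H on the grid theta_j = 2*pi*j/n from n equispaced period samples."""
    n = Z.shape[0]
    return np.array([np.mean(np.sum(Z * coupling(X, np.roll(X, -s, axis=0)),
                                    axis=1)) for s in range(n)])

n = 256
t = 2 * np.pi * np.arange(n) / n
X = np.column_stack([np.cos(t), np.sin(t)])     # toy periodic-orbit samples
Z = np.column_stack([-np.sin(t), np.cos(t)])    # toy iPRC samples
H = phase_interaction(Z, X, lambda xi, xj: xj - xi)
```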
For computationally useful representations of the coefficients, see [29]. Using (4.2), it is straightforward to construct relative equilibria (which correspond to oscillatory network states) and determine their stability in terms of both local dynamics and structural connectivity [45]. The structural connectivity is encoded in a graph (i.e., a structural network) of \(N\) nodes whose connectivity pattern is specified by a weighted _adjacency matrix_ \(w\in\mathbb{R}^{N\times N}\) (which is sometimes also called a "coupling matrix" or a "connectivity matrix") with entries \(w_{ij}\). The spectrum of the graph is the set of eigenvalues of \(w\). This spectrum also determines the eigenvalues of the associated combinatorial graph Laplacian \(\mathcal{L}\). We denote the eigenvalues of \(w\) by \(\lambda_{l}\), with \(l\in\{0,\ldots,N-1\}\); we denote the corresponding right eigenvectors by \(u_{l}\). For a phase-locked state \(\theta_{i}(t)=\Omega t+\phi_{i}\) (where \(\phi_{i}\) is the constant phase of each oscillator), one determines stability in terms of the eigenvalues of the Jacobian matrix \(\widehat{H}(\Phi)\) of (4.2), where \(\Phi=(\phi_{1},\ldots,\phi_{N})\) and its components are \[[\widehat{H}(\Phi)]_{ij}=\sigma[H^{\prime}(\phi_{j}-\phi_{i})w_{ij}-\delta_{ij }\sum_{k=1}^{N}H^{\prime}(\phi_{k}-\phi_{i})w_{ik}]\,. \tag{4.3}\] The globally synchronous steady state, \(\phi_{i}=\phi\) for all \(i\), exists in a network with a phase-interaction function that vanishes at the origin (i.e., \(H(0)=0\)) or for a network with a constant row sum (i.e., \(\sum_{j}w_{ij}=\text{constant}\) for all \(i\)). Using the Jacobian (4.3), synchrony is linearly stable if \(\sigma H^{\prime}(0)>0\) and all of the eigenvalues of the structural network's combinatorial _graph Laplacian_[105] \[[\mathcal{L}]_{ij}\equiv-w_{ij}+\delta_{ij}\sum_{k=1}^{N}w_{ik} \tag{4.4}\] lie in the right-hand side of the complex plane. Because the eigenvalues of a graph Laplacian all have the same sign (apart from a single \(0\) value), stability is determined entirely by the sign of \(\sigma H^{\prime}(0)\). In a globally coupled network with \(w_{ij}=1/N\), the graph Laplacian \(\mathcal{L}\) has one \(0\) eigenvalue and \(N-1\) degenerate eigenvalues at \(1\), so synchrony is stable if \(\sigma H^{\prime}(0)>0\). In a globally connected network, one also expects the splay state \(\phi_{i}=2\pi i/N\) to exist generically [10]. Additionally, in the limit \(N\to\infty\), the eigenvalues that determine stability are related to the Fourier coefficients of \(H\) by the equation \(\lambda_{n}=-2\pi\mathrm{i}n\sigma H_{-n}\)[83]. To illustrate these results in a concrete setting, it is informative to consider a globally coupled network of PWL Morris-Lecar neurons. In this case, \(w_{ij}=1/N\) and \(G(x_{i},x_{j})=v_{j}-v_{i}=v((\theta_{j}-\theta_{i})/\omega)-v(0)\) for some common orbit \(v(t)\). This yields \(H(\theta)=\sum_{n\in\mathbb{Z}}\mathcal{Z}_{n}^{v}v_{-n}[\mathrm{e}^{-\mathrm{i}n\theta}-1]\), where the superscript \(v\) denotes the voltage component; we can readily calculate the Fourier coefficients of the phase response \(\mathcal{Z}\) and orbit \(v\) for a PWL system [29]. In the upper-left panel of Figure 13, we show a plot of the phase-interaction function. By visually inspecting the plot, we see that \(H^{\prime}(0)<0\). Therefore, for \(\sigma>0\), the synchronous state is unstable. 
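The synchrony test above reduces to a few lines of linear algebra. The following sketch (with a hypothetical network and an assumed value of \(H^{\prime}(0)\)) builds the Laplacian (4.4) and checks the sign condition; it illustrates the criterion and is not a substitute for computing \(H^{\prime}(0)\) from a specific model.

```python
# A minimal sketch of the linear-stability test for synchrony: form the
# combinatorial graph Laplacian (4.4) and combine its spectrum with the
# sign of sigma * H'(0). The network and H'(0) below are placeholders.
import numpy as np

def synchrony_stable(w, sigma, dH0):
    L = -w + np.diag(w.sum(axis=1))
    lam = np.linalg.eigvals(L)
    lam = np.delete(lam, np.argmin(np.abs(lam)))   # discard the single 0 mode
    return bool(sigma * dH0 > 0 and np.all(lam.real > 0))

N = 10
w = np.full((N, N), 1.0 / N)                       # global coupling
print(synchrony_stable(w, sigma=0.5, dH0=1.0))     # True: lam = 1 (N-1 times)
```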
See [82] for a geometric argument for why synchrony is unstable for gap-junction coupling when the uncoupled oscillators are near a homoclinic bifurcation (as is the case here). A numerical calculation of this splay state's eigenvalues illustrates that it is also unstable. Direct numerical simulations with large networks of oscillators illustrate an interesting large time-scale rhythm for which the Kuramoto synchrony order parameter \(R=|N^{-1}\sum_{j=1}^{N}\mathrm{e}^{\mathrm{i}\theta_{j}}|\) fluctuates (possibly chaotically) between the value \(R=1\) for complete synchrony and the value \(R=0\)[29, 67]. In Figure 8, we illustrate these dynamics. ### 4.1 An application to the structure-function relationship in large-scale brain dynamics The weakly-coupled-oscillator theory that we described in section 4 is natural for exploring relationships between the brain's structural connectivity (SC) and the associated supported neural activity (i.e., its function). There are studies of the SC of the human brain, and graph-theoretic approaches have revealed a variety of features, including a small-world architecture [11], hub regions and cores [108], rich-club organization [16], a hierarchical-like modular structure [147], and cost-efficient wiring [17]. One can evaluate the emergent brain activity that SC supports using functional-connectivity (FC) network analyses [13], which describe patterns of temporal coherence between brain regions. Researchers have associated disruptions in SC and FC networks with many psychiatric and neurological diseases [12]. A measure of FC that is especially appropriate for network models of the form (4.2) is the pairwise phase coherence \[R_{ij}=\left|\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\cos(\theta_{i}(s)- \theta_{j}(s))\,\mathrm{d}s\right|\,. \tag{4.5}\] Models of interacting neural masses yield natural choices of the phase-interaction function [51, 71]. For simplicity, we use a biharmonic phase-interaction function [68] \[H(\theta)=-\sin(\theta-2\pi a)+r\sin(2\theta) \tag{4.6}\] to illustrate how SC can influence FC. Using the results of the present section, we find that the stability boundary for the synchronous state is \(H^{\prime}(0)=0\), which yields \(r=r_{c}=\cos(2\pi a)/2\). Direct simulations of the phase oscillator network (4.2) using human connectome data (parcellated into 68 brain regions) beyond the point of instability of the synchronous state reveal very rich patterns of pairwise coherence (4.5). These complicated FC dynamics reflect the fact that all eigenmodes of the graph Laplacian \(\mathcal{L}\) are unstable, leading to network dynamics that mix all of these states. In Figure 9, we show plots of the emergent FC matrix from the network model (4.2) and the interactions that are prescribed by the SC matrix. Figure 8: Evolution of the Kuramoto order parameter for the dynamics of a weakly coupled phase-oscillator network of \(N=1000\) PWL Morris–Lecar neurons with linear voltage coupling. The evolution of the Kuramoto order parameter \(R=|N^{-1}\sum_{j=1}^{N}\mathrm{e}^{\mathrm{i}\theta_{j}}|\) illustrates that the system fluctuates between unstable states of synchrony (\(R=1\)) and asynchrony (\(R=0\)). (The phase-interaction function \(H(\theta)\) that we show in the upper-left panel of Figure 13 is \(H_{1}(\theta)\).) ### 4.2 Dead zones in networks with refractoriness Consider once again a phase-oscillator network (4.2) that one obtains from a phase reduction of a weakly coupled system (1.1) with coupling function (1.2). 
The phase-interaction function \(H(\theta)\) has a _dead zone_ \(U\) if \(U\subset[0,2\pi)\) is an open interval on which \(H=0\)[6]. Let \(\mathrm{DZ}(H)\) denote the union of all dead zones of \(H\). When the phase difference \(\theta_{j}-\theta_{i}\in\mathrm{DZ}(H)\), oscillator \(i\) does not respond to changes in oscillator \(j\) because the connection between them is temporarily absent. Therefore, dead zones of the interaction function \(H\) lead to an effective decoupling of network nodes for certain network states \(\theta=(\theta_{1},\ldots,\theta_{N})\). For a state \(\theta\), the _effective interaction graph_ is the subgraph of the underlying structural network (which is defined by the connection strengths \(w_{ij}\)) that includes only the edges \(j\to i\) for which \(\theta_{j}-\theta_{i}\notin\mathrm{DZ}(H)\). Along solution trajectories, the effective interaction graph evolves with time. The set of subgraphs of the underlying structural network (i.e., graph) that are realizable by trajectories of the system depends on the dead zones of the coupling function \(H\). Ashwin _et al._[6] explored the interplay between dead zones of coupling functions and the realization of particular effective interaction graphs, and they began to explore how the dynamics of the associated coupled system corresponds to changing effective interactions along a trajectory. Figure 9: (a) A structural-connectivity (SC) matrix from diffusion magnetic-resonance-imaging (MRI) data that was made available through the Human Connectome Project [48]. The data was processed using a probabilistic-tractography approach [56]. In this data set, the pairwise connectivity strength in a dense 60,000-node network is equal to the fraction of streams that propagate from each voxel \(i\) on the white/gray matter boundary and terminate at voxel \(j\). This network was then parcellated to create a 68-node network, which we employ. See [151] for further details. (b) A functional-connectivity (FC) matrix that uses the phase-coherence measure (4.5) from a phase-oscillator network with the SC pattern in (a) and the biharmonic phase-interaction function (4.6). The parameter values are \(\sigma=1\), \(\omega=1\), \(a=0.1\), and \(r=r_{c}-1/2=\cos(2\pi a)/2-1/2\). Because one derives the phase-oscillator network (4.2) from the original nonlinear-oscillator network described by (1.1), (1.2), it is natural to examine conditions on the nonlinear oscillator dynamics that yield a dead zone of the phase-interaction function \(H\). Both the coupling function \(G\) and the iPRC \(\mathcal{Z}\) influence whether or not there is an open interval \(A\subset[0,2\pi)\) with \(H(\theta)=0\) for all \(\theta\in A\). See Ashwin _et al._[7] for conditions for dead zones for relaxation oscillators with a separable coupling that acts only through one component of \(x_{i}\in\mathbb{R}^{m}\). Here, a coupling function \(G\) is separable if it can be written as \[G(x_{i},x_{j})=G^{in}(x_{j})\odot G^{res}(x_{i})\,, \tag{4.7}\] where \(G^{in}\) is an input function, \(G^{res}\) is a response function, and \(v\odot w\) denotes the Hadamard (element-wise) product of the vectors \(v\) and \(w\). They also showed that pulsatile coupling can yield \(\xi\)-approximate dead zones \(U_{\xi}\), on which \(\sup\{|H(\theta)|:\theta\in U_{\xi}\}\leq\xi\). In the present section, we show how to obtain \(\xi\)-approximate dead zones in the phase-interaction function for networks of synaptically coupled PWL neuron models with refractoriness. 
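Numerically, locating \(\xi\)-approximate dead zones from samples of \(H\) is a simple thresholding exercise; a sketch (with a toy interaction function, and ignoring wrap-around at \(2\pi\)) follows.

```python
# A minimal sketch of locating xi-approximate dead zones: return the
# parameter intervals on which |H| <= xi, given samples of H on a uniform
# grid over [0, 2*pi). The sampled H below is a toy placeholder.
import numpy as np

def approximate_dead_zones(theta, H, xi):
    small = np.abs(H) <= xi
    zones, start = [], None
    for i, flag in enumerate(small):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            zones.append((theta[start], theta[i - 1]))
            start = None
    if start is not None:                      # run extends to the grid end
        zones.append((theta[start], theta[-1]))
    return zones

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
H = np.sin(theta) * np.exp(-10 * np.sin(theta / 2) ** 2)  # toy H, flat middle
print(approximate_dead_zones(theta, H, xi=1e-3))
```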
Many neural oscillators have a refractory period after emitting an action potential (i.e., a nerve impulse). During this time, the neuron does not respond to input. For a phase oscillator, input during the refractory period does not cause the phase to advance, thereby preventing further firing events. Therefore, the iPRC \(\mathcal{Z}(t)\) is approximately \(0\) for one or more intervals \((t_{1},t_{2})\subset[0,T)\), where \(T\) is the oscillator period. An example of a planar PWL model with such a refractory period is the continuous McKean model with "three pieces" [29, 99]. This is a PWL caricature of the FHN model, with the \(v\)-nullcline broken into three pieces, which partitions the phase space into three zones, with two switching manifolds. The dynamics of the system satisfy \[C\dot{v}=\rho(v)-w+I\,,\quad\dot{w}=v-\gamma w\,, \tag{4.8}\] where \(C>0\), \(\gamma\geq 0\), and (to approximate the cubic \(v\)-nullcline) \(\rho(v)\) is given by equation (A.2). The dynamical system (4.8) has a stable periodic orbit when there is a single unstable equilibrium on the center branch of the cubic \(v\)-nullcline. In Figure 10, we show the phase portrait for parameter values with a stable periodic orbit. See Appendix A for further details about the model. Figure 10: The phase plane for the McKean model has a \(v\)-nullcline that is a piecewise-linear approximation of a cubic (dotted line) and a linear \(w\)-nullcline (dashed–dotted line). The parameters are \(C=0.01\), \(I=0\), \(\gamma=0\), and \(a=-0.5\). The solid black curve indicates a stable periodic orbit. For \(C\ll 1\), the dynamics of the voltage \(v\) are fast and the dynamics of the recovery variable \(w\) are slow. Therefore, as \(C\to 0\), the system spends most of its time on the left and right branches of the \(v\)-nullcline, with fast switching between the two branches. Consequently, \(\mathcal{Z}^{v}\) (i.e., the \(v\) component of the iPRC) is approximately \(0\) for much of the limit cycle, with peaks corresponding to locations near the switching planes. In Figure 11, we show \(\mathcal{Z}^{v}\) (and \(v\) on the limit cycle) for \(C=0.01\). In the singular \(C\to 0\) limit, the iPRC for this model is discontinuous [28, 73]. We now compute the phase-interaction function for a network of \(N\) synaptically coupled continuous McKean neurons with time-dependent forcing: \[C\dot{v}_{i}=\rho(v_{i})-w_{i}+I+\sigma\sum_{j=1}^{N}w_{ij}s_{j}(t)\,,\quad \dot{w}_{i}=v_{i}-\gamma w_{i}\,,\quad i\in\{1,\ldots,N\}\,. \tag{4.9}\] Suppose that the synaptic input from neuron \(j\) takes the standard "event-driven" form \[s_{j}(t)=\sum_{p\in\mathbb{Z}}\eta(t-t_{j}^{p})\,, \tag{4.10}\] where \(t_{j}^{p}\) denotes the \(p\)th firing time of neuron \(j\) and the causal synaptic filter \(\eta\) describes the shape of the post-synaptic response. For a phase-locked system, one writes the firing times \(t_{j}^{p}\) as \(t_{j}^{p}=pT-\phi_{j}/\omega\) for a phase offset \(\phi_{j}\in[0,2\pi)\). Therefore, the phase-interaction function is \[H(\theta)=\frac{1}{T}\int_{0}^{T}\mathcal{Z}^{v}(\omega t-\theta)P(\omega t) \,\mathrm{d}t=\frac{1}{2\pi}\int_{0}^{\infty}\mathcal{Z}^{v}(u-\theta)\eta(u/ \omega)\,\mathrm{d}u\,, \tag{4.11}\] where \(P(\phi)=\sum_{p\in\mathbb{Z}}\eta(\phi/\omega-pT)\). Because \(\mathcal{Z}^{v}\) is \(2\pi\)-periodic, we can write \(\mathcal{Z}^{v}(u)=\sum_{n\in\mathbb{Z}}\mathcal{Z}^{v}_{n}\mathrm{e}^{\mathrm{i}nu}\), where \(\mathcal{Z}^{v}_{n}=(2\pi)^{-1}\int_{0}^{2\pi}\mathcal{Z}^{v}(u)\mathrm{e}^{-\mathrm{ i}nu}\,\mathrm{d}u\). 
Consequently, \[H(\theta)=\frac{1}{T}\sum_{n\in\mathbb{Z}}\mathcal{Z}^{v}_{-n}\mathrm{e}^{ \mathrm{i}n\theta}\hat{\eta}(n/T)\,, \tag{4.12}\] where \(\hat{\eta}(k)=\int_{0}^{\infty}\mathrm{e}^{-2\pi\mathrm{i}kt}\eta(t)\,\mathrm{d}t\) is the Fourier transform of the causal synaptic filter \(\eta\). That is, \(H(\theta)=\sum_{n\in\mathbb{Z}}H_{n}\mathrm{e}^{\mathrm{i}n\theta}\), where \(H_{n}=\mathcal{Z}_{-n}^{v}\hat{\eta}(n/T)/T\). If we adopt the common choice \[\eta(t)=\alpha^{2}t\mathrm{e}^{-\alpha t}\Theta(t)\,, \tag{4.13}\] where \(\alpha>0\) and \(\Theta\) is a Heaviside step function, then \[H_{n}=\frac{\alpha^{2}\mathcal{Z}_{-n}^{v}}{T(\alpha+2\pi\mathrm{i}n/T)^{2}}\,. \tag{4.14}\] Figure 11: The \(v\) component \(\mathcal{Z}^{v}\) of the iPRC of the continuous McKean model (solid black curve) and the corresponding value of \(v\) on the limit cycle (dashed gray curve). The parameter values are \(C=0.01\), \(I=0\), \(\gamma=0\), and \(a=-0.5\). Observe that \(\mathcal{Z}^{v}\) is approximately \(0\) for much of the limit cycle, with peaks corresponding to locations near the switching planes. In Figure 12, we show the phase-interaction function \(H(\theta)\) for two values of \(\alpha\). This interaction function has two large dead zones. The larger the value of \(\alpha\), the larger the dead zones of \(H(\theta)\). For the chosen parameter values, the dead zones of \(H\) are symmetric for large values of \(\alpha\)[6]. (That is, if \(\theta\in\mathrm{DZ}(H)\), then \(-\theta\in\mathrm{DZ}(H)\). Such a coupling function is "dead-zone symmetric".) This symmetry places restrictions on the effective interaction graphs that can be realized by the trajectories of the model. For example, if \(H\) is dead-zone symmetric, then all of the effective interaction graphs for \(H\) are undirected [6, Proposition 3.7]. In the limit \(\alpha\to\infty\), we also observe that \(H(\theta)\) is a scaled version of the \(v\) component of the iPRC. This follows from \(H(\theta)=\mathcal{Z}^{v}(-\theta)/T\) and the fact that \(\lim_{\alpha\to\infty}\eta(t)=\delta(t)\), giving pulsatile coupling. Figure 12: The phase-interaction function \(H(\theta)\) for the continuous McKean model with synaptic coupling for \(\alpha=1000\) (black curve, fast synapses) and \(\alpha=10\) (gray curve, slower synapses). Synapse-firing events occur when \(v=0.6\) and \(\dot{v}>0\). The other parameter values are \(I=0\), \(\gamma=0\), and \(a=-0.5\). The larger value of \(\alpha\) results in a larger dead zone of \(H(\theta)\). ## 5 Phase-amplitude networks We now consider the second-order approximation (3.15)–(3.16) that allows us to use both phase and amplitude coordinates to treat oscillatory network dynamics. In contrast to the phase-only approach in section 4, there has been much less work on the theory and applications of phase-amplitude networks, although this body of work is now growing, as exemplified by [162]. Because of this, and to facilitate our exposition, we focus on a small network of two identical planar oscillators with linear coupling through the \(v\) component. Pairs and larger networks of linearly coupled smooth Morris–Lecar neurons have been studied in Nicks _et al._[106], where conditions for linear stability of various phase-locked states in globally coupled phase-amplitude networks are also derived. Our discussion parallels the one in Ermentrout _et al._[44] for a smooth model of synaptically coupled thalamic neurons [133]. 
Specifically, we consider equation (1.1) with \(N=2\) oscillators, a coupling strength of \(|\sigma|\ll 1\), and \[g_{1}(x_{1},x_{2}) =\sigma[v_{2}-v_{1},0]^{\top}\,, \tag{5.1}\] \[g_{2}(x_{1},x_{2}) =\sigma[v_{1}-v_{2},0]^{\top}\,.\] To determine the form of \(g(t)\) in the phase-amplitude equations (3.15)–(3.16) and thereby obtain the corresponding phase-amplitude reduction of the network equations (1.1), we write \(v_{i}(t)=v^{\gamma}\left(\theta_{i}(t)\right)+\psi_{i}(t)p^{v}\left(\theta_{i}(t)\right)\) and assume that the amplitudes \(\psi_{i}\) are \(\mathcal{O}(\sigma)\). Substituting this expression into (3.15) and (3.16) and keeping terms up to order \(\mathcal{O}(\sigma^{2})\) yields the phase-amplitude reduced network equations \[\dot{\theta}_{1} =\omega+\sigma\left[h_{1}\left(\theta_{1},\theta_{2}\right)+\psi _{1}h_{2}\left(\theta_{1},\theta_{2}\right)+\psi_{2}h_{3}\left(\theta_{1}, \theta_{2}\right)\right]\,, \tag{5.2}\] \[\dot{\psi}_{1} =\kappa\psi_{1}+\sigma\left[h_{4}\left(\theta_{1},\theta_{2} \right)+\psi_{1}h_{5}\left(\theta_{1},\theta_{2}\right)+\psi_{2}h_{6}\left( \theta_{1},\theta_{2}\right)\right]\,,\] \[\dot{\theta}_{2} =\omega+\sigma\left[h_{1}\left(\theta_{2},\theta_{1}\right)+\psi _{2}h_{2}\left(\theta_{2},\theta_{1}\right)+\psi_{1}h_{3}\left(\theta_{2}, \theta_{1}\right)\right]\,,\] \[\dot{\psi}_{2} =\kappa\psi_{2}+\sigma\left[h_{4}\left(\theta_{2},\theta_{1} \right)+\psi_{2}h_{5}\left(\theta_{2},\theta_{1}\right)+\psi_{1}h_{6}\left( \theta_{2},\theta_{1}\right)\right]\,,\] where we give the detailed forms of the doubly \(2\pi\)-periodic functions \(h_{1},\ldots,h_{6}\) in Appendix E. To further reduce the system (5.2) to a phase-difference form, we use averaging (see section 4) and write \(H_{i}(y)=(2\pi)^{-1}\int_{0}^{2\pi}h_{i}(s,y+s)\mathrm{d}s\) and \(\chi\equiv\theta_{2}-\theta_{1}\). This yields \[\dot{\chi} =\sigma\left[H_{1}(-\chi)-H_{1}(\chi)+\psi_{1}\left(H_{3}(-\chi)- H_{2}(\chi)\right)+\psi_{2}\left(H_{2}(-\chi)-H_{3}(\chi)\right)\right]\,, \tag{5.3}\] \[\dot{\psi}_{1} =\kappa\psi_{1}+\sigma\left[H_{4}(\chi)+\psi_{1}H_{5}(\chi)+\psi _{2}H_{6}(\chi)\right]\,,\] \[\dot{\psi}_{2} =\kappa\psi_{2}+\sigma\left[H_{4}(-\chi)+\psi_{2}H_{5}(-\chi)+ \psi_{1}H_{6}(-\chi)\right]\,.\] In Figure 13, we show the six interaction functions \(H_{1},\ldots,H_{6}\) for the PWL Morris-Lecar model. We compute these functions using the Fourier representation that we described in section 4. Note that these six functions are all that is needed to describe the phase-amplitude reduced dynamics of networks of any finite size \(N\)[106, 113]. For the synchronous \(0\)-amplitude solution \([\chi,\psi_{1},\psi_{2}]^{\top}=[0,0,0]^{\top}\), the Jacobian of (5.3) has the form \[J=\begin{bmatrix}-2\sigma H_{1}^{\prime}(0)&2\sigma H_{3}(0)&-2\sigma H_{3}(0 )\\ \sigma H_{4}^{\prime}(0)&\kappa-\sigma H_{6}(0)&\sigma H_{6}(0)\\ -\sigma H_{4}^{\prime}(0)&\sigma H_{6}(0)&\kappa-\sigma H_{6}(0)\end{bmatrix}\,, \tag{5.4}\] where we have used the fact that linear coupling gives \(H_{2}(0)=-H_{3}(0)\) and \(H_{5}(0)=-H_{6}(0)\). The synchronous solution is linearly stable (i.e., all eigenvalues of \(J\) have negative real parts) when \(\kappa<0\) (which we assume to obtain a stable periodic orbit), \(\kappa<2\sigma(H_{1}^{\prime}(0)+H_{6}(0))\), and \(H_{1}^{\prime}(0)(\kappa-2\sigma H_{6}(0))+2\sigma H_{3}(0)H_{4}^{\prime}(0)<0\). Reducing to the phase-only description by taking \(H_{2},\ldots,H_{6}\equiv 0\) recovers the result that the synchronous solution is linearly stable when \(\sigma H_{1}^{\prime}(0)>0\). 
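The stability conditions above follow from the spectrum of (5.4), which one can also inspect directly; the sketch below assembles the Jacobian from assumed (placeholder) values of \(\kappa\), \(\sigma\), \(H_{1}^{\prime}(0)\), \(H_{3}(0)\), \(H_{4}^{\prime}(0)\), and \(H_{6}(0)\), which in practice come from the interaction-function construction.

```python
# A minimal sketch of the synchrony test for the two-oscillator
# phase-amplitude reduction: assemble the Jacobian (5.4) and check that all
# eigenvalues have negative real parts. All numerical values are placeholders.
import numpy as np

def sync_jacobian(kappa, sigma, dH1, H3, dH4, H6):
    return np.array([
        [-2 * sigma * dH1,  2 * sigma * H3,      -2 * sigma * H3],
        [sigma * dH4,       kappa - sigma * H6,   sigma * H6],
        [-sigma * dH4,      sigma * H6,           kappa - sigma * H6],
    ])

J = sync_jacobian(kappa=-0.15, sigma=0.1, dH1=-0.2, H3=0.05, dH4=0.3, H6=-0.1)
stable = bool(np.all(np.linalg.eigvals(J).real < 0))
```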
One can similarly determine stability conditions for the antisynchronous state (for which the phase difference between the two oscillators is \(\chi=\pi\)). For the antisynchronous state, the shared orbit satisfies \(\psi_{1}=\psi_{2}=\psi\), where \(\psi=-\sigma H_{4}(\pi)/(\kappa+\sigma(H_{5}(\pi)+H_{6}(\pi)))\), which is constant (so that the orbit coincides with an isostable of the node dynamics) [106]. Importantly, for both solutions, the phase-only reduction does not predict any bifurcations from changing \(\sigma>0\), whereas the phase-amplitude approach does allow this possibility. This is the case because both the eigenvalues of (5.4) and those of the Jacobian for the antisynchronous state have a richer dependence on the coupling strength \(\sigma\). See Figure 14 for an interesting bifurcation diagram for the PWL Morris-Lecar model that we obtain by varying \(\sigma\). We see that we can restabilize the synchronous state by increasing \(\sigma\) when \(\sigma\gtrapprox 0.2\). Moreover, at smaller values of \(\sigma\), stable periodic orbits arise from an Andronov–Hopf bifurcation of the antisynchronous state. In one region, for which \(0.15\lessapprox\sigma\lessapprox 0.2\), our analysis predicts that there are no stable solution branches. Direct numerical simulations (see Figure 16) of the full model (A.1) confirm this prediction. Although the qualitative predictions of the phase-amplitude formalism are better than those of the phase-only formalism, it remains to be seen if these predictions can also give successful quantitative insights. We explore this issue in section 6. Figure 13: The interaction functions \(H_{1},\ldots,H_{6}\) for the PWL Morris–Lecar model. The parameters are the same as those in Figure 2. ## 6 Strongly coupled oscillator networks In previous sections, we explored how collective behavior (such as phase-locked states) arises in weakly coupled networks. We considered the dynamics of the system (1.1) on a reduced phase space that is given by the Cartesian product of each oscillator's phase and possibly a subset of the oscillator amplitudes. However, the assumption of weak coupling is not valid in many real-world situations. There are far fewer results for strongly coupled oscillator networks than for weakly coupled oscillator networks, and the former are often restricted to special states such as synchrony [33, Chapter 7]. One popular approach to obtain insights into the behavior of strongly coupled oscillators in the context of smooth dynamical systems is the master-stability-function (MSF) approach. The MSF approach5 of Pecora and Carroll [117] to assess the stability of synchronous states of a network in terms of the spectral properties of the network's adjacency matrix is exact. It does not rely on any approximations, aside from those in numerical implementations (to construct periodic orbits and compute Floquet exponents). In the present section, we describe how to augment this MSF approach for PWL systems using the saltation operators that we described in subsection 2.1. For PWL systems, one can use semi-analytical approaches (with numerical computations only for times of flight between switching manifolds) instead of the numerical computations (i.e., simulations of differential equations) that one uses for smooth nonlinear systems.6 Footnote 5: At least on occasion, MSF approaches were used before they were invented officially in the 1990s. See Segel and Levin [137] (a conference-proceeding paper from 1976). 
Footnote 6: Recently, Corragio _et al._[34] used an alternative approach for systems with a so-called “\(\sigma\)-QUAD property” (which includes many discontinuous neural, genetic, and impact networks) to prove global asymptotic convergence to synchronization in networks of piecewise-smooth dynamical systems. Figure 14: Bifurcation diagram for two linearly coupled PWL Morris–Lecar models (see (A.1)) showing the phase difference \(\chi\) under variation in the overall coupling strength \(\sigma\). The solid (respectively, dashed) curves indicate stable (respectively, unstable) branches of steady-state solutions. The filled (respectively, empty) circles indicate the amplitude of stable (respectively, unstable) periodic orbits. Two branches with phase difference \(\chi\neq 0,\pi\) meet at \(\sigma\lessapprox 0.15\). These branches both terminate in a limit point, so the apparent change of stability is just an artifact of this coincidence. For \(0.15\lessapprox\sigma\lessapprox 0.2\), we do not obtain any stable solution branches. The parameters are the same as those in Figure 2. ### 6.1 The master stability function for nonsmooth systems To introduce the MSF formalism, we start with an arbitrary connected network of \(N\) coupled identical oscillators (1.1), (1.2) with \(G(x_{i},x_{j})=\mathcal{H}(x_{i})-\mathcal{H}(x_{j})\). The output for each oscillator is determined by a vector function \(\mathcal{H}:\ \mathbb{R}^{m}\to\mathbb{R}^{m}\) (which can be either linear or nonlinear). The network dynamics are \[\frac{\mathrm{d}}{\mathrm{d}t}x_{i}=f\left(x_{i}\right)-\sigma\sum_{j=1}^{N} \mathcal{L}_{ij}\mathcal{H}\left(x_{j}\right)\,, \tag{6.1}\] where the matrix \(\mathcal{L}\in\mathbb{R}^{N\times N}\), with entries \(\mathcal{L}_{ij}\), is the graph Laplacian (4.4). By construction, the matrix \(\mathcal{L}\) has \(0\) row sums. The \(N-1\) constraints \(x_{1}(t)=x_{2}(t)=\cdots=x_{N}(t)=s(t)\) define the invariant _synchronization manifold_, where \(s(t)\) is a solution in \(\mathbb{R}^{m}\) of the associated uncoupled system. That is, \(\mathrm{d}s(t)/\mathrm{d}t=f(s(t))\). Any motion that begins on the synchronization manifold remains there, so the associated synchronized state is _flow-invariant_. When all oscillators are initially on the synchronization manifold with identical initial conditions, they always remain synchronized. To assess the stability of a synchronized state, we perform a linear stability analysis by inserting a perturbed solution \(x_{i}(t)=s(t)+\delta x_{i}(t)\) into (6.1) to obtain the variational equation \[\frac{\mathrm{d}\delta x_{i}}{\mathrm{d}t}=\mathrm{D}f(s)\delta x_{i}-\sigma \mathrm{D}\mathcal{H}(s)\sum_{j=1}^{N}\mathcal{L}_{ij}\delta x_{j}\,, \tag{6.2}\] where \(\mathrm{D}f(s)\in\mathbb{R}^{m\times m}\) and \(\mathrm{D}\mathcal{H}(s)\in\mathbb{R}^{m\times m}\), respectively, denote the Jacobians of \(f(s)\) and \(\mathcal{H}(s)\), which one evaluates at the synchronous solution \(s(t)\). We introduce \(U=(\delta x_{1},\delta x_{2},\ldots,\delta x_{N})\in\mathbb{R}^{mN}\) and use the tensor product (i.e., Kronecker product) \(\otimes\) for matrices to write the variational equation as \[\frac{\mathrm{d}U}{\mathrm{d}t}=\left[I_{N}\otimes\mathrm{D}f(s)- \sigma(\mathcal{L}\otimes\mathrm{D}\mathcal{H}(s))\right]U\,. 
\tag{6.3}\] We organize the normalized right eigenvectors of \(\mathcal{L}\) into a matrix \(P\) such that \(P^{-1}\mathcal{L}=\Lambda P^{-1}\), with \(\Lambda=\mathrm{diag}(\lambda_{1},\lambda_{2},\ldots,\lambda_{N})\), where \(\lambda_{\eta}\) (with \(\eta\in\{1,\ldots,N\}\)) are the corresponding eigenvalues of \(\mathcal{L}\). We introduce a new variable \(Y\) using the linear transformation \(Y=(P\otimes I_{m})^{-1}U\) to obtain a block-diagonal system \[\frac{\mathrm{d}Y}{\mathrm{d}t}=\left[I_{N}\otimes\mathrm{D}f(s)-\sigma( \Lambda\otimes\mathrm{D}\mathcal{H}(s))\right]Y\,, \tag{6.4}\] where \(I_{N}\) is the \(N\times N\) identity matrix. This yields a set of \(N\) decoupled \(m\)-dimensional equations, \[\frac{\mathrm{d}\xi_{l}}{\mathrm{d}t}=\left[\mathrm{D}f(s)-\sigma\lambda_{l} \mathrm{D}\mathcal{H}(s)\right]\xi_{l}\,,\quad l\in\{1,\ldots,N\}\,, \tag{6.5}\] that are parametrized by the eigenvalues of the graph Laplacian \(\mathcal{L}\). The Jacobians \(\mathrm{D}f(s)\) and \(\mathrm{D}\mathcal{H}(s)\) are independent of the block label \(l\). Because the row sums of \(\mathcal{L}\) are \(0\), there is always a \(0\) eigenvalue \(\lambda_{1}=0\), with a corresponding eigenvector \([1,1,\ldots,1]^{\top}\) that characterizes a perturbation that is tangential to the synchronization manifold. The remaining \(N-1\) transversal perturbations (which are associated with the other \(N-1\) solutions of equation (6.5)) must damp out for the synchronous state to be linearly stable. In general, some eigenvalues \(\lambda_{l}\) of \(\mathcal{L}\) may be complex. (For example, this can occur when the adjacency matrix is not symmetric.) This leads us to consider the system \[\frac{\mathrm{d}\xi}{\mathrm{d}t}=\left[\mathrm{D}f(s)-\mu\mathrm{D}\mathcal{ H}(s)\right]\xi\,,\quad\mu\in\mathbb{C}\,,\quad\xi\in\mathbb{C}^{m}\,. \tag{6.6}\] All of the individual variational equations in the system (6.5) have the same structure as that of the system (6.6). The only difference is that \(\mu=\sigma\lambda_{l}\). Equation (6.6) is the so-called _master variational equation_. To determine its stability, we calculate its largest Floquet exponent [65] as a function of \(\mu\). The resulting function is the so-called _master stability function_ (MSF). More explicitly, for a given \(s(t)\), the MSF is the function that maps the complex number \(\mu\) to the largest Floquet exponent of the dynamical system (6.6). The synchronized state of a network of coupled oscillators is linearly stable if the MSF is negative at \(\mu=\sigma\lambda_{l}\), where \(\lambda_{l}\) ranges over all eigenvalues of the matrix \(\mathcal{L}\) except for \(\lambda_{1}=0\). The Laplacian form of the coupling in equation (6.1) guarantees that there exists a synchronous state. However, other forms of coupling are also natural. For example, consider \[\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=f\left(x_{i}\right)+\sigma\sum_{j=1}^{N}w _{ij}\mathcal{H}\left(x_{j}\right)\,. \tag{6.7}\] Substituting \(x_{i}(t)=s(t)\), with \(i\in\{1,2,\ldots,N\}\), into equation (6.7) yields \[\frac{\mathrm{d}s}{\mathrm{d}t}=f\left(s\right)+\sigma\mathcal{H}\left(s \right)\sum_{j=1}^{N}w_{ij}\,. \tag{6.8}\] To guarantee that all oscillators have the same behavior, we assume that \(\sum_{j=1}^{N}w_{ij}=\) constant for all \(i\). If the constant is \(0\), then we say that the system is _balanced_[132, 134, 39, 156]. 
In a balanced network, the existence of a synchronous network state is independent of the interaction parameters, so varying these parameters cannot induce any nonsmooth bifurcations (arising from a change of the orbit shape and its possible tangential intersection with a switching manifold). One can apply the MSF framework to chaotic systems, for which one calculates Lyapunov exponents instead of Floquet exponents [117, 37, 119]. One can also generalize the MSF formalism to network settings in which the coupling between oscillators includes a time delay [38, 86]. A synchronous solution is a very special network state, and more elaborate types of behavior can occur. An example is a "chimera state" (see [31, 94]), in which some oscillators are synchronized but others behave asynchronously [111]. The original MSF approach allows one to investigate the stability of networks of identical oscillators, but it has been extended to study stability in networks of almost identical oscillators [148]. For other discussions of the MSF formalism and its applications, see [118, 127, 4, 8]. One cannot directly apply the MSF methodology to networks of nonsmooth oscillators, and it is desirable to extend it to such systems. We first review a technique that adapts the MSF to PWL systems [31], and we then apply this approach to the models in section 2. We seek to show how the linear stability of the synchronous solution changes under variations of the coupling strength in networks of coupled oscillators. For networks of the form (6.1) with _linear_ vector functions \(\mathcal{H}\) (including the "linear diffusive case" \(\mathcal{H}(x^{1},x^{2},x^{3},\ldots,x^{m})=(x^{1},0,0,\ldots,0)\)) that one builds from PWL systems of the form (2.1), both \(\mathrm{D}f(s)\) and \(\mathrm{D}\mathcal{H}(s)\) are piecewise-constant matrices. Therefore, in each region \(R_{\mu}\), equation (6.6) takes the form \[\frac{\mathrm{d}\xi_{\mu}}{\mathrm{d}t}=[A_{\mu}-\beta J]\xi_{\mu}\,,\quad \beta\in\mathbb{C}\,, \tag{6.9}\] where \(J=\mathrm{D}\mathcal{H}(s)\) and \(\xi_{\mu}\in\mathbb{C}^{m}\). We solve (6.9) using matrix exponentials. This yields \(\xi_{\mu}(t)=G\left(t;A_{\mu}-\beta J\right)\xi_{\mu}(0)\), where \(G(t;A)\) is given by equation (2.5), although we need to be careful when evolving perturbations through the switching manifolds. Using the notation \(U=(\delta x_{1},\delta x_{2},\ldots,\delta x_{N})\in\mathbb{R}^{Nm}\), at each event time \(t_{i}\), we write \(U^{+}=(I_{N}\otimes S(t_{i}))U^{-}\). We then use the transformation \(Y=(P\otimes I_{m})^{-1}U\) and obtain \(Y^{+}=(I_{N}\otimes S(t_{i}))Y^{-}\), which is block diagonal, with \(N\) blocks of size \(m\times m\). The action of the saltation operator on each block is \(\xi(t_{i}^{+})=S(t_{i})\xi(t_{i}^{-})\). We use the technique in Appendix B to treat perturbations across a switching boundary. After one period of motion (with \(M\) switching events), this yields \(\xi(T)=\Psi\xi(0)\), where \[\Psi=S(t_{M})G(T_{M})S(t_{M-1})G(T_{M-1})\times\cdots\times S(t_{2})G(T_{2})S( t_{1})G(T_{1}) \tag{6.10}\] and \(G(T_{i})=G(T_{i};A_{\mu(i)}-\beta J)\) (see (2.15)). For PWL systems, all of the individual variational equations, which take the form (6.5), have the same structure as that of the system (6.9). The only difference is that now there is an additional term \(\beta=\sigma\lambda_{l}\). 
Therefore, by treating \(\beta\) as a parameter in the complex plane, we can determine the stability of (6.5) by checking that the MSF of (6.9) is negative at each \(\beta=\sigma\lambda_{l}\). Alternatively, we can calculate \(\Psi\) for each \(l\); we use the notation \(\Psi(l)\) to emphasize this. We then obtain that the synchronous state is linearly stable if the periodic solution of a single oscillator is linearly stable and the eigenvalues of \(\Psi(l)\), for each \(l\in\{2,\ldots,N\}\), lie within the unit disc. The power of the MSF approach is that it allows one to treat the stability of synchronous states for all possible networks. One first computes the MSF and then uses the spectrum of the chosen network to determine stability. Unlike in weakly-coupled-oscillator theory, one can perform the MSF stability analysis without making any approximations. ### 6.2 MSF versus weakly-coupled-oscillator theory for systems of \(N=2\) oscillators Before we present applications of the augmented MSF to a few example nonsmooth systems, we compare and contrast this exact approach to results from weakly-coupled-oscillator theory without focusing too much on network structure. Consider a simple reciprocal network (i.e., all coupling is bidirectional) of two nodes with an interaction that is described by (5.1). The nonzero eigenvalue of the graph Laplacian \(\mathcal{L}\) of this network is \(+2\). For the phase-only description, the synchronous state is linearly stable if \(\sigma H^{\prime}(0)>0\) (see section 4). For the phase-amplitude description, the synchronous state is linearly stable if the three eigenvalues of (5.4) are all in the left-hand side of the complex plane (see section 5). For the exact approach of the present section, the synchronous state is linearly stable if the MSF is negative at \(2\sigma\) (see subsection 6.1). Using the same oscillator parameters as those in Figure 1 and Figure 2, we find that weakly-coupled-oscillator theory sometimes fails to capture the behavior that is predicted by the exact (MSF) approach. For the McKean and absolute models, all three approaches give the same qualitative prediction that the synchronous state is linearly stable for small positive \(\sigma\) (i.e., for weak coupling) and this stability persists for larger \(\sigma\) (i.e., strong coupling). For the PWL Morris-Lecar model (see (A.1)), the prediction from the phase-only approximation is that synchrony is always unstable for weak positive coupling. By contrast, the phase-amplitude approximation and MSF approach predict that synchrony can restabilize with increasing coupling strength \(\sigma\), although they predict somewhat different values for the critical coupling strength \(\sigma=\sigma_{c}\) at which the network restabilizes. In Figure 15, we plot the real part \(\kappa_{\max}=\operatorname{Re}(\ln(\operatorname{MSF}(\beta)))/T\) (where \(\beta=2\sigma\)) of the largest Floquet exponent from the MSF as a function of \(\sigma\). In the same figure, we plot the real part \(\lambda_{\max}=\max\{\operatorname{Re}(\lambda_{1}),\operatorname{Re}(\lambda_ {2}),\operatorname{Re}(\lambda_{3})\}\) of the largest eigenvalue from the phase-amplitude approximation. The phase-amplitude prediction is that \(\sigma_{c}\approx 0.2071\), whereas the (exact) MSF prediction is that \(\sigma_{c}\approx 0.272\). 
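The MSF evaluation that underlies curves like those in Figure 15 amounts to the same matrix-exponential product as before, now with \(A_{\mu}\) shifted by \(\beta J\). The following is a minimal sketch of (6.10) with placeholder matrices, times of flight, and saltation data; in an actual computation, these come from the single-oscillator periodic orbit.

```python
# A minimal sketch of the PWL master stability function: for a complex beta,
# form the product (6.10) with G(T_i; A_mu - beta*J) and saltation matrices,
# then return the real part of the largest Floquet exponent. All inputs are
# placeholders.
import numpy as np
from scipy.linalg import expm

def msf(beta, As, Ss, Ts, J):
    Psi = np.eye(J.shape[0], dtype=complex)
    for A, S, T in zip(As, Ss, Ts):
        Psi = S @ expm((A - beta * J) * T) @ Psi
    return np.log(np.max(np.abs(np.linalg.eigvals(Psi)))) / sum(Ts)

J = np.array([[1.0, 0.0], [0.0, 0.0]])        # linear diffusive voltage coupling
A1 = np.array([[0.5, -1.0], [1.0, -1.0]])
A2 = np.array([[-1.0, -1.0], [1.0, -1.0]])
# Synchrony requires msf(sigma * lam_l, ...) < 0 for every nonzero Laplacian
# eigenvalue lam_l (here, lam = 2 for a reciprocal two-node network).
value = msf(0.4 + 0.0j, [A1, A2], [np.eye(2), np.eye(2)], [0.8, 1.4], J)
```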
The phase-only theory is incorrect qualitatively, the phase-amplitude theory is correct qualitatively, and the MSF approach (which agrees with direct numerical simulations) is correct both qualitatively and quantitatively. In Figure 14, we explored the behavior of the PWL Morris-Lecar model in the phase-amplitude reduction for coupling strengths \(\sigma\in(0,\sigma_{c})\) (where synchrony is unstable) using bifurcation analysis, which predicts the existence of a stable antisynchronous state (for which there is a relative phase of \(\pi\) between the two oscillators) and of frequency-locked states (i.e., states in which oscillators are synchronized at the same frequency) of different amplitudes. Direct numerical simulations (see Figure 16) confirm these predictions. For the PWL homoclinic model, both the phase-only approximation and the phase-amplitude approximation predict that synchrony is always unstable for weak positive coupling \(\sigma\) in a two-oscillator reciprocal network. These predictions are both inconsistent with the MSF prediction (which agrees with direct numerical simulations) of two windows of positive coupling with stable synchronous states. In Figure 17, we show a plot of the MSF that reveals a nontrivial structure, with two ellipsoidal regions where it is negative. (There is a very small ellipsoidal region near the origin that is not visible with the employed scales.) We also show a slice through \(\beta\) along the real axis that illustrates where the real part of the largest network Floquet exponent is negative, generating two regions in which the synchronous state is stable. Figure 15: Predictions of linear stability of the synchronous state for the PWL Morris–Lecar model (see (A.1)) on a reciprocal two-oscillator network using the phase–amplitude approximation (from weakly-coupled-oscillator theory) and MSF approaches. We plot the real part of the largest eigenvalue \(\lambda_{\max}=\lambda_{\max}(\sigma)\) of the Jacobian from the phase–amplitude reduction, which predicts that the synchronous state restabilizes at \(\sigma_{c}\approx 0.2071\) as one increases the coupling strength \(\sigma\). The largest Floquet exponent from the MSF approach is \(\kappa_{\max}=\kappa_{\max}(\sigma)\), which gives a more accurate prediction of restabilization at \(\sigma_{c}\approx 0.272\). ### 6.3 A brief note about graph spectra As we have seen in our discussions, the spectrum of a graph is important for determining the stability of the synchronous state in both the weakly-coupled-oscillator and MSF approaches. We thus briefly discuss the spectra of a few simple but notable types of graphs. See [155] for a thorough exploration of graph spectra. For a network (i.e., a graph) of \(N\) nodes, one specifies the connectivity pattern by a coupling matrix \(w\in\mathbb{R}^{N\times N}\) (which is often called an "adjacency matrix") with entries \(w_{ij}\). The spectrum of the graph is the set of eigenvalues of the matrix \(w\). This spectrum also determines the eigenvalues of the associated combinatorial graph Laplacian \(\mathcal{L}\). In our discussion, we denote the eigenvalues of \(w\) by \(\lambda_{l}\), with \(l\in\{0,\ldots,N-1\}\), and we denote the corresponding right eigenvectors by \(u_{l}\). **Global.**: The simplest type of network with global coupling has adjacency-matrix entries \(w_{ij}=N^{-1}\). The associated network is fully connected with homogeneous coupling. 
The matrix \(w\) has an eigenvector \((1,1,\ldots,1)\) with eigenvalue \(\lambda_{0}=1\) and \(N-1\) degenerate eigenvalues \(\lambda_{l}=0\), for \(l\in\{1,\ldots,N-1\}\), with corresponding eigenvectors \(u_{l}\) whose entries sum to \(0\) (i.e., \(\sum_{j=1}^{N}(u_{l})_{j}=0\)).

Figure 16: Direct numerical simulations of two reciprocally coupled PWL Morris–Lecar oscillators (see (A.1)) for coupling strengths of (a,b) \(\sigma=0.1\), (c,d) \(\sigma=0.18\), (e,f) \(\sigma=0.25\), and (g,h) \(\sigma=0.28\). In panels (a, c, e, g), we show network activity in the \((v_{n},w_{n})\) plane. In panels (b, d, f, h), we show the corresponding time series for \(v_{1}\) and \(v_{2}\). For \(\sigma\gtrapprox 0.272\), the synchronous state is stable. For \(\sigma\lessapprox 0.272\), we observe different frequency-locked patterns. The oscillator parameters are the same as those in Figure 2.

Figure 17: Predictions from the MSF approach of the stability of the synchronous state in the PWL homoclinic model for a reciprocal two-node network. (a) The MSF has a nontrivial structure, with two ellipsoidal regions where it is negative (which we color in white). The region near the origin is very small and not visible with the employed scales. It is easier to see this small region of stability in (b), in which we plot the real part of the largest network Floquet exponent for \(\sigma=\beta/2\) with \(\beta\in\mathbb{R}\). One region of stability is \(0.0395\lessapprox\sigma\lessapprox 0.0439\) and the other is \(1.178\lessapprox\sigma\lessapprox 2.226\). The oscillator parameters are the same as those in Figure 1.

**Star.**: A star network has a hub-and-spoke structure, with a central oscillator that is adjacent to \(N-1\) leaf nodes (which are not adjacent to each other). Star networks arise in computer-network topologies in which one central computer acts as a conduit to transmit messages (providing a common connection point for all nodes through a hub). This star-graph architecture has the adjacency matrix \[w=\begin{bmatrix}0&1/K&1/K&\cdots&1/K\\ 1&0&0&\cdots&0\\ 1&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&0&0&\cdots&0\end{bmatrix} \tag{6.11}\] for some constant \(K\). If \(K=N-1\), the matrix \(w\) has an eigenvalue \(\lambda_{0}=1\) with corresponding eigenvector \([1,1,\ldots,1]^{\top}\), an eigenvalue \(\lambda_{1}=-1\) with corresponding eigenvector \([-1,1,\ldots,1]^{\top}\), and \(N-2\) degenerate eigenvalues \(\lambda_{l}=0\), for \(l\in\{2,\ldots,N-1\}\), with corresponding eigenvectors \(u_{l}\) of the form \([0,u_{1},\ldots,u_{N-1}]^{\top}\) that satisfy the constraint \(\sum_{j=1}^{N-1}u_{j}=0\).

**Circulant.**: A circulant network's adjacency matrix has entries \(w_{ij}=w_{|i-j|}\). Its rows are shifted versions of the column vector \([w_{0},\ldots,w_{N-1}]^{\top}\). Its eigenvalues are \(\lambda_{l}=\sum_{j=0}^{N-1}w(|j|)\omega_{l}^{j}\), where \(\omega_{l}=\exp(2\pi\mathrm{i}l/N)\) is an \(N\)th root of unity. The eigenvectors are \(u_{l}=[1,\omega_{l},\omega_{l}^{2},\ldots,\omega_{l}^{N-1}]^{\top}\).
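The listed spectra are easy to verify numerically. The following short sketch (our own illustration, not code from the cited references) builds each coupling matrix with numpy and checks its eigenvalues; for the circulant case, it also checks the discrete-Fourier formula for \(\lambda_{l}\).

```python
# Numerical check of the spectra of global, star, and circulant coupling matrices.
import numpy as np

N = 7

# Global coupling: w_ij = 1/N.
w_global = np.full((N, N), 1.0 / N)
print(np.sort(np.linalg.eigvals(w_global).real))  # one eigenvalue 1, the rest 0

# Star: hub row scaled by 1/K with K = N - 1.
K = N - 1
w_star = np.zeros((N, N))
w_star[0, 1:] = 1.0 / K
w_star[1:, 0] = 1.0
print(np.sort(np.linalg.eigvals(w_star).real))    # eigenvalues 1, -1, and 0 (N-2 times)

# Circulant: rows are cyclic shifts of the vector [w_0, ..., w_{N-1}].
rng = np.random.default_rng(0)
c = rng.random(N)
w_circ = np.array([np.roll(c, i) for i in range(N)])
lam_direct = np.sort_complex(np.round(np.linalg.eigvals(w_circ), 8))
lam_formula = np.sort_complex(np.round(np.fft.fft(c), 8))  # lambda_l = sum_j w_j omega_l^j
print(np.allclose(lam_direct, lam_formula))       # True: same multiset of eigenvalues
```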
### Network symmetries and cluster states

Perfect global synchronization is just one of many states that can emerge in networks of oscillators. Indeed, one expects instabilities of the synchronous state to generically yield "cluster states", in which subpopulations synchronize, but not necessarily with each other. Such cluster synchronization has been relatively well-explored in phase-oscillator networks [9, 22], although less is known about it in networks of limit-cycle oscillators.

For this more general scenario, researchers have made progress in networks with symmetry or when the coupling has a linear diffusive (i.e., Laplacian) form [123, 124, 58, 60, 15]. Pecora _et al_. [120] and Sorrentino _et al_. [146] extended the MSF approach (see subsection 6.1) to analyze the stability of cluster states that stem either from network symmetries or from Laplacian coupling (see equations (11) and (12)). Cluster states arise naturally in networks with symmetry, and cluster synchronization can also occur in networks without symmetry when some of the nodes have synchronous input patterns [61]. For networks of identical oscillators that satisfy equation (11), a symmetry of the network is a permutation \(\gamma\) of the nodes that does not change the governing equations. These permutations are precisely the ones that satisfy \(M_{\gamma}\mathcal{L}=\mathcal{L}M_{\gamma}\), where \(\mathcal{L}\in\mathbb{R}^{N\times N}\) is the graph Laplacian (13) and \(M_{\gamma}\) is the \(N\times N\) permutation matrix for the permutation \(\gamma\in S_{N}\). The network symmetries form a group \(\Gamma\subseteq S_{N}\) that is isomorphic to the group of automorphisms of the graph that underlies the network. For a given adjacency matrix, one can identify the automorphism group \(\Gamma\) using computational-algebra routines (such as those that are implemented in SageMath [152]). One can then apply the algorithms in [146] to enumerate all possible cluster states for the associated network structure. Some of these correspond to isotropy subgroups \(\Sigma\subseteq\Gamma\) and thus arise from network symmetries. The orbit under \(\Sigma\) of node \(i\) is the set \(\{\gamma(i):\gamma\in\Sigma\}\). The orbits permute subsets of nodes among each other and thereby partition the nodes into clusters. Nodes that are part of the same orbit (i.e., in the same cluster) have synchronized dynamics \(x_{\gamma(i)}\equiv x_{i}\) for any \(\gamma\in\Sigma\) (see [61, Thm III.2]). Isotropy subgroups that are conjugate in \(\Gamma\) lead to cluster states with identical existence and stability criteria [8, 59]. The remaining possible cluster states arise from the specific choice of Laplacian coupling. One can determine them using an algorithm that considers whether or not merging two clusters in a state that is determined by symmetry yields a dynamically valid state (i.e., whether or not it yields consistent equations of motion when \(x_{i}\) is the same for all nodes in the merged cluster). Sorrentino _et al._[146] referred to such cluster states as "Laplacian clusters". See [146] for a detailed explanation of the algorithm to determine these clusters, and see [107] for an illustration of this algorithm. One can automate this algorithm using computer-algebra tools [152]. This approach is based on the fact that applying the transformation \(V(t)=(Q\otimes I_{m})U(t)\) to the variational equation (6.12) yields a block-diagonal system of equations: \[\frac{\mathrm{d}V}{\mathrm{d}t}=\left[\sum_{k=1}^{M}J^{(k)}\otimes\mathrm{D}f(s_{k}(t))-\sigma\sum_{k=1}^{M}\left(\mathcal{L}^{\prime}J^{(k)}\right)\otimes\mathrm{D}\mathcal{H}(s_{k}(t))\right]V\,, \tag{6.13}\] where \(J^{(k)}=QE^{(k)}Q^{-1}\). The isotypic component of the trivial representation is \(\mathrm{Fix}(\Sigma)=\Upsilon\), which is the synchronization manifold.
This gives an \(M\times M\) block in \(\mathcal{L}^{\prime}\) that corresponds to perturbations within the synchronization manifold; one of the Floquet exponents will be \(0\) and the remaining \(M-1\) correspond to intercluster perturbations. The remaining blocks correspond to the isotypic components of other irreducible representations of \(\Sigma\). When the node-space representation has \(l\geq 1\) isomorphic copies of a particular irreducible representation, we obtain a block of size \(l\times l\). Such a block corresponds to a perturbation that is transverse to the synchronization manifold (intracluster perturbations); the associated Floquet multipliers determine the stability under a synchrony-breaking perturbation. For a cluster state to be linearly stable, all Floquet exponents (except the one that is always \(0\)) must have a negative real part. For a periodic Laplacian cluster state, the synchronization manifold is an invariant subspace, but it is not the fixed-point subspace of any subgroup of \(\Gamma\). However, we can still block-diagonalize the Laplacian \(\mathcal{L}\) so that the top-left block corresponds to perturbations within the synchronization manifold. To do this, we use the algorithm of Sorrentino _et al._[146]. Suppose that we start with a cluster state from symmetry with isotropy group \(\Sigma\) that has \(M\) clusters and a variational equation that is block-diagonalized by the matrix \(Q\). Suppose that we merge two clusters in this state to obtain a Laplacian cluster state. Upon this merger, the dimension of the synchronization manifold decreases by \(1\) and the dimension of the transverse manifold increases by \(1\). We obtain new coordinates on the synchronization manifold by transforming the new synchronization vector in the node-set coordinates (this vector has entries of \(1\) in the position of each node in the new merged cluster and entries of \(0\) everywhere else) into the coordinates of the block-diagonalization of the cluster state with isotropy group \(\Sigma\). The orthogonal complement of the new synchronization vector gives the new transverse direction. We normalize the resulting vectors and use them as rows of an orthogonal matrix \(Q^{\prime}\) whose other rows satisfy \(Q^{\prime}_{ij}=\delta_{ij}\). The matrix \(\chi=Q^{\prime}Q\) block-diagonalizes \(\mathcal{L}\) to a matrix \(\mathcal{L}^{\prime\prime}\) that has a top-left block of size \((M-1)\times(M-1)\). Therefore, the transformation matrix \(\chi\) block-diagonalizes the variational equation for the Laplacian cluster state, facilitating the ability to determine both the \(m(M-1)\) Floquet exponents within the synchronization manifold and the \(m(N-M+1)\) transverse Floquet exponents. This process for computing the required matrix \(\chi\) is illustrated with examples in [146] and [107]. For PWL systems of the form (6.1) with linear vector function \(\mathcal{H}\), it is relatively straightforward to construct the periodic orbits \(s_{k}(t)\) for a cluster state and to determine its stability by applying the modified Floquet theory (which accounts for the lack of smoothness of the dynamics) of subsection 2.1 to the block-diagonalized system. For example, suppose that we have a small network of linearly coupled oscillators whose dynamics satisfy the absolute PWL model (see Figure 1(a)). As an illustration, consider the five-node network in [146] with graph Laplacian matrix \[\mathcal{L}=\begin{bmatrix}3&-1&0&-1&-1\\ -1&3&-1&0&-1\\ 0&-1&3&-1&-1\\ -1&0&-1&3&-1\\ -1&-1&-1&-1&4\end{bmatrix}\,.
\tag{6.14}\] The network supports a Laplacian cluster state with clusters \(\mathcal{C}_{1}=\{1,3,5\}\) and \(\mathcal{C}_{2}=\{2,4\}\) [107, 146]. For this cluster state, \(x_{1}=x_{3}=x_{5}=s_{1}\) and \(x_{2}=x_{4}=s_{2}\), where \(x_{i}=[v_{i},w_{i}]^{\top}\) for \(i\in\{1,\ldots,5\}\) and the invariant-subspace equations have the form \(\dot{\mathbf{s}}=A_{\mu_{1},\mu_{2}}\mathbf{s}+b_{\mu_{1},\mu_{2}}\), where \(\mathbf{s}=[s_{1},s_{2}]^{\top}\) and \[A_{\mu_{1},\mu_{2}}=\begin{bmatrix}A_{\mu_{1}}-2\sigma\mathrm{D}\mathcal{H}&2\sigma\mathrm{D}\mathcal{H}\\ 3\sigma\mathrm{D}\mathcal{H}&A_{\mu_{2}}-3\sigma\mathrm{D}\mathcal{H}\end{bmatrix}\,,\ \ b_{\mu_{1},\mu_{2}}=\begin{bmatrix}b_{\mu_{1}}\\ b_{\mu_{2}}\end{bmatrix}\,,\ \ \ \mu_{i}=\begin{cases}1\,,&v_{i}>0\\ 2\,,&v_{i}<0\end{cases} \tag{6.15}\] and we define \(A_{1}\), \(A_{2}\), \(b_{1}\), and \(b_{2}\) in Table 1. Also let \(\mathcal{H}(x)=[v,0]^{\top}\) so that the coupling acts only through the first component. This is a 4-dimensional PWL system with two switching planes, \(v_{1}=0\) and \(v_{2}=0\). One can construct the periodic orbit on the 4-dimensional synchronous manifold by following the method that we outlined in section 2. Starting from the initial data \(\mathbf{s}(0)=[0,w_{1}(0),v_{2}(0),w_{2}(0)]^{\top}\), we now have to solve a system of seven nonlinear algebraic equations for \(w_{1}(0)\), \(v_{2}(0)\), and \(w_{2}(0)\) and the four switching times \(T_{1,1}\), \(T_{2,1}\), \(T_{2,2}\), and \(T_{1,2}=\Delta\) (see Figure 18). With the block-diagonalization of the variational equation (6.13), one uses the initial data and switching times to explicitly compute the Floquet multipliers of the periodic orbit. One can compute Floquet multipliers that correspond to perturbations within the synchronization manifold without using the block-diagonalization. We have \[\frac{\mathrm{d}}{\mathrm{d}t}\delta\mathbf{s}=A_{\mu_{1},\mu_{2}}\delta\mathbf{s}\,, \tag{6.16}\] which one can solve using matrix exponentials, being careful to use saltation matrices to evolve perturbations through switching manifolds. After one period, we obtain \(\delta\mathbf{s}(\Delta)=\Psi_{s}\delta\mathbf{s}(0)\), where \(\Psi_{s}\) is the monodromy matrix on the synchronization manifold.

Figure 18: The \(v\) components of the orbits \(s_{1}\) and \(s_{2}\) over one period. One needs to solve seven nonlinear algebraic equations to determine the unknown initial data \(w_{1}(0)\), \(v_{2}(0)\), and \(w_{2}(0)\) and the switching times \(T_{1,1}\), \(T_{2,1}\), \(T_{2,2}\), and \(T_{1,2}=\Delta\).

Considering all evolutions and transitions through switching manifolds, we obtain \[\Psi_{s}=S_{12}\mathcal{E}_{2,1}(\Delta-T_{1,1})S_{11}\mathcal{E}_{1,1}(T_{1,1}-T_{2,2})S_{22}\mathcal{E}_{1,2}(T_{2,2}-T_{2,1})S_{21}\mathcal{E}_{1,1}(T_{2,1})\,, \tag{6.17}\] with saltation matrices \[S_{ij} =P_{i}\otimes S_{i}(T_{i,j})\,,\] \[P_{1} =\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\,,\,\,P_{2}=\begin{bmatrix}0&0\\ 0&1\end{bmatrix}\,, \tag{6.18}\] \[S_{i}(t) =\begin{bmatrix}\dot{v}_{i}(t^{+})/\dot{v}_{i}(t^{-})&0\\ (\dot{w}_{i}(t^{+})-\dot{w}_{i}(t^{-}))/\dot{v}_{i}(t^{-})&1\end{bmatrix}\,,\quad i\in\{1,2\}\,, \tag{6.19}\] \[\mathcal{E}_{\mu_{1},\mu_{2}}(t) =\mathrm{e}^{A_{\mu_{1},\mu_{2}}t}\,,\qquad\mu_{i}\in\{1,2\}\,.\] The Floquet multipliers for perturbations within the synchronization manifold are the eigenvalues of the monodromy matrix \(\Psi_{s}\). One of these eigenvalues is always \(1\), corresponding to perturbations along the periodic orbit.
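As a concrete check of the preceding construction, the following sketch verifies numerically that an orthogonal matrix whose rows are adapted to the clusters \(\mathcal{C}_{1}=\{1,3,5\}\) and \(\mathcal{C}_{2}=\{2,4\}\) block-diagonalizes the Laplacian (6.14). The particular basis vectors below are our own choice (an assumption, not the matrix \(\chi\) from [146]); the resulting block structure agrees with the matrix \(\mathcal{L}^{\prime\prime}\) quoted next, up to the ordering of the basis vectors.

```python
# Numerical check that a cluster-adapted orthogonal basis block-diagonalizes (6.14).
import numpy as np

L = np.array([[ 3, -1,  0, -1, -1],
              [-1,  3, -1,  0, -1],
              [ 0, -1,  3, -1, -1],
              [-1,  0, -1,  3, -1],
              [-1, -1, -1, -1,  4]], dtype=float)

s2, s3, s6 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0)
chi = np.array([
    [1/s3, 0,    1/s3, 0,    1/s3],   # synchrony direction within C1 = {1,3,5}
    [0,    1/s2, 0,    1/s2, 0   ],   # synchrony direction within C2 = {2,4}
    [1/s6, 0,    1/s6, 0,   -2/s6],   # transverse direction within C1
    [1/s2, 0,   -1/s2, 0,    0   ],   # transverse direction within C1
    [0,    1/s2, 0,   -1/s2, 0   ],   # transverse direction within C2
])

Lpp = chi @ L @ chi.T
print(np.round(Lpp, 6))
# The top-left 2x2 block couples the two cluster-synchrony directions, and the
# transverse block is diagonal with entries 5, 3, 3 -- matching (6.20) up to
# the ordering of the basis vectors.
```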
The block-diagonalization of \(\mathcal{L}\) for the cluster state that we have been discussing is [107, 146] \[\mathcal{L}^{\prime\prime}=\begin{bmatrix}3&-\sqrt{6}&0&0&0\\ -\sqrt{6}&2&0&0&0\\ 0&0&5&0&0\\ 0&0&0&3&0\\ 0&0&0&0&3\end{bmatrix}\,. \tag{6.20}\] In the directions that are transverse to the synchronization manifold, this block-diagonalization yields the following three decoupled Floquet problems: \[\dot{V}_{3} =(Df(s_{1})-3\sigma D\mathcal{H})V_{3}\,, \tag{6.21}\] \[\dot{V}_{4} =(Df(s_{2})-3\sigma D\mathcal{H})V_{4}\,,\] \[\dot{V}_{5} =(Df(s_{1})-5\sigma D\mathcal{H})V_{5}\,,\] which (as usual) one can solve using matrix exponentials and saltation matrices. This yields \(V_{i}(\Delta)=\Psi_{t_{i}}V_{i}(0)\), where the monodromy matrices \(\Psi_{t_{i}}\) for the transverse directions are \[\Psi_{t_{3}} =S_{1}(\Delta)\mathcal{E}_{L}^{3}(\Delta-T_{1,1})S_{1}(T_{1,1})\mathcal{E}_{R}^{3}(T_{1,1})\,, \tag{6.22}\] \[\Psi_{t_{4}} =\mathcal{E}_{R}^{3}(\Delta-T_{2,2})S_{2}(T_{2,2})\mathcal{E}_{L}^{3}(T_{2,2}-T_{2,1})S_{2}(T_{2,1})\mathcal{E}_{R}^{3}(T_{2,1})\,,\] \[\Psi_{t_{5}} =S_{1}(\Delta)\mathcal{E}_{L}^{5}(\Delta-T_{1,1})S_{1}(T_{1,1})\mathcal{E}_{R}^{5}(T_{1,1})\,,\] and \[\mathcal{E}_{\mu}^{\beta}(t)=\mathrm{e}^{(A_{\mu}-\beta\sigma D\mathcal{H})t}\,. \tag{6.23}\] The eigenvalues of the monodromy matrices \(\Psi_{t_{i}}\), with \(i\in\{3,4,5\}\), give the Floquet multipliers for directions that are transverse to the synchronization manifold. The change of basis from \(U\) coordinates to \(V\) coordinates has no effect on the action of the saltation matrices. (Recall that \(V=(Q\otimes I_{m})U\).) To evolve \(U\) through a discontinuity, we write \(U^{+}=SU^{-}\), where \[S=\sum_{k=1}^{M}E^{(k)}\otimes S_{k}\,. \tag{6.24}\] Therefore, \(V^{+}=\widehat{S}V^{-}\), where \[\widehat{S}=(Q\otimes I_{m})S(Q\otimes I_{m})^{-1}=\sum_{k=1}^{M}(QE^{(k)}Q^{-1}\otimes S_{k})=\sum_{k=1}^{M}(J^{(k)}\otimes S_{k})\,. \tag{6.25}\] Because the vector field of the absolute model is continuous, all saltation matrices are the identity matrix. One then does an algebraic calculation to show that the cluster state is stable for the choice of parameters in Figure 1(a). One finds bifurcations of the periodic orbit by determining when the Floquet multipliers leave the unit disk. As one varies the parameters, the order of the times at which trajectories cross the switching planes can also change. One constructs bifurcation diagrams by similarly treating all types of cluster states from network symmetries and Laplacian clustering. For the absolute model with the choice of parameters in Figure 1(a) and interaction function \(\mathcal{H}(x)=[v,0]^{\top}\), we show the bifurcations from varying the coupling strength \(\sigma\) in Figure 19.

Figure 19: Bifurcations between cluster states from varying the coupling strength \(\sigma\) in a five-node network of absolute-model oscillators for the same parameters as in Figure 1(a).
We indicate stable periodic orbits with solid curves and unstable solutions with dotted curves. We color each branch according to its associated cluster state. We show only one branch of the \(L3\) solutions, for which we expect a branch of conjugate solutions with identical stability properties. In the inset, we show the mean-field dynamics \([\langle v\rangle,\langle w\rangle]=\sum_{i=1}^{5}[v_{i},w_{i}]/5\) for \(\sigma=-0.05\). This shows an \(A3\) cluster state with dynamics that blow up in finite time. This behavior dominates for \(\sigma\lessapprox-0.0477\), where the \(L3\) branch loses stability. All of the depicted bifurcations from stable states are tangent bifurcations, in which a Floquet multiplier passes through the value \(+1\).

Finally, we note that one can view synchrony as a single-cluster state, for which the above methodology reduces to the standard MSF approach in subsection 6.1.

### An application to synaptically coupled, spiking neural networks

It is common to model spiking neural networks using integrate-and-fire (IF) neurons. Coombes _et al._[32] explored the nonsmooth nature of systems of IF neurons. The MSF approach has been used to study synaptically coupled networks of nonlinear (specifically, adaptive exponential) IF neurons [87], for which one uses numerical computations to obtain periodic orbits. Nicks _et al._[107] showed how to make analytical progress on the dynamics of PWL planar IF neurons. We follow Nicks _et al._[107] and consider a network of \(N\) synaptically coupled planar IF neurons with the time-dependent forcing \(I\to I+\sigma\sum_{j}w_{ij}s_{j}(t)\). The synaptic input from neuron \(j\) takes the standard event-driven form (4.10). We adopt the common choice of a continuous \(\alpha\)-function, so that \(\eta(t)\) is given by (4.13). We can then express \(s_{i}(t)\) as the solution to the impulsively forced linear system \[\left(1+\frac{1}{\alpha}\frac{\mathrm{d}}{\mathrm{d}t}\right)s_{i}=u_{i}\,,\quad\left(1+\frac{1}{\alpha}\frac{\mathrm{d}}{\mathrm{d}t}\right)u_{i}=\sum_{p\in\mathbb{Z}}\delta(t-t_{i}^{p})\,. \tag{6.26}\] We exploit the linearity of the synaptic dynamics between firing events to write the network model in the form (6.7) with \(\dot{x}_{i}=f(x_{i})\), where \(x_{i}=(v_{i},w_{i},s_{i},u_{i})\) and \(f\) has the form (2.1), with \[A_{1,2}=\begin{bmatrix}a_{1,2}&-1&0&0\\ a_{w}/\tau&b_{w}/\tau&0&0\\ 0&0&-\alpha&\alpha\\ 0&0&0&-\alpha\end{bmatrix} \tag{6.27}\] and \(b_{1}=[I,0,0,0]^{\top}=b_{2}\), and one applies the jump operator \(\mathcal{J}(x_{i})=(v_{\mathrm{r}},w_{i}+\kappa/\tau,s_{i},u_{i}+\alpha)\) whenever \(h(x_{i})=v_{i}-v_{\mathrm{th}}=0\). The vector function that specifies the interaction is \(\mathcal{H}(x_{i})=[s_{i},0,0,0]^{\top}\). For a synchronous orbit of the type in Figure 1(d) (so that a trajectory only visits the region of phase space that is described by \(A_{2}\) and the \(T\)-periodic trajectory satisfies the constraints \(v(T)=v_{\mathrm{th}}\), \(w(0)=w(T)+\kappa/\tau\), \(s(0)=s(T)\), and \(u(0)=u(T)+\alpha\)), we only need to consider saltation at firing events and the saltation matrix takes the explicit form \[S(t)=\begin{bmatrix}\dot{v}(t^{+})/\dot{v}(t^{-})&0&0&0\\ (\dot{w}(t^{+})-\dot{w}(t^{-}))/\dot{v}(t^{-})&1&0&0\\ (\dot{s}(t^{+})-\dot{s}(t^{-}))/\dot{v}(t^{-})&0&1&0\\ (\dot{u}(t^{+})-\dot{u}(t^{-}))/\dot{v}(t^{-})&0&0&1\end{bmatrix}\,. \tag{6.28}\] See Appendix B for the general formula for the saltation operator of a PWL system.
In this case, the expression for \(\Psi\) in (6.10) reduces to \[\Psi=S(T)\exp\{(A_{2}+\beta\mathrm{D}\mathcal{H})T\}\,, \tag{6.29}\] where \(\beta=\sigma\lambda_{l}\) and \(\lambda_{l}\) is the \(l\)th eigenvalue of \(w\). The matrix \(\mathrm{D}\mathcal{H}\) is a constant matrix with entries \([\mathrm{D}\mathcal{H}]_{ij}=1\) if \(i=1\) and \(j=3\) and \([\mathrm{D}\mathcal{H}]_{ij}=0\) otherwise. Therefore, using equation (6.29) and the prescription in subsection 6.1, we are able to construct the MSF (see Figure 20). As a particular realization of a network architecture that guarantees synchrony, we use a balanced ring network with odd \(N\) and \(w_{ij}=w(|i-j|)\). We calculate the distances \(|i-j|\) modulo \((N-1)/2\) and use the weight function \(w(x)=(1-a|x|/d)\mathrm{e}^{-|x|/d}\). We choose the parameter \(a\) so that \(\sum_{j=1}^{N}w_{ij}=0\) (a balance condition) for the network size \(N\) and a scale \(d\). The eigenvalues \(\lambda_{l}\) of the associated (symmetric and circulant) adjacency matrix are real and given by \(\lambda_{l}=\sum_{j=0}^{N-1}w(|j|)\omega_{l}^{j}\). The balance condition enforces \(\lambda_{0}=0\). Additionally, \(\lambda_{N-l}=\lambda_{l}\) for \(l\in\{1,\ldots,(N-1)/2\}\), so any excited pattern (which arises from an instability) is given by a combination \(e_{m}+e_{-m}=2\,\mathrm{Re}(e_{m})\) for some \(m\in\{1,\ldots,(N-1)/2\}\). Given the shape of the MSF function in Figure 20, one determines the value of \(m\) using \(\lambda_{m}=\max_{l}\lambda_{l}\). In Figure 21, we compare direct simulations of a network with the predictions of the MSF. When the network's eigenvalues lie within the region where the MSF is negative, small perturbations of synchronous initial data decay away and the system settles to a synchronous periodic orbit, as expected. When one of the eigenvalues crosses the \(0\) level set of the MSF from negative to positive, two types of instabilities emerge. One of them leads to a spatiotemporal pattern of spike doublets (i.e., a burst of two spikes), which arise because an eigenvalue of \(\Psi\) leaves the unit disk at \(-1\) (through a period-doubling bifurcation), and the other yields a periodic traveling wave (with asynchronous firing) because an eigenvalue of \(\Psi\) leaves the unit disk at \(+1\) (through a tangent bifurcation).

Figure 20: The MSF for a network of synaptically coupled, planar integrate-and-fire (IF) neurons for the synchronous tonic orbit in Figure 1(d). The shaded regions indicate where the MSF is negative for various values of the synaptic rate parameter \(\alpha\). The largest depicted region is for \(\alpha=0.1\), with progressively smaller areas for \(\alpha=0.2\), \(\alpha=0.3\), and \(\alpha=0.4\). The synchronous solution is stable if all of the eigenvalues of \(\sigma w\) lie within a shaded area for a given value of \(\alpha\).

### An application to neural-mass networks

The human brain has roughly \(10^{11}\) neurons and roughly \(10^{15}\) synapses. Although there is general consensus that the synaptic interactions between neurons drive brain dynamics, these astronomical numbers prohibit the construction, analysis, and simulation of an entire brain network that is built from single-neuron models such as the absolute model or the Morris-Lecar model (see section 2). Instead, it is instructive to coarse-grain neural behavior by grouping neurons and studying the interactions between these groups. This idea led to neural-mass models [8], which describe the average dynamics of large populations of neurons.
One of the most influential neural-mass models is the Wilson-Cowan model [165, 166] \[\frac{\mathrm{d}u}{\mathrm{d}t}=-u+F(I_{u}+w^{uu}u-w^{vu}v)\,,\quad\tau\frac{\mathrm{d}v}{\mathrm{d}t}=-v+F(I_{v}+w^{uv}u-w^{vv}v)\,, \tag{6.30}\] where \(u\) and \(v\), respectively, indicate the activity of excitatory and inhibitory populations of neurons. A firing-rate function \(F(x)\), which researchers often take to have a sigmoidal shape, mediates the interactions between the two populations. The quantities \(I_{u,v}\) represent background inputs, and \(w^{\alpha\beta}\) (with \(\alpha,\beta\in\{u,v\}\)) denote connection strengths between populations. The positive constant \(\tau\) encodes the relative time scale between the dynamics of the two populations. To make analytical progress, Coombes _et al._[30] considered a PWL firing-rate function of the form \[F(x)=\begin{cases}0\,,&x\leq 0\\ \epsilon^{-1}x\,,&0<x<\epsilon\\ 1\,,&x\geq\epsilon\,.\end{cases} \tag{6.31}\] With this choice, it is straightforward to compute periodic orbits of the dynamical system (6.30) and to determine their linear stability using the techniques that we described in section 2. One can think of the system (6.30) as modeling an appropriately chosen brain region, so coupling oscillators that satisfy (6.30) lets one investigate the dynamics of interacting brain regions. By introducing the coupling matrices \(\mathcal{W}^{\alpha\beta}\in\mathbb{R}^{N\times N}\), with \(\alpha,\beta\in\{u,v\}\), we obtain a network of \(N\) interacting oscillators with dynamics \[\frac{\mathrm{d}u_{i}}{\mathrm{d}t} =-u_{i}+F\left(I_{u}+\sum_{j=1}^{N}\mathcal{W}^{uu}_{ij}u_{j}-\sum_{j=1}^{N}\mathcal{W}^{vu}_{ij}v_{j}\right)\,, \tag{6.32}\] \[\tau\frac{\mathrm{d}v_{i}}{\mathrm{d}t} =-v_{i}+F\left(I_{v}+\sum_{j=1}^{N}\mathcal{W}^{uv}_{ij}u_{j}-\sum_{j=1}^{N}\mathcal{W}^{vv}_{ij}v_{j}\right)\,,\qquad i\in\{1,\ldots,N\}\,. \tag{6.33}\] Although the dynamical system (6.33) is not exactly in the form that we described in subsection 6.1, one can analyze this network using essentially the same MSF techniques. For simplicity and to guarantee the existence of a synchronous network state, we impose the row-sum constraint \(\sum_{j=1}^{N}\mathcal{W}^{\alpha\beta}_{ij}=w^{\alpha\beta}\) for \(\alpha,\beta\in\{u,v\}\). These row-sum constraints are natural for networks arranged on a ring, because the coupling matrix is circulant (see subsection 6.3). The synchronous network state satisfies \([u_{i}(t),v_{i}(t)]=[u(t),v(t)]\) for all \(i\in\{1,\ldots,N\}\), where \([u(t),v(t)]\) satisfies equations (6.30).

Figure 21: Raster plot of spike times from direct numerical simulations of a network of synaptically coupled, planar integrate-and-fire (IF) neurons with \(N=31\) oscillators, \(d=3\), and \(\alpha=0.4\). A raster plot allows us to convey neuron-by-neuron variations in spike times. In the inset, we plot the MSF and superimpose the eigenvalues of \(\sigma w\). In (a), \(\sigma=-0.1\) and synchrony is unstable. In (b), \(\sigma=-0.025\) and synchrony is stable. In (c), \(\sigma=0.1\) and synchrony is unstable. The predicted instability borders (at \(\sigma=0\) and \(\sigma\approx-0.05\)) are in good agreement with the predictions from the nonsmooth MSF analysis. For \(\sigma>0\), the typical pattern of firing activity beyond an instability of the synchronous state is a periodic traveling wave. For \(\sigma<0\), a spatiotemporal pattern emerges via a period-doubling instability of the firing times.
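As a quick sanity check of the synchronous state, one can integrate the single-pair dynamics (6.30) with the PWL firing rate (6.31) directly. The sketch below uses an off-the-shelf ODE solver (rather than the matrix-exponential construction of section 2, which is the approach advocated in this review) with the parameter values of Figure 22; the trajectory should settle onto the periodic orbit that underlies the synchronous network state.

```python
# A minimal integration of the Wilson-Cowan pair (6.30) with the PWL rate (6.31).
import numpy as np
from scipy.integrate import solve_ivp

eps, tau = 0.04, 0.6
Iu, Iv = -0.05, -0.3
wuu, wvu, wuv, wvv = 1.0, 2.0, 1.0, 0.25   # parameter values of Figure 22

def F(x):
    """PWL firing rate (6.31): 0 for x <= 0, x/eps in between, 1 for x >= eps."""
    return np.clip(x / eps, 0.0, 1.0)

def rhs(t, y):
    u, v = y
    du = -u + F(Iu + wuu * u - wvu * v)
    dv = (-v + F(Iv + wuv * u - wvv * v)) / tau
    return [du, dv]

sol = solve_ivp(rhs, (0.0, 50.0), [0.3, 0.1], max_step=0.01)
print(sol.y[:, -5:])  # late-time samples; the trajectory should be periodic
```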
It is convenient to introduce the vector \(X=[u_{1},v_{1},u_{2},v_{2},\ldots,u_{N},v_{N}]^{\top}\in\mathbb{R}^{2N}\) and change variables by writing \(Y=\mathcal{W}X+C\), where \[\mathcal{W}=\mathcal{W}^{uu}\otimes\begin{bmatrix}1&0\\ 0&0\end{bmatrix}-\mathcal{W}^{vu}\otimes\begin{bmatrix}0&1\\ 0&0\end{bmatrix}+\mathcal{W}^{uv}\otimes\begin{bmatrix}0&0\\ 1&0\end{bmatrix}-\mathcal{W}^{vv}\otimes\begin{bmatrix}0&0\\ 0&1\end{bmatrix}\,, \tag{6.34}\] the vector \(C=\mathbf{1}_{N}\otimes[I_{u},I_{v}]^{\top}\), and \(\mathbf{1}_{N}\) is an \(N\)-dimensional vector with all entries equal to \(1\). One can then succinctly describe the switching manifolds by the relations \(Y_{i}=0\) and \(Y_{i}=\epsilon\), and the dynamics takes the form \[\frac{\mathrm{d}}{\mathrm{d}t}Y=\mathcal{A}(Y-C)+\mathcal{W}\mathcal{J}F(Y)\,,\quad\mathcal{J}=I_{N}\otimes J\,,\quad J=\begin{bmatrix}1&0\\ 0&1/\tau\end{bmatrix}\,, \tag{6.35}\] where \(\mathcal{A}=-\mathcal{W}\mathcal{J}\mathcal{W}^{-1}\). We denote the synchronous solution by \(\overline{Y}(t)=(\overline{U}(t),\overline{V}(t),\)\(\overline{U}(t),\overline{V}(t),\ldots,\overline{U}(t),\overline{V}(t))\), with \[\begin{bmatrix}\overline{U}(t)\\ \overline{V}(t)\end{bmatrix}=\begin{bmatrix}w^{uu}&-w^{vu}\\ w^{uv}&-w^{vv}\end{bmatrix}\begin{bmatrix}\overline{u}(t)\\ \overline{v}(t)\end{bmatrix}+\begin{bmatrix}I_{u}\\ I_{v}\end{bmatrix}\,, \tag{6.36}\] and consider small perturbations such that \(Y=\overline{Y}+\delta Y\). We thereby obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\delta Y=\mathcal{A}\,\delta Y+\mathcal{W}\mathcal{J}\,\mathrm{D}F(\overline{Y})\,\delta Y\,, \tag{6.37}\] where \(\mathrm{D}F(\overline{Y})\) is the Jacobian of \(F\) evaluated along the periodic orbit. As we showed in subsection 6.1, we need to appropriately diagonalize (6.37). Suppose that we can diagonalize all \(\mathcal{W}^{\alpha\beta}\) with respect to the same basis, and let \(P=[e_{1}\ e_{2}\ \ldots\ e_{N}]\) be the matrix whose columns consist of the basis vectors. Such simultaneous diagonalization is feasible for circulant matrices, which naturally obey the above row-sum constraint. Let \(\{\nu^{\alpha\beta}_{j}\}\), with \(j\in\{1,\ldots,N\}\), denote the eigenvalues of \(\mathcal{W}^{\alpha\beta}\). We then write \[(P\otimes I_{2})^{-1}\mathcal{W}(P\otimes I_{2})=\mathrm{diag}(\Lambda_{1},\Lambda_{2},\ldots,\Lambda_{N})\equiv\Lambda\,, \tag{6.38}\] where \[\Lambda_{p}=\begin{bmatrix}\nu^{uu}_{p}&-\nu^{vu}_{p}\\ \nu^{uv}_{p}&-\nu^{vv}_{p}\end{bmatrix}\,,\quad p\in\{1,2,\ldots,N\}\,. \tag{6.39}\] Additionally, \((P\otimes I_{2})^{-1}\mathcal{A}(P\otimes I_{2})=-\Lambda(I_{N}\otimes J)\Lambda^{-1}\). Consider perturbations of the form \(\delta Z=(P\otimes I_{2})^{-1}\delta Y\). Equation (6.37) then implies that the linearized dynamics satisfies \[\frac{\mathrm{d}}{\mathrm{d}t}\delta Z=\Lambda(I_{N}\otimes J)\left[-\Lambda^{-1}+(I_{N}\otimes\mathrm{D})\right]\delta Z\,, \tag{6.40}\] where \(\mathrm{D}\in\mathbb{R}^{2\times 2}\) is the Jacobian of \((F(\overline{U}),F(\overline{V}))\). The matrix \(\mathrm{D}\) is a piecewise-constant matrix that is nonzero only if either \(0<\overline{U}(t)<\epsilon\) or \(0<\overline{V}(t)<\epsilon\). Analogously to (6.5), equation (6.40) has a block structure in which the dynamics in each of the \(N\) blocks of size \(2\times 2\) satisfies \[\frac{\mathrm{d}}{\mathrm{d}t}\xi=[A_{p}+\Lambda_{p}J\mathrm{D}]\xi\,,\quad p\in\{1,\ldots,N\}\,,\quad\xi\in\mathbb{R}^{2}\,, \tag{6.41}\] with \(A_{p}=-\Lambda_{p}J\Lambda_{p}^{-1}\).
The problem that is defined by (6.41) is time-independent between switching manifolds, so one can construct a solution in a piecewise fashion from matrix exponentials and write \(\xi(t)=\exp[(A_{p}+\Lambda_{p}J\mathrm{D})t]\xi(0)\). One can then construct a perturbed trajectory over one period of oscillation in the form \(\xi(\Delta)=\Psi_{p}\xi(0)\), where \(\Psi_{p}\in\mathbb{R}^{2\times 2}\) is \[\Psi_{p}=\mathrm{e}^{A_{p}\Delta_{8}}\mathrm{e}^{A_{p}^{-}(\epsilon)\Delta_{7}}\mathrm{e}^{A_{p}\Delta_{6}}\mathrm{e}^{A_{p}^{-}(\epsilon)\Delta_{5}}\mathrm{e}^{A_{p}\Delta_{4}}\mathrm{e}^{A_{p}^{+}(\epsilon)\Delta_{3}}\mathrm{e}^{A_{p}\Delta_{2}}\mathrm{e}^{A_{p}^{-}(\epsilon)\Delta_{1}}\,, \tag{6.42}\] with \[A_{p}^{\pm}(\epsilon)=\left(A_{p}+\epsilon^{-1}\Lambda_{p}JT^{\pm}\right)\,,\quad T^{+}=\begin{bmatrix}1&0\\ 0&0\end{bmatrix}\,,\quad T^{-}=\begin{bmatrix}0&0\\ 0&1\end{bmatrix}\,. \tag{6.43}\] Note (and see equation (6.10)) that there is no saltation (i.e., \(S=I\)). As an illustration, consider a network of Wilson-Cowan oscillators on a ring graph with an odd number of nodes. Let \(\operatorname{dist}(i,j)=\min\{|i-j|,N-|i-j|\}\) be the distance between nodes \(i\) and \(j\). We then define a set of exponentially decaying connectivity matrices \[\mathcal{W}_{ij}^{\alpha\beta}=w^{\alpha\beta}\frac{\mathrm{e}^{-\,\operatorname{dist}(i,j)/\sigma_{\alpha\beta}}}{\sum_{j=0}^{N-1}\mathrm{e}^{-\,\operatorname{dist}(0,j)/\sigma_{\alpha\beta}}}\,. \tag{6.44}\] In this example, there are four circulant matrices; they are parametrized by the four quantities \(\sigma_{\alpha\beta}\), which respect the row-sum constraints \(\sum_{j=1}^{N}\mathcal{W}_{ij}^{\alpha\beta}=w^{\alpha\beta}\). In Figure 22, we plot the eigenvalues of \(\Psi_{p}\) for \(p\in\{1,\ldots,N\}\) for two different parameter choices. In Figure 22(a), all of the eigenvalues (excluding the one at \((1,0)\) that arises from time-translation invariance) lie within the unit disk. In Figure 22(b), one eigenvalue leaves the unit disk along the negative real axis. This latter scenario predicts an instability of the synchronous state. In the inset of panel (b), we show the eigenvector that corresponds to the eigenvalue that crosses to the outside of the unit disk. The prediction of this instability is in excellent agreement with direct numerical simulations [30].

### An application to cardiac alternans

One can conceptualize a beating heart as a network of muscle cells in which each heartbeat results from their coordinated contraction and subsequent relaxation. Because the dynamics of an organ results from the orchestrated behavior of individual cells, a major avenue in cardiac research is the investigation of the dynamical repertoire of individual cardiac muscle cells [129]. Molecular changes at the individual-cell level can yield irregular cell behavior, which then feeds forward to pathological heart dynamics (such as cardiac arrhythmias). A vital component that determines the behavior of cardiac muscle cells is the intracellular calcium (Ca\({}^{2+}\)) concentration. In basic terms, rises and falls of the cytosolic Ca\({}^{2+}\) concentration are responsible for muscle contraction, and irregularities and abnormalities of the intracellular Ca\({}^{2+}\) dynamics have been linked to a plethora of cardiac pathologies [90]. The intracellular Ca\({}^{2+}\) concentration in cardiac muscle cells has rich spatiotemporal patterns that arise from the interplay of diffusively coupled calcium-release units (CRUs).
One can decompose each CRU into compartments with Ca\({}^{2+}\) fluxes between them, so a cardiac muscle cell corresponds to a network of networks. In other words, each node of the cellular network is itself a network (i.e., a CRU). Using a 5-dimensional PWL representation of a well-established cardiac Ca\({}^{2+}\) model [153], one can express the dynamics of a network of \(N\) CRUs as \[\frac{\mathrm{d}x}{\mathrm{d}t}=Ax+F(t)+(\mathcal{L}\otimes H)x\,, \tag{6.45}\] where \(x=(x_{1},x_{2},\ldots,x_{N})\) is a \(5N\)-dimensional vector. Each entry \(x_{\mu}\), with \(\mu\in\{1,\ldots,N\}\), is the 5-dimensional state vector of a single CRU. The matrix \(A\in\mathbb{R}^{5N\times 5N}\) is constant and block diagonal with entries in the set \(\{A_{i}\}\). The constant matrices \(A_{i}\in\mathbb{R}^{5\times 5}\) are associated with a single CRU, analogously to the matrices \(A_{1}\) and \(A_{2}\) in equation (2.1). As usual, the matrix \(\mathcal{L}\in\mathbb{R}^{N\times N}\) denotes the combinatorial graph Laplacian matrix of the network and the matrix \(H\in\mathbb{R}^{5\times 5}\) encodes which variables are coupled and how strongly they are coupled. The time-dependence \(F(t)=1_{N}\otimes v(t)\in\mathbb{R}^{5N}\) distinguishes the present example from the other examples in this section. The explicit time-dependent drive \(v(t)\), which is \(\Delta\)-periodic, models an experimental condition that is known as a voltage clamp, which is used often to disentangle the different cellular mechanisms that contribute to the complex spatiotemporal patterns of the intracellular Ca\({}^{2+}\) concentration of cardiac cells. Because of the explicit time-dependence in equation (6.45), the switching manifolds are not only state-dependent (as in all of the previous examples in this section), but some of them are also time-dependent. This leads to a system in which any trajectory is determined by a sequence of state-dependent and time-dependent switches [153, 157, 88]. As demonstrated in section 2, one can readily compute the synchronous network state \(s(t)\) of (6.45) using matrix exponentials. One can then linearize equation (6.45) around the synchronous network state \(s(t)\) by using the ansatz \(x(t)=1_{N}\otimes s(t)+\delta x\) and following the general approach in section 6. Analogously to equation (6.5), this yields \[\frac{\mathrm{d}\xi_{l}}{\mathrm{d}t}=\left[A_{i}-\lambda_{l}H\right]\xi_{l}\,, \tag{6.46}\] where \(\xi_{l}\in\mathbb{R}^{5}\) and \(\lambda_{l}\) are the eigenvalues of \(\mathcal{L}\). Because we are perturbing from the synchronous network state, we assume that all CRUs have the same associated \(A_{i}\) to obtain (6.46). The dynamical system (6.46) is continuous at the switching manifolds, so one can obtain its solution using matrix exponentials.

Figure 22: Plots of the eigenvalues of (6.42) for a ring of \(N=31\) Wilson–Cowan oscillators, with \(\sigma_{\alpha\beta}=\sigma\) for all \(\alpha\) and \(\beta\). The parameter values are \(\epsilon=0.04\), \(\tau=0.6\), \(I_{u}=-0.05\), \(I_{v}=-0.3\), \(w^{uu}=1\), \(w^{vu}=2\), \(w^{uv}=1\), and \(w^{vv}=0.25\) for coupling strengths of (a) \(\sigma=0.15\) and (b) \(\sigma=0.191\). The inset of (a) shows the synchronous network state, and the inset of (b) shows the eigenvector that is associated with the eigenvalue that lies outside the unit disk.
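In code, the piecewise matrix-exponential solution of (6.46) amounts to chaining \(\exp[(A_{i}-\lambda_{l}H)\Delta_{i}]\) over the pieces of the synchronous orbit, which yields the monodromy matrix \(\Psi(\lambda_{l})\) displayed below. The following generic sketch implements this product; the matrices and times-of-flight are placeholder values (in practice, they come from the PWL CRU model and its computed orbit).

```python
# A generic sketch of the matrix-exponential monodromy construction.
import numpy as np
from scipy.linalg import expm

def monodromy(pieces, lam, H):
    """pieces: list of (A_i, Delta_i) in the order they are visited."""
    Psi = np.eye(H.shape[0])
    for A_i, Delta_i in pieces:
        Psi = expm((A_i - lam * H) * Delta_i) @ Psi  # later pieces multiply on the left
    return Psi

# Placeholder two-piece example (assumed data, for illustration only):
A1 = np.array([[0.2, -1.0], [1.0, -0.3]])
A2 = np.array([[-0.5, -1.0], [1.0, -0.3]])
H = np.diag([1.0, 0.0])
pieces = [(A1, 0.4), (A2, 0.9)]

for lam in [0.0, 0.5, 1.0]:
    mus = np.linalg.eigvals(monodromy(pieces, lam, H))
    print(lam, np.max(np.abs(mus)))  # sign of log of this value gives the MSF sign
```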
Let \(\Delta_{i}\) denote the time-of-flight for when the dynamics are associated with \(A_{i}\), let \(\Delta=\sum_{i}\Delta_{i}\) denote the period of the synchronous state, and let \(\xi(0)\) denote an initial perturbation. According to the relation (6.10) and noting that there is no saltation (i.e., \(S=I\)), the perturbation after one period is \(\xi_{l}(\Delta)=\Psi(\lambda_{l})\xi_{l}(0)\), where \[\Psi(\lambda_{l})=\exp\left[(A_{m}-\lambda_{l}H)\Delta_{m}\right]\times\cdots\times\exp\left[(A_{1}-\lambda_{l}H)\Delta_{1}\right]\,. \tag{6.47}\] As we showed in subsection 6.1, one can use the relation (6.47) to construct the MSF. In Figure 23, we illustrate that the topology of the MSF can vary substantially across different coupling regimes. In the left panel of Figure 23, the zero contour of the MSF forms a closed loop; the MSF is negative inside the loop and positive outside it. By contrast, in the right panel of Figure 23, there are two distinct regions in which the MSF is negative. The colors reveal that if the MSF changes sign along the real axis, then the only instabilities are either a period-doubling bifurcation (i.e., a \(-1\) bifurcation) or a tangent bifurcation (i.e., a \(+1\) bifurcation).

Figure 23: Zero contour of the MSF for \(\Delta=0.9\) and two different coupling regimes. The MSF is negative in regions that we label by "S" and positive in regions that we label by "U". The color indicates the value of \(\cos\left(\arg\left(q\left(\eta\right)\right)\right)\), where \(q\) is the largest eigenvalue of \(\Psi(\eta)\) and \(\Psi\) is given by (6.47). The synchronous solution is stable if all of the points with \(\eta=\lambda_{l}\) lie in the region S, where \(\lambda_{l}\) are the eigenvalues of the graph Laplacian \(\mathcal{L}\) (which also incorporates the coupling strength of the network). For more details, see [88].

As was demonstrated by Lai _et al._[88], MSF plots like those in Figure 23 allow one to understand abrupt changes in spatial \(\text{Ca}^{2+}\) patterns from small changes of a single parameter. In Figure 24, we illustrate the behavior that emerges when the synchronous solution destabilizes via a period-doubling bifurcation or a tangent bifurcation. In Figure 24(a), we show the peak \(\text{Ca}^{2+}\) concentration during one period of the \(\Delta\)-periodic drive \(v(t)\) for a period-doubling bifurcation. The CRUs are arranged on a regular two-dimensional grid, so one can reference each CRU by a row and column index. Each small rectangle represents the \(\text{Ca}^{2+}\) concentration of a single CRU. One can clearly see a spatially alternating pattern, which is more pronounced near the center of the figure than it is near the edges. This spatially alternating pattern also alternates in time: when a CRU has a large peak during one pacing period, it has a small peak during the next pacing period (and vice versa). In other words, each CRU exhibits a period-2 orbit and adjacent CRUs oscillate out of phase with each other. This phenomenon is known as "subcellular \(\text{Ca}^{2+}\) alternans" and is a precursor to severe cardiac arrhythmia. For the behavior in Figure 24(a), only one eigenvalue lies outside the unit disk. In Figure 24(b), we show the corresponding right eigenvector, which is in excellent agreement with direct numerical simulations. When an eigenvalue leaves the unit disk along the positive real axis, one observes a pattern like the one in Figure 24(c).
As with the period-doubling bifurcation, the peak \(\text{Ca}^{2+}\) concentrations exhibit an alternating spatial pattern. However, in contrast to the period-doubling bifurcation, each CRU follows a period-1 orbit, so the peak amplitude is the same across pacing periods, rather than alternating from one pacing period to the next. In Figure 24(d), we show the eigenvector that is associated with the only eigenvalue that lies outside the unit disk for the behavior in Figure 24(c). This eigenvector also agrees excellently with direct numerical simulations.

### An application to Franklin-bell networks

Benjamin Franklin was one of the leading political figures of his time, and he was also a prolific inventor and scientist. To facilitate his studies into the nature of electricity, he employed lightning as an electrical power source. To be notified when an iron rod outside his house was sufficiently electrified by lightning, Franklin employed what is now known as a "Franklin bell" [52]. A Franklin bell is a metal ball that oscillates between two metal plates, which are driven by electrical charge. A Franklin bell is an example of an impacting system; the ball velocity changes nonsmoothly when it contacts either plate. A network of \(N\) Franklin bells satisfies the dynamical system [136, 140] \[\ddot{u}_{n}+\gamma_{1}\dot{u}_{n}+\gamma_{2}u_{n}+\sigma\sum_{m=1}^{N}w_{nm}\left(u_{m}-u_{n}\right) =\text{sgn}(\dot{u}_{n})f\,,\quad t\neq t_{n_{i}}\,, \tag{6.48}\] \[\dot{u}_{n}(t_{n_{i}}^{+}) =-k\dot{u}_{n}(t_{n_{i}}^{-})\,,\quad\,t=t_{n_{i}}\,, \tag{6.49}\] where \(u_{n}\) denotes the position of the ball of the \(n\)th Franklin bell, which is restricted between two impacting manifolds at \(\pm a\). One implicitly determines the time \(t_{n_{i}}\) of the \(i\)th impacting event of the \(n\)th oscillator using the relation \(u_{n}(t_{n_{i}})=\pm a\). The parameter \(\sigma\) is a global coupling strength, and the network structure is encoded by a matrix with elements \(w_{nm}\). The constant \(k\in\mathbb{R}^{+}\) is the coefficient of restitution upon impact, \(f\) is a constant force (which is determined by a sum of the repelling and attracting electrostatic forces), \(\gamma_{1}>0\) is a damping coefficient, and \(\gamma_{2}>0\) sets the natural frequency of the pendulum. It is convenient to write the dynamical system (6.48), (6.49) as a system of first-order differential equations (i.e., in the standard form of a dynamical system) by introducing the state vector \(x_{n}=[u_{n},v_{n}]^{\top}\), where \(v_{n}=\dot{u}_{n}\). This yields \[\dot{x}_{n} =F(x_{n})+\sigma\sum_{m=1}^{N}w_{nm}[\mathcal{H}(x_{m})-\mathcal{H}(x_{n})]\,,\quad t\neq t_{n_{i}}\,, \tag{6.50}\] \[x_{n}(t_{n_{i}}^{+}) =g\left(x_{n}(t_{n_{i}}^{-})\right)\,,\quad t=t_{n_{i}}\,. \tag{6.51}\] The vector field \(F:\mathbb{R}^{2}\to\mathbb{R}^{2}\) is \(F(x_{n})=Ax_{n}+f_{e}\), where \[A=\begin{bmatrix}0&1\\ -\gamma_{2}&-\gamma_{1}\end{bmatrix}\,,\quad f_{e}=\begin{bmatrix}0\\ f\end{bmatrix}\mathrm{sgn}(v)\,. \tag{6.52}\] The function \(\mathcal{H}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) is \(\mathcal{H}([u,v]^{\top})=[0,u]^{\top}\). The form of the coupling in equation (6.50) ensures the existence of the synchronous network state \(s(t)\). To determine its linear stability, we rewrite (6.50) using the graph Laplacian \(\mathcal{L}\) (see subsection 6.1).
The dynamics between impacts is \[\dot{x}_{n}(t)=F(x_{n}(t))-\sigma\sum_{m=1}^{N}\mathcal{L}_{nm}\mathcal{H}(x_{m})\,, \tag{6.53}\] which has the form (6.1). Consequently, after diagonalization, the Floquet problem for the linear stability of the synchronous network state becomes \(\xi_{l}(\Delta)=\Psi(l)\xi_{l}(0)\) (with \(l\in\{1,\ldots,N\}\)), where \[\Psi(l)=K(t_{2})\mathrm{e}^{A_{l}\Delta_{2}}K(t_{1})\mathrm{e}^{A_{l}\Delta_{1}}\,,\quad A_{l}=A-\sigma\lambda_{l}\mathrm{D}\mathcal{H}\,, \tag{6.54}\] and the saltation operator is \[K(t)=\begin{bmatrix}-k&0\\ \frac{k\ddot{u}(t^{-})+\ddot{u}(t^{+})}{\dot{u}(t^{-})}&-k\end{bmatrix}\,. \tag{6.55}\] As we showed in subsection 6.1, we obtain the MSF from (6.54) with the replacement \(\sigma\lambda_{l}\to\eta\in\mathbb{C}\). In Figure 25, we illustrate the dynamics of a network of 15 Franklin bells when the adjacency matrix has entries \[w_{nm}=c_{n}(\delta_{n,m-1}+\delta_{N-n+1,1})+c_{n-1}\delta_{n,m+1}+c_{N}\delta_{1,N-m+1}\,,\quad c_{n}\in\mathbb{R} \tag{6.56}\] for \(n,m\in\{1,2,\ldots,N\}\). The MSF (see Figure 25(a)) is negative for only one \(\eta_{l}\), so the synchronous state is unstable. In Figure 25(b), we show the eigenvector that corresponds to the critical value \(\eta_{l}\). We see in Figure 25(c) that we obtain excellent agreement with direct numerical simulations. Because the adjacency matrix with the entries (6.56) is symmetric, all of the eigenvalues are real, so \(\eta_{l}\) is real. See Sayli _et al._ for a discussion of the predictive power of the MSF for an example with complex eigenvalues.

Figure 24: Instabilities of the synchronous network state induced by (a, b) a period-doubling bifurcation and (c, d) a tangent bifurcation. Panels (a, c) show the peak Ca\({}^{2+}\) concentration during one period \(\Delta\) in one of the CRU compartments, while panels (b, d) depict the eigenvectors that correspond to the only eigenvalue that lies outside the unit disk for the patterns in panels (a, c). For more details, see [88].

Figure 25: (a) MSF and the values of \(\eta_{l}\) (black dots) for a Franklin-bell network with 15 nodes, where \(c_{n}=1\) if \(n\) is odd and \(c_{n}=0.1\) if \(n\) is even, except for \(c_{2}=-0.1\). The MSF is negative in the white region and positive in the gray region. (b) Normalized eigenvector \(e_{1}\) that corresponds to the eigenvalue in panel (a) for which the MSF is negative (the leftmost black dot). (c) Normalized position \(u_{n}\) at a fixed time for each oscillator \(n\) in the network. For the other parameter values, see [136].

### An application to coordination in cow herds

Grazing animals, such as cows, protect themselves from predators by living in herds [100], and synchronizing their behavior (by tending to eat and lie down at the same time) helps them remain together as a herd [131]. Sun _et al._[149] developed a piecewise-linear dynamical system as a simplistic model to study collective behavior in herds of cattle. One can treat some aspects of their model -- both for a single cow and for a network of cows -- using the focal techniques of the present paper. Cows are ruminants. They eat plant food, swallow, and regurgitate it at some later stage; they then again chew the partly digested plant food. During the first stage (standing/feeding), they stand up to graze. However, they typically lie down and ruminate (i.e., chew the cud) in a second stage (lying/ruminating). A cow thus oscillates between two stages. One can construct a simplistic caricature of a cow by
separately considering its observable state (eating, lying down, or standing) and its unobservable level of hunger or desire to lie down. Sun _et al._ formulated a model in terms of a variable \(x(t;\theta)\), where \(x=(v,w)\in[\delta,1]\times[\delta,1]\) with a parameter \(\delta\in(0,1)\) and an observable state \(\theta\in\{\mathcal{E},\mathcal{R},\mathcal{S}\}\). The variables \(v\) and \(w\), respectively, represent the extent of the desire of a cow to eat and lie down. The variable \(\theta\) represents the state of a cow, which can be eating (\(\mathcal{E}\)), lying down (\(\mathcal{R}\)), or standing (\(\mathcal{S}\)). The dynamics in the \((v,w)\) plane is confined to a box, and a cow switches between states whenever a trajectory intersects with the edge of the box. The dynamics of \(x=x(t;\theta)\) takes the simple form \(\dot{x}=a(\theta)x\), where \[a(\theta)=\begin{bmatrix}\alpha(\theta)&0\\ 0&\beta(\theta)\end{bmatrix}\,,\quad\begin{bmatrix}\alpha(\mathcal{E})&\alpha(\mathcal{R})&\alpha(\mathcal{S})\\ \beta(\mathcal{E})&\beta(\mathcal{R})&\beta(\mathcal{S})\end{bmatrix}=\begin{bmatrix}-\alpha_{2}&\alpha_{1}&\alpha_{1}\\ \beta_{1}&-\beta_{2}&\beta_{1}\end{bmatrix}\,, \tag{6.57}\] with hunger parameters \(\alpha_{1,2}>0\) and lying parameters \(\beta_{1,2}>0\). The parameter \(\alpha_{1}\) represents the rate of increase of hunger, \(\alpha_{2}\) represents the decay rate of hunger, \(\beta_{1}\) represents the rate of increase of the desire to lie down, and \(\beta_{2}\) represents the decay rate of the desire to lie down. One prescribes switching conditions (at the four edges of the box) using four indicator functions: \(h_{1}(x)=v-1\), \(h_{2}(x)=w-1\), \(h_{3}(x)=v-\delta\), and \(h_{4}(x)=w-\delta\). The model's four state-transition rules take the form \(\theta\to g_{i}(\theta)\), where \(g_{1}(\theta)=\mathcal{E}\), \(g_{2}(\theta)=\mathcal{R}\), and \(g_{3}(\theta)=g_{4}(\theta)=\mathcal{S}\). If a trajectory intersects the corner of the box, one can apply a state-tiebreaker rule [149], although we will not consider such scenarios. The general prescription in (2.13) yields saltation matrices at each of the four possible state transitions. They take the explicit forms \[S_{1}(t)=S_{3}(t) =\begin{bmatrix}\dot{v}(t^{+})/\dot{v}(t^{-})&0\\ (\dot{w}(t^{+})-\dot{w}(t^{-}))/\dot{v}(t^{-})&1\end{bmatrix}\,, \tag{6.58}\] \[S_{2}(t)=S_{4}(t) =\begin{bmatrix}1&(\dot{v}(t^{+})-\dot{v}(t^{-}))/\dot{w}(t^{-})\\ 0&\dot{w}(t^{+})/\dot{w}(t^{-})\end{bmatrix}\,.\] For a given state, one readily obtains phase-space trajectories as convex curves \(w=kv^{\beta(\theta)/\alpha(\theta)}\), with \(k=w(t_{0};\theta)/v(t_{0};\theta)^{\beta(\theta)/\alpha(\theta)}\). The associated time evolution is \((v(t;\theta),w(t;\theta))=(\mathrm{e}^{\alpha(\theta)(t-t_{0})}v(t_{0};\theta),\mathrm{e}^{\beta(\theta)(t-t_{0})}w(t_{0};\theta))\) for \(t\geq t_{0}\). Many periodic orbits are possible, and one can catalog them in terms of state-transition sequences. Sun _et al._[149] identified four low-period orbits, which have the following cyclic state-transition sequences: (a) \(\mathcal{E}\to\mathcal{R}\to\mathcal{E}\), (b) \(\mathcal{E}\to\mathcal{R}\to\mathcal{S}\to\mathcal{E}\), (c) \(\mathcal{E}\to\mathcal{S}\to\mathcal{R}\to\mathcal{E}\), and (d) \(\mathcal{E}\to\mathcal{S}\to\mathcal{R}\to\mathcal{S}\to\mathcal{E}\). For example, consider a periodic orbit of type (b).
Starting from \(x(0)=(1,w(0;\mathcal{E}))\), we obtain the time-of-flight \(T_{1}=-\beta_{1}^{-1}\log w(0;\mathcal{E})\) from the relation \(h_{2}(x(T_{1}))=0\). This, in turn, allows one to determine the initial data for the next piece of the trajectory. It is \(x(T_{1})=(v(T_{1};\mathcal{E}),1)\), from which we obtain the time-of-flight \(T_{2}=-\beta_{2}^{-1}\log\delta\). The third and final piece of the orbit has initial data \(x(T_{1}+T_{2})=(v(T_{2};\mathcal{R}),\delta)\) and a time-of-flight of \(T_{3}=-\alpha_{1}^{-1}\log v(T_{2};\mathcal{R})\). To determine the value of \(w(0;\mathcal{E})\), one enforces the periodicity condition \(w(T_{3};\mathcal{S})=w(0;\mathcal{E})\), where \(T=T_{1}+T_{2}+T_{3}\) is the period of the periodic orbit. One thus obtains \[w(0;\mathcal{E})=\delta^{\frac{1+\frac{\beta_{1}}{\beta_{2}}}{1+\frac{\alpha_{2}}{\alpha_{1}}}}\,. \tag{6.59}\] To ensure that \(\delta<w(0;\mathcal{E})<1\), \(\delta<v(T_{1};\mathcal{E})<1\), and \(\delta<v(T_{2};\mathcal{R})<1\), the trajectory needs to satisfy the inequality \((\alpha_{2}/\alpha_{1})\cdot(\beta_{2}/\beta_{1})>1\) and it also needs to satisfy \(\alpha_{1}^{-1}+\alpha_{2}^{-1}\geq\beta_{1}^{-1}+\beta_{2}^{-1}\) when \(\beta_{1}<\alpha_{2}\). In Figure 26, we show an example of an orbit that one constructs in this fashion. To determine the stability of this orbit, we calculate the Floquet exponent (2.16) and obtain \[\kappa=\kappa_{\rm smooth}+\frac{1}{T}\ln\left|\det S_{1}(T)\det S_{4}(T_{1}+T_{2})\det S_{2}(T_{1})\right|=\frac{1}{T}\ln\left(\frac{\alpha_{2}}{\alpha_{1}}\right)\,, \tag{6.60}\] where we use the fact that \(\kappa_{\rm smooth}=\left[T_{1}(-\alpha_{2}+\beta_{1})+T_{2}(\alpha_{1}-\beta_{2})+T_{3}(\alpha_{1}+\beta_{1})\right]/T=0\). Therefore, the orbit (if it exists) is linearly stable for \(q\equiv\alpha_{2}/\alpha_{1}<1\). The Floquet multiplier is \(-q<0\), so one can lose stability only through a period-doubling instability. To model a herd (i.e., a network) of \(N\) identical cows, we suppose that each cow has an associated variable \(x_{i}(t;\theta_{i})\) that evolves according to \[\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=a(\theta_{i})x_{i}+\sigma\sum_{j=1}^{N}w_{ij}\chi(\theta_{j})x_{j}\,,\quad\chi(\theta)=\begin{bmatrix}\chi_{\mathcal{E}}(\theta)&0\\ 0&\chi_{\mathcal{R}}(\theta)\end{bmatrix}\,, \tag{6.61}\] where \(\chi_{\psi}\) is the indicator function \[\chi_{\psi}(\theta)=\begin{cases}1\,,&\text{if }\theta=\psi\\ 0\,,&\text{otherwise}\,.\end{cases} \tag{6.62}\] For \(w_{ij}>0\), this network model describes the case in which a cow feels hungrier when it notices other cows eating and feels a greater desire to lie down when it notices other cows lying down. Because the indicator functions change with time, the network has a time-dependent coupling, so one cannot use our previous MSF construction for it. Nonetheless, the PWL nature of the dynamics allows one to obtain analytical insights into the model's behavior. Assuming the row-sum normalization \(\sum_{j=1}^{N}w_{ij}=1\) for all \(i\), a synchronous orbit (if it exists) satisfies the equation \(x_{i}(t)=s(t)\) for all \(i\), where \(\dot{s}=a(\theta,\sigma)s\) and \(a(\theta,\sigma)=a(\theta)+\sigma\chi(\theta)\). Therefore, one can use the approach that we described above for a single cow to construct a synchronous orbit under the replacement \(a(\theta)\to a(\theta,\sigma)\).
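The single-cow computation above is entirely explicit, so it is straightforward to check. The following sketch (with assumed parameter values that satisfy the existence inequalities) constructs the type-(b) orbit from (6.59), confirms that \(\kappa_{\rm smooth}\) vanishes, and evaluates the nontrivial Floquet exponent \(\kappa=\ln(q)/T\).

```python
# Explicit construction and Floquet check of the type-(b) orbit E -> R -> S -> E.
import numpy as np

a1, a2 = 1.0, 0.5        # alpha_1, alpha_2 (assumed); q = a2/a1 = 0.5 < 1
b1, b2 = 1.0, 4.0        # beta_1, beta_2 (assumed); (a2/a1)*(b2/b1) = 2 > 1
delta = 0.1

w0 = delta ** ((1 + b1 / b2) / (1 + a2 / a1))   # equation (6.59)
T1 = -np.log(w0) / b1                            # eating: w grows from w0 to 1
T2 = -np.log(delta) / b2                         # lying: w decays from 1 to delta
v12 = w0 ** (a2 / b1) * np.exp(a1 * T2)          # v at the start of state S
T3 = -np.log(v12) / a1                           # standing: v grows back to 1
T = T1 + T2 + T3

kappa_smooth = (T1 * (b1 - a2) + T2 * (a1 - b2) + T3 * (a1 + b1)) / T
kappa = kappa_smooth + np.log(a2 / a1) / T
print(f"kappa_smooth = {kappa_smooth:+.2e} (should vanish)")
print(f"kappa        = {kappa:+.4f} (negative, so the orbit is stable)")
```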
For perturbations that do not change the order (i.e., the total number of states, including repetitions) in a state-transition sequence, linearization around the synchronous state leads to the evolution of the network perturbation \(U=(\delta x_{1},\delta x_{2},\ldots,\delta x_{N})\in\mathbb{R}^{2N}\) over one period of the form \(U(T)=\Psi U(0)\), where \[\Psi=K_{1}(T)\mathrm{e}^{A(\mathcal{S},\sigma)T_{3}}K_{4}(T_{1}+T_{2})\mathrm{e}^{A(\mathcal{R},\sigma)T_{2}}K_{2}(T_{1})\mathrm{e}^{A(\mathcal{E},\sigma)T_{1}}\,, \tag{6.63}\] with \(A(\theta,\sigma)=I_{N}\otimes a(\theta)+\sigma w\otimes\chi(\theta)\) and \(K_{\mu}(t_{i})=I_{N}\otimes S_{\mu}(t_{i})\). Therefore, the synchronous state is linearly stable if all of the eigenvalues of \(\Psi\in\mathbb{R}^{2N\times 2N}\) lie within the unit disk. The above analysis allows one to generate a quantitative answer to the following question: Can herd interactions promote synchrony? Consider the choice \(q>1\), so that an isolated cow (i.e., \(\sigma=0\)) cannot achieve a stable \(\mathcal{E}\to\mathcal{R}\to\mathcal{S}\to\mathcal{E}\) cycle. One can numerically calculate the eigenvalues of \(\Psi\) (6.63) to determine whether or not there is a critical value of \(\sigma\) that brings all of the eigenvalues back inside the unit disk. Numerical calculations for several types of row-normalized networks (e.g., star networks and nearest-neighbor circulant networks) suggest that this is indeed the case, with a common critical value of \(\sigma=\sigma_{c}\) that is independent of \(N\). For the parameters in Figure 26 with \(q=1.5\), we find that \(\sigma_{c}\approx 0.025\).

## 7 Discussion

In this review, we discussed several popular mathematical frameworks for analyzing synchronized states in coupled networks of identical oscillators. We focused on oscillator dynamics that take the form of piecewise-linear (PWL) ordinary-differential-equation models. This choice allows the semi-analytical construction of periodic orbits without the need to employ numerical ODE solvers. We demonstrated that it is also mathematically tractable to determine the stability of periodic states in networks of such coupled oscillators. The key augmentation to standard theoretical approaches is the use of saltation operators to treat the nonsmooth nature of the individual oscillator models and the network models. We thereby highlighted the usefulness of combining techniques from smooth dynamical systems -- in particular, weakly-coupled-oscillator theory and the master stability function (MSF) -- with techniques from nonsmooth modeling and analysis to deliver new tools for the analysis of dynamical systems on networks. Given the prevalence of nonsmooth models in mechanics and biology (as well as in other areas), it is very appealing to further apply and extend these approaches. For example, one can apply such methodology to networks of scalar-valued nodes with threshold-linear nonlinearities (of ReLU type, which are now ubiquitous in machine learning [150]), which have become very popular for developing ideas about so-called "sequential attractors" [14, 35, 36, 115, 114]. Additionally, Cho _et al_. [26] have connected synchronized cluster states and "chimera states" [111] (in which a subpopulation of oscillators synchronizes in an otherwise incoherent sea). Their research was formulated in a smooth setting, and it would be fascinating to explore it using a PWL perspective.
The extension of the methodology to treat various complexities -- including nonidentical oscillators, oscillators with high-dimensional (non-planar) dynamics, excitable systems, coupling delays, adaptive networks (in which a dynamical process on a network is coupled to the dynamics of the network's structure), temporal networks (in which a network's entities and/or their interactions change with time), multilayer networks (which can incorporate multiple types of interactions, multiple subsystems, and other complexities), and oscillator networks with polyadic (i.e., beyond pairwise) interactions -- is mathematically interesting and can build naturally on existing inroads on these challenges that have been made for smooth networks [126]. Relevant studies to extend to a PWL framework include investigations of networks of Kuramoto oscillators with heterogeneous frequencies [109] and modular structures [145], an extension of the MSF for coupled nearly-identical dynamical systems [148] and dynamical systems with delays [86, 110], and an extension of coupled-oscillator theory to networks with polyadic interactions (which are sometimes called "higher-order" interactions) [18, 19, 63, 89]. It is also worth extending the analysis of models of noisy PWL oscillators to networks of such systems [142]. The further development of techniques for analyzing nonsmooth network dynamics is extremely relevant for systems with switches or thresholds, which arise in models of social-influence-driven opinion changes [144] and contagions [91]. There are numerous outstanding challenges in the study of dynamics on networks that may benefit from the perspective of nonsmooth modeling and analysis.

### Acknowledgements

We thank Thilo Gross for helpful discussions.

## Appendix A Piecewise-linear models

In Table 1, we summarize the components \(A_{\mu}\) and \(b_{\mu}\) (with \(\mu\in\{1,2\}\)) of the PWL models in section 2 when they are written in the form (1). The dynamics of the PWL Morris-Lecar model with three zones are \[C\dot{v}=\rho(v)-w+I\,,\quad\dot{w}=g(v,w)\,, \tag{A.1}\] with a continuous \(\rho(v)\) (to approximate a cubic \(v\)-nullcline) of the form \[\rho(v)=\left\{\begin{array}{lll}-v&\mbox{if}&v<a/2\\ v-a&\mbox{if}&a/2\leq v\leq(1+a)/2\\ 1-v&\mbox{if}&v>(1+a)/2\,,\end{array}\right. \tag{A.2}\] and a continuous function \(g\) given by \[g(v,w)=\left\{\begin{array}{lll}(v-\gamma_{1}w+b^{*}\gamma_{1}-b)/\gamma_{1}&\mbox{if}&v<b\\ (v-\gamma_{2}w+b^{*}\gamma_{2}-b)/\gamma_{2}&\mbox{if}&v\geq b\,,\end{array}\right. \tag{A.3}\] with the constraints \(-a/2<b^{*}<(1-a)/2\), \(a/2<b<(1+a)/2\), \(\gamma_{2}>0\), and \(\gamma_{1}\in\mathbb{R}\). To construct periodic solutions, such as the one in Figure 2, we use the formalism in section 2. We break the periodic orbit into pieces such that each piece is governed by a linear dynamical system. This is similar to the system (1), but now the orbit has four distinct pieces that evolve according to \(\mathrm{d}x/\mathrm{d}t=A_{\mu}x+b_{\mu}\), with \(\mu\in\{1,2,3,4\}\), in three linear regimes \(R_{1}=\{x\in\mathbb{R}^{2}|\ v>(1+a)/2\}\), \(R_{2}=\{x\in\mathbb{R}^{2}|\ b<v<(1+a)/2\}\), and \(R_{3}=\{x\in\mathbb{R}^{2}|\ a/2<v<b\}\).
Therefore, \[A_{1}=\begin{bmatrix}1/C&-1/C\\ 1/\gamma_{2}&-1\end{bmatrix},\ A_{2}=\begin{bmatrix}-1/C&-1/C\\ 1/\gamma_{2}&-1\end{bmatrix},\ A_{4}=\begin{bmatrix}1/C&-1/C\\ 1/\gamma_{1}&-1\end{bmatrix} \tag{A.4}\] and \[b_{1}=\begin{bmatrix}(I-a)/C\\ b^{*}-b/\gamma_{2}\end{bmatrix},\ b_{2}=\begin{bmatrix}(1+I)/C\\ b^{*}-b/\gamma_{2}\end{bmatrix},\ b_{4}=\begin{bmatrix}(I-a)/C\\ b^{*}-b/\gamma_{1}\end{bmatrix}\,,\] (A.5) with \(A_{3}=A_{1}\) and \(b_{3}=b_{1}\). Let \(T_{\mu}\) denote the time-of-flight for each piece, and let \(T=\Sigma_{\mu=1}^{4}T_{\mu}\) denote the corresponding period of the orbit. To build a closed orbit, we use the boundary-crossing values of the voltage variable (i.e., \(v=b\) and \(v=(1+a)/2\)) and equation (2.4), and we enforce periodicity of the solution. Choosing initial data \(x^{1}(0)=(b,w^{1}(0))^{\top}\) and enforcing continuity of solutions by using the matching conditions \(x^{\mu+1}(0)=x^{\mu}(T_{\mu})\) for \(\mu\in\{1,2,3\}\) determines \(T_{\mu}\) and \(w^{1}(0)\) through the simultaneous solution of the equations \(v^{1}(T_{1})=(1+a)/2\), \(v^{2}(T_{2})=(1+a)/2\), \(v^{3}(T_{3})=b\), \(v^{4}(T_{4})=b\), and \(w^{1}(0)=w^{4}(T_{4})\).

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Model & \(A_{1}\) & \(A_{2}\) & \(b_{1}\) & \(b_{2}\) \\ \hline \(\begin{array}{l}\mbox{McKean model}\\ \zeta(w)=(\gamma a+w)/I,\\ w\in[-\gamma a,-\gamma a+I],\\ \mbox{on }v=a.\end{array}\) & \(\left[\begin{array}{ll}-\gamma&-1\\ b&0\end{array}\right]\) & \(\left[\begin{array}{ll}I\\ 0\end{array}\right]\) & \(\left[\begin{array}{ll}0\\ 0\end{array}\right]\) \\ \hline \mbox{The absolute model} & \(\left[\begin{array}{ll}1&-1\\ 1&-d\end{array}\right]\) & \(\left[\begin{array}{ll}-1&-1\\ d\overline{w}-\overline{v}\end{array}\right]\) & \(\left[\begin{array}{ll}a\\ d\overline{w}-\overline{v}\end{array}\right]\) \\ \hline \(\begin{array}{l}\mbox{PWL homoclinic}\\ \mbox{model}\end{array}\) & \(\left[\begin{array}{ll}\tau_{1}&-1\\ \delta_{1}&0\end{array}\right]\) & \(\left[\begin{array}{ll}\tau_{2}&-1\\ \delta_{2}&0\end{array}\right]\) & \(\left[\begin{array}{ll}0\\ -1\end{array}\right]\) & \(\left[\begin{array}{ll}0\\ -1\end{array}\right]\) \\ \hline \(\begin{array}{l}\mbox{Planar integrate-and-fire}\\ \mbox{(IF) model}\end{array}\) & \(\left[\begin{array}{ll}a_{1}&-1\\ a_{w}/\tau&b_{w}/\tau\end{array}\right]\) & \(\left[\begin{array}{ll}a_{2}&-1\\ a_{w}/\tau&b_{w}/\tau\end{array}\right]\) & \(\left[\begin{array}{ll}I\\ 0\end{array}\right]\) & \(\left[\begin{array}{ll}I\\ 0\end{array}\right]\) \\ \hline \end{tabular} \end{table} Table 1: Components of the examined models in the form (1). We complete the definition of the McKean model by using the Filippov convention.

## Appendix B Saltation operator

Using the notation of subsection 2.1, we denote a periodic orbit by \(x^{\gamma}\), a perturbed orbit by \(\widetilde{x}\), an event time by \(t_{0}\), and a perturbed event time by \(\widetilde{t}_{0}\). We obtain the last two from the equations \(h_{\mu}(x^{\gamma}(t_{0}))=0\) and \(h_{\mu}(\widetilde{x}(\widetilde{t}_{0}))=0\), respectively. The difference between the perturbed and unperturbed event times is \(\delta t_{0}=\widetilde{t}_{0}-t_{0}\). The periodic and perturbed states after the switching event are \(x^{\gamma}(t_{0}^{+})=\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))\) and \(\tilde{x}(\tilde{t}_{0}^{+})=\mathcal{J}_{\mu}(\tilde{x}(\tilde{t}_{0}^{-}))\), where \(\mathcal{J}_{\mu}\) is the switch rule.
Without loss of generality, we consider \(\delta t_{0}>0\) so \(x^{\gamma}(t)\) and \(\tilde{x}(t)\) are on opposite sides of the switching manifold (because \(x^{\gamma}(t)\) has already crossed the switching boundary). We then have that \(\tilde{x}(\tilde{t}_{0}^{-})=\tilde{x}(t_{0}^{-}+\delta t_{0})\approx x^{\gamma}(t_{0}^{-})+\delta x(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0}\). We do a first-order Taylor expansion of \(\mathcal{J}_{\mu}\) and obtain \[\tilde{x}(\tilde{t}_{0}^{+}) =\mathcal{J}_{\mu}(\tilde{x}(\tilde{t}_{0}^{-}))\approx\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-})+\delta x(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0})\] \[\approx\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))+\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))[\delta x(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0}]\] \[\approx x^{\gamma}(t_{0}^{+})+\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))[\delta x(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0}]\,,\] (B.1) where \(\mathrm{D}\mathcal{J}_{\mu}\) is the Jacobian matrix of \(\mathcal{J}_{\mu}\). The first-order Taylor expansion of \(h_{\mu}(\tilde{x}(\tilde{t}_{0}^{-}))\) is \[h_{\mu}(\tilde{x}(\tilde{t}_{0}^{-})) =h_{\mu}(\tilde{x}(t_{0}^{-}+\delta t_{0}))=h_{\mu}(x^{\gamma}(t_{0}^{-}+\delta t_{0})+\delta x(t_{0}^{-}+\delta t_{0}))\] \[\approx h_{\mu}(x^{\gamma}(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0})+\nabla_{x}h_{\mu}(x^{\gamma}(t_{0}^{-}+\delta t_{0}))\cdot\delta x(t_{0}^{-}+\delta t_{0})\] \[\approx h_{\mu}(x^{\gamma}(t_{0}^{-}))+\nabla_{x}h_{\mu}(x^{\gamma}(t_{0}^{-}))\cdot\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0}+\nabla_{x}h_{\mu}(x^{\gamma}(t_{0}^{-}))\cdot\delta x(t_{0}^{-})\,.\] (B.2) Using (B.2) and the fact that \(h_{\mu}(x^{\gamma}(t_{0}))=0=h_{\mu}(\tilde{x}(\tilde{t}_{0}))\), we obtain \[\delta t_{0}=-\frac{\nabla_{x}h_{\mu}(x^{\gamma}(t_{0}^{-}))\cdot\delta x(t_{0}^{-})}{\nabla_{x}h_{\mu}(x^{\gamma}(t_{0}^{-}))\cdot\dot{x}^{\gamma}(t_{0}^{-})}.\] (B.3) We approximate \(\tilde{x}(t_{0}^{+})\) as \[\tilde{x}(t_{0}^{+})\approx\tilde{x}(\tilde{t}_{0}^{+})-\dot{\tilde{x}}(\tilde{t}_{0}^{+})\delta t_{0}\approx\tilde{x}(\tilde{t}_{0}^{+})-\dot{x}^{\gamma}(t_{0}^{+}+\delta t_{0})\delta t_{0}\approx\tilde{x}(\tilde{t}_{0}^{+})-\dot{x}^{\gamma}(t_{0}^{+})\delta t_{0}\,.\] (B.4) Using (B.1) and (B.4) yields \[\delta x(t_{0}^{+}) =\tilde{x}(t_{0}^{+})-x^{\gamma}(t_{0}^{+})\approx\tilde{x}(\tilde{t}_{0}^{+})-\dot{x}^{\gamma}(t_{0}^{+})\delta t_{0}-x^{\gamma}(t_{0}^{+})\] \[\approx x^{\gamma}(t_{0}^{+})+\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))[\delta x(t_{0}^{-})+\dot{x}^{\gamma}(t_{0}^{-})\delta t_{0}]-[x^{\gamma}(t_{0}^{+})+\dot{x}^{\gamma}(t_{0}^{+})\delta t_{0}]\] (B.5) \[=\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))\delta x(t_{0}^{-})+[\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t_{0}^{-}))\dot{x}^{\gamma}(t_{0}^{-})-\dot{x}^{\gamma}(t_{0}^{+})]\delta t_{0}\,.\] Therefore, using (B.3) and (B.5), we write \(\delta x(t^{+})\) in the form (2.14), where \(S_{\mu}(t)\) is the saltation matrix \[S_{\mu}(t)=\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t^{-}))+\frac{[\dot{x}^{\gamma}(t^{+})-\mathrm{D}\mathcal{J}_{\mu}(x^{\gamma}(t^{-}))\dot{x}^{\gamma}(t^{-})][\nabla_{x}h_{\mu}(x^{\gamma}(t^{-}))]^{\top}}{\nabla_{x}h_{\mu}(x^{\gamma}(t^{-}))\cdot\dot{x}^{\gamma}(t^{-})}\,.\] (B.6)

## Appendix C The nontrivial Floquet exponent for planar PWL systems

For planar systems, one eigenvalue of the monodromy matrix \(\Psi\) is 1. Let \(\mathrm{e}^{\kappa T}\) denote the nontrivial Floquet multiplier.
Using the relation \(\det\Psi=\mathrm{e}^{\kappa T}\times 1\) and equation (2.15), we obtain \[\mathrm{e}^{\kappa T} =\det\left[S(t_{M})G(T_{M})S(t_{M-1})G(T_{M-1})\times\cdots\times S(t_{2})G(T_{2})S(t_{1})G(T_{1})\right]\] \[=\det S(t_{M})\times\cdots\times\det S(t_{1})\det G(T_{M})\times\cdots\times\det G(T_{1})\] (C.1) \[=\det S(t_{M})\times\cdots\times\det S(t_{1})\det\mathrm{e}^{A_{M}T_{M}}\times\cdots\times\det\mathrm{e}^{A_{1}T_{1}}\,.\] Using the well-known fact \(\det\mathrm{e}^{A}=\mathrm{e}^{\mathrm{Tr}A}\), we obtain the useful formula \[\kappa=\frac{1}{T}\sum_{i=1}^{M}\left[T_{i}\operatorname{Tr}A_{\mu(i)}+\ln|\det S(t_{i})|\right]\,.\] (C.2)

## Appendix D Derivation of the jump condition in \(\mathcal{B}(t)\)

Using equation (3.14), we consider a perturbed solution of the form \(x(t)=x^{\gamma}+\psi p(t)\), where \(\psi=\mathcal{O}(\sigma)\) has a small value, that crosses the switching manifolds at the perturbed switching times \(\tilde{t}_{i}=t_{i}+g_{i}(\psi)\), which we obtain by solving \(h_{\mu}(x(t_{i}+g_{i}(\psi)))=0\). In general, \(g_{i}(\psi)\) depends on the geometry of a switching manifold and on the displacement \(\psi p(t)\). For the PWL models that we consider, one can calculate \(g_{i}(\psi)\) explicitly. To first order, the Taylor expansion of \(h_{\mu}(\tilde{x}(\tilde{t}_{i}))\) is \[h_{\mu}(\tilde{x}(\tilde{t}_{i}))\approx h_{\mu}(x^{\gamma}(t_{i}^{-}))+\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))\cdot\dot{x}^{\gamma}(t_{i}^{-})g_{i}(\psi)+\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))\cdot\psi p(t_{i}^{-})\,.\] (D.1) Because \(h_{\mu}(x^{\gamma}(t_{i}))=0=h_{\mu}(\tilde{x}(\tilde{t}_{i}))\), we obtain \[g_{i}(\psi)=-\frac{\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))\cdot\psi p(t_{i}^{-})}{\nabla_{x}h_{\mu}(x^{\gamma}(t_{i}^{-}))\cdot\dot{x}^{\gamma}(t_{i}^{-})}=-\frac{\psi p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}\,,\] (D.2) where \(p^{v}(t)\) is the \(v\) component of \(p(t)\). Equation (3.12) implies that \[\nabla_{(x^{\gamma}(\tilde{t}_{i}^{-})+\psi p(\tilde{t}_{i}^{-}))}\Theta(x)\approx\mathcal{Z}(t_{i}^{-}+g_{i}(\psi))+\psi\mathcal{B}(t_{i}^{-}+g_{i}(\psi))\] (D.3) immediately before the switching event. We obtain a similar equation by evaluating equation (3.12) at \(\tilde{t}_{i}^{+}=t_{i}^{+}+g_{i}(\psi)\). We follow the technique that was proposed by Wilson [160] to derive a jump condition in the iIRC, \(\mathcal{B}(t)\), and \(\mathcal{C}(t)\) for an \(m\)-dimensional piecewise-smooth system with an \((m-1)\)-dimensional switching manifold \(\Sigma_{\mu}\) that is transverse to \(x^{\gamma}(t)\). We make four assumptions. First, for all \(k\), the phase function \(\psi_{k}(x)\) is continuous in an open neighborhood of \(x^{\gamma}(t)\). Second, for all \(k\), the function \(\psi_{k}(x)\) is at least twice differentiable in the interior of each region \(R_{\mu}\). Third, each boundary \(\Sigma_{\mu}\) is at least \(C^{1}\) (i.e., continuously differentiable) in an open ball \(B(p_{\mu},R)\) that is centered at \(p_{\mu}\) (the intersection point of \(\Sigma_{\mu}\) and \(x^{\gamma}(t)\)) with radius \(R\). It then follows that at each intersection point \(p_{\mu}\) there exists a tangent hyperplane \(\Pi\) that is spanned by an orthonormal set of \((m-1)\)-dimensional vectors \(w_{k}\) for \(k\in\{1,\dots,m-1\}\). Fourth, for all \(k\), the directional derivatives of \(\psi_{k}\) exist on \(\Pi\) in all tangential directions \(w_{k}\) and are identical from both sides.
(Otherwise, the associated coordinate of \(\psi_{k}\) is not continuous [160].) For the planar PWL models that we consider, \(h_{\mu}(x)=v-a_{\mu}\), where \(a_{\mu}\) is a constant. Therefore, \(w_{1}=[0,1]^{\top}\). Using the continuity of \(\Theta(x)\) and the fourth assumption about the phase function \(\psi_{k}(x)\), approaching from either side of the switching manifold yields \[\Big{(}\nabla_{(x^{\gamma}(\tilde{t}_{i}^{-})+\psi p(\tilde{t}_{i}^{-}))}\Theta(x)\Big{)}\cdot w_{1}=\Big{(}\nabla_{(x^{\gamma}(\tilde{t}_{i}^{+})+\psi p(\tilde{t}_{i}^{+}))}\Theta(x)\Big{)}\cdot w_{1}\,,\] (D.4) where we drop the subscript on \(\psi\) for convenience. Equivalently, \[\big{[}\mathcal{Z}(t_{i}^{-}+g_{i}(\psi))+\psi\mathcal{B}(t_{i}^{-}+g_{i}(\psi))\big{]}\cdot w_{1}=\big{[}\mathcal{Z}(t_{i}^{+}+g_{i}(\psi))+\psi\mathcal{B}(t_{i}^{+}+g_{i}(\psi))\big{]}\cdot w_{1}\,.\] (D.5) We Taylor expand equation (D.5) in \(\psi\) to obtain \[\left[\mathcal{Z}(t_{i}^{-})+\left(\left.\frac{\mathrm{d}\mathcal{Z}}{\mathrm{d}t}\right|_{t=t_{i}^{-}}\right)g_{i}(\psi)+\psi\mathcal{B}\left(t_{i}^{-}\right)\right]\cdot w_{1}\] (D.6) \[\qquad=\left[\mathcal{Z}(t_{i}^{+})+\left(\left.\frac{\mathrm{d}\mathcal{Z}}{\mathrm{d}t}\right|_{t=t_{i}^{+}}\right)g_{i}(\psi)+\psi\mathcal{B}\left(t_{i}^{+}\right)\right]\cdot w_{1}+\mathcal{O}\left(\psi^{2}\right)\,.\] Equating the \(\mathcal{O}(\psi^{0})\) terms on the two sides of equation (D.6) and using the normalization conditions yields the jump operator for \(\mathcal{Z}\) at \(t_{i}\). This jump operator is the same one that we obtained in subsection 3.1. Collecting the \(\mathcal{O}(\psi)\) terms in (D.6) yields \[\left[\left(\left.\frac{\mathrm{d}\mathcal{Z}}{\mathrm{d}t}\right|_{t=t_{i}^{-}}\right)g_{i}(\psi)+\psi\mathcal{B}^{-}\right]\cdot w_{1}=\left[\left(\left.\frac{\mathrm{d}\mathcal{Z}}{\mathrm{d}t}\right|_{t=t_{i}^{+}}\right)g_{i}(\psi)+\psi\mathcal{B}^{+}\right]\cdot w_{1}\,.\] (D.7) We use equations (3.3) and (D.2) to rewrite (D.7) and obtain \[\psi\left[\frac{p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}A_{\mu(i)}^{\top}\mathcal{Z}^{-}+\mathcal{B}^{-}\right]\cdot w_{1}=\psi\left[\frac{p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}A_{\mu(i+1)}^{\top}\mathcal{Z}^{+}+\mathcal{B}^{+}\right]\cdot w_{1}\,.\] (D.8) The condition (3.17) holds on both sides of a switching manifold, so \[\mathcal{Z}^{-}\cdot(A_{\mu(i)}p(t_{i}^{-}))+f_{\mu(i)}^{-}\cdot\mathcal{B}^{-}=0=\mathcal{Z}^{+}\cdot(A_{\mu(i+1)}p(t_{i}^{+}))+f_{\mu(i)}^{+}\cdot\mathcal{B}^{+}\,,\] (D.9) where \(f_{\mu(i)}^{-}\) is the vector field evaluated on the limit cycle immediately before a switching event and \(f_{\mu(i)}^{+}\) is the vector field evaluated on the limit cycle immediately after it. Combining (D.9) and (D.8) yields \[\mathcal{B}^{+}\cdot f_{\mu(i)}^{+} =\mathcal{B}^{-}\cdot f_{\mu(i)}^{-}+\mathcal{Z}^{-}\cdot(A_{\mu(i)}p(t_{i}^{-}))-\mathcal{Z}^{+}\cdot(A_{\mu(i+1)}p(t_{i}^{+}))\] (D.10) \[\mathcal{B}^{+}\cdot w_{1} =\mathcal{B}^{-}\cdot w_{1}+\frac{p^{v}(t_{i}^{-})}{\dot{v}^{\gamma}(t_{i}^{-})}\left[A_{\mu(i)}^{\top}\mathcal{Z}^{-}-A_{\mu(i+1)}^{\top}\mathcal{Z}^{+}\right]\cdot w_{1}\,.\] (D.11) Therefore, the jump condition on \(\mathcal{B}\) during a transition across a switching manifold is \[\mathcal{B}^{+}=(S^{\top}(t_{i}))^{-1}\mathcal{B}^{-}+C^{-1}(t_{i})\eta(t_{i})\,,\] (D.12) where \(C(t_{i})\) and \(\eta(t_{i})\) are given by (3.22).
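In code, the event-time machinery of Appendices B-D amounts to a few lines of linear algebra; the sketch below assumes that the orbit data (the switch-rule Jacobian, the vector fields and switching gradients at the event, the times-of-flight, and the quantities \(C(t_{i})\) and \(\eta(t_{i})\) from (3.22)) have already been evaluated on the periodic orbit.

```python
import numpy as np

# Saltation matrix (B.6): DJ is the Jacobian of the switch rule, f_minus/f_plus
# are the vector fields x'(t-) and x'(t+) on the orbit, and grad_h is the
# gradient of the switching function, all evaluated at the event time.
def saltation_matrix(DJ, f_minus, f_plus, grad_h):
    return DJ + np.outer(f_plus - DJ @ f_minus, grad_h) / (grad_h @ f_minus)

# Nontrivial Floquet exponent (C.2) for a planar PWL orbit with pieces of
# duration T_list[i], matrices A_list[i], and saltation matrices S_list[i].
def floquet_exponent(T_list, A_list, S_list):
    T = sum(T_list)
    return sum(Ti * np.trace(Ai) + np.log(abs(np.linalg.det(Si)))
               for Ti, Ai, Si in zip(T_list, A_list, S_list)) / T

# Jump condition (D.12) for the O(psi) correction B to the iIRC at an event.
def jump_B(B_minus, S, C, eta):
    return np.linalg.solve(S.T, B_minus) + np.linalg.solve(C, eta)
```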
## Appendix E Interaction functions

The interaction functions in the dynamical system (5.2) are \[\begin{array}{l}h_{1}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{Z}^{v}\left(\theta_{1}\right)\left(v^{\gamma}\left(\theta_{2}\right)-v^{\gamma}\left(\theta_{1}\right)\right)\,,\\ h_{2}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{B}^{v}\left(\theta_{1}\right)\left(v^{\gamma}\left(\theta_{2}\right)-v^{\gamma}\left(\theta_{1}\right)\right)-\mathcal{Z}^{v}\left(\theta_{1}\right)p^{v}(\theta_{1})\,,\\ h_{3}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{Z}^{v}\left(\theta_{1}\right)p^{v}(\theta_{2})\,,\\ h_{4}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{I}^{v}\left(\theta_{1}\right)\left(v^{\gamma}\left(\theta_{2}\right)-v^{\gamma}\left(\theta_{1}\right)\right)\,,\\ h_{5}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{C}^{v}\left(\theta_{1}\right)\left(v^{\gamma}\left(\theta_{2}\right)-v^{\gamma}\left(\theta_{1}\right)\right)-\mathcal{I}^{v}\left(\theta_{1}\right)p^{v}(\theta_{1}),\\ h_{6}\left(\omega\theta_{1},\omega\theta_{2}\right)=\mathcal{I}^{v}\left(\theta_{1}\right)p^{v}(\theta_{2})\,,\end{array}\] (E.1) where \(\mathcal{Z}^{v}\), \(\mathcal{I}^{v}\), \(\mathcal{B}^{v}\), and \(\mathcal{C}^{v}\) are the \(v\) components of the corresponding vectors.
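For numerical work, the coupling functions that enter the phase-reduced network equations follow from (E.1) by averaging over one period; a minimal sketch for \(h_{1}\), assuming \(\mathcal{Z}^{v}\) and \(v^{\gamma}\) are available as vectorized, period-\(T\) callables and using the standard weakly-coupled-oscillator average, is:

```python
import numpy as np

# Averaged coupling function built from h1 in (E.1):
# H1(phi) = (1/T) * int_0^T Z_v(s) * (v_gamma(s + phi) - v_gamma(s)) ds.
def H1(phi, Z_v, v_gamma, T, n=4000):
    s = np.linspace(0.0, T, n, endpoint=False)
    return np.mean(Z_v(s) * (v_gamma((s + phi) % T) - v_gamma(s)))
```

The remaining functions \(h_{2},\dots,h_{6}\) are averaged in the same way.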
2305.17497
FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing
Textual scene graph parsing has become increasingly important in various vision-language applications, including image caption evaluation and image retrieval. However, existing scene graph parsers that convert image captions into scene graphs often suffer from two types of errors. First, the generated scene graphs fail to capture the true semantics of the captions or the corresponding images, resulting in a lack of faithfulness. Second, the generated scene graphs have high inconsistency, with the same semantics represented by different annotations. To address these challenges, we propose a novel dataset, which involves re-annotating the captions in Visual Genome (VG) using a new intermediate representation called FACTUAL-MR. FACTUAL-MR can be directly converted into faithful and consistent scene graph annotations. Our experimental results clearly demonstrate that the parser trained on our dataset outperforms existing approaches in terms of faithfulness and consistency. This improvement leads to a significant performance boost in both image caption evaluation and zero-shot image retrieval tasks. Furthermore, we introduce a novel metric for measuring scene graph similarity, which, when combined with the improved scene graph parser, achieves state-of-the-art (SOTA) results on multiple benchmark datasets for the aforementioned tasks. The code and dataset are available at https://github.com/zhuang-li/FACTUAL .
Zhuang Li, Yuyang Chai, Terry Yue Zhuo, Lizhen Qu, Gholamreza Haffari, Fei Li, Donghong Ji, Quan Hung Tran
2023-05-27T15:38:31Z
http://arxiv.org/abs/2305.17497v2
# FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing

###### Abstract

Textual scene graph parsing has become increasingly important in various vision-language applications, including image caption evaluation and image retrieval. However, existing scene graph parsers that convert image captions into scene graphs often suffer from two types of errors. First, the generated scene graphs fail to capture the true semantics of the captions or the corresponding images, resulting in a lack of faithfulness. Second, the generated scene graphs have high inconsistency, with the same semantics represented by different annotations. To address these challenges, we propose a novel dataset, which involves re-annotating the captions in Visual Genome (VG) using a new intermediate representation called FACTUAL-MR. FACTUAL-MR can be directly converted into faithful and consistent scene graph annotations. Our experimental results clearly demonstrate that the parser trained on our dataset outperforms existing approaches in terms of faithfulness and consistency. This improvement leads to a significant performance boost in both image caption evaluation and zero-shot image retrieval tasks. Furthermore, we introduce a novel metric for measuring scene graph similarity, which, when combined with the improved scene graph parser, achieves state-of-the-art (SOTA) results on multiple benchmark datasets for the aforementioned tasks. The code and dataset are available at [https://github.com/zhuang-li/FACTUAL](https://github.com/zhuang-li/FACTUAL).

## 1 Introduction

A scene graph is a representation that describes the contents of a visual scene, including objects, their attributes, and the relationships between them. The grounding of a scene graph with an image or a text can provide significant benefits for various vision-language tasks, such as image caption evaluation Anderson et al. (2016) and image retrieval Johnson et al. (2015). Therefore, transduction of image descriptions into scene graphs through textual scene graph parsing has been a crucial vision-language research area. Accurately generating scene graphs that capture intersected information from images and their corresponding descriptions is essential for a successful textual parser. However, current baseline parsers often generate _unfaithful_ scene graphs that fail to represent the complete intersected information or to generate semantically correct graphs, as shown in Figure 1. Furthermore, _inconsistencies_ exist in the outputs of scene graph parsers, as depicted in the same figure, where "tennis" is interpreted as an attribute in one graph and as a part of an object in another graph. Such inconsistencies can severely impact downstream tasks of textual scene graph parsers, especially when they produce different graphs for a semantic unit, such as a phrase, across various captions, even though the units carry the same semantic meaning. Upon inspection, we hypothesize that the issues of unfaithfulness and inconsistency arise due to the inherent shortcomings of scene graph parsing algorithms and limitations within the datasets. One widely utilized parser, SPICE-Parser Anderson et al. (2016), is known for converting caption dependency graphs into scene graphs using predefined rules, which can result in error propagation. Furthermore, the dependency graphs may not adequately capture the semantic characteristics of scene graphs, as dependency graphs primarily focus on syntactical relationships.
Additionally, the limitations of the datasets contribute to the problems as well. As demonstrated in Figure 1, the largest scene graph dataset, VG Krishna et al. (2017), includes notable annotation issues regarding faithfulness and inconsistency. To address the aforementioned issues, we create a high-quality scene graph dataset for training parsers. We firmly believe that the problems of unfaithfulness and inconsistency within the dataset can be effectively resolved by incorporating two key measures: i) employing rigorous definitions for the literals and ii) implementing strict quality control during the annotation process. Therefore, we propose a novel intermediate meaning representation (MR) coined FACTUAL-MR, which ensures **FA**ithful and **C**onsistent tex**TUAL** scene graph parsing. FACTUAL-MR is a semantic representation that can be deterministically mapped to the scene graph, thereby avoiding the issues that arise from converting syntactical graphs into scene graphs. The annotation of FACTUAL-MRs can be divided into manageable sub-tasks, allowing us to easily control the quality of annotations in each sub-task and ensure their faithfulness. Furthermore, the literals within the FACTUAL-MRs are precisely defined to ensure consistency in textual scene graph parsing annotations. As a result, we re-annotate captions sampled from the VG dataset with FACTUAL-MRs, enabling us to leverage the existing scene graph annotations from VG. Additionally, in order to further enhance the advantages provided by scene graph parsing for its downstream tasks, we propose a simple yet effective metric called SoftSPICE. This metric calculates graph similarity and significantly improves the performance of vision-language tasks that leverage scene graphs. Overall, the key contributions are as follows:

* We propose a novel intermediate representation, FACTUAL-MR, which can be easily annotated and converted into scene graphs. The annotation process of FACTUAL-MR could ensure the faithfulness and consistency of the scene graphs converted from FACTUAL-MR.
* We construct a large-scale benchmark, FACTUAL, consisting of 40,369 parallel examples. We conduct thorough intrinsic and extrinsic evaluations to demonstrate that FACTUAL significantly improves the performance of textual scene graph parsing.
* We propose a simple graph similarity metric, SoftSPICE, that achieves new SOTA results in image caption evaluation and zero-shot image retrieval tasks, when combined with a scene graph parser trained with FACTUAL.

## 2 Related Work

Grounding a scene graph with an image or image description can be beneficial for a variety of downstream tasks, such as image retrieval (Andrews et al., 2019; Johnson et al., 2015), image caption evaluation (Anderson et al., 2016) and image captioning (Zhong et al., 2020). Currently, there are three main research directions in scene graph parsing: those that focus on parsing images (Zellers et al., 2018; Tang et al., 2020; Xu et al., 2017; Zhang et al., 2019; Cong et al., 2022; Li et al., 2022), text (Anderson et al., 2016; Schuster et al., 2015; Wang et al., 2018; Choi et al., 2022; Andrews et al., 2019; Sharifzadeh et al., 2022), or both modalities (Zhong et al., 2021; Sharifzadeh et al., 2022) into scene graphs. Parsing images involves utilizing an object detection model to identify the location and class of objects, as well as classifiers to determine the relationships and attributes of the objects.
Textual scene graph parsing employs techniques such as the Sequence-to-Sequence model (Sutskever et al., 2014) to parse image descriptions into linearized scene graphs (Sharifzadeh et al., 2022) or generate intermediate representations, such as dependency graphs or Abstract Meaning Representation (AMR) (Banarescu et al., 2013), which are then mapped into scene graphs using deterministic rules or machine learning models.

Figure 1: The intermediate representations and scene graphs produced by various parsers are compared with the ORACLE annotations when provided with an image and a caption.

However, directly utilizing intermediate representations like dependency graphs or AMR often leads to subpar performance in downstream tasks, as emphasized by Anderson et al. (2016), and may even be infeasible for multi-modal tasks requiring annotations for both modalities, given that the intermediate representations only annotate the text. Recent studies in parsing both modalities Zhong et al. (2021); Sharifzadeh et al. (2022) have primarily utilized textual parsing models to enhance the performance of visual scene graph parsing. Our work primarily focuses on textual scene graph parsing.

## 3 Textual Scene Graph Parsing

A scene graph, as introduced by Johnson et al. (2015), is a formal representation of the objects, their attributes, and the relationships between objects in a visual scene. Given a set of object classes \(C\), a set of attribute types \(A\), and a set of predicate types \(R\), a scene graph \(G\) is defined as a tuple \((O,E)\), where \(O=\{o_{1},...,o_{n}\}\) is a set of objects and \(E\subseteq O\times R\times O\) is the set of edges connecting the objects. Each object \(o_{i}=\{c_{i},a_{i}\}\) is associated with an object class \(c_{i}\in C\) and an attribute \(a_{i}\in A\). As depicted in Figure 1, our work linearizes a scene graph into a simplified format. In this format, each fact is represented either as \((Object,Has\_attribute,Attribute)\) or as \((Object_{sub},Predicate,Object_{obj})\), which is consistent with the format of the linearized scene graphs outlined in Choi et al. (2022); Sharifzadeh et al. (2022). Therefore, textual scene graph parsing aims to learn a mapping \(\pi_{\theta}:\mathcal{X}\rightarrow\mathcal{G}\), which translates a textual image description \(X\in\mathcal{X}\) into a scene graph \(G\in\mathcal{G}\).

### Challenges

Unfaithfulness. The scene graph faithfulness is determined by its completeness and correctness. Completeness is defined as the extent to which the graph conveys the complete semantic meaning of the intersected information from both the caption and the image. For example, Figure 1 demonstrates that the output of VG-T5 Sharifzadeh et al. (2022) lacks the facts _(tennis player, hold, tennis racket)_ and _(tennis balls, rest on, tennis racket)_, indicating an incomplete graph. This incompleteness issue of parsing outputs can be caused by the noisy training set from VG, which was generated without rigorous quality validation. The other datasets derived from VG also suffer from annotation noise. The customized dependency (CDP) dataset Wang et al. (2018) transforms VG scene graphs (VG-SGs) into customized dependency graphs by aligning phrases of objects, attributes, and relations in VG-SGs with corresponding phrases in captions. Consequently, the dependency graphs can be mapped back to scene graphs, referred to as CDP-SGs.
Although this approach claims to enhance scene graph parsing performance by framing it as a dependency parsing problem, it results in the loss of additional information due to semantic misalignments between VG-SGs and the captions. As highlighted in Table 1, CDP-SGs have more serious completeness issues. Correctness refers to the semantic accuracy of the graph with respect to the intersected information from the caption and the image. The annotation errors of VG contribute significantly to the correctness issues. As in Figure 1, the presence of the predicate "rest balls on ten" highlights a significant annotation mistake. Dependency-based parsing methods, such as SPICE-Parser, produce graphs that lack correctness primarily due to error propagation. As shown in Figure 1, the term "rest" is incorrectly considered an attribute of "racket" due to the parsing errors from the Stanford dependency parser Manning et al. (2014). Another issue with dependency-based methods is that they focus on capturing syntactic relationships among words rather than semantic relationships among objects. Phrases such as "without leaves" or "without a shirt" indicate the absence of objects like "leaves" or "shirt" in the scene, but dependency-based methods still interpret them as objects.

Inconsistency. The inconsistency in the dataset is primarily the result of linguistic variations. The object, attribute, and relations are all extracted from texts, but the same semantics can be expressed in multiple ways. For instance, _(tennis player, hold, tennis racket)_ and _(tennis racket, held by, tennis player)_ are semantically equivalent, even though the orders of the subjects and objects differ. Different understanding of the tasks among crowd workers is also a serious issue. Some may consider "stone wall" as a composite object, while others may consider "stone" as an attribute and "wall" as an object. To measure the consistency of the annotations, we have calculated diversity metrics for the objects, attributes, and predicates within a set of examples encompassing various types of annotations. We assume that the diversity scores indicate the annotations' consistency. As in Table 1, the results of the three diversity metrics indicate that the annotations in the VG and CDP datasets have a higher degree of diversity regarding their objects, attributes, and predicates than the ones in the FACTUAL dataset.

\begin{table} \begin{tabular}{|c|c c|c c c|} \hline & \multicolumn{2}{c|}{Faithfulness \(\uparrow\)} & \multicolumn{3}{c|}{Consistency \(\downarrow\)} \\ & Completeness & Correctness & Yules I & TTR & MTLD \\ \hline \hline VG-SG & 37\% & 29\% & 2.80 & 12.98 & 15.17 \\ CDP-SG & 25\% & 73\% & 5.13 & 18.69 & 27.89 \\ FACTUAL-SG & **90\%** & **78\%** & **2.37** & **12.59** & **15.02** \\ \hline \end{tabular} \end{table} Table 1: Faithfulness and consistency evaluation of the ORACLE scene graph annotations in VG, CDP, and FACTUAL. We analyze 100 scene graphs extracted from the VG, CDP, and FACTUAL datasets. Our assessment includes measuring the rates of completeness and correctness for these scene graphs. Furthermore, we conduct a comprehensive evaluation of the entire corresponding datasets, utilizing a set of consistency metrics. Please refer to **Evaluation Metrics** of Sec. 5.1 for more details about the evaluation metrics.

## 4 FACTUAL

### Meaning Representation

We propose a novel intermediate _semantic_ representation, FACTUAL-MR, in which elements are clearly defined to eliminate confusion among annotators.
The task of annotating captions and their associated images with FACTUAL-MRs can be broken down into manageable sub-tasks, and each FACTUAL-MR can be deterministically mapped into a conventional scene graph, enabling the utilization of FACTUAL parser outputs in a wide range of multi-modal applications that rely on scene graphs. Specifically, the template of each fact in FACTUAL-MR is presented in one of two formats: \(\{Object,Attribute\}\) or \(\{Quantifier_{sub},Object_{sub},Verb,Preposition,\)\(Quantifier_{obj},\)\(Object_{obj}\}\).

Object. An object in a scene graph is essentially defined as a grouping of concepts. This results from the widely accepted notion in vision tasks that an image object typically encompasses a collection of homogeneous concepts within a bounding box [16]. Therefore, a common source of inconsistency in VG-SG is the various methods used to represent the quantity of objects. This can be attributed to the varying understandings of tasks among annotators. For example, as depicted in Figure 1, three trees may be represented as a single collective object contained within a large bounding box on an image, with the attribute of "three" _(trees, has_attribute, three)_, or as three distinct objects of _tree_ distributed throughout three facts in the visual scene. These different representations of object quantity can lead to inconsistencies. To address this, we propose defining each object in FACTUAL-MR as a grouping of collective concepts. To differentiate between two collective objects with identical names, unique suffix identifiers are utilized. For instance, the phrase "men watch men" would be represented as _(men, watch, men:1)_.

Attribute. The attribute definition in FACTUAL-MR is similar to the original scene graph, with one notable distinction. In FACTUAL-MR, attributes are used to describe all individual concepts within each collective object. For example, in the case of _(3, tennis balls, has_attribute, white)_, it implies that all the tennis balls are white.

Quantifier. The quantifier indicates the quantity of concepts within a collective object if the quantity is explicitly mentioned in the text. Additionally, a quantifier modifier may be used to specify the unit of measurement when explicit quantifier modifiers are present in the text. For instance, the phrase "both men" is expressed as "_2, men_" while "both groups of men" would be represented as "_2g, men_" and "both pairs of" as "_2p_". To avoid annotation inconsistencies, a limited set of pre-defined modifiers is provided. In cases where the quantity of objects cannot be expressed by the predefined set, two special quantities, "_many_" and "_unaccountable_", are offered as placeholders for annotators.

Verb. In the verb annotation, an indicator, "\(p\):", is used as a prefix to the verb to indicate whether it is in a passive voice.

### Connection to Scene Graph

To map a FACTUAL-MR into the original scene graph, we first combine the verb and prepositions into a predicate. The voice of the verb is altered based on whether it is passive or active. However, as the object in our annotation is collective, a collective-distributive ambiguity is present in the sentence, as also highlighted by Schuster et al. (2015). For instance, given an image describing "three men reading books", we can know which man is reading which book according to the image, while in the image caption, the information is insufficient to determine this. Previous approaches, such as SPICE Anderson et al. (2016) and Stanford Schuster et al.
(2015) parsers address this issue using heuristic rules. The SPICE-Parser considers all relations between two collective objects as collective, leading to the phrase being expressed as _(men, reading, books), (men, has_attribute, 3)_. However, this annotation type is not commonly used as annotators tend to annotate relations distributedly in the VG-SG annotations. Another option, adopted by the Stanford parser, is to consider all these cases as distributive behaviours, resulting in the phrase being expressed as "_(man, reading, book), (man:1, reading, book), (man:2, reading, book)_". This may also be incorrect, as three men might read two books. Therefore, in such cases, we improve this heuristic by utilizing our annotated quantifiers. We annotate the implicit quantifiers for the "books" according to the image content. If FACTUAL-MR annotates the number of books as three, we know that each man is distributedly reading one book. Otherwise, they are collectively engaging in the activity.

### Annotation

Our annotation process consists of two stages. In the first stage, we carefully selected approximately 44,000 captions, with each caption aligned to a distinct image, to ensure diversity in our FACTUAL dataset derived from the VG dataset. We hired 25 annotators with diverse backgrounds, either through Amazon Mechanical Turk Paolacci et al. (2010) or from local undergraduate students, and provided them with one-hour training sessions to ensure consistent annotation practices. Throughout the annotation process, both the images and captions were presented to the annotators to ensure the faithfulness of the annotations to both modalities. Each annotator was reimbursed at a rate of 0.25 USD per task. In the second stage, three expert annotators with a high level of agreement in their annotations performed post-processing and verification steps to ensure the quality of the data. After undergoing the quality check, we retained 40,369 examples in the dataset.

Object and Attribute. The annotation process for objects and attributes involved extracting information from the captions to ensure faithfulness to the text while utilizing the image to resolve any linguistic ambiguities. For example, in the caption "the picture depicts a car", it is unclear whether the image includes an object labelled as "picture" or if the caption is referring to the image itself as a "picture" without the context of the image. Furthermore, during the training, the annotators were also instructed to extract the objects for the co-references, such as the pronoun "it" mentioned in the captions.

Quantifier. Regarding quantifiers, the annotators could only select from the pre-determined sets of quantities and quantity modifiers. If an exact match of a modifier was not found, the annotators were instructed to choose the modifier with the equivalent semantic meaning to the modifier in the text. In most cases, only the quantity was annotated when the number of objects was explicitly mentioned. However, exceptions were made for cases involving collective-distributive ambiguity, requiring the annotations of implicit quantities.

Verb and Preposition. To ensure consistency in the predicate annotations, the annotators were instructed to select from a pre-determined set of predicates rather than writing them on their own. However, the predicates in the VG dataset were not mutually exclusive in semantics.
Therefore, we implemented a process of partitioning them into 1000 clusters using K-means, followed by manually selecting around 2000 predicates by observing the clusters. Despite this pruning, the large number of remaining predicates still posed a challenge for annotators to make selections. Therefore, the predicates1 were further decomposed into around 400 verbs and 100 prepositions. For each selection slot, verbs and prepositions were ranked using an information retrieval method, and the annotators were asked to select from the 20 most probable candidates. Annotators were specifically instructed to annotate verbs in the active voice whenever possible. For example, if both active and passive voices were possible for annotation, as seen in the phrases "blanket covering cup" and "cup covered with a blanket", both should be annotated as _(blanket, cover, cup)_. However, in cases where only the passive voice construction was syntactically and semantically valid, such as in the example "cup filled with water," it should be annotated as _(cup, p:fill, with, water)_ since _(water, fill, cup)_ would not be appropriate.

**Post-processing and Verification.** In the second stage, three expert annotators conducted a thorough examination of all cases to verify and rectify annotation errors. Particular attention was paid to identifying and correcting any incorrect annotations related to passive and active voice, as well as quantifiers and their modifiers. Furthermore, in cases where captions did not include specific name phrases for objects but only pronouns, those pronouns were converted into object names. For example, in the sentence "he is walking" where "he" was annotated as an object, it was resolved to "man." Additionally, any annotations that were entirely unrelated to the text and images were discarded.

### Statistical Analysis of Dataset

We present a statistical overview of the FACTUAL dataset, which comprises 40,369 distinct captions and includes over 4,000 unique object labels with a total occurrence of 116,712. On average, each object label appears approximately 28 times throughout the dataset. Notably, prepositions occur more frequently compared to verbs, although there are four times as many distinct verb labels compared to the number of distinct prepositions. Furthermore, each fact within the dataset tends to be unique within a single caption, with an average occurrence of fewer than two times. Upon analyzing the scene level, we find that, on average, at least two distinct objects are present in each scene. However, there are much fewer distinct verbs, prepositions, and attributes. It is worth highlighting that quantifiers play a relatively minor role in the dataset, as most collective objects described in the image captions consist of only one individual object.

## 5 Experiments

We evaluate the effectiveness of our new scene graph benchmark through one intrinsic evaluation and two extrinsic evaluation tasks.

### Textual Scene Graph Parsing

Task Setting. Following Schuster et al. (2015); Wang et al. (2018); Choi et al. (2022), we construct scene graph parsers to translate textual descriptions of image regions into scene graphs, which are then compared against their respective ground truth scene graphs.

Datasets. In terms of datasets, our evaluations are conducted on the VG Krishna et al. (2017), CDP Wang et al. (2018), and FACTUAL datasets. The VG dataset comprises 108,077 images and 5.4 million region captions.
The CDP dataset converts all scene graphs in VG into a customized dependency graph, which has a one-to-one mapping to the original scene graphs. We report the performance of the parsers on two data splits for each dataset representation. For the FACTUAL dataset, we consider a random split (Random), which includes 37,861 training, 1,000 validation, and 1,508 test examples. Additionally, we also evaluate a more challenging split (Length) to assess the parsers' compositional generalization abilities. The benchmark test set for this split comprises 1,053 examples. The caption of each example includes more than ten caption tokens and three facts in the corresponding scene graphs. The remaining examples are split into 38,316 training and 1,000 validation examples. The test examples for VG and CDP consist of captions from the Random and Length splits of FACTUAL, while the remaining examples are divided into a validation set of 1,000 and a training set of over 2 million.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Object & Verb & Prep. & Predicate & Att. & Quantifier & Fact \\ \hline \hline \# labels & 4,042 & 412 & 107 & 1,624 & 2,085 & 13 & 14,049 \\ \# occ. & 116,712 & 25,354 & 32,577 & 70,769 & 22,832 & 2,308 & 27,160 \\ \# occ. per label & 28.87 & 61.54 & 304.46 & 43.59 & 10.95 & 192.33 & 1.77 \\ \hline \# labels per Scene & 2.12 & 0.60 & 0.78 & 1.09 & 0.56 & 0.05 & 1.77 \\ \# occ. per Scene & 2.09 & 0.63 & 0.80 & 1.75 & 0.57 & 0.06 & 1.76 \\ \hline \end{tabular} \end{table} Table 2: The statistics about the number of distinct labels and occurrence (occ.) of the various elements in the 40,369 FACTUAL-MRs. For simplicity, we omit their suffixes when calculating the occurrence of quantifiers.

Baselines. In this study, we evaluated the performance of five parsers: **SPICE-Parser** Anderson et al. (2016), **AMR-SG-T5** Choi et al. (2022), **CDP-T5** Choi et al. (2022), **VG-T5** Sharifzadeh et al. (2022), and **FACTUAL-T5**. SPICE utilizes a set of rules to convert dependency graphs of captions into scene graphs. AMR-SG-T5 converts captions into AMRs through the use of AMR-BART Bai et al. (2022), and subsequently converts the AMRs into the CDP-SG format by using a T5 Raffel et al. (2020) model. CDP-T5 directly converts captions into CDP-SGs without the intermediate steps. In contrast to the original CDP-to-SG parser Wang et al. (2018), which relies on an intermediate representation, CDP-T5 demonstrates significantly better performance Choi et al. (2022). VG-T5, trained on the VG dataset, parses captions into VG-SGs. FACTUAL-T5 parses captions into FACTUAL-SGs and maps them into scene graphs in a collective way. FACTUAL-T5 (pre) was first pre-trained on the VG dataset and then fine-tuned on FACTUAL. As different datasets use different annotations, SPICE2, AMR-SG-T5 and CDP-T5 are evaluated against the ground truth of the CDP dataset, while VG-T5 and FACTUAL-T5 are evaluated against the ground truth VG-SGs and FACTUAL-SGs.

Footnote 2: It is worth noting that SPICE-Parser utilizes a dependency parser trained on a general domain instead of on the CDP dataset. However, it is also based on a dependency parser, and thus we compare its output scene graphs with the ground truth CDP-SGs.

Evaluation. Following Schuster et al. (2015); Wang et al. (2018); Choi et al. (2022), we evaluate scene graph parsers utilizing the SPICE metric Anderson et al. (2016). The SPICE F-score measures the similarity between the candidate and ground truth graph representations extracted from captions by the parsers.
In addition, we also employ the Exact Set Match metric Yu et al. (2019), which assesses the accuracy of the parsers by determining whether the strings of the parsed facts match the ground truth facts while disregarding the order of the facts. During the evaluation, all intermediate representations are converted into scene graphs. We also evaluate the faithfulness and consistency of parser outputs by human evaluation and automatic lexical diversity metrics, respectively. Specifically, three students manually examine the rates of correctness and completeness of the parsing outputs, and we report the average scores. We employ Yules I Yule (2014), TTR Templin (1957), and MTLD Koehn (2005) to evaluate the lexical diversity of objects, attributes, and predicates, which indicates the consistency of the output scene graphs.

second-best parser among the parsers evaluated. Table 4 also illustrates that our parser performs much better than the other baselines in terms of faithfulness while ranking second in terms of consistency. Interestingly, the VG-T5 model exhibits the best performance in consistency. However, its ORACLE annotations are more inconsistent than ours. Our analysis reveals that VG-T5 prioritizes predicting scene graphs with simple lexicons and discards more complex patterns, resulting in its strong performance in consistency but much weaker performance in faithfulness metrics.

### Image Caption Evaluation

Task Setting. To assess the quality of the model-generated captions with respect to a set of reference captions and an image, we adopt the SPICE and SoftSPICE metrics to calculate a graph similarity between graphs extracted from the candidate and reference captions. As these metrics are based on the parser outputs, a _better_ parser will result in scores that more closely align with human judgment.

Evaluation. Following Hessel et al. (2021), we employ two evaluation settings. The first setting involves calculating the correlation of the scores with human judgment utilizing Kendall's \(\tau\) and Pearson correlation on the Flickr8K dataset Hodosh et al. (2013). The Flickr8K dataset includes 17k "expert" human judgments for 5664 images, with each caption being rated on a scale of 1 to 4 against five reference captions. In the second setting, we utilize one (1-ref) or four (4-ref) reference captions sourced from the FOIL dataset Shekhar et al. (2017). This dataset consists of 32k pairs of true captions and their corresponding corrupted versions, where a single word is replaced with an incorrect one. The objective is to assess the accuracy of each image caption evaluation metric in identifying and assigning higher scores to the uncorrupted captions. This setting aims to evaluate the metric's ability to detect instances of sentence hallucination effectively.

SoftSPICE. SPICE calculates the similarity between two graphs by matching strings of sub-components within the graphs. These sub-components include _objects_, tuples _{object, attribute}_ and triples _{object, predicate, object}_. To improve SPICE, we propose an alternative method that utilizes embedding-based techniques to calculate string similarity. This approach involves decomposing each graph into the aforementioned sub-components and encoding the text of each component using Sentence-BERT Reimers and Gurevych (2019).
The resulting similarity score, coined SoftSPICE, is as follows: \[\phi_{s}(G_{c},G_{r})=\frac{1}{|\mathcal{V}_{c}|}\sum_{\mathbf{e}_{c}\in\mathcal{V}_{c}}\max_{\mathbf{e}_{r}\in\mathcal{V}_{r}}(cos(\mathbf{e}_{c},\mathbf{e}_{r})) \tag{1}\] where \(\mathbf{e}\) denotes the embedding of each component, and \(\mathcal{V}_{c}\) and \(\mathcal{V}_{r}\) denote the sets of embeddings encoding components within the candidate and reference graphs, respectively. Additionally, we can also use the image \(I\) to compute a **SoftSPICE(img)** score, denoted as \(\phi_{i}(G_{c},I)\). This score is computed by combining the embeddings of the graph components and the image: \[\phi_{i}^{\prime}(G_{c},I)=\frac{1}{|\mathcal{V}_{c}|}\sum_{\mathbf{e}_{c}\in\mathcal{V}_{c}}cos(\mathbf{e}_{c},\mathbf{e}_{I}) \tag{2}\] \[\phi_{i}(G_{c},I)=\frac{2\cdot\phi_{s}(G_{c},G_{r})\cdot\phi_{i}^{\prime}(G_{c},I)}{\phi_{s}(G_{c},G_{r})+\phi_{i}^{\prime}(G_{c},I)} \tag{3}\] where \(e_{c}\) and \(e_{I}\) are obtained by encoding the sub-components and the images with CLIP.

Discussion. Table 5 illustrates that FACTUAL-T5 demonstrates improvement over other parsers in terms of enhancing the correlation of SPICE and SoftSPICE scores with human judgments. However, when using SPICE to detect hallucinated instances, our parser performs comparably to the SPICE-Parser. We attribute this to the fact that approximately one-third of the pairs will have tied SPICE scores due to the use of exact string matching. On the other hand, when using the embedding-based metric, SoftSPICE, the superiority of our parser on FOIL is revealed. Currently, SPICE with the SPICE-Parser has been a common standard in image caption evaluation settings. We are confident that our parser can be a suitable replacement for SPICE-Parser. We also compare SoftSPICE with current SOTA image evaluation metrics, namely BERTScore Zhang et al. (2019), CLIPScore, and RefCLIPScore. These metrics calculate the similarity between the embeddings of the candidate caption with the embeddings of the reference captions, the image, and both reference captions and images, respectively.

\begin{table} \begin{tabular}{c c|c c|c c} \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Parser} & \multicolumn{2}{c|}{Flickr8K} & FOIL (1-ref) & FOIL (4-ref) \\ & & \(\tau\) \(\uparrow\) & \(\rho\) \(\uparrow\) & \(Acc\) \(\uparrow\) & \(Acc\) \(\uparrow\) \\ \hline \hline \multirow{4}{*}{SPICE} & SPICE-Parser & 44.77 & 60.11 & 76.31 & **87.02** \\ & CDP-T5 & 33.50 & 49.05 & 65.66 & 72.76 \\ & VG-T5 & 37.18 & 51.94 & 68.43 & 76.12 \\ & **FACTUAL-T5(pre)** & **45.12** & **60.78** & **76.49** & 86.88 \\ \hline \multirow{4}{*}{SoftSPICE} & SPICE-Parser & 51.89 & 68.11 & 78.53 & 86.77 \\ & CDP-T5 & 45.54 & 59.64 & 53.58 & 59.49 \\ & VG-T5 & 39.66 & 53.05 & 70.80 & 76.77 \\ & **FACTUAL-T5(pre)** & **53.35** & **69.52** & **85.66** & **91.61** \\ \hline \end{tabular} \end{table} Table 5: (Left) The correlation scores between SPICE or SoftSPICE and human judgment. (Right) The accuracies of the metrics w.r.t. detecting the hallucinated sentences.

As in Table 6, SoftSPICE performs comparably with all the SOTA methods _when there are over four reference captions_, and with the inclusion of image information, SoftSPICE(img) can even outperform the SOTA results on Flickr8K. We also observed that the scene graph feature could be a useful supplement to caption-level features.
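A minimal sketch of Eq. (1), assuming each graph has already been decomposed into strings for its objects, attribute tuples, and relation triples, and using an illustrative Sentence-BERT checkpoint (the specific model name is an assumption, not necessarily the paper's choice):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative SBERT checkpoint

def soft_spice(cand_components, ref_components):
    """Components are strings, e.g. 'tennis player' or
    'tennis player hold tennis racket'."""
    E_c = model.encode(cand_components, normalize_embeddings=True)
    E_r = model.encode(ref_components, normalize_embeddings=True)
    sims = E_c @ E_r.T              # cosine similarities (rows are unit-normalized)
    return float(sims.max(axis=1).mean())
```

SoftSPICE(img) follows analogously by replacing the reference embeddings with a CLIP image embedding and combining the two scores as in Eq. (3).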
By taking the harmonic mean of SoftSPICE(img) with BERTScore and RefCLIPScore, the performance of both metrics achieves new SOTA results.

### Zero-shot Image Retrieval

Task Setting. The goal of image retrieval is to identify and retrieve an image that precisely corresponds to a given textual query description. This is typically accomplished by allocating scores to images based on their relevance to the query and selecting the top \(k\) images. Following the setting from Johnson et al. (2015); Wang et al. (2018), we have selected 456 captions and their corresponding images from the Random and Length test sets, initially prepared for intrinsic evaluation. These captions serve as queries to retrieve their associated images, forming the basis for evaluating the performance of our image retrieval system. We proceed under the assumption that an oracle scene graph corresponding to each selected image is available. Furthermore, we introduce a '_Local_' setting, which provides access to the coordinates of a bounding box within each image that corresponds to each caption and the ground truth scene graph aligned with this bounding box region.

Evaluation. During the evaluation, the scene graph of the captions is generated using various baseline parsing methods. The 456 images are ranked according to the similarity scores computed using either SoftSPICE or CLIPScore between each image and the caption. Notably, the representation encoders employed in both similarity measurements are not fine-tuned on the in-domain dataset. The performance of the different methods is assessed using the Recall@k metric, which indicates the percentage of caption queries for which the top \(k\) retrieved images include the ground truth.

Discussion. As observed in Table 7, FACTUAL-T5 consistently outperforms the other baselines in zero-shot image retrieval tasks, highlighting the superiority of our dataset and parser. The performance of both SoftSPICE and CLIPScore is generally enhanced by incorporating the location information of the bounding boxes, depicting that more accurate information could boost image retrieval. Moreover, when combined with all available parsers, SoftSPICE demonstrates significantly superior performance compared to CLIPScore, emphasizing the substantial potential benefits of utilizing structured information for image retrieval.

## 6 Conclusion

We introduce a new intermediate representation, coined FACTUAL-MR, which aims to address the issues of faithfulness and consistency for textual scene graph parsers. By utilizing a rigorous annotation process, it is possible to create a large-scale dataset based on FACTUAL-MR. Our experiments demonstrate that FACTUAL-T5, trained on this dataset, is capable of generating consistent scene graphs that are highly faithful to corresponding images and captions. Utilizing a novel graph similarity metric, SoftSPICE, FACTUAL-T5 significantly improves performance in both image caption evaluation and zero-shot image retrieval.
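For completeness, the Recall@k computation used in the retrieval protocol above is a few lines, assuming a precomputed matrix of query-to-image similarity scores:

```python
import numpy as np

# scores[i, j]: similarity between caption query i and image j, where image i
# is the ground truth for query i; returns the fraction of queries whose
# ground-truth image appears among the k highest-scoring images.
def recall_at_k(scores, k):
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == np.arange(scores.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())
```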
\begin{table} \begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Metric} & \multicolumn{2}{c|}{Fickre8K} & \multicolumn{2}{c}{FOLL (-ref)} & \multicolumn{2}{c}{FOLL (-ref)} \\ & \(\tau_{\uparrow}\) & \(\rho\) \(\uparrow\) & \(Acc\) \(\uparrow\) & \(Acc\) \(\uparrow\) \\ \hline \hline \multicolumn{4}{l}{SoftSPICE} & 53.35 & 69.52 & 85.66 & 91.61 \\ SoftSPICE(img) & 54.85 & 70.55 & 88.12 & 92.31 \\ \hline \multicolumn{4}{l}{BERTTScore} & 36.71 & 49.81 & 86.70 & 90.49 \\ \multicolumn{4}{l}{BERTTScore + SoftSPICE(img)} & 51.08 & 65.80 & 90.50 & **94.64** \\ \hline \multicolumn{4}{l}{CLIPScore} & 51.44 & 64.86 & 86.85 & 86.85 \\ \multicolumn{4}{l}{ReCLIPScore} & 53.00 & 67.67 & **90.94** & 92.40 \\ \multicolumn{4}{l}{ReCLIPScore + SoftSPICE(img)} & **57.37** & **73.40** & 90.69 & 94.01 \\ \hline \end{tabular} \end{table} Table 6: The results comparing SoftSPICE with current SOTA image caption evaluation metrics. We use FACTUAL-T5 as the parser for SoftSPICE. \begin{table} \begin{tabular}{c c c|c|c|c} \hline \multirow{2}{*}{Local} & \multirow{2}{*}{Method} & \multirow{2}{*}{Parser} & \multicolumn{2}{c|}{Random} & \multicolumn{1}{c}{Length} \\ & & R@1 & R@5 & R@1 & R@5 \\ \hline \hline \multirow{4}{*}{Local.} & \multirow{2}{*}{SoftSPICE} & SPICE-Parser & 67.76 & 84.87 & 67.54 & 81.80 \\ & & P-T5 & 72.59 & 88.16 & 62.28 & 80.70 \\ & & VG-T5 & 49.56 & 88.36 & 58.77 & 74.34 \\ & \multirow{2}{*}{FLY} & **73.90** & **93.22** & **75** & **87.06** \\ & & N/A & 31.58 & 87.57 & 45.61 & 66.01 \\ \hline \hline \multirow{4}{*}{No Local.} & \multirow{2}{*}{SoftSPICE} & SPICE-Parser & 47.81 & 71.05 & 57.01 & 78.07 \\ & & CDP-T5 & 57.02 & 76.54 & 51.54 & 71.27 \\ \cline{1-1} & & VG-T5 & 38.38 & 58.11 & 51.54 & 70.61 \\ \cline{1-1} & \multirow{2}{*}{FLY} & FACTUAL-T5 & **64.45** & **33.99** & **66.42** & **85.53** \\ \cline{1-1} & & N/A & 23.02 & 47.37 & 34.65 & 55.26 \\ \hline \end{tabular} \end{table} Table 7: Zero-shot image retrieval evaluation on two sets of image-caption pairs that utilize localization or do not use localization information during image retrieval. ## 7 Limitations Despite the significant advancements made by the proposed FACTUAL-MR representation in addressing the limitations of current scene graph parsing datasets, there remain several areas for future research. First, FACTUAL-MR currently relies on heuristic rules to resolve the collective-distributive ambiguity as introduced in Section 4.2. However, the limitations still remain due to the ambiguity of language. To obtain a perfect parser, rich-world knowledge from multi-modalities or textual context (Li et al., 2020) is required, which is left as our future work. Second, there is currently no explicit alignment between objects represented within FACTUAL-MR and the corresponding bounding boxes in the image. To fully utilize multi-modal information, collecting such alignments may be necessary. Third, the proposed method utilizes ORACLE scene graphs of the image, however, in practical applications, extracting a scene graph from an image remains a challenging problem. Further research is required to determine if utilizing a visual scene graph parsing model to extract scene graphs from images would negatively impact image retrieval performance. Lastly, our current approach utilizes a large pretrained language model to train the parser. However, the issue of robustness in parsers (Huang et al., 2021; Zhuo et al., 2023) has always been a significant concern. The captions in the VG dataset mainly consist of short sentences with simple patterns. 
It remains unclear whether the parser is robust enough to handle sentences with more complex linguistic variations, which calls for further investigation. ## Acknowledgments We would like to express our gratitude to Weib Shi for his valuable assistance in conducting our human evaluation. We also extend our appreciation to Adobe Inc. for their generous funding support in data collection. Additionally, we would like to thank Wuhan University for their valuable assistance in identifying students to assist with data annotation.
2308.07678
The infimum values of the probability functions for some infinitely divisible distributions motivated by Chvátal's theorem
Let $B(n,p)$ denote a binomial random variable with parameters $n$ and $p$. Chv\'{a}tal's theorem says that for any fixed $n\geq 2$, as $m$ ranges over $\{0,\ldots,n\}$, the probability $q_m:=P(B(n,m/n)\leq m)$ is the smallest when $m$ is closest to $\frac{2n}{3}$. Motivated by this theorem, in this paper we consider the infimum value of the probability $P(X\leq \kappa E[X])$, where $\kappa$ is a positive real number, and $X$ is a random variable whose distribution belongs to some infinitely divisible distributions including the inverse Gaussian, log-normal, Gumbel and logistic distributions.
Ze-Chun Hu, Peng Lu, Qian-Qian Zhou, Xing-Wang Zhou
2023-08-15T10:10:28Z
http://arxiv.org/abs/2308.07678v2
The extreme values of the probability functions for some infinitely divisible distributions motivated by Chvátal's theorem ###### Abstract Let \(B(n,p)\) denote a binomial random variable with parameters \(n\) and \(p\). Chvátal's theorem says that for any fixed \(n\geq 2\), as \(m\) ranges over \(\{0,\ldots,n\}\), the probability \(q_{m}:=P(B(n,m/n)\leq m)\) is the smallest when \(m\) is closest to \(\frac{2n}{3}\). Motivated by this theorem, in this note we consider the minimum value of the probability \(P(X\leq\kappa E[X])\), where \(\kappa\) is a positive real number, and \(X\) is a random variable whose distribution belongs to some infinitely divisible distributions including the inverse Gaussian, log-normal, Gumbel and logistic distributions. _MSC:_ 60C05, 60E15 _Keywords:_ Chvátal's theorem, infinitely divisible distribution, inverse Gaussian distribution, log-normal distribution, Gumbel distribution, logistic distribution ## 1 Introduction Let \(B(n,p)\) denote a binomial random variable with parameters \(n\) and \(p\). Chvátal's theorem says that for any fixed \(n\geq 2\), as \(m\) ranges over \(\{0,\ldots,n\}\), the probability \(q_{m}:=P(B(n,m/n)\leq m)\) is the smallest when \(m\) is closest to \(\frac{2n}{3}\). Chvátal's theorem has applications in machine learning, such as the analysis of generalization bounds involving relative deviations and unbounded loss functions ([2] and [3]). As to the history of Chvátal's theorem, we refer to Barabesi et al. [1], Janson [5] and Sun [10]. Motivated by Chvátal's theorem, Li et al. [7] considered the minimum value problem for the probability that a random variable is not more than its expectation, when its distribution is the Poisson distribution, the geometric distribution or the Pascal distribution. Sun et al. [11] investigated the corresponding minimum value problem for the Gamma distribution, among other things. Li et al. [8] studied the minimum value problem for the Weibull distribution and the Pareto distribution. In this note, we consider the minimum value of the probability \(P(X\leq\kappa E[X])\), where \(\kappa\) is a positive real number, and \(X\) is a random variable whose distribution is the inverse Gaussian, log-normal, Gumbel or logistic distribution, treated in Sections 2, 3, 4 and 5, respectively. ## 2 Inverse Gaussian distribution Let \(X_{\mu,\lambda}\) be an inverse Gaussian random variable with parameters \(\mu\) and \(\lambda\) (\(\mu>0,\lambda>0\)). By [6, Chapter 27], we know that the density function of \(X_{\mu,\lambda}\) is given by \[f_{\mu,\lambda}(x)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right),\ \ x>0,\] and \(E[X_{\mu,\lambda}]=\mu\). Then, for any given real number \(\kappa>0\), by [6, Chapter 27], we know that \[P(X_{\mu,\lambda}\leq\kappa E[X_{\mu,\lambda}]) = \int_{0}^{\kappa\mu}\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right)dx\] \[= \Phi\left(\sqrt{\frac{\lambda}{\kappa\mu}}(\kappa-1)\right)+\exp\left(\frac{2\lambda}{\mu}\right)\Phi\left(-\sqrt{\frac{\lambda}{\kappa\mu}}(\kappa+1)\right).\] Hereafter, \(\Phi(\cdot)\) stands for the distribution function of the standard normal distribution. Denote \(x:=\sqrt{\frac{\lambda}{\mu}}\).
Then \[P(X_{\mu,\lambda}\leq\kappa E[X_{\mu,\lambda}])=\Phi\left(\sqrt{\frac{1}{ \kappa}}(\kappa-1)x\right)+e^{2x^{2}}\Phi\left(-\sqrt{\frac{1}{\kappa}}( \kappa+1)x\right).\] Define a function \[g_{\kappa}(x):=\Phi\left(\sqrt{\frac{1}{\kappa}}(\kappa-1)x\right)+e^{2x^{2}} \Phi\left(-\sqrt{\frac{1}{\kappa}}(\kappa+1)x\right),\ x>0. \tag{2.1}\] The main result of this section is **Theorem 2.1**: \((i)\) _If \(\kappa\leq 1\), then_ \[\inf_{x\in(0,\infty)}g_{\kappa}\left(x\right)=\lim_{x\rightarrow\infty}g_{ \kappa}(x)=\left\{\begin{array}{ll}0,&\mbox{if }\kappa<1,\\ \frac{1}{2},&\mbox{if }\kappa=1.\end{array}\right.\] \((ii)\) _If \(\kappa>1\), then_ \[\min_{x\in(0,\infty)}g_{\kappa}\left(x\right)=g_{\kappa}\left(x_{0}( \kappa)\right),\] _where \(x_{0}(\kappa)\) is the unique zero point of the function_ \[h_{\kappa}(x):=2\int_{\sqrt{\frac{1}{\kappa}}(\kappa+1)x}^{\infty}\exp\left(- \frac{1}{2}t^{2}\right)dt-\sqrt{\frac{1}{\kappa}}\frac{\exp(-\frac{\left( \kappa+1\right)^{2}}{2\kappa}x^{2})}{x},\ x\in(0,\infty). \tag{2.2}\] **Proof.** By taking the derivative of the function \(g_{\kappa}(x)\) defined in (2.1) and in virtue of (2.2), we have \[g_{\kappa}^{{}^{\prime}}(x) = \sqrt{\frac{1}{\kappa}}(\kappa-1)\Phi^{\prime}\left(\sqrt{\frac{ 1}{\kappa}}(\kappa-1)x\right)+4xe^{2x^{2}}\Phi\left(-\sqrt{\frac{1}{\kappa}}( \kappa+1)x\right) \tag{2.3}\] \[-\sqrt{\frac{1}{\kappa}}(\kappa+1)e^{2x^{2}}\Phi^{\prime}\left(- \sqrt{\frac{1}{\kappa}}(\kappa+1)x\right)\] \[= \frac{4xe^{2x^{2}}}{\sqrt{2\pi}}\int_{\sqrt{\frac{1}{\kappa}}( \kappa+1)x}^{\infty}\exp\left(-\frac{1}{2}t^{2}\right)dt-2\sqrt{\frac{1}{2 \pi\kappa}}\exp\left(-\frac{\left(\kappa-1\right)^{2}}{2\kappa}x^{2}\right)\] \[= \frac{2xe^{2x^{2}}}{\sqrt{2\pi}}\left[2\int_{\sqrt{\frac{1}{ \kappa}}(\kappa+1)x}^{\infty}\exp\left(-\frac{1}{2}t^{2}\right)dt-\sqrt{\frac {1}{\kappa}}\frac{\exp\left(-\frac{\left(\kappa+1\right)^{2}}{2\kappa}x^{2} \right)}{x}\right]\] \[= \frac{2xe^{2x^{2}}}{\sqrt{2\pi}}h_{\kappa}(x).\] By taking the derivative of the function \(h_{\kappa}(x)\), we get that \[h_{\kappa}^{{}^{\prime}}(x) = \sqrt{\frac{1}{\kappa}}\exp\left(-\frac{\left(\kappa+1\right)^{2 }}{2\kappa}x^{2}\right)\left(\frac{1}{\kappa}-\kappa+\frac{1}{x^{2}}\right) \tag{2.4}\] \[:= \sqrt{\frac{1}{\kappa}}\exp\left(-\frac{\left(\kappa+1\right)^{2 }}{2\kappa}x^{2}\right)\varphi_{\kappa}(x),\] where \(\varphi_{\kappa}(x)=\frac{1}{\kappa}-\kappa+\frac{1}{x^{2}}\). \((i)\) If \(\kappa\leq 1\), then \(\varphi_{\kappa}(x)\geq\frac{1}{x^{2}}>0\) for any \(x\in(0,\infty)\). Thus, we have \(h_{\kappa}^{{}^{\prime}}(x)>0,\ \ \forall x\in(0,\infty)\), which implies that the function \(h_{\kappa}(x)\) is strictly increasing on interval \((0,\infty)\). We have \[\lim_{x\rightarrow\infty}h_{\kappa}\left(x\right) = \lim_{x\rightarrow\infty}\left(2\int_{\sqrt{\frac{1}{\kappa}}( \kappa+1)x}^{\infty}\exp\left(-\frac{1}{2}t^{2}\right)dt-\sqrt{\frac{1}{\kappa }}\frac{\exp(-\frac{\left(\kappa+1\right)^{2}}{2\kappa}x^{2})}{x}\right)=0.\] Thus for any \(x\in(0,\infty)\), \(h_{\kappa}(x)<0\). 
Then by (2.3), we get \[g_{\kappa}^{{}^{\prime}}\left(x\right)<0,\ \forall x\in(0,\infty).\] Therefore, \(g_{\kappa}\left(x\right)\) is a strictly decreasing function on the interval \(\left(0,\infty\right)\) and thus \[\inf_{x\in\left(0,\infty\right)}g_{\kappa}(x)=\lim_{x\rightarrow\infty}g_{\kappa}(x)\] \[=\lim_{x\rightarrow\infty}\Phi\left(\sqrt{\frac{1}{\kappa}}(\kappa-1)x\right)+\lim_{x\rightarrow\infty}e^{2x^{2}}\frac{1}{\sqrt{2\pi}}\int_{\sqrt{\frac{1}{\kappa}}(\kappa+1)x}^{\infty}\exp\left(-\frac{t^{2}}{2}\right)dt\] \[=\lim_{x\rightarrow\infty}\Phi\left(\sqrt{\frac{1}{\kappa}}(\kappa-1)x\right)+\lim_{x\rightarrow\infty}\frac{1}{\sqrt{2\pi}}e^{2x^{2}}\int_{0}^{\infty}\exp\left(-\frac{\left(u+\sqrt{\frac{1}{\kappa}}(\kappa+1)x\right)^{2}}{2}\right)du\] \[=\lim_{x\rightarrow\infty}\Phi\left(\sqrt{\frac{1}{\kappa}}(\kappa-1)x\right)+\lim_{x\rightarrow\infty}\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}\exp\left(-\frac{1}{2}\left(u^{2}+\frac{2(\kappa+1)}{\sqrt{\kappa}}xu+\frac{(\kappa-1)^{2}}{\kappa}x^{2}\right)\right)du\] \[=\lim_{x\rightarrow\infty}\Phi\left(\sqrt{\frac{1}{\kappa}}(\kappa-1)x\right)\] \[=\left\{\begin{array}{ll}0,&\mbox{if }\kappa<1,\\ \frac{1}{2},&\mbox{if }\kappa=1.\end{array}\right.\] \((ii)\) If \(\kappa>1\), then by \(h_{\kappa}^{{}^{\prime}}(x)=0\), i.e., \(\varphi_{\kappa}(x)=\frac{1}{\kappa}-\kappa+\frac{1}{x^{2}}=0\), we get \(x=\sqrt{\frac{\kappa}{\kappa^{2}-1}}.\) Then, by (2.4), we know that \(h_{\kappa}^{{}^{\prime}}(x)>0\) for any \(x\in(0,\sqrt{\frac{\kappa}{\kappa^{2}-1}})\) and \(h_{\kappa}^{{}^{\prime}}(x)<0\) for any \(x\in(\sqrt{\frac{\kappa}{\kappa^{2}-1}},\infty)\). It follows that the function \(h_{\kappa}(x)\) is strictly increasing on \((0,\sqrt{\frac{\kappa}{\kappa^{2}-1}})\) and strictly decreasing on \((\sqrt{\frac{\kappa}{\kappa^{2}-1}},\infty)\). Furthermore, by the fact that \(\lim_{x\to 0+}h_{\kappa}(x)=-\infty<0\) and \(\lim_{x\rightarrow\infty}h_{\kappa}(x)=0\), we know that the continuous function \(h_{\kappa}(x)\) has a unique zero point \(x_{0}(\kappa)\in(0,\sqrt{\frac{\kappa}{\kappa^{2}-1}})\). It follows that \(h_{\kappa}(x)<0\) for any \(x\in(0,x_{0}(\kappa))\) and \(h_{\kappa}(x)>0\) for any \(x\in(x_{0}(\kappa),\infty)\). Hence, by (2.3), we get that \(g_{\kappa}^{{}^{\prime}}(x)<0\) on \((0,x_{0}(\kappa))\) and \(g_{\kappa}^{{}^{\prime}}(x)>0\) on \((x_{0}(\kappa),\infty)\). It follows that \[\min_{x\in(0,\infty)}g_{\kappa}(x)=g_{\kappa}(x_{0}(\kappa)).\] The proof is complete. **Remark 2.2**: _For any \(\kappa>1,\mu>0,\lambda>0\), it is easy to see that_ \[P(X_{\mu,\lambda}\leq\kappa E[X_{\mu,\lambda}])>P(X_{\mu,\lambda}\leq E[X_{\mu,\lambda}])>\frac{1}{2},\] _where the second inequality follows from the proof of Theorem 2.1. Then for any \(\kappa>1\), we have_ \[\min_{x\in(0,\infty)}g_{\kappa}\left(x\right)=g_{\kappa}\left(x_{0}(\kappa)\right)>\frac{1}{2}.\] ## 3 Log-normal distribution Let \(X_{\mu,\sigma}\) be a log-normal random variable with parameters \(\mu\) and \(\sigma\) (\(\mu\in\mathbb{R},\sigma>0\)). By [6, Chapter 22], we know that the density function of \(X_{\mu,\sigma}\) is \[f_{\mu,\sigma}(x)=\frac{1}{\sqrt{2\pi}\sigma x}\exp\left(-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}\right),\ \ x>0,\] and the expectation is \(E[X_{\mu,\sigma}]=\exp(\mu+\frac{\sigma^{2}}{2})\).
Then, for any given real number \(\kappa>0\), by [6, (22.1.2)], we have \[P(X_{\mu,\sigma}\leq\kappa E[X_{\mu,\sigma}])=\Phi\left(\frac{\ln\left(\kappa e^{\mu+\frac{\sigma^{2}}{2}}\right)-\mu}{\sigma}\right)=\Phi\left(\frac{\ln\kappa}{\sigma}+\frac{\sigma}{2}\right),\] which shows that \(P(X_{\mu,\sigma}\leq\kappa E[X_{\mu,\sigma}])\) is independent of \(\mu\). Define a function \[g_{\kappa}(\sigma):=\Phi\left(\frac{\ln\kappa}{\sigma}+\frac{\sigma}{2}\right),\ \sigma>0.\] The main result of this section is **Proposition 3.1**: \((i)\) _If \(\kappa\leq 1\), then_ \[\inf_{\sigma\in(0,\infty)}g_{\kappa}(\sigma)=\lim_{\sigma\to 0+}g_{\kappa}(\sigma)=\left\{\begin{array}{ll}0,&\mbox{if }\kappa<1,\\ \frac{1}{2},&\mbox{if }\kappa=1;\end{array}\right. \tag{3.1}\] \((ii)\) _If \(\kappa>1\), then_ \[\min_{\sigma\in(0,\infty)}g_{\kappa}\left(\sigma\right)=g_{\kappa}\left(\sqrt{2\ln\kappa}\right)=\Phi(\sqrt{2\ln\kappa})>\frac{1}{2}. \tag{3.2}\] **Proof.** Define a function \[h_{\kappa}(\sigma):=\frac{\ln\kappa}{\sigma}+\frac{\sigma}{2},\ \sigma>0. \tag{3.3}\] Then \(g_{\kappa}(\sigma)=\Phi(h_{\kappa}(\sigma)).\) Therefore, in order to prove this proposition, it is enough to investigate the behaviour of \(h_{\kappa}(\sigma)\). If \(\kappa\leq 1\), then \(\ln\kappa\leq 0.\) Thus, by (3.3), we have \(h_{\kappa}^{{}^{\prime}}(\sigma)=-\frac{\ln\kappa}{\sigma^{2}}+\frac{1}{2}>0\), which implies that \(h_{\kappa}(\sigma)\) is a strictly increasing function of \(\sigma\). Thus, \[\inf_{\sigma\in(0,\infty)}h_{\kappa}\left(\sigma\right)=\lim_{\sigma\to 0+}h_{\kappa}(\sigma)=\left\{\begin{array}{rl}-\infty,&\mbox{if }\kappa<1,\\ 0,&\mbox{if }\kappa=1.\end{array}\right.\] It follows that (3.1) holds. If \(\kappa>1\), then \(\ln\kappa>0.\) If \(h_{\kappa}^{{}^{\prime}}(\sigma)=-\frac{\ln\kappa}{\sigma^{2}}+\frac{1}{2}=0\), then \(\sigma=\sqrt{2\ln\kappa}.\) It is easy to check that the function \(h_{\kappa}(\sigma)\) is strictly decreasing on \((0,\sqrt{2\ln\kappa})\) and strictly increasing on \((\sqrt{2\ln\kappa},\infty)\). Thus, \[\min_{\sigma\in(0,\infty)}h_{\kappa}(\sigma)=h_{\kappa}(\sqrt{2\ln\kappa})=\sqrt{2\ln\kappa}.\] It follows that (3.2) holds. The proof is complete. ## 4 Gumbel distribution Let \(X_{\mu,\beta}\) be a Gumbel random variable with parameters \(\mu\) and \(\beta\) (\(\mu\in\mathbb{R},\beta>0\)). By [9], the density function of \(X_{\mu,\beta}\) is given by \[f_{\mu,\beta}(x)=\frac{1}{\beta}e^{-z-e^{-z}},\ z=\frac{x-\mu}{\beta},\ x\in\mathbb{R}\] and its expectation is \(E[X_{\mu,\beta}]=\mu+\beta\gamma,\) where \(\gamma\) is Euler's constant. Then, for any given real number \(\kappa>0,\) we have \[P(X_{\mu,\beta}\leq\kappa E[X_{\mu,\beta}])=e^{-e^{-\frac{\kappa(\mu+\beta\gamma)-\mu}{\beta}}}=e^{-e^{-\frac{(\kappa-1)\mu}{\beta}-\kappa\gamma}}.\] Let \(x:=\frac{\mu}{\beta}\). Then \[P(X_{\mu,\beta}\leq\kappa E[X_{\mu,\beta}])=e^{-e^{-[(\kappa-1)x+\kappa\gamma]}}.\] Define a function \[g_{\kappa}(x):=e^{-e^{-[(\kappa-1)x+\kappa\gamma]}},\ \ x\in\mathbb{R}. \tag{4.1}\]
The main result of this section is **Proposition 4.1**: \((i)\) _If \(\kappa<1\), then_ \[\inf_{x\in\mathbb{R}}g_{\kappa}\left(x\right)=\lim_{x\rightarrow\infty}g_{\kappa}(x)=0;\] \((ii)\) _If \(\kappa>1\), then_ \[\inf_{x\in\mathbb{R}}g_{\kappa}\left(x\right)=\lim_{x\rightarrow-\infty}g_{\kappa}(x)=0;\] \((iii)\) _If \(\kappa=1,\) then_ \[g_{\kappa}(x)\equiv e^{-e^{-\gamma}}>\frac{1}{2}.\] **Proof.** Define a function \[h_{\kappa}(x):=(\kappa-1)x+\kappa\gamma,\ x\in\mathbb{R}.\] Then \[g_{\kappa}(x)=e^{-e^{-h_{\kappa}(x)}},\ x\in\mathbb{R}.\] By taking the derivative of \(g_{\kappa}(x),\) we have \[g_{\kappa}^{{}^{\prime}}(x)=(\kappa-1)e^{-h_{\kappa}(x)}e^{-e^{-h_{\kappa}(x)}}.\] If \(\kappa<1,\) then \(g_{\kappa}^{{}^{\prime}}(x)<0,\) which implies that \(g_{\kappa}(x)\) is a strictly decreasing function of \(x.\) Then \[\inf_{x\in\mathbb{R}}g_{\kappa}\left(x\right)=\lim_{x\rightarrow\infty}g_{\kappa}(x)=0.\] If \(\kappa>1,\) then \(g_{\kappa}^{{}^{\prime}}(x)>0.\) Thus, \(g_{\kappa}(x)\) is a strictly increasing function of \(x\) and \[\inf_{x\in\mathbb{R}}g_{\kappa}(x)=\lim_{x\rightarrow-\infty}g_{\kappa}(x)=0.\] If \(\kappa=1,\) then by (4.1), we get that \(g_{\kappa}(x)\equiv e^{-e^{-\gamma}}>\frac{1}{2}\). The proof is complete. ## 5 Logistic distribution Let \(X_{\mu,\beta}\) be a logistic random variable with parameters \(\mu\) and \(\beta\) (\(\mu\in\mathbb{R},\beta>0\)). By [6, Chapter 21], we know that the distribution function of \(X_{\mu,\beta}\) is given by \[P(X\leq x)=\frac{1}{1+e^{-\frac{x-\mu}{\beta}}},\ x\in\mathbb{R},\] and the expectation of \(X_{\mu,\beta}\) is \(E[X]=\mu\). Then, for any given real number \(\kappa>0\), we have \[P(X_{\mu,\beta}\leq\kappa E[X_{\mu,\beta}])=\frac{1}{1+e^{-\frac{\kappa\mu-\mu}{\beta}}}.\] Let \(y:=\frac{\mu}{\beta}\). Then \[P(X_{\mu,\beta}\leq\kappa E[X_{\mu,\beta}])=\frac{1}{1+e^{-(\kappa-1)y}}.\] Define a function \[g_{\kappa}(y):=\frac{1}{1+e^{-(\kappa-1)y}},\ \ y\in\mathbb{R}. \tag{5.1}\] The main result of this section is **Proposition 5.1**: \((i)\) _If \(\kappa<1\), then_ \[\inf_{y\in\mathbb{R}}g_{\kappa}\left(y\right)=\lim_{y\to\infty}g_{\kappa}(y)=0;\] \((ii)\) _If \(\kappa>1\), then_ \[\inf_{y\in\mathbb{R}}g_{\kappa}\left(y\right)=\lim_{y\to-\infty}g_{\kappa}(y)=0;\] \((iii)\) _If \(\kappa=1,\) then_ \[g_{\kappa}(y)\equiv\frac{1}{2}.\] **Proof.** By the definition of \(g_{\kappa}(y)\) in (5.1), we have \[g_{\kappa}^{{}^{\prime}}(y)=\frac{(\kappa-1)e^{-(\kappa-1)y}}{(1+e^{-(\kappa-1)y})^{2}},\ \ y\in\mathbb{R}.\] If \(\kappa<1\), then \(g_{\kappa}^{{}^{\prime}}(y)<0,\) which implies that the function \(g_{\kappa}(y)\) is strictly decreasing. Then \[\inf_{y\in\mathbb{R}}g_{\kappa}(y)=\lim_{y\to\infty}g_{\kappa}(y)=0.\] If \(\kappa>1\), then \(g_{\kappa}^{{}^{\prime}}(y)>0.\) Thus, \(g_{\kappa}(y)\) is a strictly increasing function of \(y\) and \[\inf_{y\in\mathbb{R}}g_{\kappa}(y)=\lim_{y\to-\infty}g_{\kappa}(y)=0.\] If \(\kappa=1\), then \(g_{\kappa}(y)\equiv\frac{1}{2}\) by (5.1). The proof is complete. ## Acknowledgments This work was supported by the National Natural Science Foundation of China (12171335), the Science Development Project of Sichuan University (2020SCUNL201) and the Scientific Foundation of Nanjing University of Posts and Telecommunications (NY221026).
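As a quick numerical illustration of these results (our own addition, not part of the paper), the following SciPy-based sketch locates the zero point \(x_{0}(\kappa)\) of \(h_{\kappa}\) from Theorem 2.1, evaluates the corresponding minimum of \(g_{\kappa}\), and also checks the log-normal minimum \(\Phi(\sqrt{2\ln\kappa})\) of Proposition 3.1; all values exceed \(1/2\), in line with Remark 2.2:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def g(kappa, x):
    # g_kappa(x) from Eq. (2.1)
    a = np.sqrt(1.0 / kappa)
    return norm.cdf(a * (kappa - 1) * x) + np.exp(2 * x * x) * norm.cdf(-a * (kappa + 1) * x)

def h(kappa, x):
    # h_kappa(x) from Eq. (2.2); note int_b^inf exp(-t^2/2) dt = sqrt(2*pi) * (1 - Phi(b))
    a = np.sqrt(1.0 / kappa)
    b = a * (kappa + 1) * x
    return 2 * np.sqrt(2 * np.pi) * norm.sf(b) - a * np.exp(-((kappa + 1) ** 2) * x * x / (2 * kappa)) / x

for kappa in (1.5, 2.0, 5.0):
    x_max = np.sqrt(kappa / (kappa ** 2 - 1))      # h_kappa increases up to this point
    x0 = brentq(lambda x: h(kappa, x), 1e-8, x_max)  # unique zero point of h_kappa
    print(kappa, x0, g(kappa, x0))                 # minimum of g_kappa; always > 1/2

# Log-normal case, Proposition 3.1 (ii): the minimum equals Phi(sqrt(2 ln kappa)).
print(norm.cdf(np.sqrt(2 * np.log(2.0))))
```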
2310.12759
A case study of early galaxy cluster with the Athena X-IFU
Context: Observations of the hot gas in distant clusters of galaxies, though challenging, are key to understand the role of intense galaxy activity, super-massive black hole feedback and chemical enrichment in the process of massive halos assembly. Aims: We assess the feasibility to retrieve, using X-ray hyperspectral data only, the thermodymamical hot gas properties and chemical abundances of a $z=2$ galaxy cluster of mass M500=7 x $10^{13} M_{\odot}$, extracted from the Hydrangea hydrodynamical simulation. Methods: We create mock X-ray observations of the future X-ray Integral Field Unit (X-IFU) onboard the Athena mission. By forward-modeling the measured 0.4-1 keV surface brightness, the projected gas temperature and abundance profiles, we reconstruct the three-dimensional distribution for the gas density, pressure, temperature and entropy. Results: Thanks to its large field-of-view, high throughput and exquisite spectral resolution, one X-IFU exposure lasting 100ks enables reconstructing density and pressure profiles with 20% precision out to a characteristic radius of R500, accounting for each quantity's intrinsic dispersion in the Hydrangea simulations. Reconstruction of abundance profiles requires both higher signal-to-noise ratios and specific binning schemes. We assess the enhancement brought by longer exposures and by observing the same object at later evolutionary stages ($z=1-1.5$). Conclusions: Our analysis highlights the importance of scatter in the radially binned gas properties, which induces significant effects on the observed projected quantities. The fidelity of the reconstruction of gas profiles is sensitive to the degree of gas components mixing along the line-of-sight. Future analyses should aim at involving dedicated hyper-spectral models and fitting methods that are able to grasp the complexity of such three-dimensional, multi-phase, diffuse gas structures.
F. Castellani, N. Clerc, E. Pointecouteau, Y. M. Bahé, J. Schaye, F. Pajot
2023-10-19T14:06:55Z
http://arxiv.org/abs/2310.12759v1
# A case study of early galaxy cluster with the _Athena_ X-IFU ###### Abstract Context:Observations of the hot gas in distant clusters of galaxies, though challenging, are key to understand the role of intense galaxy activity, super-massive black hole feedback and chemical enrichment in the process of massive halos assembly. Aims:We assess the feasibility to retrieve, using X-ray hyperspectral data only, the thermodynamical hot gas properties and chemical abundances of a \(z=2\) galaxy cluster of mass \(M_{500}=7\times 10^{13}\) M\({}_{\odot}\), extracted from the Hydrangea hydrodynamical simulation. Methods:We create mock X-ray observations of the future X-ray Integral Field Unit (X-IFU) onboard the _Athena_ mission. By forward-modeling the measured \(0.4-1\) keV surface brightness, the projected gas temperature and abundance profiles, we reconstruct the three-dimensional distribution for the gas density, pressure, temperature and entropy. Results:Thanks to its large field-of-view, high throughput and exquisite spectral resolution, one X-IFU exposure lasting 100 ks enables reconstructing density and pressure profiles with 20% precision out to a characteristic radius of \(R_{500}\), accounting for each quantity's intrinsic dispersion in the Hydrangea simulations. Reconstruction of abundance profiles requires both higher signal-to-noise ratios and specific binning schemes. We assess the enhancement brought by longer exposures and by observing the same object at later evolutionary stages (at \(z=1\) and 1.5). Conclusions:Our analysis highlights the importance of scatter in the radially binned gas properties, which induces significant effects on the observed projected quantities. The fidelity of the reconstruction of gas profiles is sensitive to the degree of gas components mixing along the line-of-sight. Future analyses should aim at involving dedicated hyper-spectral models and fitting methods that are able to grasp the complexity of such three-dimensional, multi-phase, diffuse gas structures. ## 1 Introduction The formation epoch of groups and clusters of galaxies ranges from \(z\sim 1-3\), when star formation in galaxies and super-massive black holes (SMBH) are at the peak of their activities. The gas trapped in the forming massive potential wells of these structures heats up under the dual effect of gravity and feedback from star formation and SMBH (e.g., Kravtsov and Borgani, 2012; Vogelsberger et al., 2014; Schaye et al., 2015; Bahé et al., 2017; Schaye et al., 2023). The questions of how this gas is accreted by galaxies and how it feeds their star formation and SMBH activity are still open. The same goes for the timescale within which galaxies evolve into the massive ellipticals that form the red sequence under the joint influence of their dense environment and the quenching of star formation by active galactic nucleus (AGN) feedback (e.g., Behroozi et al., 2013; Leauthaud et al., 2012; Eckert et al., 2021; Oppenheimer et al., 2021). This phase of violent and intense astrophysical activity injects large amounts of energy, gas and metals into the forming intra-cluster medium, shaping its thermal and chemical properties (e.g., McNamara and Nulsen, 2007; Biffi et al., 2017; Mernier et al., 2018). As such, these processes imprint the statistical (scaling and structural) properties of the population of groups and clusters of galaxies (Lovisari and Maughan, 2022, for a recent review).
Constraining the properties and the evolution of hot gas in these massive halos out to their epoch of formation is an efficient way to understand the aforementioned assembly and evolution of the largest gravitationally bound halos in the Universe. By construction in a hierarchical scheme of structure formation, lower mass halos (\(M_{500}<10^{14}\) M\({}_{\odot}\)) constitute the vast majority of the population of groups of galaxies (e.g., Tinker and Chen, 2008). They are observed in numbers in large surveys, especially in optical surveys (e.g., Lambert et al., 2020; Werner et al., 2023), and they dominate the population of simulated halos in numerical simulations. The self-similar process of structure formation predicts their properties to be down-scaled versions of massive galaxy clusters. Though these predictions seem to agree with some of their statistical properties (e.g., the mass-temperature relation, Babyk and McNamara, 2023), many observations have shown that most actually depart from the scaling and structure behaviour of their more massive siblings (e.g., Ponman et al., 2003; Sun et al., 2011; Stott et al., 2012; Sanderson et al., 2013; Lovisari et al., 2015, 2021). Their shallower potential is more prone to the impact of AGN feedback in terms of intra-group medium (IGM) gas heating, but also in the way it impacts the IGM gas distribution and its depletion from these shallower gravitational potential wells (e.g., Eckert et al., 2021). The physics governing the gas content of galaxy groups and clusters is a true challenge for current numerical simulations, whose predictions vary largely from one work to another (Oppenheimer et al., 2021). With their smaller masses, and thus shallower potential wells with respect to massive galaxy clusters, the observation of the X-ray emitting hot gas is harder for groups than for clusters. To date, several groups or samples of groups of galaxies have been studied at X-ray wavelengths, mainly in the local Universe (e.g., Lovisari and Maughan, 2022). Reaching such objects out to larger redshifts to investigate their gas content and its evolution remains very challenging for the current generation of X-ray telescopes. The next generation of X-ray observatories shall combine a large collecting area with high resolution spectro-imaging capabilities. The X-IFU instrument (Barret et al., 2023) on board the future European X-ray observatory _Athena_ (Barcons et al., 2017) should enable studies of groups with masses of a few \(10^{13}\) M\({}_{\odot}\) out to \(z\simeq 2\). The understanding of the assembly and evolution of groups and clusters of galaxies is a key objective of the Hot and Energetic Universe science theme that the _Athena_ mission will implement (Nandra et al., 2013). In the wake of other previous feasibility studies addressing the science cases of chemical enrichment in groups and galaxy clusters (Cucchetti et al., 2018; Mernier et al., 2020), bulk and turbulent motion in the intra-cluster gas (Roncarelli et al., 2018; Clerc et al., 2019; Cucchetti et al., 2019), and the warm hot intergalactic medium (Walsh et al., 2020; Wijers et al., 2020; Wijers and Schaye, 2022), we address in this work the issue of the observation of distant galaxy groups and clusters, in order to characterise their physical properties out to the epoch of their formation with the _Athena_ X-IFU instrument.
Following the Athena Mock Observing Plan1, we present realistic mock observations with the X-IFU instrument of one simulated galaxy cluster extracted from the cosmological hydrodynamic simulations Hydrangea (Bahé et al., 2017). The paper is organised as follows. We present the input of our simulations in Sec. 2 and the mock X-IFU observations and processing in Sec. 3. In Sec. 4, we describe our Bayesian analysis approach. We present our results in Sec. 5 and discuss them in Sec. 6. Footnote 1: Version v4.3, issued on 8\({}^{\rm th}\) of September 2020 by the ESA Athena Science Study Team. According to this plan, the science objective concerning the evolution of thermodynamical properties of groups and clusters of galaxies from \(z>0.5\) and out to \(z\sim 2\) will be achieved with X-IFU observations; we therefore focused in this study on this specific instrument only. Throughout this paper, we made use of the cosmology setup adopted for the Hydrangea simulations, that is the Planck Collaboration et al. (2014) cosmology with \(h_{100}=0.6777\), \(\Omega_{m}=0.307\), \(\Omega_{\Lambda}=0.693\). At a redshift of \(z=2\), 1 arcsecond corresponds to a physical size of 8.6 kilo-parsec (kpc). ## 2 A simulated cluster of galaxies at \(z=2\) As input for our study, we used a simulated cluster of galaxies from the cosmological Hydrangea simulations (Bahé et al., 2017). ### 2.1 Hydrangea simulation Hydrangea is a suite of 24 cosmological zoom-in simulations of massive galaxy clusters (selected such that \(M_{200}=10^{14-15.4}M_{\odot}\) at \(z=0\)) that is part of the Cluster-EAGLE (C-EAGLE) project (Barnes et al., 2017). Hydrangea adopts the EAGLE (Schaye et al., 2015; Crain et al., 2015) galaxy formation model, but for zooms of regions taken from a larger volume of (3200 cMpc)\({}^{3}\), compared to the original EAGLE parent volumes of \(\leq\)(100 cMpc)\({}^{3}\). Hydrangea uses a modified version of the \(N\)-Body Tree-PM smoothed particle hydrodynamics (SPH) code gadget-3 (Springel, 2005), with hydrodynamical updates by Schaye et al. (2015) and Schaller et al. (2015). The subgrid physics of the code is based on that developed for OWLS (Schaye et al., 2010): it implements for instance radiative cooling and photoheating (Wiersma et al., 2009a) for 11 chemical elements (H, He, C, N, O, Ne, Mg, Si, S, Ca, and Fe), hydrogen reionization and an ionizing UV/X-ray background (Haardt and Madau, 2001), and a star formation rate of gas following Schaye and Dalla Vecchia (2008), with stellar mass loss based on Wiersma et al. (2009b). Energy feedback from star formation follows the thermal implementation of Dalla Vecchia and Schaye (2012), with the heating of a small number of gas particles by a large increment in temperature; feedback from supermassive black holes ("AGN feedback") is implemented with a similar method (Booth and Schaye, 2009). We selected the most massive halo in the simulation at \(z=2\). From the three snapshots at redshifts 1, 1.5 and 2, we extracted all SPH particles within a comoving sphere of radius 1.5 Mpc centred on the halo centre of potential. For each SPH particle, we retrieved quantities such as the particle position, velocity, temperature \(T\), mass \(m\), mass density \(\rho\) and chemical abundances2. Footnote 2: We made use of the Hydrangea python library: [https://hydrangea.readthedocs.io/en/latest/index.html](https://hydrangea.readthedocs.io/en/latest/index.html).
The main properties of the halo at the selected redshifts are gathered in Table 1. The values of \(M_{500}\), the mass within the radius \(R_{500}\) inside which the mean density equals 500 times the critical density of the Universe at the cluster redshift, indicate a well-formed cluster of galaxies at \(z=2\) with \(M_{500}=7\times 10^{13}\) M\({}_{\odot}\), equivalent in mass to groups of galaxies in the local Universe. It transitioned into the more massive cluster regime at \(z=1\) with \(M_{500}=2\times 10^{14}\) M\({}_{\odot}\). Such a system will be the progenitor of a massive cluster of galaxies at \(z=0\) (e.g., similar to the Perseus cluster). At the redshift of main interest (\(z=2\)) this cluster is not in a major merging stage. Consistently with other systems at this redshift, this cluster is not relaxed. The atomic gas content of the simulation is converted into chemical abundances: for an element \(i\), the mass fraction of the element \(X_{i}\), with an atomic mass \(\mathcal{M}_{i}\), is converted into the chemical abundance \(Z_{i}\) in solar metallicity units (as a number density ratio), assuming the solar abundance \(Z_{\odot,i}\) from Anders and Grevesse (1989): \[Z_{i}=\frac{X_{i}}{X_{H}\times Z_{\odot,i}\times\mathcal{M}_{i}} \tag{1}\] where \(X_{H}\) is the hydrogen mass fraction of the gas particle. To serve our showcase study, we investigated various lines-of-sight at each redshift according to the distribution of key physical quantities such as the temperature, density and abundances. We qualitatively selected the two projected images presenting (i) the most disturbed and structure-rich (referred to as 'irregular') and (ii) the most regular and smoothed (referred to as 'regular') distributions of these quantities. These orientations are used to bracket the impact of projection effects. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(z\) & \(M_{500}\) & \(R_{500}\) & \(T_{500}\) & \(\theta_{500}\) \\ & (\(10^{14}\) M\({}_{\odot}\)) & (kpc) & (keV) & (arcmin) \\ \hline 1.016 & 2 & 616 & 4.12 & 1.24 \\ 1.493 & 1 & 419 & 3.25 & 0.80 \\ 1.993 & 0.7 & 309 & 3.44 & 0.60 \\ \hline \end{tabular} \end{table} Table 1: Characteristics of the Hydrangea halo at the selected redshifts. ### 2.2 Model of the ICM X-ray emission To model the X-ray emission from our simulated cluster, we down-selected the SPH gas particles to those representative of the ICM in the gas temperature-density plane. We restricted the temperature range to \(0.0808<T\,[\mathrm{keV}]<60\). These boundaries are forced by the tabulated X-ray emission model used afterwards (i.e., vapec under XSPEC). We set an upper limit on the gas density of \(n_{e}<1\) cm\({}^{-3}\), to avoid overly dense regions in the simulations. Such particles are not representative of the ICM properties, but would nonetheless bias our mock observations. These are likely particles recently affected by the supernovae and/or AGN feedback implementation. We removed about 100 particles out of a few million. To model the X-ray emission of the cluster, we followed the procedure described in Cucchetti et al. (2018). For each selected gas particle, we assumed the collisionally-ionised diffuse plasma model APEC (Smith et al., 2001) computed from the AtomDB v3.0.9 atomic database (Foster et al., 2012). We used its vapec implementation under XSPEC (Arnaud, 1996) to account for the various chemical abundances traced in the Hydrangea simulations. The aforementioned reference solar abundances are from Anders & Grevesse (1989) and the cross sections are taken from Verner et al. (1996).
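For reference, Eq. (1) is straightforward to apply per particle. The following is a minimal sketch (our own illustration, not the paper's pipeline; the solar iron number abundance used in the example is an indicative value in the spirit of Anders & Grevesse 1989, not quoted from the paper):

```python
def solar_abundance(X_i, X_H, Z_sun_i, M_i):
    """Eq. (1): convert a mass fraction into a solar-scaled number abundance.

    X_i, X_H : mass fractions of element i and of hydrogen in the gas particle
    Z_sun_i  : solar number abundance n_i / n_H of element i
    M_i      : atomic mass of element i in units of the hydrogen mass
    """
    return X_i / (X_H * Z_sun_i * M_i)

# Example: an iron mass fraction of 1e-3 in a particle with X_H = 0.75,
# with an illustrative solar Fe number abundance of ~4.7e-5 and M_Fe ~ 55.85.
print(solar_abundance(1e-3, 0.75, 4.7e-5, 55.85))  # -> Z_Fe in Z_sun units
```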
The redshift accounted for each particle reads as follows: \[z=z_{pec}+z_{clus}+z_{pec}\times z_{clus}\quad\text{with}\quad 1+z_{pec}=\sqrt{\frac{1+\beta}{1-\beta}} \tag{2}\] where \(z_{pec}\) is linked to the peculiar velocity of the particle along the line-of-sight, \(v_{los}\), within the cluster, \(\beta=v_{los}/c\), with \(c\) being the speed of light, and \(z_{clus}\) is the cosmological redshift of the cluster. The X-ray flux is directly proportional to the integral of the electron times proton density, \(n_{e}\times n_{H}\), over the particle volume, \(V\). Following the implementation in XSPEC, the normalisation \(\mathcal{N}\) of the vapec model reads: \[\mathcal{N}=\frac{10^{-14}}{4\pi[D_{A}(1+z)]^{2}}\int_{V}n_{e}n_{H}dV\quad(\text{in cm}^{-5}) \tag{3}\] with \(D_{A}\) the angular diameter distance to the source (computed within the cosmological setup of the Hydrangea simulation). We further assume \(n_{e}=1.2\times n_{H}\). We fixed the value of the Galactic hydrogen column density to \(0.03\times 10^{22}\) cm\({}^{-2}\), a typical value for high galactic latitudes. Its associated absorption of soft X-ray photons is modeled with a wabs model (Morrison & McCammon, 1983). From these hypotheses, we computed the flux of photons at the Earth emitted by each particle. We used each particle's X-ray spectrum between 0.2-12 keV as a probability density to draw the appropriate number of photons according to a given exposure time and collecting area. Stacked together over all cluster particles, we obtain a photon list at the Earth. The exposure time is systematically fixed to 10 times that of the mock observation in order to provide enough statistics for the instrumental simulations (see Sec. 3). At this stage the collecting area is chosen to be 20,000 cm\({}^{2}\), largely encompassing that of the _Athena_ mirrors over the whole energy band. This leads to an oversized list of photons, providing proper statistics for the telescope and instrumental simulations and avoiding any duplication or under-sampling biases. ## 3 Simulated X-IFU/_Athena_ observations ### 3.1 The Athena telescope and X-IFU instrument _Athena_ is the next generation X-ray telescope of the European Space Agency (Barcons et al., 2017). It implements the science theme of the Hot and Energetic Universe (Nandra et al., 2013). With a focal length of 12 m, it will carry a Wolter-I type mirror initially expected to have a collecting area of 14,000 cm\({}^{2}\) at 1 keV, and to provide a spatial resolution of 5 arcsec FWHM. Two instruments will compose the payload: the Wide Field Imager (WFI - Meidinger et al., 2017), and the X-ray Integral Field Unit (X-IFU - Barret et al., 2023). In this study we focus on the capabilities of the X-IFU instrument, whose main initial high-level performance requirements relevant for our study include a spectral resolution of 2.5 eV over the 0.2-7 keV band. This would represent a gain of a factor of \(\approx\) 50 with respect to XMM-_Newton_. The effective area (constrained by the mirror collecting area) shall be 10,000 cm\({}^{2}\) at 1 keV, whilst the hexagonal field-of-view shall have an equivalent diameter of 5 arcmin. We refer the reader to Barret et al. (2023) for a comprehensive description of the X-IFU instrument. ### 3.2 X-IFU mock observations To simulate the observations with the X-IFU instrument, we followed the method described in Cucchetti et al. (2018), summarised in the following.
We made use of the SImulation of X-ray TElescopes software package (SIXTE, Wilms et al., 2014; Dauser et al., 2019). SIXTE ingests as input a SIMPUT (Schmid et al., 2013) file containing either emission spectra of individual sources or regions, or directly a list of photons at the telescope. The SIXTE website3 distributes the baseline setup for the X-IFU/_Athena_, as described in the above section and detailed in Barret et al. (2023), formatted for the use of SIXTE. This includes resources and configurations such as the focal plane geometry, the point spread function (PSF), the vignetting, the instrumental background, the crosstalk, the ancillary response file (ARF) and the redistribution matrix file (RMF), as provided by the X-IFU consortium4. We used this baseline setup in this work. Footnote 3: [https://www.sternwarte.uni-erlangen.de/sixte/](https://www.sternwarte.uni-erlangen.de/sixte/) Footnote 4: [http://x-ifu.irap.omp.eu/](http://x-ifu.irap.omp.eu/) We took into account the astrophysical foreground and background emissions, and co-added them to our cluster of galaxies into a total photon list received at the Earth. We modelled the Galactic halo and the local bubble as diffuse emission according to the model proposed by McCammon et al. (2002), that is an absorbed (wabs*apec) and an unabsorbed (apec) thermal plasma emission model, respectively. As per the required _Athena_ spatial resolution, 80% of the sources constituting the extragalactic background are expected to be resolved (Moretti et al., 2003). This corresponds to a lower flux limit for individual simulated sources of \(\approx 3\times 10^{-16}\) ergs/s/cm\({}^{2}\) for a 100 ksec exposure time (downscaled to \(\approx 10^{-16}\) ergs/s/cm\({}^{2}\) for 1 Msec). AGN with fluxes below this limit are treated as a diffuse contribution, and modelled with an absorbed power-law spectrum parameterised as in McCammon et al. (2002). To avoid double-counting resolved AGN, we set the normalisation of this model to 20% of its nominal value. The resolved cosmic X-ray background (CXB) is accounted for by adding the contribution of foreground and background AGNs according to the procedure described by Clerc et al. (2018). In short, AGNs are drawn from their luminosity function, \(N(L_{0.5-2keV},z)\) (from Hasinger et al. 2005). Each source is assigned an emission spectrum from Gilli et al. (2007). They are then uniformly distributed over the simulated solid angle (thus neglecting any clustering effects). For both the Galactic and non-resolved CXB emission we used the model parameters provided by Lotti et al. (2014). We fed the total (cluster of galaxies + astrophysical contamination) photon list to the xifupipeline tool of SIXTE in order to generate a mock event list. All aforementioned instrumental effects and setups were accounted for, except for the crosstalk, which is irrelevant for our study. For each of the three redshifts and two line-of-sight orientations selected for our Hydrangea cluster (see Sec. 2.1 and Table 1), we generated two mock observations of one single pointing, with 100 and 250 ksec exposures, respectively. For the cluster at \(z=2\), we also ran a third, deep exposure of 1 Msec. ### 3.3 Processing of the mock data For the purpose of our study, we assumed spherical symmetry for the cluster, and we focused on a radial analysis.
#### 3.3.1 Point source masking We first proceed by generating a raw count map, as shown in Fig. 1 for \(z=2\) and projection 'regular'. As in Cucchetti et al. (2018), we flagged the loci of the simulated AGNs with fluxes above the limit defined in Sec. 3.2, that is \(\approx 3\times 10^{-16}\) (\(\approx 10^{-16}\)) ergs/s/cm\({}^{2}\) for an exposure time of 100 ksec (1 Msec). They are masked by excluding all the pixels with number counts \(2\sigma\) above the mean count number of the pixels neighbouring their positions. This ensures that the ICM emission is accounted for. This procedure led to an effective flux threshold asymptotically reaching the value of \(\approx 3\times 10^{-16}\) (\(10^{-16}\)) ergs/s/cm\({}^{2}\) with the decreasing ICM emission. We performed a visual check and manually masked any remaining obvious point sources or overdense count regions that could arise from the Hydrangea simulation itself. #### 3.3.2 Surface brightness profile We derived the surface brightness (SXB) profile from the raw count image masked from the point sources, in a way that accounts for the geometry of the X-IFU detector. We thus avoid artificially oversampling the intrinsic spatial resolution of the instrument (that is, the convolution of the mirror PSF and the pixel size). The focal plane pixels are attributed to the radial annuli which encompass the position of their centre. Hence the pattern of pixels populating the various radial annuli slowly tends to actual geometrical annuli over the detector array with increasing radius. However, the inner annuli significantly depart from such a regular shape. For instance, the central annulus contains a single pixel, whilst the second one contains the four nearest pixels adjacent to the central one, in a cross pattern. This specific pattern is properly accounted for in the fit of the SXB profile (see Sec. 4). In our mock observations, the position of the cluster projected centre, as defined in the Hydrangea simulation, coincides with the instrument optical axis. In other words, the cluster centre is positioned at the centre of the instrument detector array. The SXB profile is extracted in the 0.4-1 keV band, where the bremsstrahlung emission of the ICM is still relatively flat as a function of energy even at \(z=2\), hence minimising its dependence on the gas temperature. The derived SXB profile for \(z=2\) and projection 'regular' is presented in Fig. 4, left panel. #### 3.3.3 Spectral analysis To recover the radial distribution of the gas temperature and chemical abundances, we binned our mock event list into six concentric annuli. As for the SXB profile, the centre is chosen to match the position of the projected centre of the simulated cluster. The six annuli were defined to gather at least 10,000 counts in order to populate the spectral channels of the X-IFU. The spectra are extracted using the makespec function of SIXTE and binned according to the optimal method by Kaastra & Bleeker (2016). A specific ancillary response is computed per annulus, accounting for the mirror vignetting weighted for each contributing pixel by its number counts. The local background spectrum is extracted from the area of the field-of-view beyond \(R_{500}\). It is fitted with the model described in Sec. 3, that is apec+wabs*(apec+powerlaw). All parameters but the three normalisations are fixed.
This best fit background model is then rescaled to the area of each annulus (i.e., the pixel solid angle times the number of pixels belonging to the annulus) and fixed as such for the fit of the ICM model. We fitted the spectra of the galaxy cluster with a wabs*vapec model under XSPEC with Cash statistics (Cash 1979), with the gas temperature, the normalisation and the Fe, Si and Mg abundances as free parameters. We limited our investigation of the chemical abundances to these three elements, as they present the most prominent lines with a reasonable probability to be detected out to a redshift of \(z=2\). All other abundances were fixed to the average value of the projected emission-measure weighted abundances from the Hydrangea halo particles over a given annulus. The reconstructed temperature and abundance radial distributions are shown in Fig. 4 (right panel) and Fig. 5, respectively. Figure 1: X-IFU raw count map for a 100 ksec exposure of our Hydrangea cluster of galaxies at \(z=2\), for projection 'regular'. The image includes astrophysical and instrumental backgrounds. The green circles mark the loci of the point sources to be excised. ## 4 Forward-modelling and MCMC analysis We chose a forward-modelling approach to fit the SXB, temperature and abundance profiles in order to reconstruct the 3D radial distribution of the thermodynamical and chemical properties of the simulated cluster of galaxies. We recall that we assume spherical symmetry for our simulated clusters. ### 4.1 The forward-modelling procedure We considered the gas density and pressure as two independent physical quantities in our model. We modelled the density distribution with a 'simplified' Vikhlinin functional (Vikhlinin et al., 2006): \[n_{e}^{2}(x)=\frac{n_{0}^{2}}{(x/r_{s})^{\alpha_{1}}[1+(x/r_{s})^{2}]^{3\beta_{1}-\alpha_{1}/2}} \tag{4}\] where \(x=r/R_{500}\), \(n_{0}\) is the central density, \(\alpha_{1}\) and \(\beta_{1}\) are the inner and outer slopes, respectively, and \(r_{s}\) is the scale radius. The pressure radial distribution is described by a gNFW profile (Nagai et al., 2007): \[P(x)=\frac{P_{0}}{(c_{500}~{}x)^{\gamma}[1+(c_{500}~{}x)^{\alpha_{2}}]^{(\beta_{2}-\gamma)/\alpha_{2}}} \tag{5}\] where \(P_{0}\) is the central pressure, \(c_{500}\) is the concentration at a radius of \(R_{500}\), and \(\alpha_{2}\) and \(\beta_{2}\) are the intermediate and outer slopes, respectively. We do not have enough leverage with our observational constraints to fit \(\gamma\), the inner slope. We thus left it as a free parameter with a Gaussian prior of mean 0.43 and standard deviation 0.1, values derived from the XCOP sample (Ghirardini et al., 2019). The temperature distribution is simply derived from the gas density and pressure assuming the ICM to be a perfect gas with \(P=n_{e}\times k_{B}T\) (\(k_{B}\) being the Boltzmann constant). The entropy is reconstructed assuming the relation conventionally adopted in X-ray astronomy, as introduced by Voit (2005), that is \(K=k_{B}T/n_{e}^{2/3}\). For the radial distributions of chemical elements, we adopted a simple power-law model: \[A(x)=A_{0}~{}x^{-p} \tag{6}\] where \(A_{0}\) is specific to each element. However, we assumed the same slope \(p\) for all elements, considering it a fair assumption according to Mernier et al. (2017). The full set of free parameters is listed in Table 2, together with the initial values and priors adopted in our MCMC fit.
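To make the parameterisation concrete, the following is a minimal sketch of Eqs. (4)-(6) and the derived temperature and entropy (our own illustration; the parameter values shown are the initial values of Table 2, not best-fit results):

```python
import numpy as np

def density(x, n0, alpha1, beta1, rs):
    # 'Simplified' Vikhlinin density profile, Eq. (4); x = r / R500.
    ne2 = n0**2 / ((x / rs)**alpha1 * (1 + (x / rs)**2)**(3 * beta1 - alpha1 / 2))
    return np.sqrt(ne2)

def pressure(x, P0, c500, gamma, alpha2, beta2):
    # gNFW pressure profile, Eq. (5).
    u = c500 * x
    return P0 / (u**gamma * (1 + u**alpha2)**((beta2 - gamma) / alpha2))

def abundance(x, A0, p):
    # Power-law abundance profile, Eq. (6).
    return A0 * x**(-p)

x = np.logspace(-1, 0, 50)                  # from 0.1 R500 out to R500
ne = density(x, n0=1e-2, alpha1=-0.3, beta1=0.9, rs=0.3)
P = pressure(x, P0=5e-2, c500=2.4, gamma=0.43, alpha2=3.0, beta2=5.17)
kT = P / ne                                 # keV, since P = n_e k_B T
K = kT / ne**(2.0 / 3.0)                    # entropy, K = k_B T / n_e^{2/3}
Z_Fe = abundance(x, A0=5e-2, p=0.6)         # Fe abundance in Z_sun units
```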
We account for the parameter dispersion in each spherical shell by randomly drawing 5000 particles in the shell and adopting their values in density and pressure around the shell fiducial value. We aim to compare our results with volume-weighted distributions of the thermodynamic quantities (Sect. 5), so as to minimise the impact of high-density, small-volume particles; therefore we weight the random draws by the volume of each particle. In doing so, we follow the dispersion of thermodynamic quantities predicted by the Hydrangea simulation about their fiducial profiles. Fig. 2 illustrates this process and shows how translating the cloud of particles in the \(\log n_{e}-\log T\) plane provides a new distribution of densities and temperatures dispersed around a newly requested median value. These distributions are clipped at zero and at the maximal values allowed in our setup (1 cm\({}^{-3}\) and 15 keV). This process leads to 5000 spherically symmetric models, each contributing in equal proportion to the final model. Taken individually, they do not serve as realistic descriptions of the cluster; they serve as intermediate models, enabling propagation of the scatter in the thermodynamic profiles. We compute the X-ray emission of the 'toy model' cluster by averaging that of the 5000 models together. We apply an Abel transform, using the PyAbel5 python package with the Hansen-Law transform method (Hansen & Law, 1985), to each of the 5000 models and average the resulting surface brightness profiles. Assuming circular symmetry, the profile is distributed into a two-dimensional grid generously oversampling the X-IFU pixel size. Emission-measure-weighted temperature and abundance profiles are constructed similarly. We finally convolve these parameter grids with the PSF kernel and reshape them to the pixelization and geometry of the X-IFU focal plane. We reproduce the source and pixel masking applied to the mock data. With this process we ensure faithfully reproducing the input image characteristics (see Sec. 3.3). Footnote 5: [https://pyabel.readthedocs.io/en/latest/](https://pyabel.readthedocs.io/en/latest/) ### 4.2 The MCMC fit We simultaneously fitted our SXB, temperature and abundance profiles using a Bayesian MCMC approach. The modelling procedure described above (including the assessment and propagation of the parameter dispersion in each shell) is reproduced at each step of the MCMC. We formulated the associated likelihood as follows: \[\chi^{2}=-2~{}\log{\cal L}=\sum_{p_{j}}\sum_{i}\frac{(y_{i,p_{j}}-M_{i,p_{j}})^{2}}{\sigma_{i,p_{j}}^{2}} \tag{7}\] where \(y_{i,p_{j}}\), \(\sigma_{i,p_{j}}\) and \(M_{i,p_{j}}\) are the mock data, associated mock error and model for profile \(p_{j}\), respectively. \(p_{j}\) denotes the set of the SXB, temperature and Mg, Si, Fe abundance profiles. Such a likelihood implicitly assumes normally-distributed uncertainties. Figure 2: Dispersion of thermodynamic quantities (electronic density \(n_{e}\), units cm\({}^{-3}\), and temperature \(T\), units keV) in one radial shell with \(300\,\mathrm{kpc}<r<310\,\mathrm{kpc}\) of the \(z=2\) cluster. The leftmost cloud is a density map of extracted particles from the Hydrangea-simulated cluster. The leftmost green cross indicates the position of the median density and temperature. The cloud of 5000 blue points on the right-hand side illustrates our random generation of a new model, when requesting a new set of median values (as shown by the yellow rightmost cross).
In this process, the shape of the dispersion in the \(\log-\log\) plane is essentially kept fixed and translated. In case of asymmetric uncertainties in the observables, we take \(\sigma_{i,p_{j}}\) as the arithmetic mean of the upper and lower bounds of the 68% confidence level. Our MCMC sampling of the parameter space made use of the python-based code emcee (Foreman-Mackey et al., 2019). ### 4.3 Validation Before proceeding to the analysis of the mock Hydrangea data, we first validate our fitting procedure using a simpler, spherically symmetric model. Its physical quantities (density, pressure, element abundances) are drawn following the parametric profiles (Eqs. 4, 5 and 6). The numerical values are chosen so as to approximately reproduce the radial behaviour of the \(z=2\) cluster that we picked in the Hydrangea simulation (Table 1). In contrast with the Hydrangea cluster though, we impose that each of these quantities takes a single value within a spherical shell at radius \(r\). This idealised model is transformed into a mock 100 ks X-IFU observation (as described in Sect. 3); this step includes the extraction of temperature and surface brightness profiles. We then fit the X-IFU mock observations with the exact same cluster model used to fit the actual simulation, letting 14 parameters freely evolve; namely: \(n_{0}\), \(\alpha_{1}\), \(\beta_{1}\), \(r_{s}\), \(P_{0}\), \(\alpha_{2}\), \(\beta_{2}\), \(\gamma\), \(c_{500}\), \(A_{0,i}\) (\(i\) being one of Fe, Si and Mg), \(p\) and finally a uniform background level in the 0.4-1 keV imaging band. Priors and initial values for the 14 parameters are listed in Table 2. Figure A.1 shows the results of our validation test. We find overall good agreement between the input model and the inferred properties. The three-dimensional density and pressure profiles roughly agree with the input profiles within the 68% confidence intervals, at least across the radial range of applicability of our procedure (that is, at radial distances located between the PSF size, \(r\approx 0.1R_{500}\), and the outermost radius, \(R_{500}\)). Nevertheless, significant deviations appear, especially when considering the two-dimensional temperature profiles and the comparison between the purple and black lines in the top-right panel of Fig. A.1. Despite the simplicity of the input model, projection effects along the line of sight induce temperature and abundance mixing, which limit the ability of a single APEC model to account for the observed spectrum. This effect leads to underestimated uncertainties issued by the XSPEC fit (green error bars). In addition to this issue comes our (so far unverified) assumption that weighting the 3-d temperature by the emission measure provides a fair representation of the 2-dimensional temperature profile. We have checked that such deviations are not solely of statistical origin by increasing the exposure time of the mock observation to 1 Ms and finding similar offsets in the projected temperature profile (Fig. A.2). However, repeating the experiment with a toy-model cluster placed at \(z=1\) provides better agreement in the recovered profiles (Fig. A.3). Indeed, the larger apparent \(R_{500}\) as compared to the \(z=2\) system enables defining a finer two-dimensional binning of the temperature profiles (ultimately limited by the X-IFU pixel and/or PSF size), hence mitigating the line-of-sight mixing effects. In summary, our validation tests demonstrate the ability of our procedure to recover input profiles.
However, we highlighted a limitation due to line-of-sight mixing of temperature (and abundances), attributed both to the finite X-IFU angular resolution relative to the \(z=2\) cluster angular span, and to the slight inadequacy of the spectral fitting model. These deviations propagate into the inference of the 3-dimensional thermodynamic profiles and limit our ability to recover their exact distributions. These limitations are in fact inherent to any observation of multiphase diffuse gas, irrespective of the instrument in use. ## 5 Results We now present the results obtained by fitting the \(z=2\) galaxy cluster extracted from the Hydrangea simulations (Sect. 2) and folded through the X-IFU instrumental response assuming a 100 ks exposure time. As for our validation model (see Sect. 4.3), we fit the mock data with a spherically symmetric model with 14 free parameters, listed in Table 2. Thirteen of them enter Eqs. 4 to 6 and govern the median 3-dimensional model for \(n_{e}(r)\), \(P(r)\) and \(A(r)\); the fourteenth is an additional background level in the \(0.4-1\,\mathrm{keV}\) imaging band. Accounting for the intrinsic dispersion of these quantities around their median values requires prior knowledge of its radial behaviour (or a parameterised model thereof). We simply assumed that the dispersion in the \((\log n_{e},\log T)\) plane follows that of the Hydrangea cluster in each radial shell (see Fig. 2 and Sec. 4.1). This assumption avoids introducing additional model parameters, although one expects that using the exact dispersion for the forward model may put our results on the optimistic side. For computational reasons, we do not assume any dispersion of the abundance within a spherical shell in our forward model. Figures 3, 4 and 5 illustrate the outcome of the forward-modelling procedure. In each figure the uncertainty on the 14 free model parameters is propagated and provides the envelope indicated with red dashed lines. In contrast to the validation model (e.g. Fig. A.1), our visualization now incorporates the effect of the intrinsic dispersion in the thermodynamic quantities, which is a key ingredient in our model. In each figure (except for the chemical abundances) this intrinsic dispersion is added in quadrature to the parameter uncertainties in order to provide the red shaded envelope. Profiles built from simulation particles also incorporate intrinsic dispersion. In each radial shell we computed the particle-volume weighted histogram of a given quantity (e.g., density). Fig. 3 reports the associated 16th, 50th and 84th percentiles as a plain blue line and a blue shaded envelope. Focusing first on the recovery of physical thermodynamic quantities (electron density \(n_{e}\), pressure \(P\), temperature \(T\) and entropy \(K\) in Fig.
3), we notice a qualitatively good agreement between the profiles recovered from the fit and the input profiles. \begin{table} \begin{tabular}{l l l l} \hline \hline Parameter & Initial value & Priors & Unit \\ \hline \(n_{0}\) & \(10^{-2}\) & \(\mathcal{U}(10^{-7},\,1)\) & cm\({}^{-3}\) \\ \(\alpha_{1}\) & \(-0.3\) & \(\mathcal{U}(-1,\,3)\) & \\ \(\beta_{1}\) & \(0.9\) & \(\mathcal{U}(0.1,\,4)\) & \\ \(r_{s}\) & \(0.3\) & \(\mathcal{U}(0.1,\,1)\) & \(R/R_{500}\) \\ \(P_{0}\) & \(5\times 10^{-2}\) & \(\mathcal{U}(0,\,0.2)\) & keV cm\({}^{-3}\) \\ \(\alpha_{2}\) & \(3\) & \(\mathcal{U}(0,\,4)\) & \\ \(\beta_{2}\) & \(5.17\) & \(\mathcal{U}(1,\,10)\) & \\ \(\gamma\) & \(0.43\) & \(\mathcal{N}(0.43,0.1)\) & \\ \(c_{500}\) & \(2.4\) & \(\mathcal{U}(1,\,4)\) & \\ \(A_{0,Fe}\) & \(5\times 10^{-2}\) & \(\mathcal{U}(0,\,2)\) & \(Z/Z_{\odot}\) \\ \(A_{0,Si}\) & \(0.13\) & \(\mathcal{U}(0,\,2)\) & \(Z/Z_{\odot}\) \\ \(A_{0,Mg}\) & \(9\times 10^{-2}\) & \(\mathcal{U}(0,\,2)\) & \(Z/Z_{\odot}\) \\ \(p\) & \(0.6\) & \(\mathcal{U}(-1,\,3)\) & \\ \hline \(bkg\) & \(10^{-2}\) & \(\mathcal{U}(0,\,0.3)\) & cts/s/arcmin\({}^{2}\) \\ \hline \end{tabular} \end{table} Table 2: Cluster model parameters. The large amount of intrinsic dispersion within each spherical shell makes comparison of the median profiles cumbersome, since there is no single density (resp. pressure, temperature, entropy) at a given radius, but rather a distribution of densities (resp. \(P\), \(T\), \(K\)). In order to quantify the agreement between the input and fitted profiles, we performed a two-sample Kolmogorov-Smirnov (KS) test in thin shells at each radius \(r\), by comparing the distribution of density values \(n_{e}\) (resp. \(P\), \(T\), \(K\)) of the Hydrangea simulation to the distribution of densities (resp. \(P\), \(T\), \(K\)) sampled by the MCMC. The KS statistic is related to the probability of rejecting the null hypothesis that both distributions originate from the same (unknown) distribution. The lower the KS value, the more confident we are that both profiles are in agreement with each other. We also computed the average value of the KS statistics over the radial range comprised between the PSF size and \(R_{500}\), and we list the values in Table 3 (in bold characters).
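A minimal sketch of this shell-by-shell KS comparison, using SciPy's two-sample test with made-up lognormal samples standing in for the particle and MCMC-sampled densities in one shell (illustrative values only):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for one thin shell: volume-weighted particle densities
# from the simulation vs. densities sampled by the MCMC (fabricated).
ne_particles = rng.lognormal(mean=-6.0, sigma=0.4, size=5000)
ne_mcmc = rng.lognormal(mean=-6.05, sigma=0.35, size=5000)

# Low KS values mean we cannot distinguish the two distributions.
ks_stat, p_value = ks_2samp(ne_particles, ne_mcmc)
print(f"KS = {ks_stat:.3f}, p = {p_value:.3g}")

# The figures of merit in Table 3 are then the means of such KS
# statistics over all shells between the PSF size and R500.
```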
Folding the MCMC parameter samples through our model provides the distribution of projected observables, namely the 0.4-1 keV surface brightness (Fig. 4, left) and the emission-measure-weighted temperature (Fig. 4, right). Most of the dispersion in the surface brightness posterior samples originates from the intrinsic dispersion of thermodynamic quantities (mostly \(n_{e}\)), while statistical uncertainties on the profiles arising from the MCMC have little impact on the global error budget. The deviation of the reconstructed surface brightness profile, relative to the statistical error, is at most 1.4. The variance in the posterior projected temperature is roughly equally shared between intrinsic dispersion and parameter uncertainties. As in the validation case (Sect. 4.3 and Fig. A.1), some deviations appear between the best-fit and input models, although they are contained within the 1-\(\sigma\) envelope. A noticeable outlier is the XSPEC-fitted temperature in the fourth radial bin at \(R\simeq 0.45\,R_{500}\). Part of this bias is due to mixing effects along the line-of-sight, and to a spectral model that is unable to account for such mixed components. The bias is also caused by a relatively faint CXB point source located within the brighter cluster region. It is therefore absent from the set of excised point sources (circles in Fig. 1). This unmasked point source brings an excess of high-energy photons into the fourth radial bin spectrum, which is sufficient to bias the XSPEC measurement high. The deviation of the reconstructed temperature, relative to the statistical error, is at most 2.9, and at most 1 if we remove the failed measurement in the fourth annulus. The inference of chemical abundance profiles is depicted in Fig. 5. None of the profiles is correctly recovered; in other words, the 68% posterior confidence level (dashed purple envelope) does not reproduce well the EM-weighted abundance profile (black thick line) known as input from the hydrodynamical simulation. It is worth noting how the XSPEC-fitted abundances scatter widely around the expected values, denoting both a lack of statistics and inadequacy in the spectral model due to the mixing of components. The apparent underestimation of the posterior abundance profiles originates from the low signal-to-noise ratios and from an improper use of a Gaussian likelihood (Eq. 7) for strictly positive abundance values. We have verified that increasing the X-IFU exposure to 1 Ms provides a decent recovery of the iron (Fe) and silicon (Si) profiles, while magnesium (Mg) still suffers from poor statistics (see Sect. 6 and Fig. B.3). Such a result is not surprising, given the faintness of the cluster emission and the large dispersion of abundances along a single line of sight in the Hydrangea simulation (which is spatially correlated with the density and temperature), and hence the inability of our model to adequately capture the 3-dimensional structure of the element abundances in such a complex object. This issue is exacerbated by our choice of a crude spatial binning, mostly targeted towards temperature extraction (see e.g., Cucchetti et al. 2018, presenting an alternative binning scheme tailored for abundance measurements). ## 6 Discussion ### On inferring the properties of a \(z=2\) cluster of galaxies Our analysis demonstrates the capability to infer the (volume-weighted) thermodynamic properties of a realistic cluster of galaxies located at \(z=2\), using a moderate exposure budget of 100 ks with X-IFU onboard _Athena_. Despite the compact, faint appearance of this low-mass object at such a large cosmological distance (Fig. 1), the effective radial range accessible to X-IFU spans almost one decade between \(\approx 0.1-1\,R_{500}\). This enables recovery of the shape, amplitude and characteristic slopes of the profiles. The finite instrumental angular resolution prevents accessing smaller scales and resolving core properties. The density and pressure profiles are the quantities best reconstructed in comparison to the input data, with deviations of the median profile reaching at most 20%. Temperature and entropy display more noticeable deviations, up to 50% when considering the median profiles. The explanation for this difference relates to the fact that temperature and entropy are deduced from the other two; their uncertainties therefore propagate the errors of both the density and pressure profiles. Moreover, this difference is also related to the primary observables used for inference. Since surface brightness profiles are measured with much less uncertainty than projected temperature (see e.g. Fig.
4), quantities heavily dependent on temperature (temperature itself, and entropy) are much more strongly affected by measurement systematics than density and pressure. Our \(z=2\) study highlights a key point, namely that the ICM is neither spherically symmetric nor homogeneous within a given radial shell, so that a mere comparison of median profiles cannot grasp the complete reality of the scientific problem. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Redshift & Projection & Exposure [ks] & \(n_{e}\) & \(T\) & \(P\) & \(K\) \\ \hline \(z=1\) & ‘regular’ & 100 & 0.152 & 0.157 & 0.121 & 0.168 \\ & & 250 & 0.107 & 0.158 & 0.117 & 0.127 \\ & ‘irregular’ & 100 & 0.072 & 0.084 & 0.102 & 0.065 \\ & & 250 & 0.079 & 0.152 & 0.150 & 0.122 \\ \hline \(z=1.5\) & ‘regular’ & 100 & 0.119 & 0.167 & 0.142 & 0.188 \\ & & 250 & 0.105 & 0.116 & 0.100 & 0.159 \\ & ‘irregular’ & 100 & 0.088 & 0.243 & 0.149 & 0.235 \\ & & 250 & 0.060 & 0.113 & 0.088 & 0.112 \\ \hline \(z=2\) & ‘regular’ & **100** & **0.136** & **0.234** & **0.080** & **0.241** \\ & & 250 & 0.431 & 0.368 & 0.614 & 0.387 \\ & & 1000 & 0.161 & 0.332 & 0.140 & 0.321 \\ & ‘irregular’ & 100 & 0.113 & 0.327 & 0.220 & 0.291 \\ & & 250 & 0.097 & 0.309 & 0.305 & 0.217 \\ & & 1000 & 0.048 & 0.269 & 0.229 & 0.207 \\ \hline \hline \end{tabular} * **Notes.** Radial average of the Kolmogorov-Smirnov test statistics, for each of the four thermodynamic profiles recovered by our fitting procedure. The KS tests are computed with respect to the input profiles from the Hydrangea simulation (lower KS values indicate closer agreement). The radial average is evaluated between the size of the PSF and \(R_{500}\). Bold characters refer to the configuration specifically discussed in Sect. 5 and shown in Fig. 3. \end{table} Table 3: Kolmogorov-Smirnov test results For this reason we have also compared distributions of thermodynamic quantities at fixed radius. Our radially-dependent KS-statistic test indicates again that the recovered density and pressure relate well to the Hydrangea simulation, with KS statistics spanning values between 0 and 0.2. Entropy and temperature may display KS values up to \(0.4-0.5\), notably in the centre where PSF blurring is significant. The KS indicator is also elevated at radial locations affected by a faulty XSPEC measurement (Fig. 4, right, at \(R\simeq R_{500}/2\) in this example). Such a systematic error is not solely due to poor statistics, nor to inhomogeneity in the cluster gas distribution. Indeed, we have shown that this error acts as a floor uncertainty, inherent to our analysis setup. First, the inability of a single APEC model to account for a multi-temperature plasma projected along the line-of-sight induces discrepancies that are not well captured by the XSPEC error bars. Development of multi-temperature spectral models with arbitrary distributions of emission measure (e.g., generalizing the class of gadem models) would certainly benefit high-resolution spectroscopy of diffuse astrophysical plasmas. Moreover, identification of point sources contaminating the spectral measurements and buried in the cluster emission should also enhance the quality of spectral fits. Second, we have worked under the assumption that emission-measure weighting fairly represents the measured X-IFU spectroscopic temperature.
Previous studies focusing on _XMM_-Newton and _Chandra_ have instead proposed 'spectroscopic-like' weightings in order to alleviate this concern (e.g. Mazzotta et al. 2004; Vikhlinin 2006). We have verified that spectroscopic-like temperature profiles are even more discrepant with the measurements than emission-measure weighted profiles. Such work remains to be done for high-resolution instruments like the X-IFU. More generally, our study calls for further development of new analysis tools dedicated to hyperspectral imaging of extended structures (e.g. Picquenot et al. 2019), able to handle the low-count regime. Figure 3: Three-dimensional thermodynamic quantities (in blue) for the \(z=2\) galaxy cluster with projection ‘regular’ in the Hydrangea sample, and their best-fit models inferred from an X-IFU 100 ks exposure (in red). Each panel representing one quantity is made of three plots. The top curves display the radial profiles; the blue shaded envelope represents the dispersion of the gas particles in the hydro-simulation. The red dashed lines indicate the effect of the variance of the 14 free model parameters; the shaded envelope also includes the radial dispersion encapsulated in our model. The middle plot represents the deviation of the profile relative to the input median profile. The bottom plot shows the results of a Kolmogorov-Smirnov (KS) test performed at each radius \(r/r_{500}\), related to the probability that the input and best-fit profiles do not originate from the same distribution (the lower the KS, the closer the agreement between the profiles). Vertical lines indicate the range of applicability of our modelling procedure. ### Impact of deeper observations and of targeting a cluster in a more mature evolutionary stage Up to now, our results and discussion have focused on a single galaxy cluster extracted from the \(z=2\) simulation snapshot. Observing this cluster of galaxies at later times (i.e., at lower redshifts, \(z=1\) and 1.5) brings supplementary information to our study. At later epochs, this cluster is more massive, more extended, hotter and intrinsically more luminous (Table 1). This leads to an increase of the signal-to-noise ratios in the observables, both the surface brightness and the spectra. Since the cluster is closer to the observer, the surface brightness is also higher, hence another increase in signal-to-noise ratio. Figure 4: Projected observables and their best-fit models for the \(z=2\) galaxy cluster with projection ‘regular’ in the Hydrangea sample, as seen by X-IFU in a 100 ks exposure. Left: surface brightness profile in the 0.4–1 keV band and its uncertainties (green points). The posterior mean (‘best-fit model’) and its 68% confidence envelope are represented in purple colours. The bottom panel displays the deviation of the measurements relative to the best-fit, normalised by the errors. Right: two-dimensional temperature profile as measured by XSPEC in each annular bin (green points and errors). The emission-measure weighted temperature \(T_{\rm EM}\) (known from the hydrodynamic simulation) is shown as a thick black line. In both panels, the shaded purple envelope includes both the contribution of the intrinsic dispersion of physical quantities within the radial shells and the propagated variance of the 14 free model parameters (dashed lines). Figure 5: Projected abundance profiles and their best-fit models for the \(z=2\) galaxy cluster with projection ‘regular’ in the Hydrangea sample, as seen by X-IFU in a 100 ks exposure. Each panel corresponds to one of the chemical elements Fe, Si and Mg. The measurement output by XSPEC is shown as green points (and errors). The emission-measure weighted abundance profile (known as input) is displayed with a thick black line. The best-fit model is displayed in purple and the dashed lines represent the error of the free model parameters.
We note however that the angular diameter distance hardly changes over this range of redshifts, by a few percent at most. Therefore, little gain is expected from angular resolution effects. We have replicated the analysis shown previously for all 14 configurations displayed in Table 3. In each case we inferred the four thermodynamic profiles and the three abundance profiles, accounting for the intrinsic dispersion within a radial shell. We inspected the results in light of the deviations from the known input profiles. Although the fidelity of the profile reconstruction has a strong radial dependence, we summarise our results here with a single quantity, namely the average of the KS tests performed over the whole range of radii comprised between the PSF size and \(R_{500}\) (Table 3). In general, moving the cluster closer to the observer enhances the accuracy of the reconstructed thermodynamic and abundance profiles, as does increasing the exposure time. This is especially visible for the density profiles, whose inference relies primarily on the surface brightness profiles. However, we found significant outliers to this overall trend, due to the systematic effects already discussed in Sect. 5 and App. B. Indeed, a single faulty temperature measurement in one radial bin (e.g., due to mixing of components by projection along the line-of-sight) has a negative impact on all reconstructed quantities. Surprisingly, such situations may occur even at low redshifts and/or for large exposure times. One such example is the \(z=1\), ‘irregular’ configuration, which seems better characterised at 100 ks than at 250 ks. A second example is the \(z=2\), 250 ks, ‘regular’ configuration, which comprises a catastrophic temperature measurement, hence shifting the reconstructed profiles considerably away from the true ones. We also find that observing the cluster in an orientation that minimises the projection of gas phases along the line-of-sight and maximises the asymmetries in the plane of the image (i.e., the so-called ‘irregular’ case) often slightly improves the reconstruction of profiles, consistent with our expectations. ### Perspectives This work presents results on one single test case only. This object may display peculiarities that are not representative of the entire population of groups and clusters. A complete assessment of the scientific feasibility of thermodynamic profile inference with X-IFU would involve a larger sample of objects. On the one hand, singularities associated with a single test case would average out; on the other hand, this would more closely match the approach that observers take in studying intra-cluster and intra-group physics. The study presented in this paper was conducted with the current public science requirements for the _Athena_ mission and the X-IFU instrument. The _Athena_ mission is currently undergoing a complete reformulation of its science case and consequently of the specifications of its instruments. The outcomes of our study may therefore need to be revisited in light of this reformulation.
As stressed in the introduction, we limited our study to an investigation with the X-IFU instrument following the specifications from the Athena Mock Observing Plan. However, we note that a natural extension of the presented work would be to investigate distant groups of galaxies with deep pointed observations with the second Athena instrument, the Wide Field Imager (WFI, Rau et al., 2017), in combination with X-IFU observations. This would optimise the physical characterisation of these objects, at the expense of exposure time, as the two instruments will not observe simultaneously. Accounting for the current Athena mock observing plan specifications and the ongoing reformulation process for the Athena mission, we will implement this dual combination in a forthcoming investigation. The upcoming XRISM mission will soon provide the community with unprecedented high-spectral-resolution X-ray observations of nearby bright objects. More distant objects, such as the first groups of galaxies, will have to wait for the advent of the next generation of X-ray integral field units, such as the X-IFU instrument on board the _Athena_ mission or the LEM mission concept (Kraft et al., 2022). ###### Acknowledgements. The authors would like to thank Dominique Eckert for refereeing this paper and for providing insightful comments on the study. EP, PC and NC acknowledge the support of CNRS/INSU and CNES. JS acknowledges the support of The Netherlands Organisation for Scientific Research (NWO) through research programme Athena 184.034.002. YMB gratefully acknowledges funding from the Netherlands Research Organisation (NWO) through Veni grant number 639.041.751, and financial support from the Swiss National Science Foundation (SNSF) under project 200021_213076.
2303.15668
Non-Hermitian guided modes and exceptional points using loss-free negative-index materials
We analyze the guided modes in coupled waveguides made of negative-index materials without gain or loss. We show that this system supports a non-Hermitian phenomenon in the existence of guided modes versus the geometric parameters of the structure. The non-Hermitian effect is different from parity-time (PT) symmetry, and can be explained by a simple coupled-mode theory with an anti-PT symmetry. The existence of exceptional points and the slow-light effect are discussed. This work highlights the potential of loss-free negative-index materials in the study of non-Hermitian optics.
Li-Ting Wu, Xin-Zhe Zhang, Ru-Zhi Luo, Jing Chen
2023-03-28T01:23:21Z
http://arxiv.org/abs/2303.15668v1
# Non-Hermitian guided modes and exceptional points using loss-free negative-index materials ###### Abstract We analyze the guided modes in coupled waveguides made of negative-index materials without gain or loss. We show that this system supports a non-Hermitian phenomenon in the existence of guided modes versus the geometric parameters of the structure. The non-Hermitian effect is different from parity-time (\(\mathcal{PT}\)) symmetry, and can be explained by a simple coupled-mode theory with an anti-\(\mathcal{PT}\) symmetry. The existence of exceptional points and the slow-light effect are discussed. This work highlights the potential of loss-free negative-index materials in the study of non-Hermitian optics. ## I Introduction In recent years non-Hermitian physics, especially parity-time (\(\mathcal{PT}\)) symmetry [1; 2; 3; 4; 5; 6; 7], has attracted much attention in the optics community, because it provides an additional degree of freedom in manipulating the dynamics of optical waves. Nevertheless, the requirement of a proper strength of gain and loss effects hinders realistic applications of \(\mathcal{PT}\)-symmetric optics. Advances in other categories of non-Hermitian optics, e.g. anti-\(\mathcal{PT}\) symmetry [8; 9; 10; 11], might overcome this drawback because gain and loss are not strictly required. The simplest configuration for studying \(\mathcal{PT}\)-symmetric optics is two parallel waveguides (WGs). Such a configuration can be readily studied by using Maxwell's equations, and it supports some intriguing optical effects, especially the stopped light at exceptional points (EPs) [12; 13; 14; 15] that separate the conserved and broken \(\mathcal{PT}\) phases [16; 17; 18; 19; 20]. However, an optical wave propagating inside a WG is characterized by several important physical parameters, especially the wavevector \(k\), which describes the propagation of the phase front, and the Poynting vector \(S\), which represents the propagation of the energy flux. They carry different information about the wave, and might have different directions of propagation [21; 22]. The extreme situation occurs in a negative-index material (NIM) with simultaneously negative magnetic permeability (\(\mu_{\text{NIM}}<0\)) and electric permittivity (\(\epsilon_{\text{NIM}}<0\)), where the direction of \(k\) is opposite to that of \(S\) [23; 24; 25; 26; 27]. However, the non-Hermitian features of coupled WGs containing NIMs have not been extensively discussed. Only recently have Mealy et al. [28] shown that an EP of degeneracy can be obtained in two coupled WGs by a proper coupling of forward and backward waves, where the backward waves are generated by a properly designed grating. In this article, we study the optical waves in coupled WGs made of a NIM and a positive-index material (PIM). A PIM has positive magnetic permeability and electric permittivity, and covers all the dielectric materials in nature. We show that the guided modes in this system also support non-Hermitian features, even when all the constituent media are loss-free. At a given angular frequency and WG thicknesses, the wavevector \(k\) of the eigenmodes and the associated eigenvectors vary with the distance \(a\) between the two WGs, and they coalesce at a critical value \(a_{c}\) below which the wavevector \(k\) becomes complex. This phase transition point is an EP. A simple non-Hermitian coupled-mode theory is utilized to explain our results, which implies that the field dynamics induced by the NIM belongs to anti-\(\mathcal{PT}\) symmetry.
Features at EPs, including the slow-light effect, are discussed. This study highlights the important novelties of NIMs, especially their great potential in the study of non-Hermitian optics by bypassing many restrictions of \(\mathcal{PT}\) symmetry. ## II Simulation and analysis Let us consider the simple structure shown in Fig. 1. It contains two straight WGs made of lossless media surrounded by air. One WG is a PIM, and we assume that it is a dielectric with \(\epsilon_{\text{PIM}}=4\) and \(\mu_{\text{PIM}}=1\). The other WG is a NIM of the kind that has been discussed by various authors [29; 30; 31]. The well-documented NIM model with \(\epsilon_{\text{NIM}}=1-\omega_{p}^{2}/\omega^{2}\) and \(\mu_{\text{NIM}}=1-F\omega^{2}/(\omega^{2}-\omega_{0}^{2})\) can be utilized here, where \(\omega_{p}=2\pi\times 10\) GHz, \(\omega_{0}=2\pi\times 4\) GHz, and \(F=0.56\) [29; 30; 31]. An effective NIM is achieved between 4 GHz and 6 GHz. Because a NIM requires an intrinsic dispersion with \(\partial(\epsilon_{\rm NIM}\omega)/\partial\omega>0\) and \(\partial(\mu_{\rm NIM}\omega)/\partial\omega>0\) so as to give a positive energy density [21; 22], in the analysis below we keep the angular frequency \(\omega\) constant, and study the variation of the wavevectors \(k\) of the eigenmodes versus a geometric parameter of the coupled WGs. Here we set \(\omega=2\pi\times 5\) GHz (a free-space wavelength of 6 cm), and the dispersion gives \(\epsilon_{\rm NIM}=-3\) and \(\mu_{\rm NIM}=-0.556\). The index of refraction equals \(n_{\rm NIM}=\sqrt{\epsilon_{\rm NIM}}\sqrt{\mu_{\rm NIM}}=-1.291\). The thickness \(b\) of each WG is 4 cm. The distance \(a\) between the two WGs is chosen as the variable. Assuming the direction of the wavevector \(k\) is along \(z\), the dispersion and distribution of the eigenmodes can be numerically found by using Maxwell's equations and boundary conditions [21; 14; 22]. For example, considering the transverse-electric (TE) polarized eigenmodes with a non-vanishing \(E_{y}\) component \[E_{y}=e^{-jkz+j\omega t}\left\{\begin{array}{ll}E_{1}e^{-\beta(x-a/2-b)},&x>a/2+b\\ E_{2}e^{-j\alpha_{\rm NIM}(x-a/2)}+E_{3}e^{+j\alpha_{\rm NIM}(x-a/2)},&a/2+b>x>a/2\\ E_{4}e^{+\beta x}+E_{5}e^{-\beta x},&a/2>x>-a/2\\ E_{6}e^{-j\alpha_{\rm PIM}(x+a/2)}+E_{7}e^{+j\alpha_{\rm PIM}(x+a/2)},&-a/2>x>-a/2-b\\ E_{8}e^{+\beta(x+a/2+b)},&x<-a/2-b\end{array}\right. \tag{1}\] with \[k^{2}-\beta^{2}=\omega^{2}/c^{2}, \tag{2}\] \[k^{2}+\alpha_{m}^{2}=\epsilon_{m}\mu_{m}\omega^{2}/c^{2}, \tag{3}\] where the subscript \(m\) stands for NIM or PIM. The associated magnetic fields can be found from Eq. (1) using \(\nabla\times\vec{E}=-\partial\vec{B}/\partial t\). By applying the electromagnetic boundary conditions, and defining \[F_{m}=\frac{\beta+j\alpha_{m}/\mu_{m}}{\beta-j\alpha_{m}/\mu_{m}}, \tag{4}\] \[\Upsilon_{m}=\frac{F_{m}\exp(j2\alpha_{m}b)-F_{m}^{-1}}{\exp(j2\alpha_{m}b)-1}, \tag{5}\] it is straightforward to show that the eigensolutions \(k\) are given by \[\Upsilon_{\rm NIM}\Upsilon_{\rm PIM}-\exp(-2\beta a)=0. \tag{6}\] By substituting the eigensolutions \(k\) back into Eq. (1) we can calculate the distributions of the fields. Formulas for transverse-magnetic (TM) polarized eigenmodes can be developed similarly. Equation (6) can be numerically solved, for example, by evaluating its left-hand side at different \(k\) values to find the zeroes. A standard result of \(k\) versus \(a\) is shown in Fig. 2(a).
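As an illustration, here is a short numerical sketch (our own, not the authors' code) that reproduces the quoted material parameters and scans Eq. (6) for real-\(k\) guided modes; the scan range and the minimum-finding heuristic are arbitrary choices.

```python
import numpy as np

# NIM dispersion from the text: eps = -3, mu = -0.556, n = -1.291 at 5 GHz
F, f_p, f_0, f = 0.56, 10e9, 4e9, 5e9               # Hz
w, w_p, w_0 = 2 * np.pi * f, 2 * np.pi * f_p, 2 * np.pi * f_0
eps_nim = 1 - w_p**2 / w**2
mu_nim = 1 - F * w**2 / (w**2 - w_0**2)
print(eps_nim, mu_nim, -np.sqrt(eps_nim * mu_nim))  # -3, -0.556, -1.291

eps_pim, mu_pim = 4.0, 1.0
b, a = 4.0, 3.0                                     # thickness and gap (cm)
k0 = w / 3e10                                       # free-space wavenumber (1/cm)

def upsilon(k, eps, mu):
    """Eqs. (3)-(5) for one waveguide; k in cm^-1, complex-safe."""
    alpha = np.sqrt(eps * mu * k0**2 - k**2 + 0j)
    beta = np.sqrt(k**2 - k0**2 + 0j)
    F_m = (beta + 1j * alpha / mu) / (beta - 1j * alpha / mu)
    e = np.exp(2j * alpha * b)
    return (F_m * e - 1 / F_m) / (e - 1)

def lhs(k):
    """Left-hand side of the dispersion relation, Eq. (6)."""
    beta = np.sqrt(k**2 - k0**2 + 0j)
    return upsilon(k, eps_nim, mu_nim) * upsilon(k, eps_pim, mu_pim) \
        - np.exp(-2 * beta * a)

# Dense scan over real k above the light line; guided modes appear as
# deep local minima of |LHS|.
ks = np.linspace(1.05 * k0, 1.35, 20000)
r = np.abs(lhs(ks))
roots = ks[1:-1][(r[1:-1] < r[:-2]) & (r[1:-1] < r[2:])]
print(roots)   # expected to include k ~ 1.186 and 1.328 cm^-1 at a = 3 cm
```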
We can see that when the WG distance \(a\) is large enough that the two WGs are weakly coupled, two separate eigenmodes are obtained. Each eigenmode represents a localized mode in a single WG. From the distributions of the field \(E_{y}\) and the Poynting vector \(S_{z}\) shown in Figs. 2(b) and 2(d) we can conclude that the eigenmode at \(k_{\rm NIM}=1.328\) cm\({}^{-1}\) is supported by the NIM WG, where the direction of the energy flux is opposite to that of \(k\). The other one, at \(k_{\rm PIM}=1.186\) cm\({}^{-1}\), is localized in the PIM WG. From the number of nodes (\(E_{y}=0\)) inside each WG we can see that the field supported by the planar PIM WG is a TE\({}_{2}\) mode, and that in the NIM WG is a TE\({}_{1}\) one [30]. Note that the wavevector \(k\) is real here, and we have assumed that the expansion coefficient \(E_{1}\) in Eq. (1) is real. As a result, the imaginary component Im(\(E_{y}\)) is zero and is not shown in Figs. 2(b) and 2(d). The coefficient \(\exp(-jkz+j\omega t)\) in Eq. (1) is omitted when plotting the distribution of fields. Figure 2(a) displays a unique phenomenon that is absent in ordinary dielectric WGs. When \(a\) decreases, the modes in the two WGs can couple together to form hybrid eigenmodes. However, here the \(k\) values of the eigenmodes approach each other. When \(a\) is smaller than a critical distance \(a_{c}\), the eigenmodes coalesce. Below \(a_{c}\) only complex \(k\) exists (brown line and dots in Fig. 2(a)), which can be found by searching for the solutions of Eq. (6) in the space formed by Re(\(k\)) and Im(\(k\)). This phenomenon is in sharp contrast with the usual intuition for coupled PIM-PIM WGs, namely that when \(a\) decreases, the coupling between the two WGs becomes stronger and the split in \(k\) should increase rather than decrease. To illustrate this difference, we show an example of \(k\) versus \(a\) in a PIM-PIM configuration in Fig. 2(c), where the NIM in the structure of Fig. 1 is replaced by a conjugate PIM with \(\epsilon_{\rm PIM}=3\) and \(\mu_{\rm PIM}=0.556\). We can see that in this PIM-PIM configuration all the solutions of \(k\) are real, and the two branches of \(k\) do not coalesce. The coalescence of the dispersion at a critical value \(a_{c}\) shown in Fig. 2 closely resembles EPs in \(\mathcal{PT}\)-symmetric systems [12; 13; 14; 15; 16; 17; 18; 19; 20]. We check the eigenmodes around this point (see Fig. 3), and find that when approaching this point from \(a>a_{c}\), the field distributions of the two eigenmodes become more and more similar to each other. So at this critical point not only the eigensolutions \(k\) but also the associated eigenvectors coalesce simultaneously. This point is not a diabolic point of a Hermitian system, but a standard EP [16; 17; 18]. The region of \(a>a_{c}\) (\(a<a_{c}\)), where the eigensolutions \(k\) are real (complex), possesses an exact (broken) non-Hermitian phase. Figure 3 also displays the distributions of the fields when \(a<a_{c}\). In this phase-broken region, at each given \(a\) value we can generally find two complex solutions of \(k\), which are conjugate to each other. The complex solutions of \(k\) render complex values of \(\beta\) and \(\alpha_{m}\) given by Eqs. (2) and (3). Consequently, Im(\(E_{y}\)) is no longer zero (see the insets of Fig. 3). To check how the curves of \(k\) versus \(a\) shown in Fig. 2(a) vary when parameters of the coupled WGs change, we repeat the calculations at different \(\epsilon_{\rm PIM}\) values. This changes the \(k_{\rm PIM}\) value of the PIM WG.
One can also achieve the same effect by changing its thickness. As for the NIM WG, we keep all its parameters constant so that \(k_{\rm NIM}\) does not vary throughout this article. Figure 2: (a) Variation of \(k\) versus \(a\). (b) and (d) are the distributions of the field \(E_{y}\) and the Poynting vector \(S_{z}\) at the two dispersion points in (a) where \(a=3\) cm. (c) shows the results in a PIM-PIM structure. Note that Im(\(E_{y}\)) is zero in (b) and (d), and the \(y\)-coordinate of Im(\(k\)) in (a) is shown at the right side, which is properly chosen so that the origins of the curves of Im(\(k\)) overlap with the EP. Figure 4 displays the results when \(\epsilon_{\rm PIM}\) increases linearly from 4.3 to 4.6. The increased \(\epsilon_{\rm PIM}\) pushes the \(k_{\rm PIM}\) value in the PIM WG close to, or even past, \(k_{\rm NIM}\) in the NIM WG. From Fig. 4 we can see that as \(\epsilon_{\rm PIM}\) increases, the loop of \(k\) versus \(a\) shrinks, and eventually disappears around \(\epsilon_{\rm PIM}=4.5\). When \(\epsilon_{\rm PIM}\) further increases from 4.5, the curve appears again and becomes larger. Since \(k_{\rm NIM}=1.328\) cm\({}^{-1}\) does not change, the scenario of \(\epsilon_{\rm PIM}=4.5\) is in fact around the critical point where the degeneracy \(k_{\rm PIM}=k_{\rm NIM}\) takes place. Figure 3: Distributions of the fields \(E_{y}\) at several different points on the \(k\) versus \(a\) curves. They confirm that the transition point at \(a_{c}\) is an EP. Note that \({\rm Im}(E_{y})\) is no longer zero when \(a<a_{c}\), where \({\rm Im}(k)\neq 0\). ## III A non-Hermitian coupled-mode theory In the above simulation all the media in the coupled WGs are loss-free. Nevertheless, the coalescence of eigenmodes at a critical value \(a_{c}\) hints that this loss-free system can possess non-Hermitian phenomena. Here we utilize a non-Hermitian coupled-mode theory [21] of the form \(M\Psi=k_{\pm}\Psi\) to explain the results, which reads \[\left[\begin{array}{cc}k_{\text{NIM}}&-\gamma\\ \gamma&k_{\text{PIM}}\end{array}\right]\left[\begin{array}{c}\psi_{\text{NIM}}\\ \psi_{\text{PIM}}\end{array}\right]=k_{\pm}\left[\begin{array}{c}\psi_{\text{NIM}}\\ \psi_{\text{PIM}}\end{array}\right]. \tag{7}\] Here \(k_{\text{NIM}}\) and \(k_{\text{PIM}}\) are the wavevectors of the guided modes in the separate NIM and PIM WGs, respectively, at the given angular frequency \(\omega\). Both \(k_{\text{NIM}}\) and \(k_{\text{PIM}}\) are real. The parameter \(\gamma\) is also real and characterizes the mutual coupling between the two WGs. The basis vectors \(\psi_{\text{NIM}}\) and \(\psi_{\text{PIM}}\) represent field components of the guided modes that characterize the eigenvectors. Evidently, Eq. (7) is non-Hermitian because the off-diagonal elements are not conjugates, i.e. \(M_{12}\neq M_{21}^{*}\). The choice of \(M_{21}=-M_{12}=\gamma\) can be explained by simultaneously considering two aspects. One is that the total energy should be conserved in this loss-free system [32; 28]. The other is that the propagation directions of energy in the NIM and PIM WGs are opposite to each other, so the coupling between them is contradirectional [32; 28]. To see whether Eq. (7) can explain the main features of our former analysis, we first solve it. Defining \[k_{\text{NIM}}=k_{0}+\Delta, \tag{8}\] \[k_{\text{PIM}}=k_{0}-\Delta, \tag{9}\] where \(k_{0}\) is the mean wavevector and \(\Delta\) is the detuning, the eigensolutions \(k_{\pm}\) are \[k_{\pm}=k_{0}\pm\sqrt{\Delta^{2}-\gamma^{2}}. \tag{10}\]
By using Eq. (10) we can fit the curves of \(k\) versus \(a\) given by Eq. (6) and find how the magnitude of \(\gamma\) varies with the WG distance \(a\). Results for \(\epsilon_{\text{PIM}}=4\) are shown in Fig. 5. When performing the best fitting, we first choose proper values of \(k_{\text{NIM}}\) and \(k_{\text{PIM}}\), which are 1.328 cm\({}^{-1}\) and 1.186 cm\({}^{-1}\), respectively, for the results shown in Fig. 5. The values of \(k_{\text{NIM}}\) and \(k_{\text{PIM}}\), as well as those of \(k_{0}=1.257\) cm\({}^{-1}\) and \(\Delta=0.071\) cm\({}^{-1}\), are kept as constants. Then, based on the \(k_{\pm}\) values obtained from Eq. (6), we can use Eq. (10) to find the values of \(\gamma\). Because the two solutions of Eq. (6) might give different values of \(\gamma\), we keep only the averaged one, which is then substituted back into Eq. (10) to test the discrepancy from Eq. (6). A similar fitting process is also performed in the phase-broken region of complex \(k\) values. From Fig. 5(a) we can see that the prediction from Eq. (10) fits well with that of Eq. (6). The deviation of Re(\(k\)) in the phase-broken region might be fixed by assuming a dependence of \(k_{\text{NIM}}\) and \(k_{\text{PIM}}\) on \(a\), which is not discussed here. It is interesting to emphasize that the magnitude of \(\gamma\) decreases exponentially with increasing \(a\), and can be approximately expressed by \[\gamma=\gamma_{0}\exp(-a/L_{\text{c}}) \tag{11}\] where \(\gamma_{0}=0.18\) cm\({}^{-1}\). The decay length \(L_{\text{c}}\) equals 1.45 cm, very close to the value of 1.438 cm given by \(\beta^{-1}=(k_{0}^{2}-\omega^{2}/c^{2})^{-1/2}\). Evidently, when \(a\) is smaller than \(a_{c}\), the mutual coupling strength \(|\gamma|\) is larger than \(|\Delta|\); the phase of the coupled WGs is then spontaneously broken, giving complex \(k\) values. When \(a\) is larger than \(a_{c}\), \(|\gamma|\) becomes smaller than \(|\Delta|\), and the phase is conserved. The position \(a_{c}\) is the critical phase-transition point, the EP, where \(|\gamma|=|\Delta|\). When the two WGs are far away from each other, \(\gamma\) is zero, and \(k_{\pm}=k_{\text{NIM, PIM}}\). The eigenvectors are then \(\Psi=[1,0]^{T}\) and \(\Psi=[0,1]^{T}\), and the fields are localized in the separate WGs. Furthermore, the detuning \(\Delta\) determines the size of the loop in the \(k\) versus \(a\) spectrum. When \(k_{\text{NIM}}=k_{\text{PIM}}\), i.e. the eigensolutions in the NIM and PIM WGs are degenerate, the phase is always spontaneously broken, because then \(\Delta=0\) and any nonzero \(\gamma\) gives complex \(k_{\pm}\) values. It is the case shown in Fig. 4(c) at \(\epsilon_{\text{PIM}}=4.5\). Away from this degeneracy scenario, \(|\Delta|\) increases and the loop becomes larger and larger, as can be observed in the other plots of Fig. 4. Now we can pay attention to the EP, where \[\Delta=\pm\gamma \tag{12}\] which gives \(k_{\pm}=k_{0}\). The eigenfunction of the eigenmode is \[\left[\begin{array}{c}\psi_{\text{NIM}}\\ \psi_{\text{PIM}}\end{array}\right]_{\text{EP}}=\left[\begin{array}{c}1\\ \text{sign}(\Delta/\gamma)\end{array}\right], \tag{13}\] where the function sign(\(x\)) equals 1 (\(-1\)) when \(x>0\) (\(x<0\)). In other words, at this EP the fields in the two WGs are in phase (a phase difference of zero) or out of phase (a phase difference of \(\pi\)) with each other. The exact phase difference is determined by the sign of \(\Delta/\gamma\).
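The algebra of Eqs. (7)-(13) is easy to check numerically. Below is a minimal sketch using the fitted values quoted above (\(k_{\rm NIM}=1.328\) cm\({}^{-1}\), \(k_{\rm PIM}=1.186\) cm\({}^{-1}\), \(\gamma_{0}=0.18\) cm\({}^{-1}\), \(L_{c}=1.45\) cm); the estimate of \(a_{c}\) from Eq. (11) is our own inference, not a value quoted in the text.

```python
import numpy as np

k_nim, k_pim = 1.328, 1.186                    # cm^-1, fitted values from the text
k0, Delta = (k_nim + k_pim) / 2, (k_nim - k_pim) / 2
gamma0, Lc = 0.18, 1.45                        # Eq. (11) fit

def eigenmodes(a):
    """Eigenvalues/eigenvectors of the coupled-mode matrix M of Eq. (7)."""
    g = gamma0 * np.exp(-a / Lc)               # Eq. (11)
    M = np.array([[k_nim, -g], [g, k_pim]])
    return np.linalg.eig(M)

for a in (3.0, 1.0):                           # above and below the EP
    vals, _ = eigenmodes(a)
    print(f"a = {a} cm: k_pm = {np.round(vals, 4)}")
# a = 3 cm gives two real k close to k_NIM and k_PIM;
# a = 1 cm gives a complex-conjugate pair (broken phase).

# EP: |gamma| = |Delta|, i.e. a_c = Lc * ln(gamma0 / |Delta|)
a_c = Lc * np.log(gamma0 / Delta)
print(f"a_c ~ {a_c:.2f} cm, where both eigenvalues coalesce at k0 = {k0}")
```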
Generally \(\gamma\) is determined by the overlap integral of the fields in the gap between the two WGs and its sign is fixed [14; 15; 28], but sign(\(\Delta/\gamma\)) can be tuned by changing the resonant conditions of \(k_{\text{NIM, PIM}}\) in the two WGs, e.g. by changing the indexes of refraction inside the WGs or their thicknesses. As a consequence, the eigenvector at the EP is tunable. Note that here we cannot make any comment on the amplitudes of the fields in the two WGs, because the refractive indexes in them are not required to be equal. This is in sharp contrast with \(\mathcal{PT}\)-symmetric WGs, for which generally only symmetric configurations are considered [10; 11; 12]. This drawback hinders us from revealing more unique features of the EP. However, the phase jump, as a signature of sign(\(\Delta/\gamma\)) at the EP, is observable, e.g. by choosing two sets of \(\epsilon_{\text{PIM}}\) values such that in one case \(\Delta>0\) while in the other \(\Delta<0\), and then analyzing the field patterns inside the structure. The distributions of \(E_{y}\) at the EPs when \(\epsilon_{\text{PIM}}=4.4\) (\(\Delta>0\)) and \(\epsilon_{\text{PIM}}=4.6\) (\(\Delta<0\)) are shown in Fig. 6. From the two plots we can see that the fields in the NIM WG are almost identical, but the fields in the PIM WG are flipped with respect to each other. It is then evident that the phase between the two WGs is shifted by \(\pi\) between the two scenarios. Furthermore, associated with the \(\pi\)-phase jump there exists a node inside the gap between the two WGs (red arrow in Fig. 6(b)), which implies that in this scenario \(\psi_{\text{NIM}}=-\psi_{\text{PIM}}\). From the fact that \(\Delta<0\) in Fig. 6(b) we can also conclude that \(\gamma>0\). The group velocities \(v_{g}\) of the guided modes can also be analyzed. However, since here the angular frequency \(\omega\) is kept constant, \(v_{g}\) cannot be found from \(\partial\omega/\partial k\). Instead we calculate \(v_{g}\) by using the Poynting vector \(S_{z}\) and the energy density \(W\) via \(v_{g}=S_{z}/W\). When calculating the energy density \(W\) we have adopted the formula \(\epsilon_{0}\partial(\epsilon_{\text{NIM}}\omega)/\partial\omega|E|^{2}+\mu_{0}\partial(\mu_{\text{NIM}}\omega)/\partial\omega|H|^{2}\) [21; 22]. At 5 GHz, the utilized dispersion gives \(\partial(\epsilon_{\text{NIM}}\omega)/\partial\omega=5\) and \(\partial(\mu_{\text{NIM}}\omega)/\partial\omega=4.975\). The curves of \(v_{g}\) versus \(a\) at \(\epsilon_{\text{PIM}}=4.0\) are shown in Fig. 7. As expected, the values of \(v_{g}\) are limited to the region between \(v_{g}=-0.2c\) of the NIM (negative because the energy flux propagates backward) and \(v_{g}=0.5c\) of the PIM, where \(c\) is the speed of light. When the field is mainly localized in the NIM WG, \(v_{g}\) is negative. Otherwise \(v_{g}\) is positive. As \(a\) decreases, the split between the \(k\) branches also decreases, and the \(v_{g}\) of the two branches approach each other. At the EP the group velocity is zero, which can be explained by the balanced positive energy flux in the PIM (including the surrounding air) and the negative energy flux in the NIM, respectively. Figure 5: Variations of (a) \(k\) and (b) \(\gamma\) versus the WG distance \(a\), by using Eq. (10) to fit the results from Eq. (6). The green line in (b) is an exponential fit of \(\gamma\) using Eq. (11). Figure 6: Distributions of the fields \(E_{y}\) at the EPs when (a) \(\epsilon_{\text{PIM}}=4.4\) (\(\Delta>0\)) and (b) \(\epsilon_{\text{PIM}}=4.6\) (\(\Delta<0\)), respectively. The red arrow marks the node in (b), which is associated with the \(\pi\)-phase jump at the EP when \(\Delta\) changes its sign.
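The two dispersion factors quoted above are simple to verify; a short finite-difference check (our own sketch):

```python
import numpy as np

F, w_p, w_0 = 0.56, 2 * np.pi * 10e9, 2 * np.pi * 4e9
w = 2 * np.pi * 5e9
h = 1e-6 * w                                   # finite-difference step

eps = lambda w: 1 - w_p**2 / w**2
mu = lambda w: 1 - F * w**2 / (w**2 - w_0**2)

# Central differences of d(eps*w)/dw and d(mu*w)/dw at 5 GHz
d_eps_w = ((w + h) * eps(w + h) - (w - h) * eps(w - h)) / (2 * h)
d_mu_w = ((w + h) * mu(w + h) - (w - h) * mu(w - h)) / (2 * h)
print(d_eps_w, d_mu_w)    # ~5 and ~4.975; both positive, so W > 0
```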
Note that the stopped light at the EP demonstrated here is different from that in \(\mathcal{PT}\)-symmetric WGs [12; 13; 14]. Here the stopped light is associated with the negative flux in the NIM, and the whole structure is still Hermitian in the sense that no loss or gain is present. On the contrary, the stopped light in \(\mathcal{PT}\)-symmetric WGs with spatial distributions of gain and loss cannot be explained by the propagation of energy flux associated with real-valued fields [14], but is the consequence of a zero \(c\)-product [12; 33] of the non-Hermitian system. The \(c\)-product can be applied when the two WGs are not only geometrically but also electromagnetically identical (except for the loss/gain effect), so that \(\psi\) in the eigenvector can be unambiguously defined [12; 33]. Here the two WGs are made of different media, possess different indexes of refraction, and can have different thicknesses, so the applicability of the \(c\)-product needs further discussion. ## IV Discussion Above we have shown that non-Hermitian optical effects can be observed in loss-free coupled WGs made of NIMs. Here we would like to make further comments on the category of the non-Hermitian effect. From Eq. (7) we can see that it can be classified as anti-\(\mathcal{PT}\) symmetric, because the diagonal elements of the \(2\times 2\) matrix \(M\) satisfy \(\mathrm{Im}(M_{11})=\mathrm{Im}(M_{22})\) and \(M_{12}=-M_{21}^{*}\). It should be emphasized that when utilizing Eq. (7) we have assumed that the basis vectors \(\psi_{\mathrm{NIM}}\) and \(\psi_{\mathrm{PIM}}\) represent field components of the guided modes that characterize the wave functions. The set of analogues \(\psi_{\mathrm{NIM}}=E_{y}^{\mathrm{NIM}}\) and \(\psi_{\mathrm{PIM}}=E_{y}^{\mathrm{PIM}}\) is adopted in our analysis. However, since the two WGs are not identical to each other, the choice of the basis vectors is arbitrary. For example, we can also utilize another set of analogues by adding an additional phase of \(\pi/2\) to the basis vector \(\psi_{\mathrm{PIM}}\). Now, Eq. (7) becomes \[\left[\begin{array}{cc}k_{\mathrm{NIM}}&j\gamma\\ j\gamma&k_{\mathrm{PIM}}\end{array}\right]\left[\begin{array}{c}\psi_{\mathrm{NIM}}\\ j\psi_{\mathrm{PIM}}\end{array}\right]=k_{\pm}\left[\begin{array}{c}\psi_{\mathrm{NIM}}\\ j\psi_{\mathrm{PIM}}\end{array}\right]. \tag{14}\] Equation (14) is just the standard anti-\(\mathcal{PT}\) symmetric operator that has been discussed in much of the literature [8; 9; 10; 11]. It can also explain all the features of the dispersion and the field distributions. As for which form is preferred in studying NIMs, we still prefer Eq. (7), because the node shown in Fig. 6(b) can be explained more intuitively. The importance of this article is that it proves we can access non-Hermitian optics without using gain or loss. It also shows that the negative energy flux in NIMs has many realistic implications for future applications, especially considering the fact that all the media are loss-free. Experimental investigation of the non-Hermitian effect discussed here can utilize documented NIM designs [26; 27], and can be transferred to other frequency regimes by simply rescaling the geometric parameters of the coupled-WG configuration.
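The anti-\(\mathcal{PT}\) classification of Eq. (14) can be verified directly: with the parity \(\mathcal{P}\) taken as the waveguide swap \(\sigma_{x}\) and \(\mathcal{T}\) as complex conjugation, one finds \((\mathcal{PT})(M-k_{0}I)(\mathcal{PT})^{-1}=-(M-k_{0}I)\), i.e. the matrix of Eq. (14) is anti-\(\mathcal{PT}\) symmetric up to the constant offset \(k_{0}\), which merely shifts both eigenvalues (our own remark, consistent with the text). A short numerical check with illustrative values:

```python
import numpy as np

k_nim, k_pim, g = 1.328, 1.186, 0.09      # illustrative values (cm^-1)
k0 = (k_nim + k_pim) / 2
sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # parity P: swap the two WGs

M14 = np.array([[k_nim, 1j * g], [1j * g, k_pim]])  # matrix of Eq. (14)
H = M14 - k0 * np.eye(2)                  # remove the trivial k0 offset

# (PT) H (PT)^{-1} = sx @ conj(H) @ sx; anti-PT symmetry means it equals -H
print(np.allclose(sx @ H.conj() @ sx, -H))   # True
```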
Theoretical interest may be directed towards the deep-lying physics of the spontaneous phase breaking, and towards utilizing the anti-\(\mathcal{PT}\) symmetry of loss-free NIMs to achieve higher-order EPs and for mode-switching purposes. Before ending this article, we would like to emphasize again that the whole structure studied here is loss- and gain-free. Consequently, in principle it is a closed Hermitian system, and the energy should be conserved. The non-Hermitian effects discussed above, especially the existence of complex \(k\) below \(a_{c}\), should not violate the principle of energy conservation. These complex-\(k\) eigenmodes might be associated with effects similar to the photonic bandgap effect and to waveguiding below cutoff, which forbid the propagation of optical fields without absorbing any energy [21; 28]. A detailed discussion deserves further effort. ## V Conclusion In summary, in this article we have examined the optical waves in coupled WGs made of lossless NIM and PIM. We have shown that the guided modes in this kind of gain/loss-free optical system can also support non-Hermitian features. A simple non-Hermitian coupled-mode theory is utilized to explain our results. This theory proves that the critical degeneracy point at \(a_{c}\) is an EP, and that the field dynamics induced by the NIM belongs to anti-\(\mathcal{PT}\) symmetry. Features at EPs, including the slow-light effect, are discussed. This study highlights the non-negligible novelties of NIMs, which can be utilized in studying non-Hermitian optics to bypass many restrictions of \(\mathcal{PT}\) symmetry. ## Funding National Natural Science Foundation of China (NSFC) (12104227, 12274241); Scientific Research Foundation of Nanjing Institute of Technology (YKJ202021). ## Disclosures The authors declare no conflicts of interest. ## Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2305.00855
Jointly Managing Electrical and Thermal Energy in Solar- and Battery-powered Computer Systems
Environmentally-powered computer systems operate on renewable energy harvested from their environment, such as solar or wind, and stored in batteries. While harvesting environmental energy has long been necessary for small-scale embedded systems without access to external power sources, it is also increasingly important in designing sustainable larger-scale systems for edge applications. For sustained operations, such systems must consider not only the electrical energy but also the thermal energy available in the environment in their design and operation. Unfortunately, prior work generally ignores the impact of thermal effects, and instead implicitly assumes ideal temperatures. To address the problem, we develop a thermodynamic model that captures the interplay of electrical and thermal energy in environmentally-powered computer systems. The model captures the effect of environmental conditions, the system's physical properties, and workload scheduling on performance. In evaluating our model, we distill the thermal effects that impact these systems using a small-scale prototype and a programmable incubator. We then leverage our model to show how considering these thermal effects in designing and operating environmentally-powered computer systems of varying scales can improve their energy-efficiency, performance, and availability.
Noman Bashir, Yasra Chandio, David Irwin, Fatima M. Anwar, Jeremy Gummeson, Prashant Shenoy
2023-05-01T14:53:53Z
http://arxiv.org/abs/2305.00855v1
# Jointly Managing Electrical and Thermal Energy in Solar- and Battery-powered Computer Systems ###### Abstract. Environmentally-powered computer systems operate on renewable energy harvested from their environment, such as solar or wind, and stored in batteries. While harvesting environmental energy has long been necessary for small-scale embedded systems without access to external power sources, it is also increasingly important in designing sustainable larger-scale systems for edge applications. For sustained operations, such systems must consider not only the electrical energy but also the thermal energy available in the environment in their design and operation. Unfortunately, prior work generally ignores the impact of thermal effects, and instead implicitly assumes ideal temperatures. To address the problem, we develop a thermodynamic model that captures the interplay of electrical and thermal energy in environmentally-powered computer systems. The model captures the effect of environmental conditions, the system's physical properties, and workload scheduling on performance. In evaluating our model, we distill the thermal effects that impact these systems using a small-scale prototype and a programmable incubator. We then leverage our model to show how considering these thermal effects in designing and operating environmentally-powered computer systems of varying scales can improve their energy-efficiency, performance, and availability. Environmentally-powered computer systems, thermal effects, energy-efficiency, performance, batteries.
Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: ccs: accepted version + Footnote †: ccs: accepted version + Footnote †: ccs: 
## 1. Introduction

To address the problem, we enumerate, quantify, and model the numerous thermal effects that impact solar- and battery-powered computer systems. While the temperature responses of individual components, e.g., processors, batteries, cooling elements, etc., are well-known, optimizing the performance, energy-efficiency, and availability of these systems requires understanding the relationships between these components and their environment. For example, at low temperatures, environmentally-powered systems can leverage some of the thermal energy generated by their processors to heat their battery, which can significantly increase the energy-efficiency of both. Of course, these systems must also efficiently dissipate their heat at high temperatures to prevent processors and batteries from over-heating and becoming unavailable. To this end, we design a thermodynamic model for environmentally-powered systems by combining well-known physical models of heat transfer, batteries, processors, and cooling elements. Importantly, our model captures _thermal feedback loops_ between components that affect the system's operation, such as how scavenging a system's waste heat warms its battery, increasing its energy-efficiency and enabling more computation. We empirically validate our model using a small-scale prototype and a programmable incubator that precisely regulates temperature between -30\({}^{\circ}\)C and 40\({}^{\circ}\)C. We then leverage our thermal model to show how considering thermal effects in both designing and operating environmentally-powered computer systems can improve their performance, energy-efficiency, and availability. Specifically, our model and analysis quantify the effect of a system's power draw, enclosure insulation, and ambient temperature on its energy-efficiency, i.e., the computational work done using a fully charged battery. We also highlight the tradeoff between an enclosure's heat transfer coefficient and its energy-efficiency: better insulation increases energy-efficiency when cold by more productively using waste heat, but decreases it when hot by requiring additional energy to power a cooling element that dissipates heat to prevent battery and processor over-heating. Our work differs from prior work on optimizing the cooling infrastructure of data centers powered by the electric grid, as that work mostly focuses on the efficient movement of heat from _within_ the facility to outside of it, and does not exhibit the feedback loop between computation and batteries present in environmentally-powered systems. Our work demonstrates that managing and adapting to variable thermal energy is just as important as electrical energy in solar- and battery-powered computer systems, and that the two are interdependent. Currently, thermal management is mostly an afterthought for these systems, with most implicitly designed for ideal-to-higher temperatures, often with little insulation, which reduces the need for active cooling as temperatures rise but wastes much of the heat these systems produce as temperatures drop. There is currently little understanding, and no explicit modeling, of how temperature affects these systems. Our work is an important step towards better understanding how the temperature effects of individual components manifest at the system level.
Our hypothesis is that optimizing environmentally-powered computer systems requires jointly managing their electrical and thermal energy as part of their design and operation. In evaluating our hypothesis, we make the following contributions.

**Thermodynamic Model and Validation**. We design a comprehensive thermodynamic model for an environmentally-powered computer system that consists of an energy source, e.g., a solar panel, an enclosure, batteries, and processors that are subjected to some ambient temperature. The model accounts for the effect of heat and processor power on battery capacity, charging, and discharging, the heat emitted by the processor, and the energy consumed by a cooling element to dissipate heat. We validate our model by enumerating, isolating, and empirically quantifying the thermodynamic effects that impact the system's operation, and how they relate to each other. Our empirical analysis demonstrates the impact of each effect on system operation under different ambient temperatures.

**System Design and Operation Use Cases**. We present both a design and an operation use case for our thermodynamic model. In the design use case, we leverage our thermodynamic model to highlight the tradeoffs between system design parameters and user-specified performance objectives, e.g., for performance, energy-efficiency, and availability. In the operation use case, we demonstrate how a scheduler can leverage our thermodynamic model to operate a given design of an environmentally-powered computer system to optimize for a user-specified performance objective.

**Implementation and Evaluation**. We implement a small-scale prototype and programmable incubator to empirically validate our model. We develop a model-driven simulator to enable long-term experimentation. We quantify the design tradeoffs and the operational space to show how our thermodynamic model can be leveraged to improve the system-level performance of two case study applications--a small-scale embedded system for precision agriculture and medium-scale federated learning at an edge datacenter.

Figure 1. _Environmentally-powered computer systems consist of processors powered by solar and batteries and include a cooling element (a). Common applications include small- to medium-scale embedded systems, e.g., for precision agriculture (b), and medium- to large-scale edge data centers (c). In both cases, systems may be exposed to highly variable temperatures._

## 2. Background

We summarize the thermodynamic effects exhibited by batteries, processors, and cooling elements, and how they alter the energy-efficiency of each. We model these effects in the next section.

### Environmentally-powered Systems

Environmentally-powered computer systems operate on renewable energy harvested from their environment, such as solar or wind, and stored in batteries. Figure 1(a) shows these systems' typical components, including solar panels, processors, batteries, and a cooling element, such as a fan or pump. Figure 1(b) & (c) show two example applications that leverage environmentally-powered systems. Precision agriculture applications deploy these systems to gather data from small-scale embedded devices that monitor environmental conditions, such as soil moisture, humidity, and temperature. Similarly, there is a wide range of smart city applications, such as traffic monitoring, vehicle-to-edge communication, and crime detection, that analyze and process the data collected by sensors and cameras at medium- to large-scale edge data centers.
### Batteries

The energy stored by batteries is related to their temperature and their discharging/charging current. We discuss these relationships below.

**Temperature-Energy Effect.** The usable energy capacity of lithium batteries decreases with temperature. Figure 2(a) shows curves from our prototype's battery datasheet, where the points represent experiments we run to empirically validate the datasheet using our programmable incubator. The graph shows that the battery's usable capacity, as a percentage of its charged capacity, drops by over 50% at the 3V cut-off voltage when the temperature drops from 25\({}^{\circ}\)C to -20\({}^{\circ}\)C. This "wasted" energy is consumed as heat by the battery to catalyze the chemical reaction that produces electricity. We call this the _temperature-energy_ effect. In addition, discharging at low temperatures can damage batteries and reduce their cycle-lifetime. In general, lithium batteries should not be discharged below -20\({}^{\circ}\)C. Likewise, as temperatures increase, batteries' self-discharge rate also increases, which reduces the energy available for discharge, although not by as much as a decrease in temperature does. Lithium batteries generally cannot be discharged above 60\({}^{\circ}\)C, as at that temperature their available capacity drops to 0.

Figure 3(a) plots the temperature-energy effect in our prototype with ambient temperature on the x-axis and available battery capacity on the y-axis. The points represent experiments using our prototype, while the continuous line represents our model's prediction, which closely matches the data. In this case, we set the battery to 100% capacity at 25\({}^{\circ}\)C. As shown, the available capacity drops significantly as the temperature decreases, with only 80% of the energy available at 0\({}^{\circ}\)C and 50% available near -20\({}^{\circ}\)C. Temperatures between 0\({}^{\circ}\)C and -20\({}^{\circ}\)C are common over winter in much of the U.S., Europe, and other high-latitude locations. As we show, the heat generated by processors can be leveraged to raise the internal enclosure temperature and extract more energy from batteries. The plot also shows how our prototype's charge controller automatically shuts down the battery once it reaches 60\({}^{\circ}\)C for safety. While ambient temperatures generally do not reach 60\({}^{\circ}\)C, they can reach this high within an insulated enclosure if processors generate heat faster than the system can dissipate it, i.e., via convection.

**Discharge-Energy Effect.** The _discharge-energy effect_ refers to the decrease in available energy that occurs when discharging at higher rates. Figure 2(b) shows curves from our prototype's battery datasheet, where the points represent experiments we have run for empirical validation. The graph shows that, at the 3V cut-off voltage, the battery's usable energy decreases by 25% when increasing the current from a C-rate of 0.2 to 2, where a C-rate of \(N\) represents the discharge current required to fully discharge the battery in \(1/N\) hours. Due to this effect, if processors execute at 100% utilization, they can extract less total energy from a battery than if they operate at lower utilization, since utilization is roughly linear with current draw. Thus, the slower the processing speed, the more energy processors can extract from batteries, and the more overall computation they can perform.
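To make the interplay of the temperature-energy and discharge-energy effects concrete, the following minimal Python sketch derates a battery's rated energy by temperature and C-rate. The anchor points are hypothetical, datasheet-shaped values chosen for illustration, not our battery's measured curves, and the two effects are treated as independent multiplicative factors, which simplifies the coupled electrochemistry.

```python
import numpy as np

# Hypothetical, datasheet-shaped anchor points (illustration only; a real
# system would substitute the measured curves from Figure 2).
TEMP_C       = np.array([-20.0, 0.0, 25.0, 45.0])   # cell temperature
CAP_VS_TEMP  = np.array([0.50, 0.80, 1.00, 1.00])   # usable fraction vs. temperature
CRATE        = np.array([0.2, 1.0, 2.0])            # discharge C-rate
CAP_VS_CRATE = np.array([1.00, 0.90, 0.75])         # usable fraction vs. C-rate

def usable_capacity_wh(rated_wh, temp_c, c_rate):
    """Usable energy after applying the temperature-energy and
    discharge-energy effects as multiplicative derating factors."""
    if temp_c < -20.0 or temp_c > 60.0:
        return 0.0  # the charge controller disallows discharge outside this range
    f_temp = np.interp(temp_c, TEMP_C, CAP_VS_TEMP)
    f_rate = np.interp(c_rate, CRATE, CAP_VS_CRATE)
    return rated_wh * f_temp * f_rate
```

For example, `usable_capacity_wh(20.0, -10.0, 0.5)` derates a 20Wh pack at -10\({}^{\circ}\)C and a 0.5 C-rate to roughly 12.5Wh under these illustrative curves.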
Figure 3(b) quantifies the discharge-energy effect, where we plot the discharge current (which is linear with utilization) on the x-axis and the available energy capacity on the y-axis. The points represent experiments with our prototype, while the continuous line represents our model's prediction, which closely matches the empirical data. Here, we normalize the experiment to the discharge current equivalent to 50% utilization, setting the available capacity at that discharge current to 100%. This setting allows us to evaluate the discharge-energy effect at currents higher than 1C, which may be required by the system to serve workload bursts. The experiment shows a linear relationship: as we slow down the system (by reducing utilization), we are able to extract more than the nominal available energy, and as we speed up the system (by increasing its utilization), we extract less energy. In this case, only 80% of the capacity is available when operating at 100% utilization compared to 50% utilization. The discharge-energy effect counteracts the temperature-energy effect: running the processor at high utilization generates more heat, which can warm the battery and increase its available energy, but the increased discharge current reduces the available energy. Thus, determining the most energy-efficient operating point at any given time is non-trivial.

Figure 2. _A system's usable battery capacity varies with both temperature (a) and discharge current (b)._

**Temperature-Charging Effect.** The _temperature-charging effect_ refers to the relationship between temperature and the rate at which batteries can charge. While lithium batteries can safely discharge down to \(-20^{\circ}\)C, their maximum charging rate decreases with temperature, and charging is not possible below \(0^{\circ}\)C. Thus, _low temperatures prevent storing energy and make using energy much less efficient._ Lithium batteries can also easily overheat at high temperatures, since their chemical reaction generates additional heat that increases their internal temperature beyond the ambient temperature. Charge controllers generally prevent charging/discharging when the internal temperature rises too high (above \(60^{\circ}\)C). Thus, _high temperatures can prevent storing and using energy._

Figure 3(c) shows how the charging capacity of the battery changes with temperature. In this case, 100% represents the maximum charging capacity at \(25^{\circ}\)C. As shown, the charging rate decreases rapidly from \(5^{\circ}\)C to \(0^{\circ}\)C, where it falls to 0%. At \(5^{\circ}\)C, the capacity is \(\sim\)80%, and it then increases roughly linearly with temperature. As expected, higher temperatures enable the system to charge the battery at faster rates. As above, the points represent experiments using our prototype, while the line represents our model's prediction, which closely matches.

### Processors

Since processors do no mechanical work, their power is converted to heat, which must be dissipated to prevent them from overheating due to thermal runaway. A system's energy-efficiency is a function of temperature if it leverages outside air (or water) for cooling, since lower ambient temperatures require using less additional energy to actively cool the processor. Such "free cooling" is often used by cloud data centers (Han et al., 2017). Thus, unlike batteries, processors are _more_ energy-efficient at low temperatures, since they do not have to consume additional power to dissipate heat.
Also unlike batteries, computer systems (at a fixed frequency and voltage) are more energy-efficient, in joules per computation, at higher power, since they are generally not energy-proportional, and a higher power (and utilization) amortizes their baseload power over more computation. Of course, an ideal energy-proportional system has the same energy-efficiency at any utilization. The heat generated by processors can be recycled by environmentally-powered systems to optimize the efficiency of the battery based on the various effects above. For example, scheduling workload at low temperatures can improve system performance by generating heat that improves battery efficiency. We call this the _scheduling-performance_ effect.

### Cooling System

Fans (or pumps) are generally used to dissipate heat in computer systems. The intensity with which a variable-speed motor must rotate the blades of a fan (or the impeller of a pump) to maintain a certain temperature is a function of the heat dissipated by the processor, the conductivity of the system enclosure's insulation, and the external temperature. Thus, there is an _insulation-cooling effect_ that impacts system enclosure design: the thicker the enclosure's insulation, the better its performance in low temperatures, but the more the motor must run in high temperatures, and vice versa. In addition, as we discuss, fan (and pump) energy usage is a cubic function of the amount of air (or water) moved, and thus of the heat dissipated. The _insulation-cooling effect_ captures the tradeoff between having thicker insulation to retain heat during low temperatures at the cost of consuming more energy via a fan (or pump) to dissipate heat at high temperatures.

Figure 4 quantifies the insulation-cooling effect for our prototype at \(25^{\circ}\)C for different levels of insulation on the x-axis. The y-axis shows the power required by the fan (or pump) motor to maintain \(25^{\circ}\)C as the enclosure's insulation increases when operating the system at 50% utilization. As expected, when the insulation is thin, there is almost no need for cooling, and it consumes little power. However, as we increase the insulation's thickness, the enclosure retains more heat, which requires consuming more power to dissipate that heat by active cooling. Of course, the energy used by the cooling element is energy that does not go towards productive computation. One option at these higher insulation levels, instead of running the cooling element, is to operate at a lower utilization to generate less heat, which reduces the need to consume energy by the fan to dissipate heat. Thus, as with the discharge-energy effect, operating slower, at a lower utilization, enables more energy to go towards productive computation. That is, active heat dissipation using the cooling element enables environmentally-powered systems to operate at higher workload intensities than they otherwise could, but at the cost of lower energy-efficiency. Also, as above, the points in the graph represent experimental data from our prototype, while the line represents our model's prediction, which closely matches.

Figure 4. _Fan power as a function of insulation. Black points are experimental data and the red curve represents our model._

Figure 3. _Energy-efficiency as a function of temperature (a), discharge current (b), and available battery capacity when charging at various temperatures (c). For (a) and (b), the black points are experimental data and the red curves represent our model._
## 3. Thermodynamic Model

To better understand the effect of temperature on the operation of an environmentally-powered system, we develop a comprehensive physical thermodynamic model that estimates a system enclosure's change in temperature over some time interval \(\Delta t\). Our contribution lies in leveraging basic thermodynamic relationships to develop an end-to-end model for predicting system-level performance; the basic relationships can be found in classic thermodynamics textbooks (Han, 1998). Figure 5 illustrates our model and its key parameters. Table 1 outlines the notation used in the model, with definitions and units.

The model assumes that a processing element, such as a CPU, GPU, radio, or their combination with a dynamic power range, and batteries reside within an enclosure of a given size. The processors, batteries, and the air within the enclosure each have an associated temperature (\(T\)), mass (\(m\)), and specific heat capacity (\(C\)), which is the heat required to raise the temperature of a unit of mass by one degree, in joules per kilogram per kelvin. We also assume the ambient temperature outside (\(T_{amb}\)) is unaffected by any heat transfer with the enclosure.

The enclosure provides insulation from the environment based on its heat transfer coefficient \(U\), an empirically derived constant that dictates the heat transfer rate (\(\hat{Q}_{trans}\)), in joules per unit time, between the enclosure and its external environment. The overall heat transfer coefficient \(U\) combines the internal convection inside the enclosure, conduction through the enclosure walls, and external convection away from the enclosure. We can calculate \(U\) by treating these as thermal resistors connected in series, as below.

\[\frac{1}{U}=\frac{1}{h_{i}}+\frac{d}{k}+\frac{1}{h_{0}}. \tag{1}\]

In Equation 1, \(h_{i}\) is the internal convection coefficient, \(k\) is the thermal conductivity, \(d\) is the thickness of the enclosure, and \(h_{0}\) is the outer convection coefficient. The heat transfer rate (\(\hat{Q}_{trans}\)) between the enclosure and its external environment is directly proportional to the heat transfer coefficient (\(U\)), the temperature difference (\(\Delta T(t)\)) between the inside and outside of the enclosure, and the heat transfer area (\(A\)), computed as below.

\[\hat{Q}_{trans}=U\times A\times\Delta T(t)=U\times A\times(T_{amb}-T_{enc}(t)). \tag{2}\]

For simplicity, our model assumes the enclosure is a cube with side length \(L\) and surface area \(A=6L^{2}\). As shown in Equation 2, the heat transfer rate is a function of the surface area \(A\). At any time \(t\), \(\Delta T(t)=T_{amb}-T_{enc}(t)\) represents the difference between the temperature inside and outside the enclosure. Thus, a positive \(\Delta T(t)\) represents heat flowing into the enclosure, and a negative \(\Delta T(t)\) represents heat flowing out of it. Given Equation 2, we can compute the total heat transfer (in joules) over a time interval \(\Delta t\) by simply integrating \(\hat{Q}_{trans}\) over time, calculated as below.

\[Q_{trans}=\int_{t}^{t+\Delta t}\hat{Q}_{trans}\,dt. \tag{3}\]

Equation 3 enables us to compute the total heat energy transferred between the inside of the enclosure and the outside environment. However, some of this heat energy is absorbed by the mass within the enclosure, including the processors, batteries, and air, and thus does not contribute to raising the enclosure's temperature (\(T_{enc}\)).
This heat energy is a function of the enclosure's heat capacity (\(C_{enc}\)), which is computed as the mass-weighted average of the respective heat capacities of the objects within the enclosure, as shown below, where \(m_{enc}=m_{air}+m_{bat}+m_{proc}\) is the total mass within the enclosure. Here, we assume the enclosure includes only processing elements, batteries, and ambient air.

\[C_{enc}=\frac{1}{m_{enc}}\times(m_{air}\cdot C_{air}+m_{bat}\cdot C_{bat}+m_{proc}\cdot C_{proc}) \tag{4}\]

We can empirically measure the battery and computing platform's mass (\(m\)) and heat capacity (\(C\)) using a scale and a calorimeter, respectively. We cannot directly weigh the air mass, but can derive it using simple models. In particular, the mass of air in the enclosure is the product of its volume and the air's density \(\rho_{air}\), which is directly proportional to the atmospheric pressure, and inversely proportional to the temperature (\(T_{air}\)) and the specific gas constant (\(R_{specific}\)), as given below.

\[\rho_{air}=\frac{p}{R_{specific}\times T_{air}}. \tag{5}\]

For dry air on earth, \(R_{specific}=287.058\,J\cdot kg^{-1}\cdot K^{-1}\). We assume the enclosure is sealed at \(p=1\) atmosphere and \(T_{air}=25^{\circ}\)C. Note that, for a sealed enclosure, even when the temperature inside changes, the air density remains constant, since the air's mass and volume are fixed. As a result, in this case, the density of air is \(\rho_{air}=1.1839\)kg/m\({}^{3}\), which results in an air mass \(m_{air}=1.1839\times L^{3}\). The heat capacity of air (\(C_{air}\)) at earth's surface under these conditions is also a constant, equal to 717 joules per kilogram per kelvin (K). We retrieved the coefficients for the thermal properties of air and the other components from Engineering ToolBox (Tolacco, 2018).

\begin{table} \begin{tabular}{||l|l|l||} \hline **Notation** & **Definition** & **Unit** \\ \hline \hline \(\hat{Q}\) & Heat transfer rate & Js\({}^{-1}\) \\ \hline h & Convection heat transfer coefficient & Wm\({}^{-2}\)K\({}^{-1}\) \\ \hline U & Combined heat transfer coefficient & Wm\({}^{-2}\)K\({}^{-1}\) \\ \hline k & Thermal conductivity & Wm\({}^{-1}\)K\({}^{-1}\) \\ \hline d & Enclosure thickness & m \\ \hline A & Enclosure surface area & m\({}^{2}\) \\ \hline T & Temperature & K \\ \hline \(\Delta T\) & Temperature difference & K \\ \hline m & Mass & kg \\ \hline C & Specific heat capacity & Jkg\({}^{-1}\)K\({}^{-1}\) \\ \hline R\({}_{specific}\) & Specific gas constant & Jkg\({}^{-1}\)K\({}^{-1}\) \\ \hline \(p\) & Pressure & Pa \\ \hline \(\rho\) & Density & kgm\({}^{-3}\) \\ \hline V & Volume & m\({}^{3}\) \\ \hline \(\mathcal{V}\) & Processing element voltage & V \\ \hline \(\mathcal{I}\) & Processing element current & A \\ \hline AF & Airflow & m\({}^{3}\)s\({}^{-1}\) \\ \hline \end{tabular} \end{table} Table 1. _Model notation, definitions, and units._

Figure 5. _A simple and general thermodynamic model of an environmentally-powered computer system._

Given all this, we can compute the enclosure's heat capacity \(C_{enc}\) above. If the enclosure generates no internal heat, then its temperature will eventually reach an equilibrium temperature equal to the temperature \(T_{amb}\) of the ambient environment. To reach equilibrium, the total amount of heat \(Q_{trans}\) the enclosure will absorb or release is the product of its total mass \(m_{enc}\), heat capacity \(C_{enc}\), and change in temperature, computed as below.
\[Q_{trans}=m_{enc}\times C_{enc}\times(T_{amb}-T_{enc}(0)). \tag{6}\]

Here, \(T_{enc}(0)\) is the enclosure's temperature at the start. While the equation above represents the heat transferred to reach the equilibrium temperature, the same basic equation also dictates the heat transferred over any arbitrary time interval \(\Delta t\), based on the change in temperature between the time interval's start and end, as given below.

\[Q_{trans}=m_{enc}\times C_{enc}\times(T_{enc}(t+\Delta t)-T_{enc}(t)). \tag{7}\]

Notice that we have computed the total heat transfer \(Q_{trans}\) over a time interval in both Equation 3 and Equation 7. Setting these equations equal to each other yields our model, which predicts the temperature within the enclosure after some time interval \(\Delta t\) given a starting temperature \(T_{enc}(t)\), the enclosure's mass (\(m_{enc}\)) and heat capacity (\(C_{enc}\)), as well as its thermal conductivity (\(k\)), surface area (\(A\)), ambient temperature (\(T_{amb}\)), and thickness (\(d\)).

\[m_{enc}\times C_{enc}\times(T_{enc}(t+\Delta t)-T_{enc}(t))=\int_{t}^{t+\Delta t}\hat{Q}_{trans}\,dt\]

\[T_{enc}(t+\Delta t)=T_{enc}(t)+\frac{1}{m_{enc}C_{enc}}\int_{t}^{t+\Delta t}\hat{Q}_{trans}\,dt \tag{8}\]

To this point, our model assumes the processor generates no heat. In practice, however, the power drawn by the processor is converted to heat, which our model assumes is uniformly distributed throughout the enclosure. For now, we assume there are no mechanical components, such as fans, to dissipate this heat; we discuss modeling heat dissipation using a fan below. Thus, we extend our model above to account for the processor's power draw by assuming it is entirely converted to heat. We can account for this heat energy by simply adding it to the heat transferred with the environment, as given below.

\[T_{enc}(t+\Delta t)=T_{enc}(t)+\frac{1}{m_{enc}C_{enc}}\int_{t}^{t+\Delta t}(\hat{Q}_{trans}+(\mathcal{V}\cdot\mathcal{I}))\,dt.\]

Here, \(\mathcal{V}\) and \(\mathcal{I}\) are the platform's voltage and current, and \(\mathcal{V}\cdot\mathcal{I}\) is its overall power draw. The model above simply observes that power translates directly to heat within the enclosure and thus augments any other heat transfer mechanism available to the system. Our model captures an enclosure's temperature change over time based on its physical characteristics, the ambient temperature, and the processor's power usage. Unfortunately, there are no good physical models that capture the effect of temperature and power draw on usable battery capacity. Thus, we use the empirical models from our battery's datasheet in Figure 2, which we experimentally validated. We next extend our model to include an arbitrary cooling component, such as an air conditioner or a simple fan.

**Heat Dissipation Using Active Cooling.** Processors may dissipate heat faster than an enclosure can transfer it to the external environment based on its conductivity, which causes the temperature to rise to a point where neither the battery nor the processor can function. In this case, fans or pumps may be necessary to increase the rate of heat dissipation within the enclosure using convection. While our model below focuses on fans, which transfer heat by moving air, the same basic approach applies to pumps, which transfer heat by moving a liquid. The selection of a fan depends on the specifications of the enclosure and the maximum rate of heat dissipation required.
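Before turning to active cooling, the discrete-time update above is straightforward to implement. The sketch below uses a small forward-Euler step to approximate the integral; the constants come from the air-density discussion earlier, while the function and parameter names are our own, not part of the model's formal statement.

```python
RHO_AIR = 1.1839   # kg/m^3 for the sealed enclosure (Equation 5 at 25 C, 1 atm)
C_AIR   = 717.0    # J/(kg*K), heat capacity of air under these conditions

def heat_transfer_coeff(h_in, k, d, h_out):
    """Equation 1: series thermal resistances yield the overall U [W/(m^2*K)]."""
    return 1.0 / (1.0 / h_in + d / k + 1.0 / h_out)

def step_enclosure_temp(T_enc, T_amb, P_proc, dt, L, U,
                        m_bat, C_bat, m_proc, C_proc):
    """One forward-Euler step of the temperature update (Equation 8),
    extended with the processor's power draw as internal heat.

    Temperatures are in K (or consistently in C), P_proc in W, dt in s,
    and L is the cube's side length in m.
    """
    A = 6.0 * L ** 2                 # cube surface area
    m_air = RHO_AIR * L ** 3         # air mass = density * volume
    m_enc = m_air + m_bat + m_proc
    # Equation 4: mass-weighted average specific heat capacity
    C_enc = (m_air * C_AIR + m_bat * C_bat + m_proc * C_proc) / m_enc
    # Equation 2: instantaneous heat exchange with the environment
    Q_dot = U * A * (T_amb - T_enc)
    # All processor power is assumed to become uniformly distributed heat
    return T_enc + (Q_dot + P_proc) * dt / (m_enc * C_enc)
```

Iterating `step_enclosure_temp` over small steps reproduces the kind of trajectories shown in Figure 6, under the stated assumption that the integrand is roughly constant within each step.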
A fan moves the air that absorbs heat from inside the box and then dissipates it to the external environment. The amount of energy dissipated (\(Q_{diss}\)) depends on the mass of the moving air (\(m_{air}\)), the specific heat of the moving air (\(C_{air}\)), and the temperature change of the moving air (\(\Delta T_{air}\)).

\[Q_{diss}=m_{air}\times C_{air}\times\Delta T_{air}\]

The mass of the moving air can be calculated from the volume of air (\(V_{air}\)) being moved and the density of the moving air (\(\rho_{air}\)).

\[Q_{diss}=(V_{air}\times\rho_{air})\times C_{air}\times\Delta T_{air}\]

We divide both sides of the equation by the elapsed time (\(t\)) to get the power that must be dissipated.

\[\frac{Q_{diss}}{t}=(\frac{V_{air}}{t}\times\rho_{air})\times C_{air}\times\Delta T_{air}\]

The air volume moved per unit time is the airflow rate, which we denote \(AF_{air}\), and we denote the dissipated power as \(P_{diss}\). Rearranging the equation yields the airflow required for a given power dissipation.

\[AF_{air}=\frac{P_{diss}}{\rho_{air}\times C_{air}\times\Delta T_{air}}\]

The value of \(C_{air}\) here is \(1\,kJ\cdot kg^{-1}\cdot{}^{\circ}C^{-1}\), and the density of air at \(20^{\circ}\)C is \(\rho_{air}=1.20\)kg/m\({}^{3}\). We can use this equation either to find the heat dissipation rate for a given airflow rate, or to find the airflow required to achieve a desired heat dissipation rate.

Since the equipment inside the enclosure will exhibit resistance to the airflow, air must be delivered at a pressure that can overcome this resistance. However, the amount of pressure required is highly dependent on the design and physical characteristics of the product to be cooled, and must be determined either experimentally, using anemometers and manometers to measure the air speed and pressure, respectively, or using computer-aided design (CAD) software to design and calculate airflow characteristics. Either method will yield system pressure requirements that increase with the airflow. The pressure exerted by the fan falls as its airflow increases, and the fan delivers its highest airflow when the back pressure is lowest. The intersection of the two curves is the operating point of the fan for the given system. We assume our model uses a fan that delivers the required airflow at its operating point.

Figure 6. _The change in the enclosure's temperature \(T_{enc}\) over time is a function of (a) the ambient temperature \(T_{amb}\), (b) the enclosure's thermal conductivity \(k\), and (c) the processor's power usage \(P\)._

The power consumption of a variable-speed fan has a cubic relationship with the change in airflow. That is, if the fan's airflow doubles, its power consumption increases by 8\(\times\). This relationship is shown in the equation below.

\[P_{j}=P_{i}\times\left(\frac{AF_{j}}{AF_{i}}\right)^{3}\]

Here, \(P_{i}\) and \(AF_{i}\) are the initial power consumption and airflow, while \(P_{j}\) and \(AF_{j}\) are the final power consumption and airflow, respectively. Note that using the fan to dissipate heat reduces the energy available for productive computation.

**Implementation and Model Validation.** To validate our model and experiment with thermodynamic design, we built a programmable incubator by connecting a mini-freezer and an incandescent light bulb (as a heat source) to programmable relays controlled by a Raspberry Pi (Figure 7). Our incubator programmatically controls temperature between -30\({}^{\circ}\)C and 40\({}^{\circ}\)C with an error of \(\pm 2^{\circ}\)C.
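For reference, the fan relations above reduce to a few lines of code. The helper names are ours; the constants are those stated in the text (\(C_{air}=1000\,J\cdot kg^{-1}\cdot{}^{\circ}C^{-1}\), \(\rho_{air}=1.20\,kg/m^{3}\) at \(20^{\circ}\)C), and both helpers assume the fan runs at its operating point per the pressure discussion above.

```python
def required_airflow(p_diss, delta_t_air, rho_air=1.20, c_air=1000.0):
    """Airflow [m^3/s] needed to carry away p_diss watts of heat when the
    moving air warms by delta_t_air degrees: AF = P / (rho * C * dT)."""
    return p_diss / (rho_air * c_air * delta_t_air)

def fan_power(p_i, af_i, af_j):
    """Cubic fan law: power scales with the cube of the airflow ratio,
    so doubling the airflow costs 8x the fan power."""
    return p_i * (af_j / af_i) ** 3
```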
We use the Nvidia Jetson Nano as our computing platform for validation. The Nano has a baseload power of \(\sim\)1W and a maximum power of 10W. We use Panasonic NCR18650B lithium-ion batteries rated for 3.2 Amp-hours (Ah) at 3.6V. We use a boost converter to build a 4Ah, 5V battery bank for the Nano with a 20Wh maximum energy capacity. Figure 2 from SS2 shows our battery's response to temperature and discharge current. Our enclosure uses Expanded Polystyrene (EPS) foam, which has a thermal conductivity \(k\) of 0.04\(Wm^{-1}K^{-1}\) (17). We use a 1.25in thickness as our baseline for experiments. We vary the effective \(k\) by changing the enclosure's thickness; for example, halving the thickness doubles the effective \(k\) to 0.08\(Wm^{-1}K^{-1}\).

Three parameters affect the change in the enclosure's temperature \(T_{enc}\) over time: the difference with the ambient temperature \(T_{amb}\), the enclosure's thermal conductivity \(k\), and the processor's power usage \(P\). Figure 6 shows the complex non-linear effect of each on the change in \(T_{enc}\), initialized to 0\({}^{\circ}\)C. Our baseline is \(T_{amb}\)=10\({}^{\circ}\)C, \(k\)=0.04, and \(P\)=0W. Figure 6(a) then varies \(T_{amb}\) without changing \(k\) or \(P\), and shows that a higher ambient temperature causes the enclosure's temperature to rise more quickly. Similarly, Figure 6(b) varies \(k\), and shows that a higher thermal conductivity, i.e., less insulation, also causes the temperature to rise quickly. Finally, Figure 6(c) varies the processor's power usage, and shows how the resulting heat increases the temperature up to 100\({}^{\circ}\)C at full utilization (10W), which is well beyond the 10\({}^{\circ}\)C ambient temperature.

## 4. Thermodynamic Model Use Cases

In this section, we first present the design of an environmentally-powered computer system, specifically its enclosure, and the operation of the system as the two use cases for the thermodynamic model, as shown in Figure 8. We then present a broader analysis that outlines the use cases across a wide range of settings.

### Use Case 1: Designing the System

In the design use case, the end goal is to determine the configuration range for the system enclosure that allows the system to meet its performance objectives across seasons. To do so, a prerequisite is the historical temperature profile of the system's location and user-specified system objectives, such as 100% availability at 50% of the power. The user also specifies the order of priority for secondary metrics. In addition, the user must specify the capacity of different system components, such as the processor, batteries, and solar panels. Given these inputs, we exhaustively search the system enclosure parameter space, which includes the enclosure insulation and cooling element capacity, to find a range of parameters that satisfy the user-specified performance objectives.

We take an iterative approach to finding the right specifications for the enclosure and the cooling element. We start with an initial set of values for the enclosure's insulation, or thermal conductivity \(k\), that may correspond to a low (e.g., styrofoam, \(k=0.04W/m.K\)), medium (e.g., Polyvinyl Chloride (PVC) plastic, \(k=0.2W/m.K\)), or high (e.g., glass, \(k=0.8W/m.K\)) value. These values are configurable and correspond to actual insulation materials that can be used for the enclosure. Similarly, we pick an initial value for the cooling capacity, specified in watts (W); a skeleton of the resulting parameter sweep is sketched below.
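The following minimal Python skeleton illustrates the exhaustive search this use case implies. The helper `simulate_season()` is a hypothetical stand-in for stepping the Section 3 model over a season's temperature and solar profiles; the candidate values mirror the materials above, and the stub's returned metrics are placeholders, not measured results.

```python
import itertools

K_VALUES     = [0.04, 0.2, 0.8]       # W/(m*K): styrofoam, PVC plastic, glass
COOLING_W    = [10, 50, 100]          # candidate cooling-element capacities (W)
POWER_LEVELS = [0.3, 0.5, 0.8, 1.0]   # operational power as a fraction of peak

def simulate_season(k, cooling_w, power):
    """Hypothetical stub: a real implementation would step the thermodynamic
    model over hourly temperature/solar traces and return the Section 4.3
    metrics for this configuration."""
    return {"efficiency": 0.9, "availability": 1.0, "work_rate": power}

def find_design(meets_objective, secondary_rank):
    """Brute-force sweep: keep configurations that satisfy the primary
    objective at every operational power, then rank by secondary metrics."""
    feasible = []
    for k, cooling_w in itertools.product(K_VALUES, COOLING_W):
        metrics = [simulate_season(k, cooling_w, p) for p in POWER_LEVELS]
        if all(meets_objective(m) for m in metrics):
            feasible.append(((k, cooling_w), metrics))
    # None is returned if no configuration meets the primary objective.
    return max(feasible, key=secondary_rank, default=None)
```

Since the sweep runs once before deployment, its brute-force cost is immaterial, matching the rationale given next.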
We then use an iterative process to find the optimal enclosure and cooling element specifications using our thermodynamic model, based on the temperature and solar power profiles, at all insulation values. Since the workload is not known, we vary the system's operational power between the user-specified minimum (e.g., 30%) and 100% power. For each of the configuration combinations (i.e., thermal conductivity, cooling capacity, and operational power), we compute the value of all of the metrics that quantify the system's performance objectives. We then use a brute-force approach to find the best configurations. Since finding the right enclosure specification is a one-time process done before system deployment, the time required for our brute-force approach is not a problem. Finally, we output a single configuration that satisfies the primary performance objective while maximizing the other metrics in the order of their priority. It is possible that no configuration meets the desired level of performance for the provided specifications.

Figure 8. _An overview of thermodynamic model use cases._

Figure 7. _Small-scale prototype inside the incubator._

### Use Case 2: Operating System Components

In the operation use case, our goal is to determine the operating point of the computing and cooling elements that allows the system to meet its performance objectives over a finite scheduling horizon. To do so, we require all the inputs of the design use case and the output parameters of the design process, with one key distinction. Instead of the observed temperature, solar generation, and workload arrival schedule, we need forecasted values for these inputs. Temperature forecasts are generally highly accurate and readily available. Solar power forecasts are also available through many open-source and publicly-available tools, such as Solar-TK (Brock et al., 2018). The workload patterns for applications that are deployed using small-scale embedded systems or edge datacenters also tend to be deterministic.

Given these inputs, we simulate our thermodynamic model using the forecasted values of temperature, solar power, and workload. In doing so, we schedule the workload for each hour of the scheduling horizon while satisfying the specified objectives. In determining the operational schedule, we leverage the insight that thermal and electrical energy exhibit a feedback loop with each other, and that the thermal energy generated by energy consumption at time \(t\) affects the availability of energy in subsequent time periods. We use a simple example to demonstrate this effect, which we term the _scheduling-energy effect_. It refers to the relationship between the intensity at which the processors operate and the energy available from the battery. Specifically, due to the temperature-energy effect described in Section 2, the warmer the battery, the more energy a system can extract from it. Thus, when scheduling workload, systems can extract more energy, and perform more total computation, if they operate at a higher utilization as the temperature decreases, and at lower utilizations when the temperature increases. The former maintains a higher temperature, which increases battery efficiency, while the latter generates less heat, which reduces the need for cooling, which consumes additional energy to dissipate the waste heat. Figure 9 quantifies the scheduling-energy effect in a scenario where the temperature drops overnight but rises during the day.
The graph compares operating continuously at \(\sim\)50% utilization with a scheduling policy that operates at \(\sim\)25% utilization during the day and \(\sim\)80% utilization overnight. In this case, the latter schedule is able to perform 11% more computation than the former because of the effects above. For this experiment, the computation is simply an integer benchmark. The experiment demonstrates that scheduling _when_ and _how much_ processors dissipate heat can affect a system's energy-efficiency and total computation.

We use a simple iterative approach to find an optimal workload schedule that meets the user's performance objective. We work backwards from the end of the scheduling horizon and schedule workload for each hour such that energy is extracted from the battery at the highest energy-efficiency. In the first round, the scheduler simply uses the default workload pattern, and then it changes the workload in each slot to ensure the performance objective is met.

### Thermodynamic Model at Work

We next show how our thermodynamic model can improve the design and operation of the use cases above. We present multiple combinations of performance objectives and minimum operational power constraints. We also decouple the results across three seasons to demonstrate how the performance tradeoffs are impacted by seasonal variations, even for a given location.

#### 4.3.1. **Evaluation Setup**

Below, we outline the key metrics used to specify systems' performance objectives.

**Performance Metrics.** We define system performance objectives using three metrics: _energy-efficiency_, _availability_, and _work rate_. _Energy-efficiency_ is defined as the percentage of available energy in the battery that is extracted and used for computation.

\[\text{Energy Efficiency}=100\times\frac{\text{Energy used for computation}}{\text{Energy stored in the battery}}\]

A value of 100% means that all the energy extracted from the battery is used for computation, while a value of 50% means that only 50% of the energy could be extracted and used for computation. In this case, the other 50% either could not be extracted from the battery or was used by the cooling system. A higher value is better.

_Availability_ is the percentage of time the system is up and has enough power to operate at or above a threshold utilization.

\[\text{Availability}=100\times\frac{\text{System up time}}{\text{Total experiment time}}\]

_Availability_ ranges between 0% and 100%. A higher value is better for a given operating point. Finally, the _work rate_ is defined as the amount of work done (computation) per unit time. Its value lies in the (0, \(\infty\)) range. Higher values of work rate are better.

**System Configurations.** For our use case demonstration, we consider a small-scale carbon-free edge datacenter similar to those considered in recent work (Brock et al., 2018). We consider the enclosure, in this case a small room, as a cube with 8ft sides. The size of the battery is 20kWh with a minimum state-of-charge of 40%, or 8kWh. The edge datacenter houses 8 servers of 250W each, with a total demand of 2kW at 100% utilization.

Figure 10. _Temperature profiles of a single location in the northeast United States across different seasons._

Figure 9. _Work done using i) naive and ii) thermal-aware scheduling that exploits the electrical/thermal feedback loop._

The available battery capacity of 12kWh is enough to run all the servers at 50% utilization for 24 hours under ideal conditions.
This setting allows us to vary the system utilization (and the current draw) around 50% and evaluate the effect of increasing or decreasing the battery's discharge current on the system's objectives. The heat capacity, volume, and mass of the battery are based on our lithium-ion battery datasheet (Kumar et al., 2018).

We demonstrate the effect of the design and operating point on energy-efficiency, availability, and performance using a site in the northeast U.S. This site exhibits significantly different weather across winter, summer, and spring. The temperature profiles for three representative days of these seasons are shown in Figure 10. The temperature varies from 77\({}^{\circ}\)F (25\({}^{\circ}\)C) to 97\({}^{\circ}\)F (36\({}^{\circ}\)C) in summer, from 46\({}^{\circ}\)F (8\({}^{\circ}\)C) to 52\({}^{\circ}\)F (11\({}^{\circ}\)C) in spring, and from 5\({}^{\circ}\)F (-15\({}^{\circ}\)C) to 16\({}^{\circ}\)F (-9\({}^{\circ}\)C) in winter. We configure the enclosure's heat transfer coefficient at three insulation settings: low (2), medium (0.9), and high (0.35). These values are achieved by varying the thickness of an insulated wall with a thermal conductivity of 0.15 W/m-K.

#### 4.3.2. **Energy-efficiency**

Figure 11 shows the energy-efficiency for different design parameters and operating points across all seasons. Here, we assume the system optimizes for energy-efficiency and discuss the design choices and operating points across seasons.

**Effect of Design.** The choice of design to optimize for energy-efficiency depends on which season the system optimizes for and how much loss of energy-efficiency it is willing to accept in other seasons. If energy-efficiency in winter is desired, the system should opt for a design that offers the highest protection against ambient weather and the best performance in heat scavenging, termed winter-optimal (high insulation). This design achieves 100% energy-efficiency in winter and offers the highest energy-efficiency at any operating point (Figure 11(a)). However, its energy-efficiency in other seasons is significantly lower, especially in summer (Figure 11(c)), since, in summer, this design has to use significant fan energy to dissipate the waste heat. Similarly, a low insulation design is the best choice for summer energy-efficiency. As expected, its performance in winter is the worst, as high conductivity allows processor heat to escape, preventing the enclosure from retaining heat when idle. The spring-optimal design (medium insulation) offers the best performance in spring and fall (Figure 11(b)). Since its performance is better than low insulation in winter and high insulation in summer, it is the best choice to optimize energy-efficiency across seasons.

**Effect of Operation.** Designing a system for a season does not automatically guarantee the best energy-efficiency. The operating point, i.e., utilization, provides another knob that optimizes energy-efficiency. For example, during winter, the energy-efficiency of the low insulation design is highest at the maximum operating point. This is because, at higher utilization, more heat is generated, which keeps the battery warm. The gain in battery energy-efficiency is enough to offset the negative effect of the higher current draw. However, the same design offers the best energy-efficiency at the lowest operating point in spring and summer. This is because, at low operating points, the low insulation is able to dissipate the heat through normal heat transfer.
The low operating point not only avoids the use of a fan but also avoids the negative impact of higher discharge currents. This trend is not the same for all the design choices. For high and medium insulation, the best performance is achieved at mid operating points in their respective seasons. This demonstrates that, given a design, the choice of operating point will vary within a season and across seasons.

Figure 11 can also be used as a guide to designing systems. If the system must operate at a certain operating point, one can choose a design that gives the best performance. For example, if the system must always operate at 10% power intensity, high insulation gives the best performance both in winter and summer, and comparable performance for the rest of the year.

Figure 11. _Energy-efficiency across three seasons: (a) winter, (b) spring, and (c) summer. For each season, we evaluate our three designs that are winter-optimal (red line, long dash), spring-optimal (black line, small dash), and summer-optimal (purple line, solid) over various operating points represented by power intensity on the x-axis._

Figure 12. _Availability across three seasons: (a) winter, (b) spring, and (c) summer. For each season, we evaluate our three designs that are winter-optimal (red line, long dash), spring-optimal (black line, small dash), and summer-optimal (purple line, solid) over various operating points represented by power intensity on the x-axis._

#### 4.3.3. Availability

Figure 12 evaluates availability for different design parameters and operating points across seasons. Here, we assume the system optimizes for availability and discuss the design choices and operating points that achieve it across seasons.

_Effect of Design._ The availability across all operating points differs for each insulation level, and the maximum availability offered by each insulation level across all operating points differs significantly across seasons. In winter, a high insulation offers a minimum of 60% availability across all operating points, compared to 35% for the low insulation. This is because the energy-efficiency of the two designs varies significantly at the highest operating point. However, the same high insulation offers only 38% availability at all operating points in summer. This is due to the energy lost to the fan, which needs higher airflow to dissipate heat as the operating point increases.

The choice of design for 100% availability is straightforward if the operator does not care about the operating point or energy-efficiency. Figure 12 illustrates that all the design options offer 100% availability across all seasons. They only differ by the highest operating point at which they offer 100% availability. Thus, if the operator wants the system to be 100% available, they can choose any design and then operate it at the highest operating point at which it offers 100% availability.

_Effect of Operation._ Each operating point offers different availability across seasons. For example, at 30% or lower power intensity, the system achieves 100% availability irrespective of design. This is less than the 50% power intensity that an ideal system could support with 100% availability, which shows the poor thermal management of designs in their non-optimal seasons. Note that each design gives 100% availability at a higher operating point in its optimal season. For example, high insulation manages thermal energy best in winter, where it exceeds the 50% operating point.
This is due to heat retention that takes the battery temperature above 25\({}^{\circ}\)C and extracts more energy than the nominal value. This effect is consistent across seasons: the design that best manages thermal energy in a given season achieves the highest operating point for 100% availability.

#### 4.3.4. Performance

Figure 13 shows the performance of the different designs during the summer. There are two key points in this evaluation. First, the speed of work differs across operating points. This is intuitive, as the power intensity on the x-axis is directly proportional to CPU utilization. At 100% utilization, the rate of computation is 10 times faster than at 10% utilization. The second takeaway is the difference in performance across designs. The low insulation performs the best, as it does not need a fan or active cooling. However, the other two designs must dissipate energy using a fan, as well as use workload scheduling to reduce the energy consumption of the fan. For example, the high insulation design uses a very simple scheduling policy that stops computation when the temperature exceeds 60\({}^{\circ}\)C and resumes it only when the temperature drops back below 60\({}^{\circ}\)C using a combination of passive cooling through conduction and active cooling using the fan. This way, the design is able to achieve higher energy-efficiency at the cost of performance.

## 5. Case Studies

We next present two applications as case studies that make use of our thermodynamic model while prioritizing different objectives. The first application is precision agriculture, where an IoT base station gathers sensor data from environmental sensors that are part of a distributed wireless sensor network. The state of the art for this application is FarmBeats (Zhou et al., 2017), an IoT platform for data-driven agriculture. The second application is federated learning in smart cities, where multiple edge computing platforms train a machine learning (ML) model. These case studies show that our _thermal-aware_ design and operation achieve better performance than the application-specific state of the art.

### Sensor Data Acquisition

We evaluate the performance of our _thermal-aware_ approach against FarmBeats (Zhou et al., 2017). FarmBeats attempts to minimize data gaps by varying the duty cycle of data acquisition. Vasisht et al. (Vasisht et al., 2017) detail the hardware specifications of the system. The system is powered by two solar panels of 60W each. The solar panels are connected to four 12V-44Ah batteries connected in parallel. The processing component is a Raspberry Pi 4B that consumes 2.7W when idle and roughly 7W at maximum. The environmental sensors are interfaced with the base station through an 802.11b router that consumes at most 20W with a base power of 3W. We assume a linear relationship between the router power and the sensor data acquisition rate for the purpose of this case study. However, as shown in the FarmBeats paper (Fig 5(a)), there is no enclosure for the batteries. In our case study, we design an enclosure for the system that we use for both the FarmBeats-like and our proposed thermal-aware approach. This case study essentially compares the performance of a thermal-agnostic FarmBeats-like system and a thermal-aware operation for data acquisition. In the former case, we only change the rate based on the available energy. In the latter case, we leverage the scheduling-energy effect to get better performance, as the sketch below illustrates.
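The decision the thermal-aware base station makes each interval can be caricatured in a few lines of Python. This is a toy illustration, not the paper's actual scheduler: the derating and cooling-overhead curves and the enclosure-heating term are invented placeholders standing in for the validated models of Sections 2 and 3.

```python
def pick_duty_cycle(energy_wh, temp_forecast_c, candidates=(0.25, 0.5, 0.8)):
    """Choose a data-acquisition duty cycle for the next interval, weighing
    the scheduling-energy effect: running hotter in the cold warms the
    battery (more usable energy), while in the heat it adds cooling cost."""
    def usable_fraction(t):
        # Placeholder temperature-energy derating (shaped like Figure 2(a)).
        return max(0.0, min(1.0, 0.5 + 0.02 * (t + 20.0)))

    def cooling_overhead(t):
        # Placeholder fan-energy fraction once the enclosure exceeds 40 C.
        return 0.0 if t <= 40.0 else 0.1 * (t - 40.0)

    def score(duty):
        t_box = temp_forecast_c + 15.0 * duty  # waste heat warms the enclosure
        usable = energy_wh * usable_fraction(t_box)
        return usable * duty - energy_wh * cooling_overhead(t_box)

    return max(candidates, key=score)

# A thermal-agnostic policy, by contrast, would ignore temp_forecast_c
# entirely and key the duty cycle on energy_wh alone.
```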
Figure 14 shows the data rate achieved under the _thermal-aware_ and the thermal-agnostic FarmBeats operation.

Figure 14. _Performance comparison against FarmBeats. A thermal-aware operation outperforms FarmBeats' thermal-agnostic design during the summer and winter months. Both have the same performance during the spring season._

Figure 13. _Performance during summer for high (winter-optimal), medium (spring-optimal), and low (summer-optimal) insulation designs over various operating points._

Both approaches perform similarly during spring, when temperatures are moderate and there are no significant variations in temperature over the course of a single day. However, the _thermal-aware_ operation of the base station outperforms FarmBeats during summer and winter. The difference is significant during winter, when thermal effects are significant. Overall, our _thermal-aware_ approach achieves a 24% higher data rate than FarmBeats' thermal-agnostic design.

### Federated Learning at the Edge

There is significant prior work on leveraging federated learning on resource-constrained edge devices (Han et al., 2017; Wang et al., 2018). These approaches aim to maximize the accuracy of the trained model under energy availability constraints at each device and constraints on the bandwidth available to upload the local model to the parameter server. However, these approaches do not explicitly consider how these constrained resources might be further impacted when exposed to a wide range of extreme environments. This case study shows that, by jointly managing electrical and thermal energy, we can achieve better performance, both in terms of energy-efficiency and work rate, for such environmentally-powered computer systems.

**Application Setup.** Our baseline for this case study is the Multi-Exit Federated Edge Learning (ME-FEEL) approach proposed by Tang et al. (Tang et al., 2018), which uses a modified ResNet18 (Tang et al., 2018) deep learning model that enables exiting at any of its seven layers. Tang et al. profile the training time for each exit stage, which we leverage in our simulation. Our application scenario consists of periodic rounds where the edge device trains its model and uploads the results to the server within a certain time threshold. The edge device picks an exit stage based on the energy available at the device. A higher exit stage indicates more work done, and vice versa. Since the amount of energy in both cases is the same, a higher work rate means higher energy-efficiency. Furthermore, since the system selects the exit stage based on the available energy, it is available for 100% of the rounds. We evaluate two scenarios where an application maximizes its work rate and energy-efficiency over a long period.

**Optimizing Work Rate.** As Tang et al. (Tang et al., 2018) illustrate, a higher exit stage yields higher accuracy, so the application aims to exit at the highest stage possible. To achieve this goal, the application must prioritize work rate over system availability and energy-efficiency. Figure 15 compares the performance of ME-FEEL's thermal-agnostic approach to our _thermal-aware_ approach. The _thermal-aware_ approach results in a higher probability of exiting at a higher stage. As a result, _thermal-aware_ model training achieves higher accuracy. However, in attempting a higher exit stage and an increased work rate, the node may run out of energy and not be able to participate in some rounds.
Thus, despite high accuracy, the model may not be trained on the latest data. Our approach achieves an accuracy of 71% versus 62% for the baseline, representing a 14.5% improvement.

## 6. Related Work

**Energy-harvesting Sensor Systems.** There is prior work on designing environmentally-powered systems that dynamically adapt their energy usage to enable perpetual operation, mostly for small-scale energy-harvesting sensor systems (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). Most of the prior work assumes ideal operating conditions, e.g., 20-25\({}^{\circ}\)C, and ignores thermal effects. The closest work to ours is (Wang et al., 2018), as it considers the effect of ambient temperature and discharge current on the battery's energy-efficiency. However, it ignores the other effects we consider, e.g., the insulation-fan effect and the scheduling-energy effect.

**Edge AI.** There is prior work on Edge AI that maximizes the energy-efficiency of edge computing platforms by optimizing various power management techniques (Han et al., 2017). However, this body of work is orthogonal and can be used in conjunction with our approach.

**Sustainable Clouds.** There is recent work on designing sustainable clouds powered by renewable energy with battery storage (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), which focuses on adapting the workload to match variations in the energy supply but ignores thermal effects.

**Data Center Cooling.** Prior work on managing heat in large-scale data centers mainly focuses on heat movement within facilities and avoiding hotspots. The work on free-cooling data centers discusses the impact of ambient temperature on energy-efficiency and workload adaptation to optimize cooling efficiency (Beng et al., 2018; Wang et al., 2018). However, data centers do not have large batteries, and thermal effects in them are limited (Gupta et al., 2018). Also, data centers focus on cooling; the heat produced is a waste that must be dissipated. In contrast, environmentally-powered computer systems can leverage this heat in cold weather.

**Energy Modeling.** There is work on modeling thermal energy in buildings, such as OpenStudio (Han et al., 2017). However, it generally does not model battery components and their thermal effects.

## 7. Conclusion

In this paper, we considered environmentally-powered computing systems and showed that they must consider not only their electrical energy but also their thermal energy for effective design and operation. Our evaluation showed that a season-specific design can achieve up to 35% higher energy-efficiency than a non-optimal design while also achieving 20% higher availability. Finally, our case studies showed that thermal-aware operation of these systems yields a 24% improvement in data acquisition rate for the precision agriculture application, a 14% increase in model accuracy for federated learning at the edge, and a 41% increase in the data used for training at the edge.

## Acknowledgment

We thank the anonymous e-Energy reviewers for their insightful comments and feedback. This research is supported by NSF grants 2230143, 2107133, 2105494, 2213636, 2211302, 1908536, 1925464, 2211888, US Army contract W911NF-17-2-0196, and CX-027429.

Figure 15. _Performance comparison against a modified Multi-Exit Federated Edge Learning (ME-FEEL) approach (Wang et al., 2018). A thermal-aware operation outperforms ME-FEEL by exiting at a higher stage, leading to higher training accuracy._
2301.03673
Prototype Global Analysis of LISA Data with Multiple Source Types
The novel data analysis challenges posed by the Laser Interferometer Space Antenna (LISA) arise from the overwhelmingly large number of astrophysical sources in the measurement band and the density with which they are found in the data. Robust detection and characterization of the numerous gravitational wave sources in LISA data cannot be done sequentially, but rather through a simultaneous global fit of a data model containing the full suite of astrophysical and instrumental features present in the data. While previous analyses have focused on individual source types in isolation, here we present the first demonstration of a LISA global fit analysis containing combined astrophysical populations. The prototype pipeline uses a blocked Metropolis-Hastings algorithm to alternately fit to a population of ultra compact galactic binaries, known "verification binaries" already identified by electromagnetic observations, a population of massive black hole mergers, and an instrument noise model. The Global LISA Analysis Software Suite (GLASS) is assembled from independently developed samplers for the different model components. The modular design enables flexibility for future development by defining standard interfaces for adding new, or updating existing, components to the global fit without being overly prescriptive for how those modules must be internally designed. The GLASS pipeline is demonstrated on data simulated for the LISA Data Challenge 2b. Results of the analysis and a road-map for continued development are described in detail.
Tyson B. Littenberg, Neil J. Cornish
2023-01-09T20:48:27Z
http://arxiv.org/abs/2301.03673v1
# Prototype Global Analysis of LISA Data with Multiple Source Types

###### Abstract

The novel data analysis challenges posed by the Laser Interferometer Space Antenna (LISA) arise from the overwhelmingly large number of astrophysical sources in the measurement band and the density with which they are found in the data. Robust detection and characterization of the numerous gravitational wave sources in LISA data cannot be done sequentially, but rather through a simultaneous global fit of a data model containing the full suite of astrophysical and instrumental features present in the data. While previous analyses have focused on individual source types in isolation, here we present the first demonstration of a LISA global fit analysis containing combined astrophysical populations. The prototype pipeline uses a blocked Metropolis-Hastings algorithm to alternately fit to a population of ultra compact galactic binaries, known "verification binaries" already identified by electromagnetic observations, a population of massive black hole mergers, and an instrument noise model. The Global LISA Analysis Software Suite (GLASS) is assembled from independently developed samplers for the different model components. The modular design enables flexibility for future development by defining standard interfaces for adding new, or updating existing, components to the global fit without being overly prescriptive for how those modules must be internally designed. The GLASS pipeline is demonstrated on data simulated for the LISA Data Challenge 2b. Results of the analysis and a road-map for continued development are described in detail.

## I Introduction

The mHz band of the gravitational wave spectrum is expected to contain an unprecedented abundance of galactic, extra-galactic, and cosmological gravitational wave (GW) sources. The Laser Interferometer Space Antenna (LISA) will survey the mHz GW band and provide unique observational constraints on the formation and evolution of compact binaries in the Milky Way, the origin and growth of massive black holes throughout cosmic history, the dynamics of dense stellar environments in galactic nuclei, the fundamental nature of gravity and black holes, and more [1]. The richness of the LISA source catalog comes at the price of a more complicated analysis framework than is required for currently operating GW observatories. While aspects of the methodology developed for ground-based interferometers (many discrete sources) [2] and pulsar timing (overlapping sources, sophisticated noise modeling) [3] undergird development of LISA analysis pipelines, new strategies are needed to account for the overwhelming number and density of GW signals in the LISA data. The fundamental challenge of LISA analysis stems from the large number (\(\mathcal{O}(10^{4})\)) and long duration (months to years) of detectable signals, resulting in non-negligible overlaps in time and frequency between discrete sources. As a result, analyses cannot treat sources independently and sequentially work through a list of candidate detections. Instead, the LISA analysis has to be approached globally, simultaneously fitting complete data models including all of the detectable GW sources and the detector noise. The need for a "Global Fit" was first described in 2005 [4], and has been identified as the primary challenge to the LISA analysis since early in the mission formulation [5]. This has led to a coordinated effort to develop capable algorithms well in advance of mission operations [6].
Global fit analyses are not unique to LISA, as there are analogous methods used elsewhere in GW astronomy, and more broadly within astronomy and astrophysics (e.g. _Gaia_ [7]). For LIGO-Virgo analysis, the BayesWave pipeline simultaneously models Gaussian noise, non-Gaussian noise artifacts, and short-duration GW transients [8]. PTA analyses use a global fit to simultaneously model a correlated, stochastic gravitational wave background, a solar system ephemeris model, and multiple noise sources for each pulsar in the array [9; 10]. Some PTA analyses also perform a global fit for multiple source types, such as the signals from individual black hole binaries and a stochastic confusion noise from unresolved binaries [11], or perform a BayesWave-style analysis to reconstruct unmodeled burst signals [12]. Where the analogy ends is the scale of the LISA problem compared to elsewhere in GW astronomy, evident in the number of sources that are part of the global analysis, the diversity of source types (SMBHBs, EMRIs, UCBs, SGWBs, SOBHs, etc.), and data complications from multi-year integration times (glitches, nonstationary noise, gaps, etc.). In this paper we present the first demonstration of a LISA global fit analysis contending with multiple source types. We call the algorithm the Global LISA Analysis Software Suite, or GLASS. As the proving ground for GLASS we use the simulated data released in the second round of the LISA Data Challenges (Challenge LDC2a-v2) [13]. The simulated data contain Gaussian detector noise; a simulated population of Milky Way ultra compact binaries (UCBs); 35 galactic UCBs already discovered by electromagnetic observations, the so-called "verification binaries" (VGBs); and a population of merging massive black hole binaries (MBHBs). The philosophy of GLASS is to incorporate independently developed, stochastic sampling algorithms designed to address different facets of the full LISA analysis. GLASS is then effectively an overarching umbrella that manages the interfaces, matches data formats, and orchestrates how the different samplers will work together to converge and adequately cover the target, high dimensional, joint posterior distribution function. GLASS uses a four-component model to fit the simulated LDC2a-v2 data. The UCB, VGB, and MBHB models each employ analytic template waveforms to fit the detectable sources, while the noise level, including the unresolved astrophysical foreground from the galactic binaries, is fit with a phenomenological model. The parameters of the four model components are optimized using a blocked Metropolis-Hastings (MH) algorithm. The test data set spans one year of simulated LISA observations; however, our analysis processes the data sequentially, analyzing increasingly large strides of the "observed" time series as we envision would be done during mission operations. Results from the analysis of the LDC2a-v2 data are compared to the input populations to assess the performance of the algorithm. Figure 1 summarizes the LDC2a-v2 results by showing the combined reconstructed model components represented as the amplitude spectral density of the time delay interferometry (TDI) A channel [14]. The original data are shown in gray in the background of the figure. The purple lines are the posteriors of the reconstructed UCB waveforms while orange are the reconstructed VGBs. The magenta broadband curves are the reconstructed massive black hole signals, and the light blue curve is the noise model.
Note that at low frequency the 50% and 90% credible intervals are visible in the black hole signals, while otherwise the credible intervals on the reconstructions are too narrow to be visible in this plot. This figure represents the key result of this study: We are demonstrating a prototype pipeline able to simultaneously fit thousands of overlapping signals of different morphology and an _a priori_ unknown noise level. The remainder of this paper will provide a detailed description of the algorithm and a demonstration of the performance. Section II describes the analysis architecture, and how the different modules are integrated together into a global fit pipeline. Section III.1 describes the noise model and sampling algorithm, adapted from the BayesLine algorithm used for LIGO-Virgo noise modeling [15]. Section III.2 summarizes updates to the galactic binary sampler GBMCMC first described in Ref. [16], while Section III.3 specifies the configuration changes for GBMCMC to perform targeted analysis of known binaries. Section III.4 describes how the MBHBMCMC massive black hole sampler from Ref. [17] was adapted for this work. Section IV presents the evolving results from 1.5, 3, 6, and 12 month analyses of the LDC2a-v2 data before we conclude in Section V with a development road map to improve GLASS's capabilities in order to analyze increasingly realistic LISA data.

## II The GLASS architecture

The central engine of GLASS is a blocked Markov Chain Monte Carlo sampler [18]. In the blocked sampling scheme, a subset of the model parameters (a block) is updated while holding all other parameters fixed. Different blocks are updated independently in sequence, and the process cycles until the sampler has converged. For the LISA Global Fit problem, blocked MH samplers have two advantages. First, they work well for high-dimensional spaces when parameter correlations are confined to relatively small and _a priori_ identified subspaces. Second, they are naturally modular, turning the daunting task of building an algorithm equal to the complexity of LISA data into a well-defined set of components that are developed independently and then integrated. The blocked MH scheme in GLASS is hierarchical: the top-level blocks, which we will refer to as "modules," are the joint sets of parameters for the different model components, i.e., blocks for the noise, VGB, UCB, and MBHB parameter sets. The sampling within the VGB and MBHB modules is further grouped into blocks by individual sources, while the UCB module has one more layer of hierarchy, where model parameters are grouped by narrow-band frequency segments, and then by individual sources within each segment.

Figure 1: Median reconstructed global fit model components for the full 12 month LDC2a-v2 data, shown as the ASD in the TDI A channel. Gray is the residual, purple are the UCB detections, orange are the fits to the known binaries, magenta are the MBHB mergers, and light blue is the noise model.

Each module uses a customized parallel-tempered Markov Chain Monte Carlo sampler, developed independently of the others. The role of GLASS is to coordinate which blocks are updating and to exchange state data between modules. The modules work on their own subset of the data, use their own likelihood function, tempering scheme, proposals, etc., and in principle could even use a different representation of the data (e.g., time series, frequency series, or wavelets). In the current implementation of GLASS each module is working in the frequency domain.
Figure 2 is a schematic diagram for how modules operate on different bandwidths of the data. Note that the colors indicating each module will be consistent throughout this paper. The noise model is fit over the full frequency measurement band of the data (light blue). For the LDC2a-v2 data we take that to range from \(\sim\)10\({}^{-5}\) Hz to \(\sim\)30 mHz. Massive black hole mergers are broadband signals where the maximum frequency is determined by the total mass of the binary. The MBHB module bandwidth is set dynamically based on the source parameters, but generally extends up to a few to \(\mathcal{O}(10)\) mHz (magenta). The UCBs are narrow-band signals, generally spanning \(\lesssim 10\)\(\mu\)Hz, but are by far the most numerous of the LISA sources, and will be found throughout the measurement band of our analysis, though sparsely above 10 mHz. The UCB module consists of several instances of the same sampler, each focusing on a band-limited segment of data (purple). The bandwidth of each segment depends on the frequency, using smaller segments where sources are most densely spaced. Finally, the VGB module conducts a narrow-band targeted analysis for the individually known sources (orange). To understand the interface between modules, consider the joint likelihood function for the global fit:

\[p(d|\theta)=(2\pi)^{-\frac{N}{2}}\det\left(C(\theta_{\text{noise}})\right)^{-\frac{1}{2}}e^{-\frac{1}{2}(d-h(\theta_{\text{MBHB}})-h(\theta_{\text{UCB}})-h(\theta_{\text{VGB}}))^{T}C(\theta_{\text{noise}})^{-1}(d-h(\theta_{\text{MBHB}})-h(\theta_{\text{UCB}})-h(\theta_{\text{VGB}}))} \tag{1}\]

where \(d\) is the data, \(N\) is the number of data points, \(C\) is the noise covariance matrix, \(\theta\) represents the full parameter set, \(\theta_{i}\) are the model parameters in the block for module \(i\), and \(h\) are the co-added detector responses to the modeled sources in each module. For example, \(h(\theta_{\text{MBHB}})\) is really shorthand for \(\sum_{n}h(\theta_{\text{MBHB}}^{n})\) where \(n\) indexes all of the sources in the MBHB model. Sticking with the MBHB example, the sampler adopted for that model was developed assuming no other sources in the data, and a known noise covariance matrix. The internal likelihood for the \(k^{\text{th}}\) component of the MBHB module is then just

\[p(d|\theta_{\text{MBHB}}^{k})=e^{-\frac{1}{2}\left(\bar{d}-h(\theta_{\text{MBHB}}^{k})\right)^{T}C^{-1}\left(\bar{d}-h(\theta_{\text{MBHB}}^{k})\right)} \tag{2}\]

where the normalization term is absent because the sampler only considers the likelihood ratio between two points in parameter space and the covariance matrix is independent of the (MBHB) parameters. To interface this sampler with the rest of GLASS, at the beginning of each MBHB block update the covariance matrix is replaced based on the current state of the noise model \(\theta_{\text{noise}}\), and the "data" as seen by the MBHB sampler is the _residual_ after subtracting all other model components: as far as the MBHB sampler for the \(k^{\text{th}}\) MBHB is concerned, \(\bar{d}=d-h(\theta_{\text{UCB}})-h(\theta_{\text{VGB}})-\sum_{n\neq k}h(\theta_{\text{MBHB}}^{n}).\) The MBHB sampler is otherwise blissfully unaware of what is happening in the rest of the global fit. It is the job of GLASS to keep track of the current state of each module, prepare the effective data and noise covariance matrix, and refresh the likelihood (using the new effective data and noise model) of the current state before a sampler updates its block of parameters.
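The update pattern implied by Eqs. (1) and (2) can be sketched in a few lines. The following is a minimal illustration only; the function names and data structures are ours, not the actual GLASS interfaces.

```python
import numpy as np

# Minimal sketch of the blocked update pattern: each module sees the
# residual after all *other* model components are subtracted, with the
# noise PSD frozen at the current state of the noise model.

def residual_for(block: str, data: np.ndarray, components: dict) -> np.ndarray:
    """Frequency-domain residual presented to one module's sampler."""
    r = data.copy()
    for name, h in components.items():
        if name != block:
            r -= h                  # subtract current state of other blocks
    return r

def blocked_mh_cycle(data, components, psd, update_fns):
    """One cycle of the blocked Metropolis-Hastings sampler."""
    for block, update in update_fns.items():
        r = residual_for(block, data, components)
        # the module's internal MCMC runs against (r, psd) and returns the
        # updated co-added waveform model for its block
        components[block] = update(r, psd)
    return components
```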
An identical argument applies to the other modules. The individual samplers for the modules have been developed and published elsewhere and will be briefly summarized in later sections before focusing on updates made to each of them for the LISA global analysis. To understand how data are shared between modules, a block diagram of the workflow is shown in Fig. 3. The diagram is a simplified version of the true workflow, depicting only three UCB and MBHB nodes each. In practice GLASS uses several hundred UCB nodes and one MBHB node per source in the model. The noise, VGB, UCB, and MBHB model updates are executed by the BayesLine, VBMCMC, GBMCMC, and MBHBMCMC blocks, respectively. GLASS uses the Message Passing Interface (MPI) standard to exchange information between the different modules. For shorthand we will refer to each MPI process as a "node" of the analysis, though in practice we run multiple MPI processes per physical node.

Figure 2: Schematic block diagram for how frequency domain data are segmented by the different model components. The noise module must cover the full frequency range (light blue). The MBHB module is broadband, covering almost the same width as the noise model (magenta). The UCB module divides the data into narrow-band, overlapping, segments (purple), while the VGB model targets only the frequency range spanned by each individual known binary (orange).

The BayesLine module uses a single node which also serves as the root process responsible for the work shared by all nodes and the overall orchestration of the analysis (P0 in the flow chart). At start-up the root node handles data parsing, selection, and conditioning before broadcasting the data and the initial state of each sampler to all other worker nodes. During each iteration of the sampler, nodes P0 and P1 first update their parameter blocks for the noise model and VGB model, respectively. At the end of the noise and VGB module updates (each involving several internal MCMC steps for each model), the VGB process sends the current state of the VGB model in the frequency domain to the root process. The root process then broadcasts the current state of the noise model and VGB model to the UCB (P2-P4) and MBHB (P5-P7) nodes. The UCB and MBHB processes then create their respective residuals and update their block of parameters. Each UCB process is responsible for a narrow-band segment of the data, but care must be taken at the segment boundaries where individual sources can span the interface. Each node shares its current state with the neighboring nodes (i.e. the adjacent frequency segments) so that the receiving node can remove the state of the neighboring model when forming the residual that will effectively serve as the data for the current update. The segments overlap in frequency and each node is responsible for fitting to the sources in its half of the overlapping region. This overlap, which is set to be a factor of two larger than the typical bandwidth of a source at that frequency, ensures that the templates for sources located near the boundaries are not artificially truncated. To preserve the correlations between sources that are close to one another on either side of a boundary, the UCB nodes alternate which block is updating and which is waiting. For example, all of the odd-numbered processes will update their parameters, exchange with their neighbors, and then the even processes will update. For the MBHB modules, additional pre-processing is needed once their residuals are formed.
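The alternating odd/even update of the UCB segments described above can be sketched as follows. This is an illustration of the scheduling idea only; the MPI exchanges are omitted and `segments` is a stand-in for per-segment sampler objects.

```python
# Sketch of the alternating odd/even UCB-segment update, which prevents
# neighboring (overlapping) segments from updating correlated parameters
# in parallel. All names are illustrative.

def ucb_update_round(segments, iteration: int):
    parity = iteration % 2
    for i, seg in enumerate(segments):
        if i % 2 == parity:
            # pull the neighbors' current waveform states so they can be
            # subtracted when forming this segment's residual
            left = segments[i - 1].state if i > 0 else None
            right = segments[i + 1].state if i < len(segments) - 1 else None
            seg.update(neighbor_states=(left, right))
    # the other parity class updates on the next call
```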
The MBHB module relies on the heterodyned likelihood described in [19; 20; 21] and recomputes the coefficients based on the current state of the sampler, as well as updating the proposal distributions used within the sampler, which use the information matrix as an approximation to the covariance matrix of the posterior. After that pre-processing, each MBHB module updates its parameter block in parallel with the other MBHBs in the fit. At the end of the UCB and MBHB updates each module sends the current state of its model in the frequency domain to the root process to broadcast to all of the other nodes, and the entire cycle repeats. After many iterations, when the sampling has finished, each process performs a minimal level of post-processing to prepare for the next stage of the pipeline when the posterior samples are consolidated into a source catalog. Note that in the traditional blocked MH sampling scheme only one block of parameters is updated at a time while the others are held fixed, whereas GLASS is updating blocks of parameters in parallel. This was a choice made to improve the computational efficiency of the algorithm. The current bottlenecks for the analysis are the heterodyne step for the MBHB model and the convergence time for the highest frequency UCBs. Those two aspects of the problem set the scale for the cost of each iteration and the number of iterations needed, respectively. To maximize efficiency, we do enough MBHB parameter updates to match the cost of updating the heterodyne. The number of internal GBMCMC updates per cycle of the full sampler is then dynamically adjusted to take approximately the same amount of computational time as the MBHB models. The comparative costs of the noise model and VBMCMC updates are significantly lower, similar to the costs of the data sharing and common processing that must get done before each iteration (e.g. writing results to file, etc.). This scheme was thus created to maximize the duty cycle of individual nodes by minimizing the amount of time nodes are blocked waiting for other processes to finish their work. While this technically violates the conditions needed for the resulting samples to be representative of the target posterior distribution function, the effects are only noticeable for blocks that are correlated. Updating alternating UCB blocks ensures that no correlated UCB parameters are being altered in parallel, as the data segments for each UCB node are larger than the bandwidth of a single UCB signal. Similar arguments can be made between other blocks being updated in parallel, although a production analysis on observational data requires more thorough testing for confirmation, and/or a more conservative approach and a higher computational cost. It is a trivial rearrangement of where the MPI exchanges take place to revert to the traditional serial update of all parameter blocks. Each of the samplers uses parallel tempering [22] to improve the mixing of the chains. Parallel tempering is especially critical for promoting transitions between models in the trans-dimensional samplers. The tempering scheme is independently developed and tuned for the different modules, and only the zero-temperature chain parameters or state are shared between different processes. Each of the parallel tempering samplers is multi-threaded, ideally using a single CPU per chain. In practice there is a trade space between the number of resources needed for the analysis and the amount of time those resources are needed.
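For readers unfamiliar with parallel tempering, the standard replica-exchange swap move used by such samplers is shown below; the surrounding bookkeeping is illustrative, but the acceptance rule is the usual one for tempered posteriors \(\pi_{i}\propto L^{\beta_{i}}p\).

```python
import numpy as np

# Standard parallel-tempering swap: chains i and j, with inverse
# temperatures betas[i] and betas[j], exchange states with probability
# min(1, exp[(beta_i - beta_j)(logL_j - logL_i)]).

def try_swap(betas, loglikes, states, i, j, rng=np.random.default_rng()):
    log_alpha = (betas[i] - betas[j]) * (loglikes[j] - loglikes[i])
    if np.log(rng.uniform()) < log_alpha:
        states[i], states[j] = states[j], states[i]
        loglikes[i], loglikes[j] = loglikes[j], loglikes[i]
        return True
    return False
```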
Processing of the LDC2a-v2 data was done on Amazon Web Services (AWS) cloud computing infrastructure, which favors smaller-scale jobs running for longer. As a result the LDC2a-v2 analysis used multiple threads per CPU. The final configuration for the full LDC2a-v2 analysis (12 months of simulated data) used 624 MPI tasks and 12 CPUs per task for a total of 7488 CPUs. The run was deployed on 78 nodes of 96 CPUs each. The noise model and verification binary modules used one MPI task each. There were 15 MBHB mergers in the data, each run with a dedicated MPI task. The remaining 607 MPI tasks were dedicated to the UCB model covering 0.03 to 23 mHz.

## III Description of the individual samplers

The individual model components that are integrated into the GLASS architecture are independently developed, described, and published. Each is still under active development so it is useful to give an overview of each sampler with emphasis on updates that have been made since the most recent publications.

### Global Noise Model

For the noise model we use an adaptation of the BayesLine algorithm originally developed for LIGO-Virgo noise modeling [15]. The original BayesLine algorithm fits the power spectral density (PSD) of the noise \(S_{n}(f)\) independently in each detector. The LIGO-Virgo version of the pipeline uses a two-component fit to phenomenologically model the noise spectrum. The main component is a broadband noise spectrum that looks similar to a sum of power laws, with steeply rising noise at low frequency and a more gradual increase at high frequency. A broken power law is not sufficiently flexible to model the actual LIGO-Virgo data, so BayesLine uses a cubic spline interpolation where each spline control point \(i\) is parameterized by its frequency and PSD level \([f^{i},S_{n}^{i}]\). The locations of the spline points, as well as their total number, are free parameters sampled over with a trans-dimensional MCMC. For the LIGO-Virgo application BayesLine also includes a linear combination of Lorentzians to fit the narrowband features in the spectrum due to calibration lines, the power supply, resonances of the mirror suspension system, etc. The simulated LISA data do not contain narrowband noise features and so GLASS's implementation of BayesLine does not use the line model. Note, however, that there were spectral lines in the LISA _Pathfinder_ data and so future versions of the model will need such a feature [23]. There are two important differences between the BayesLine implementation integrated into GLASS and that which has been used for LIGO-Virgo data. First, the LISA noise spectrum spans the frequency regime where finite arm length effects of the detector response are in the measurement band, unlike ground-based interferometers which operate entirely in the long wavelength limit. The arm length manifests in the noise spectrum as sharp features where the PSD, and instrument response, formally go to zero for signals with wavelengths that fit an integer number of cycles within the detector arm. Mathematically this is a result of terms proportional to \(\sin^{2}(f/f^{*})\) appearing in the detector response functions with the "transfer frequency" \(f^{*}\equiv c/(2\pi L)\), where \(c\) is the speed of light and \(L\approx 2.5\) Gm is the arm length of the LISA detector. The resulting spectrum is not well modeled by a spline interpolation at high frequencies. However, the difficult features for a spline interpolation to track are a purely geometric effect set by the size of the detector.
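To make the scale of these geometric features concrete, the transfer frequency quoted above evaluates to roughly 19 mHz, near the top of the analysis band, as this two-line check shows:

```python
# Quick check of the LISA transfer frequency quoted above, f* = c / (2 pi L).
c = 2.998e8   # speed of light [m/s]
L = 2.5e9     # LISA arm length [m]
f_star = c / (2 * 3.141592653589793 * L)
print(f"f* = {f_star * 1e3:.1f} mHz")  # ~19.1 mHz: response nulls appear above this
```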
We therefore model the _difference_ between a reference noise spectrum, including the geometric effects, and the observed data, \(S_{n}^{\text{modeled}}=S_{n}^{\text{observed}}-S_{n}^{\text{reference}}\), i.e. we are fitting for broadband differences between the reference noise level, derived from the current best estimate of the LISA performance, and the observed data. Where the reference model is accurate the modeled PSD will be consistent with zero. Second, the interpolation between control points for the GLASS application of the spline model employs Akima splines [24] rather than the cubic splines used in the LIGO-Virgo applications. The Akima splines are less prone to oscillations between control points by relaxing the requirement of continuity in the second derivative of the interpolated curve. The tendency for cubic splines to oscillate is exacerbated by the large dynamic range and steeply changing spectrum at low frequency. Akima splines perform better on the LISA spectrum and are worth considering for LIGO-Virgo noise modeling as well.

### GBMCMC Updates

The UCB sampler is the GBMCMC pipeline described in Ref. [16]. The GBMCMC application is the latest in a long line of algorithms designed for the LISA galactic binaries which partition the frequency domain data into many narrow-band segments and use model selection to determine the number of detectable binaries in each segment [25; 26]. The model selection method of choice used by GBMCMC is a trans-dimensional, or reversible jump, Markov Chain Monte Carlo (RJMCMC). Of the different samplers integrated by GLASS, GBMCMC has seen the most additional development. The sampler was updated to use multi-threading for the parallel tempered chains, making a significant improvement in the run time, especially when leveraging the increasingly large number of CPUs available per node. The pipeline has also updated the way results from previous runs are incorporated as proposal distributions for subsequent analyses. As described in Ref. [16], the LISA data are processed in increasingly-long time epochs, starting with the first 1.5 month segment of data and re-processing each time the available data has doubled (i.e., after 3, 6, and 12 months). In the first version of the GBMCMC pipeline, multivariate Gaussian proposals were built using the covariance matrix of the posterior samples for each UCB in the catalog. In the latest version the single Gaussian proposal was replaced by a Gaussian Mixture Model (GMM) which is fit using the Expectation Maximization (EM) algorithm run on the posterior samples for sources in the previous epoch's catalog. Evaluating the GMM proposal is more computationally costly than the single multivariate proposal, but we have found it to be offset by the improvement in convergence time. The GBMCMC sampler now also includes a basic "split-merge" proposal for trans-dimensional steps, whereas the original algorithm only used "birth-death" moves. A birth-death move chooses to either remove or add a feature to the model (in GBMCMC's case, a source from or to the fit). The split-merge proposal attempts to divide a single feature into two, or combine a pair of features into one.

Figure 3: Block diagram of Global Fit architecture. Process P0 is the root process and runs the noise model sampler. Processes P1 to P3 are for the UCB model. Processes P4 to P6 run the MBHB model. Gray is the blocked MH sampler. Purple are the \(N_{\text{MBHB}}\) independent MBHB sampling steps. Green are the \(N_{\text{UCB}}\) coupled UCB processes, which exchange only between adjacent segments. Orange is the noise model, which is run on the root process. Data from all processes are shared with, and broadcast from, root. In practice \(\mathcal{O}(10^{3})\) UCB processes are needed, and \(\mathcal{O}(10)\) MBHB processes.
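Returning briefly to the noise-model interpolation above, the reference-plus-Akima-spline construction can be sketched as follows. The control-point values here are placeholders; in GLASS their number and locations are themselves sampled trans-dimensionally.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Sketch of the spline noise model: fit the *difference* between the
# observed spectrum and a fixed geometric reference, interpolating a
# handful of control points with Akima splines. Values are illustrative.

log_f_ctrl = np.array([-5.0, -4.0, -3.0, -2.5, -2.0])  # log10(f/Hz) control points
dS_ctrl = np.array([0.1, 0.02, -0.01, 0.0, 0.05])      # PSD offsets from reference

spline = Akima1DInterpolator(log_f_ctrl, dS_ctrl)

def psd_model(f, S_reference):
    """Modeled PSD = geometric reference + smooth Akima-spline correction."""
    return S_reference + spline(np.log10(f))
```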
The current implementation of the split-merge proposal is naive, choosing to remove one source and replace it by two drawn from the same distribution as is used by the birth-death moves, or to remove two of the current sources and replace them by a single draw. In other words, the current split-merge proposal is really two birth moves and one death move, or two death moves and one birth move, respectively. Further development of more efficient split-merge proposals will be a critical area to improve the sampling. Finally, in the previous applications of the GBMCMC sampler the frequency segments were of equal bandwidth over the entire observing band. Because each segment was analyzed independently, the overall number of segments (and therefore nodes) needed for the analysis was not a limiting factor in its deployment. Within the GLASS architecture, where all of the processes are communicating via MPI, we need to be more parsimonious about the number of segments being analyzed. To that end we adopted an adaptive segment size depending on the source density. At low frequency where the source density is the highest the segments are more narrow, and at high frequency where the source density is low (and the signals have larger bandwidth) the segments are wider. The exact segmenting was fixed before the LDC2a-v2 analysis was started and kept the same for each epoch's analysis. A more efficient approach would be to use the previous epoch's catalog to dynamically determine where to best place the segment boundaries, both to keep the source density per segment near constant, and to avoid having loud signals near segment boundaries as an insurance policy, even though the "edge effects" of the segmenting are already ameliorated by GLASS's data sharing scheme between UCB segments.

### VBMCMC Updates

The verification binary sampler (VBMCMC) in GLASS is identical to GBMCMC but is run in a different configuration. Whereas GBMCMC is performing a blind search for UCBs in the LISA data, part of which includes a model selection step to determine if a candidate source in the data is detectable, VBMCMC is executing a targeted analysis of binaries which have already been identified as LISA sources by EM surveys [27]. The VBMCMC sampler therefore uses a fixed-dimension analysis with priors on the orbital period and sky location of the binaries derived from the EM observations. Because the sky localization from the EM observations is orders of magnitude more precise than LISA will ever achieve, the sky location parameters in VBMCMC are fixed to the EM-observed values, effectively using delta functions for the priors. The same is true for the orbital period of the binaries (converted to GW frequency for the VBMCMC model). While it is possible that some of the currently-known binaries' orbital period measurements are not as precise as what LISA will infer, the long temporal baseline of EM observations should effectively pin the orbital period for LISA observations, assuming continued effort to periodically monitor the known binaries prior to LISA's launch.
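The targeted VGB configuration just described reduces, in practice, to pinning a subset of parameters. A minimal sketch, with hypothetical parameter names and values standing in for a real EM-characterized binary:

```python
import numpy as np

# Sketch of delta-function priors for a targeted VGB analysis: sky location
# and GW frequency are pinned to EM-determined values, so the sampler only
# explores the remaining parameters. Names and values are illustrative.

FIXED = {"ecl_lat": 0.101, "ecl_lon": 2.43, "f_gw": 6.22e-3}  # from EM data

def log_prior(theta: dict) -> float:
    for name, value in FIXED.items():
        if theta[name] != value:
            return -np.inf   # delta-function prior: never move these
    return 0.0               # flat prior over the free parameters

# In practice the fixed parameters are simply excluded from the proposal,
# and a Gaussian prior on f_gw can replace the delta function when the EM
# period is less certain.
```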
For the known binaries where this is not the case, replacing the delta function prior on GW frequency with a Gaussian distribution whose width comes from the EM uncertainties is a trivial change. Using a fixed-dimension targeted search for the known binaries, instead of retroactively extracting the known binaries from the full UCB source catalog, allows for upper limits to be set on binary parameters (most notably the GW amplitude) in the event that some of the known binaries are below the detection threshold at the time of the global fit analysis, perhaps due to elevated levels of the astrophysical foreground (confusion) noise, or because they will require longer integration times with LISA before becoming detectable. The targeted VBMCMC analysis will also reduce contamination of known binaries from other loud sources at similar orbital periods, as many of the currently known binaries are at frequencies where the galactic source density is expected to be highest. Analysis of known binaries will be particularly vulnerable to source contamination early in the LISA mission, when the frequency resolution and integrated signal-to-noise levels are still improving, precisely when UCB observations may play an important role in the early phases of instrument characterization.

### MBHB Updates

The MBHB module uses elements of the LISA-Massive-Black-Hole-Binary pipeline originally developed for low-latency detection and parameter estimation of massive black hole mergers with LISA [21]. The pipeline starts with a pre-processing search phase that uses short stretches of data (typically a few weeks), treating the galactic foreground as a noise source, and using a maximized likelihood function to rapidly lock on to any massive black hole binaries. The search is repeated on each segment of data after subtracting the previously found signals until no additional sources above an S/N threshold are found. The rapid search is then followed with a full MCMC exploration of each source, taken one at a time, that refines the parameter estimates. These estimates are then used as the starting point for the MBHB analysis in GLASS. The same PTMCMC sampling routine is used in the global fit, but now with the noise model replaced by the spline model, and with the resolved UCBs subtracted. Another key difference is that the model components are updated in an alternating fashion, in contrast with the low-latency analysis where the noise model is fixed and the MBHB signals are analyzed sequentially. The MBHB block of parameters is updated as follows: When it comes time to update a particular MBHB, the sampler receives the current state of the residual constructed from the other model components. That is, the original data with the current state of the combined UCB and VGB models, as well as the other MBHB waveform models, subtracted. A key element of the MBHB sampler is the use of heterodyning to accelerate the likelihood calculations [19; 20; 21]. The heterodyne is computed using a reference waveform, in this case one based on the parameters of the MBHB model at the end of the last update, and the current state of the residual. The computational cost of setting up the heterodyne is equal to a few times the cost of a standard likelihood evaluation, so to be cost effective it makes sense to perform hundreds of iterations with the fast heterodyned likelihood before moving on to the next block in the sequence of updates.
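A toy, zeroth-order illustration of the heterodyning idea follows: since the ratio of a candidate waveform to the reference waveform is smooth in frequency, the likelihood can be evaluated from per-bin summary overlaps precomputed once. The actual implementation in [19; 20; 21] is considerably more sophisticated; this sketch (with constant ratio per bin, and up to overall normalization) only conveys the structure.

```python
import numpy as np

# Toy heterodyned likelihood: precompute per-bin overlaps of the data and
# the reference waveform, then evaluate log L from the smooth waveform
# ratio r = h(theta)/h_ref sampled once per coarse bin.

def precompute_summaries(d, h_ref, Sn, f, bin_edges):
    """Per-bin overlaps of data and reference against the reference."""
    A, B = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        k = (f >= lo) & (f < hi)
        A.append(np.sum(d[k] * np.conj(h_ref[k]) / Sn[k]))
        B.append(np.sum(np.abs(h_ref[k]) ** 2 / Sn[k]))
    return np.array(A), np.array(B)

def heterodyned_loglike(r_bin, A, B):
    """log L up to a constant, with r_bin = h(theta)/h_ref per bin."""
    d_h = np.sum(A * np.conj(r_bin)).real   # (d|h) from precomputed A
    h_h = np.sum(B * np.abs(r_bin) ** 2)    # (h|h) from precomputed B
    return d_h - 0.5 * h_h
```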
The current version of the MBHB sampler uses a customized implementation of the IMRPhenomD waveform model which describes the dominant \((2,2)\) harmonic for spin-aligned, quasi-circular binaries. In future versions of the sampler the waveform model will be generalized to include spin precession effects, sub-dominant waveform harmonics, and eventually, orbital eccentricity. Another limitation of the current implementation is that the dimension of the MBHB model is fixed to whatever was found by the low-latency search. The MBHB model also needs to be trans-dimensional, with the results from the search phase being used to propose adding or removing sources.

## IV Demonstration

Having described the overall GLASS architecture we now turn to a demonstration of the pipeline's performance on simulated LISA data. The test data set was produced by the LISA Data Challenge (LDC) team and is the first of the LDC data sets to contain a combination of different source types [28]. The following demonstration of GLASS's current capabilities uses the LDC2a-v2 data, which contain \(\sim 30\) million galactic UCBs, 37 VGBs, 15 MBHB mergers, and an unspecified instrument noise level. The LDC2a-v2 data span one year of LISA observation time assuming a 100% duty cycle. In reality there will be periodic and sporadic interruptions to the data taking, which will require further development of GLASS's noise model and likelihood functions. For each LDC simulation there are "blind" and "training" data sets, where the training data contain the list of signals (injections) used for the simulation. The blind data are simulated using a different realization of the same population that is found in the training data. For this demonstration the training data are used, enabling assessment of the pipeline performance through comparisons of the resulting source catalog to the injected signals. The LDC2a-v2 TDI channels are generated assuming an equal-arm interferometer with stationary instrument noise such that the TDI "A" and "E" channels are noise-orthogonal [14]. The GLASS noise model correspondingly uses an independent fit to the A and E instrument noise levels. This simplifying assumption will need to be relaxed for analysis of the observational data but will only affect the computational cost of the analysis by introducing correlations between TDI data streams which result in non-zero off-diagonal terms in the noise covariance matrix at each frequency. The resulting likelihood evaluations require more operations to compute (see Eq. 1) but will not affect the overall complexity of the global fit. Fig. 4 shows the amplitude spectral density (ASD) of the TDI A channel after the first six months of the LDC2a-v2 data. Through the remainder of the paper, the A channel will be used to visualize the data and/or signal models. The differences between the A and E channel data are subtle and not informative at this level, though they are crucial for the analysis to decompose the observed signal into the two GW polarization states. The data shown in the figure are colored over the intervals being analyzed by different model components. The noise model (light blue) covers the full frequency range. The MBHB model (magenta) spans a similar bandwidth, though there is additional padding for the noise model to ensure that it extends beyond where the MBHB signals are in band during the time they are observable. The UCB segments are shown in alternating light and dark purple bands.
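As a side note on the noise-orthogonal channels mentioned above, one common convention in the TDI literature builds them from the Michelson combinations X, Y, Z; we include it here as a reference for readers reproducing the channel definitions (our simulated data follow [14], which may differ in normalization):

```python
import numpy as np

# One common convention for the noise-orthogonal TDI combinations built
# from the Michelson channels X, Y, Z.

def aet(X, Y, Z):
    A = (Z - X) / np.sqrt(2.0)
    E = (X - 2.0 * Y + Z) / np.sqrt(6.0)
    T = (X + Y + Z) / np.sqrt(3.0)
    return A, E, T
```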
Though it is difficult to discern from the figure, especially due to the frequency axis being on a log scale, the width of the UCB segments is frequency-dependent, roughly tuned to use narrower segments where the source density is higher. Finally, the locations of the narrow-band segments for the targeted VGB analyses are shown in orange.

Figure 4: Amplitude spectral density (ASD) of TDI A channel data analyzed by each block of the sampler. The UCB segments are shown in alternating shades of purple. Note the changing bandwidth of the UCB segments, which are larger where the source density is lower. VGB segments are orange. The MBHB (magenta) and noise models (blue) cover the full analysis band, with the noise model extending to slightly higher and lower frequencies for margin. This example uses the first 6 months of the LDC2a-v2 data.

Throughout the remainder of the paper the same color scheme will be used to identify different model components: blue for noise, purple for UCBs, orange for VGBs, and magenta for MBHBs. For the headline demonstration of GLASS at work, Fig. 5 shows the reconstructed components of each part of the data model. The figure shows the same content as Fig. 1 but zoomed in to a narrow interval around 6 mHz containing one of the loudest currently known sources, HM Cnc [29], shown in orange. Here we can see all of the model components on display, with a densely-packed collection of UCBs in addition to HM Cnc all overlapping one another (purple), and the MBHB mergers sweeping through the band (magenta). The gray curve depicting the residual after all model components have been subtracted from the data is fit by the noise model shown in blue. Note that the uncertainty in the reconstructions is narrower than the line widths in the figure, as all of the sources in this interval have high signal-to-noise ratios (S/N). Broadening the aperture to the full analysis band of the demonstration, Fig. 6 shows the original data's ASD, which is dominated by the UCBs and is thus shown in purple. Removing the resolved UCBs (and VGBs) leaves the magenta residual containing a bump in the spectrum from the combined signals of the MBHBs. The final (light blue) residual is after all of the resolved GW signals in the fit are subtracted from the data. The remaining bump in the residual spanning \(\sim\)\(3\times 10^{-4}\) to \(\sim\)\(5\times 10^{-3}\) Hz is due to the foreground of unresolvable UCBs. As described above, the analysis is repeated on increasingly long epochs of the full data set, starting with the first 1.5 months of observations and going up to the full year of data. Analyses are conducted each time the data volume has doubled, resulting in analyses of 1.5, 3, 6, and 12 month segments. As the observing time increases the number of detectable signals grows. For UCBs, which are continuous sources, this is due to the sources building signal power over time and the improving frequency resolution of the data. The MBHBs are transient sources, so longer observation times provide more opportunity to catch a black hole merger in the act. Fig. 7 shows the number of candidate detections in the source catalogs for the UCB (purple, left) and MBHB (magenta, right) models as the observing time increases. The drop in the total number of UCB detection candidates between 1.5 and 3 months is unexpected.

Figure 5: Same as Fig. 1 but focused on a narrow frequency band near 6 mHz. The known binary in this segment of data (orange) is representative of how HM Cnc will appear in the LISA data.
Figure 6: ASD of the data including all signals (purple), after removing the fit to the resolvable UCBs leaving behind only the MBHBs (magenta), and then the final residual after all signals in the fit are removed (light blue).

Figure 7: Number of UCB (top) and MBHB (bottom) detections as a function of observation time. The UCB detection number is the number of candidates from the maximum _a posteriori_ model after clustering samples by waveform match and then selecting candidates with \(z>0.5\) (lightest shade), \(z>0.9\) (medium shade), and those that uniquely correspond to a source in the injected population with a match of \(m>0.9\). See Sec. IV.2 for a full explanation of the match.

However, the number of confident detections that are clearly associated with an injected signal increases, as will be described in Section IV.2. Time-dependent analyses of data with such short observing times will be particularly sensitive to the initial orientation of the spacecraft constellation relative to the galactic center, and to the modeling of the time-varying noise due to the galactic foreground [30]. While needing further study, our assessment is that the initial conditions of the LISA orbits and our admittedly incorrect assumption of stationary noise lead to this counter-intuitive result. Having summarized the GLASS performance on a data-wide level, we now take a detailed look at the performance of the individual model components by studying the properties of the recovered source catalogs and comparing them to the input populations.

### Noise Model

The instrument noise model parameters used in GLASS are not physically meaningful, and so the primary diagnostics for the performance of the sampler are functional tests. In each cycle of the GLASS blocked MH sampler, the PSD model is fitting the residual after the current state of the UCB, VGB, and MBHB models has been subtracted from the data. That residual includes the instrument noise as well as the unresolved galactic foreground, referred to in the LISA literature as "confusion noise," which is expected to be the dominant source of residual power between \(\sim\)10\({}^{-1}\) and \(\sim\)3 mHz. Exactly where the galactic foreground drops below the instrument noise depends on details of the galactic population of compact binaries [31], the performance of the LISA instrument, and the observation time [32; 33]. Fig. 8 shows the power spectrum of the 12 month data A channel (dark gray), and the residual after removal of a fair draw from the joint UCB+VGB+MBHB model (light gray). The colored lines are the PSD fits from the noise model for the 1.5, 6, and 12 month runs. The 3 month result is omitted for clarity. The black dashed line is the PSD used for simulating the instrument noise. In each of the PSD fits the spline model used \(\sim\)15 to \(\sim\)30 control points. The results clearly show how the prominent bump in the spectrum where the astrophysical foreground dominates initially grows with time across the band as the joint S/N of the galaxy increases, and then is slowly reduced as the UCB model is able to resolve more binaries, particularly at higher frequency. Outside of the interval dominated by the astrophysical foreground, the modeled PSD matches the simulated levels.
The 1.5 month PSD fit is truncated at low frequency because the bandwidth of the noise fit is dynamically set based on the signal content of the GLASS model, which does not include any MBHB merger signals in the first 1.5 months of the LDC2a-v2 data, alleviating the need to model the PSD below \(\sim\)0.3 mHz. A more quantitative assessment of the PSD model is possible by testing the whitened data \(\tilde{w}(f)\equiv\tilde{d}(f)/\sqrt{S_{n}(f)}\) where the tilde denotes a Fourier transform, \(d\) is the data, and \(S_{n}(f)\) is the PSD. The PSD is proportional to the frequency-dependent variance of the noise, and therefore the whitened data should be consistent with a zero mean unit variance normal distribution \(N[0,1]\). Fig. 9 shows histograms of the combined real and imaginary components of the Fourier transformed whitened residuals for the 1.5 (left, purple), 6 (middle, orange) and 12 (right, pink) month data. Displayed above each panel is the mean and standard deviation of the whitened residuals, in agreement with the expected results. While the noise model passes the tests presented here, we know that the model is incomplete and demands further development to meet the challenges of the real observing data. The primary limitation of the current model is the implicit assumption that the noise is stationary. In practice the LISA noise will have a time-varying PSD due to secular and random fluctuations of the instrument performance, as well as the cyclostationary modulations of the galactic foreground imparted by LISA's orbit [34; 30]. Generalizing the noise model using time-frequency methods [35] is an immediate priority for future development of GLASS, and has ripple effects through the rest of the model components.

### UCB Catalog

The UCB catalog contains \(\sim\)2000 candidate detections after the first 1.5 months of observing, climbing to \(\sim\)8500 by the end of the 1 year LDC2a-v2 data set.

Figure 8: Median inferred noise PSD for the 1.5 (purple), 6 (green), and 12 (orange) month observing times. The black dashed line is the true PSD used when simulating the data. For reference, the dark gray is the power spectrum of the 12 month data, and the light gray is the residual after removal of the UCB, VGB, and MBHB models. The difference between the inferred and true PSD between \(\sim 2\times 10^{-4}\) and \(\sim 6\times 10^{-3}\) Hz is due to the unresolved galactic foreground, or "confusion noise."
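Before turning to the catalog, the whitened-residual test described above can be written compactly. This is a minimal sketch; depending on the Fourier and PSD normalization conventions, an overall constant may need to be absorbed for the samples to be exactly unit variance.

```python
import numpy as np

# Sketch of the whitened-residual diagnostic: if the PSD model is correct,
# w(f) = d(f) / sqrt(Sn(f)) should be N[0, 1] distributed (up to
# normalization conventions). d_f and Sn are illustrative arrays.

def whiten_and_test(d_f: np.ndarray, Sn: np.ndarray):
    w = d_f / np.sqrt(Sn)
    samples = np.concatenate([w.real, w.imag])
    return samples.mean(), samples.std()   # expect ~0 and ~1
```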
The posterior samples from that model are clustered into discrete catalog entries using the _match_\(m\equiv(h_{i}|h_{j})/\sqrt{(h_{i}|h_{i})(h_{j}|h_{j})}\) between the waveforms computed from the chain samples at step \(i\) and \(j\) where \((\cdot|\cdot)\) is the standard noise-weighted inner product. The threshold for considering a chain sample as a member of a cluster is \(m>0.8\). The fraction of the total number of steps in the chain that have a sample in a particular entry is interpreted as a detection confidence \(z\). The threshold for inclusion in the final catalog is \(z>0.5\) i.e., that a catalog entry has a sample in more than half of the total number of chain steps in the MAP model. This is not a strict criteria, roughly equating to sources with a Bayesian odds ratio \(>1\) being included in the model. We therefore use an additional threshold of \(z>0.9\) for catalog sources to be considered confident detections. Neither the match requirement for inclusion as a sample belonging to a catalog entry, or the fraction of samples from the chain that an entry contains, are extensively tested or optimized through large scale injection studies. Such critical work must be thoroughly undertaken in advance of using GLASS or anything like it for production analyses. As an example of the content contained for a single UCB, Fig. 11 shows a set of marginalized 2-dimensional posterior distributions for a high S/N binary found near 10 mHz. UCBs in the GLASS catalog are identified by their median frequency, so in the 12 month catalog this binary was labeled LDC0100498745. See [16] for a dis Figure 11: LDC0100498745 parameter estimation over time. Green, orange, and pink contours correspond to the measurement after 3, 6, and 12 months of observing with LISA. The contours mark the 1 and 2\(\sigma\) credible intervals. The sampling parameters from GBCMCM are re-parameterized into orbital period and derivative \((P,\dot{P})\) [top left]; amplitude and inclination \((\mathcal{A},\iota)\) [top right]; galactic latitude and longitude \((l,b)\) [bottom left]; chirp mass and luminosity distance \((\mathcal{M},D_{L})\) [bottom right]. The black dashed lines show the injected parameter values. Figure 10: Scatter plot of frequency \(f\) and GW amplitude \(\mathcal{A}\) of UCBs in the 12 month source catalog. The black line is an example LISA sensitivity curve. Figure 9: Distribution of the whitened data \(\tilde{w}(f)=\tilde{d}(f)/\sqrt{S_{\mathrm{n}}(f)}\) for the 1 month [left], 6 month [middle] and 12 month [right] whitened residual data. If the PSD model is functioning correctly the whitened data should be distributed as a zero mean unit variance Gaussian. The mean and standard deviation computed from the whitened data are printed above each panel. cussion and demonstration of how UCB candidates are traced through versions of the source catalogs from earlier analysis epochs (i.e. tracking how this particular source was labeled in the 6 month catalog, etc.). Shaded contours are the 1 and 2\(\sigma\) credible intervals, and the colors correspond to observing time with blue green, orange, and pink representing 3, 6, and 12 months respectively. As with the color-coded source types, throughout the paper these colors (along with light-purple for 1 month) will consistently represent the observing times in subsequent figures. The posteriors are represented in a different parameterization than is used in the sampling, to better match the observables customarily used by the EM observing community. 
As an example of the content contained for a single UCB, Fig. 11 shows a set of marginalized 2-dimensional posterior distributions for a high S/N binary found near 10 mHz. UCBs in the GLASS catalog are identified by their median frequency, so in the 12 month catalog this binary was labeled LDC0100498745. See [16] for a discussion and demonstration of how UCB candidates are traced through versions of the source catalogs from earlier analysis epochs (i.e. tracking how this particular source was labeled in the 6 month catalog, etc.). Shaded contours are the 1 and 2\(\sigma\) credible intervals, and the colors correspond to observing time with green, orange, and pink representing 3, 6, and 12 months respectively. As with the color-coded source types, throughout the paper these colors (along with light-purple for 1.5 months) will consistently represent the observing times in subsequent figures. The posteriors are represented in a different parameterization than is used in the sampling, to better match the observables customarily used by the EM observing community.

Figure 11: LDC0100498745 parameter estimation over time. Green, orange, and pink contours correspond to the measurement after 3, 6, and 12 months of observing with LISA. The contours mark the 1 and 2\(\sigma\) credible intervals. The sampling parameters from GBMCMC are re-parameterized into orbital period and derivative \((P,\dot{P})\) [top left]; amplitude and inclination \((\mathcal{A},\iota)\) [top right]; galactic longitude and latitude \((l,b)\) [bottom left]; chirp mass and luminosity distance \((\mathcal{M},D_{L})\) [bottom right]. The black dashed lines show the injected parameter values.

The top left panel shows the orbital period \(P\) and first derivative \(\dot{P}\) of the binary system. Note how the measurement precision of \(\dot{P}\) increases more rapidly than other parameters. This is because the orbital evolution of the binary enters the phase as a \(T^{2}\)-dependent term, so the information accumulates more rapidly than the typical \(\sqrt{T}\) scaling due to the increasing S/N of a continuous source. The top right panel shows the gravitational wave amplitude and the binary's orbital inclination in degrees. The bottom left panel is the sky location in galactic coordinates, with \(l\) the galactic longitude and \(b\) the galactic latitude. The bottom right panel shows the chirp mass \(\mathcal{M}\) and luminosity distance \(D_{L}\) parameters derived from the GW observables assuming the orbital evolution is purely driven by emission of gravitational waves. The horizontal and vertical lines mark the parameter values for the injected signal, i.e. the "right answer" when we have the luxury of knowing the true source parameters. One intriguing opportunity afforded by the LISA UCB catalog is to map the Milky Way's stellar remnant population. Fig. 12 shows the map of the UCB sky in galactic coordinates after 1.5, 3, 6, and 12 months from top left to bottom right. The maps are constructed by combining the posterior samples from all of the sources in the UCB catalog. After only 1.5 months of observing, large scale galactic features like the bulge and disk are evident in the maps. The resolution of the image continues to improve as the observing time increases, revealing a remarkably clear view of the galactic disk and bulge with hundreds of well-localized sources. The quality of the image will steadily improve over the LISA mission lifetime, and will include distance information from the chirping binaries, enabling three-dimensional inferences on the spatial distribution of binaries throughout the galaxy [36]. With the benefit of knowing the input source population, the observed UCB catalog is compared to the injected binaries to study the detection efficiency of the analysis. The primary metric for assessing the quality of the inferred source catalog is the maximum match \(m\) between the waveform computed from the point estimate source parameters in the catalog and the waveforms from the injected population. For computational efficiency the match is only computed between the catalog waveform and an injected waveform if their frequency parameters are within 10 frequency bins of one another. Fig. 13 shows the distribution of the match values between the sources in the catalog and those in the input population. For inclusion in the sample we select catalog candidates with detection confidence \(z>0.9\) instead of the \(z>0.5\) criterion for inclusion in the catalog. The top panel shows the cumulative distribution function of the matches. The vertical axis is thus interpreted as the fraction of sources in the catalog with match below \(m\). The bottom panel is the un-normalized survival function, i.e. the total number of sources in the catalog with match greater than \(m\). If we consider \(m>0.9\) as a criterion for an unambiguous mapping between a source in the catalog and an injection then \(\sim\)75% (\(\sim\)80%) of the confidently identified (\(z>0.9\)) binaries in the 12 (6) month catalog exceed the criterion, equating to \(\sim\)5000 (\(\sim\)4500) sources.
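The Fig. 13-style summary curves are straightforward to reproduce from a list of point-estimate match values; a small sketch (the beta-distributed stand-in data is an assumption, used only to make the snippet runnable) might look like:

```python
# Build CDF and un-normalized survival curves from point-estimate matches.
import numpy as np

def cdf_and_survival(matches):
    m = np.sort(np.asarray(matches))
    cdf = np.arange(1, m.size + 1) / m.size        # fraction with match <= m
    survival = m.size - np.arange(1, m.size + 1)   # number with match > m
    return m, cdf, survival

matches = np.random.default_rng(1).beta(9, 1, size=5000)  # stand-in values
m, cdf, surv = cdf_and_survival(matches)
print(f"{np.mean(m > 0.9):.0%} of sources exceed the m > 0.9 criterion")
```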
The most striking feature in the results is a population of sources with matches below \(\sim\)0.9 that emerges in the 12 month catalog. Previous analyses of simulated galaxies in LISA data have not shown a similarly-populated low match tail [25; 26], instead finding \(\gtrsim\)90% of sources with \(m>0.9\). The comparatively high rate of low-match sources in the GLASS UCB catalog demands further investigation and discussion. To begin, compare the matches over different projections of the full parameter space shown in Fig. 14, which displays the frequency-amplitude plane on the left and the sky location on the right. Here the sky location is displayed using the sampling parameters (ecliptic coordinates) rather than galactic coordinates as in Fig. 12. The source model in GBMCMC is parameterized using ecliptic coordinates to minimize covariances between parameters, and thereby make it easier to sample the posterior. In this figure each point in the scatter plot is colored by the waveform _mismatch_ defined as \(1-m\), and the color map uses a log scale. Thus cool colors are good matches, and warm colors are poor matches. The majority of low match sources are found in the frequency interval between 1 and 6 mHz where the unresolved UCBs are the dominant source of noise. One significant difference between the GLASS analysis and previous UCB searches is the global spline noise model. In references [25; 26] each narrow-band frequency segment independently modeled the noise level, effectively using an \(\mathcal{O}(10^{3})\)-parameter piece-wise (and discontinuous) fit to the instrument noise. The GLASS noise model is constrained to be a continuous function, and uses model selection to determine the most parsimonious number of parameters, naturally making the noise model less flexible. It is plausible that the differences between a fixed (and high dimensional) piece-wise fit and the parsimonious spline model have a larger effect in the foreground-dominated part of the spectrum where there is stronger coupling between the UCB and noise models. Another, and perhaps more impactful, difference is the prior used for the GW source parameters in the LDC2a-v2 analysis compared to previous demonstrations. Previous incarnations of the GBMCMC sampler, and its ancestors, have used a prior on the sky location parameters derived by assuming the sources followed the spatial distribution of the galaxy. While that prior is still an option in GLASS, it was intentionally not used for the LDC2a-v2 analysis in favor of a uniform prior on the sky location parameters. The choice to not use a galaxy prior was motivated by the idea of eventually performing a hierarchical analysis where the posterior samples of the binaries in the catalog are used to constrain models for the spatial distribution of sources in the galaxy [36]. Hierarchical analyses are complicated by non-trivial priors on the posterior samples and so the choice was made to produce samples for the UCBs in the most accessible form possible. The expectation was that this choice would produce larger uncertainties in the position reconstruction of individual sources, but an unintended consequence is the effect it had on the detection efficiency. The right hand side of Fig. 14 clearly shows sources with high mismatch are preferentially located outside of the galactic plane, whereas high match sources follow the expected "U" shape of the galaxy in ecliptic coordinates.
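A Fig. 14-style diagnostic is simple to reproduce with a log-scaled color map of the mismatch; the sketch below uses randomly generated stand-in values for the catalog point estimates (all arrays are assumptions for illustration):

```python
# Scatter catalog point estimates colored by log-scaled waveform mismatch.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm

rng = np.random.default_rng(2)
f = 10 ** rng.uniform(-3.5, -2.0, 500)     # stand-in frequencies [Hz]
amp = 10 ** rng.uniform(-23, -21, 500)     # stand-in GW amplitudes
mismatch = 10 ** rng.uniform(-4, 0, 500)   # stand-in 1 - m values

plt.scatter(f, amp, c=mismatch, cmap="coolwarm", norm=LogNorm(1e-4, 1))
plt.xscale("log"); plt.yscale("log")
plt.xlabel("frequency [Hz]"); plt.ylabel("amplitude")
plt.colorbar(label="mismatch (1 - m)")
plt.show()
```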
Mismatching sources in the catalog generally arise from two circumstances: Either a source template is fitting to blended contributions from multiple injections or multiple source templates are being used to fit a single injection. The former (one-to-many) can be the most parsimonious solution (having the highest Bayesian evidence), or it could be due to modeling issues either from the noise or sky location prior, while the latter (many-to-one) is clearly a problem with sampler convergence. In the regime where a few templates are fitting a larger number of signals, clear attribution for why this was the preferred configuration of the model is difficult to assign. However, from a strict model selection point of view, if fewer templates can adequately fit features in the data, i.e. a larger combination of sources, the parsimonious solution is favored. Additional information, such as a prior that favors sources in the galactic plane, is needed to help further disentangle the overlapping sources. If the sky location prior and/or noise model were the predominant cause of the poor matches, why would it only appear in the 12 month analysis? One possible explanation is that the recovered source catalogs from shorter integration times are going to be dominated by the loudest, most isolated, binaries in the population, and as the observing time increases the search is able to fit features in the data with lower intrinsic GW amplitude where the injected source density increases.

Figure 12: Maps of the source sky locations in galactic coordinates from the UCB catalogs after 1.5 [top left], 3 [top right], 6 [bottom left] and 12 [bottom right] months of observing, showing the increasingly clear reconstruction of the Milky Way disk and bulge structures.

Additionally, the uniform prior on the sky location of binaries is a poorer approximation for longer observation times. With short-duration observations the LISA UCB catalog will be dominated by nearby sources, especially at lower frequency, where a uniform distribution on the sky, while still not accurate, is closer to the observed population than at later times when the UCB catalog will sample the full galaxy. The issues of source confusion and the dependence on priors for what constitutes a "detection" are many, nuanced, and require dedicated study. Another possibility is that the jump from 6 months to 12 months was too big a step for the Gaussian mixture model proposal, which uses the posteriors from the shorter time span analysis as a proposal for the longer time span analysis.
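To illustrate the idea behind such a proposal (a sketch of the general technique, not the GBMCMC implementation; function names and the BIC-based order selection are our assumptions), one can fit a mixture model to the posterior samples from the shorter-epoch analysis and use it as a global, Hastings-corrected proposal in the longer-epoch run:

```python
# Gaussian mixture model proposal built from earlier-epoch posterior samples.
import numpy as np
from sklearn.mixture import GaussianMixture

def build_proposal(previous_samples, max_components=8):
    """Fit a GMM to posterior samples from the earlier epoch, choosing the
    number of components by BIC."""
    fits = [GaussianMixture(n_components=k, random_state=0).fit(previous_samples)
            for k in range(1, max_components + 1)]
    return min(fits, key=lambda g: g.bic(previous_samples))

def propose(gmm):
    """Draw a global proposal from the mixture."""
    x, _ = gmm.sample(1)
    return x[0]

def log_q(gmm, x):
    """Log proposal density, needed for the Metropolis-Hastings ratio
    alpha = [p(x') q(x)] / [p(x) q(x')]."""
    return gmm.score_samples(np.atleast_2d(x))[0]
```

If the 6-to-12 month jump moves the posterior far from the earlier-epoch fit, the mixture assigns low density to the new mode and the acceptance rate of these global proposals collapses, consistent with the convergence problems discussed above.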
Fig. 15 shows a pair of examples suspected of exhibiting the parsimony "failure" mode. Both show results from a subset of a single UCB analysis segment from the 12 month data roughly bracketing the frequency range where the low match sources are most prevalent. The top panels show the ASD of the simulated data after the MBHB signals have been removed (gray), the injected UCB model (black), and the posterior distribution for the recovered UCB model (purple). The middle panels show the injected source parameters in the frequency-amplitude plane (open circles), the point estimate parameters from the UCB catalog entries (filled purple circles), unfiltered chain samples from GBMCMC (light purple), and the posterior samples colored differently for each individual source in the catalog. The bottom panel shows the match for each posterior sample in the same colors as the panel above, and open circles for the point estimate match used to construct the curves in Fig. 13. In these examples the catalog entries with poor matches are typically found in parts of the frequency segment where there are numerous unresolved injections. The posterior samples do not obviously favor one injected value over the other, yet the overall fit follows the injected signal (top panel). An example of the overfitting problem, where multiple templates are fitting one source, is shown in Fig. 16 using the same format as Fig. 15. In this segment there are six densely-packed injections, two of which are well-fit in the catalog. The remaining four were fit by 13 templates in the 12 month analysis: a clear convergence error. Examples like Figs. 15 and 16 will drive future development of the GBMCMC pipeline. To test our conjecture about the root cause of the high rate of low match sources in the catalog, we reanalyzed six UCB segments from the LDC2a-v2 data with the MBHB injections removed, and used the GBMCMC sampler alone. The frequency segments were chosen to be evenly spaced between 1 and 6 mHz to cover the regime where the low match catalog entries were most common. Three different configurations were used to measure the effect of the different suspects for the low match population. The first uses proposals built from the 6 month GLASS run, and the flat sky prior, just like the production LDC2a-v2 results. The only difference between the first example and the global analysis (ignoring covariances with the MBHB model) is the noise model, which in GBMCMC is parameterized as a constant over the frequency interval of the segment. Assuming a constant PSD over each analysis segment effectively amounts to a _significantly_ more flexible noise model compared to what is used in GLASS. The second configuration is the same as the first, but with proposals built from a 9 month GBMCMC analysis of the segments to test whether the low match sources were due to convergence problems stemming from the time steps between analyses being too large, reducing the efficiency of the Gaussian mixture model proposals. The final configuration reverts to the 6 month proposals but includes the galaxy prior as described in [16]. Fig. 17 shows the same type of results as Fig. 13 but only for the six test segments, and comparing the different test configurations with the global analysis. Of the different configurations tested, using the constant PSD model slightly improves the purity of the catalog, but similarly reduces the total number of detections in the catalog. Including an intermediate 9 month analysis results in a higher number of detections and an improved match distribution, meaning that the overall convergence of the UCB model is heavily dependent on the efficiency of the proposals. Including the galaxy prior has the largest influence on the results, both improving the catalog purity _as well as_ the total number of detections. While a full-scale study is needed to conclusively assess the performance of the different configurations, these results suggest that using a model for the galaxy in the analysis and shorter time steps between global fit processing will improve the quality of the UCB source catalog. The sensitivity to the galaxy prior also reinforces the fact that modeling choices have a significant impact on the resulting inferences, especially near the detection threshold.

Figure 13: Top: Cumulative distribution function of matches. Bottom: Un-normalized survival function of the match. Purple, green, orange, and pink curves are for 1.5, 3, 6, and 12 month observing times respectively. Results are from confident (\(z>0.9\)) detections from the source catalog.
### VGB Catalog

Analysis and interpretation of the VGB catalog is more straightforward due to the model using informative priors derived from EM observations. Because the sampler is assuming a single source at known orbital period and sky location, the most relevant parameters to be measured by LISA are the GW amplitude and binary inclination. In the event that the LISA observation time is not long enough for the source to be "detectable," the posterior samples are useful for setting inclination-dependent upper limits on the amplitude, and therefore the combined chirp mass of, and distance to, the binary.

Figure 14: Scatter plot of point estimates for candidate sources in the UCB catalog after 12 months of observing colored by the minimum mismatch between the catalog waveform and the injected waveforms. Left: GW frequency-amplitude plane showing that most of the high mismatch sources are in the confusion noise regime below \(\sim\)6 mHz. Right: Sky location parameters in ecliptic coordinates, revealing that the high mismatch sources are preferentially located out of the galactic plane.

Figure 15: Investigation of possible causes for the low match population in the UCB catalog. Left and right panels focus on different frequency intervals towards the low (\(\sim\)1.8 mHz) and high (\(\sim\)4 mHz) end of the region where the false alarms were most frequent. The top panel shows the ASD of the LDC2a-v2 data with MBHBs removed (gray), the input UCB signal (black) and the joint posterior for the reconstruction from the GBMCMC sampler (purple). The middle panels are a scatter plot of the UCB frequency-amplitude parameters for the injections (open circles), point estimate catalog entries (filled circles), chain samples (light purple dots), and posterior samples for the individual sources (multi-color dots). The bottom panel shows the maximum match between each posterior sample and an injected waveform (same colors as middle panel) and the match from the catalog point estimate used in summary plots like Fig. 13. The low-match sources are consistent with fitting combinations of injections.

Fig. 18 shows four representative examples from the full set of known binaries comparing inferences between 3, 6, and 12 month observations (green, orange, and magenta). The two dimensional posteriors present the 1 and 2\(\sigma\) contours, and the dashed black lines mark the injected parameter values. The top two panels show results for AM CVn and HM Cnc, two of the canonical known binaries that are identifiable early in the LISA observations. The bottom left panel is for the ultra compact X-ray binary UCXB 4U1820-30, which transitions from a regime where upper limits are set after 3 months of observing, to a constraint in the 6 and 12 month catalogs, indicated by the change from open contours in green to closed contours in orange and magenta. Finally, source CX0GBS J1751 remains undetectable by GLASS after 12 months of observing, but note that the upper limit inferred for the amplitude decreases over time.

### MBHB Catalog

The final parts of the GLASS analysis to investigate are the MBHB results. The most interesting single example is the first MBHB to appear in the LDC2a-v2 data, which merges during the second month of the simulated observations. The simulated source also happens to be one of the highest S/N binaries in the population and is observable in three of the different analysis epochs used for this demonstration of GLASS.

Figure 16: Same as Fig. 15 for an example where multiple templates were fitting a smaller number of injections, i.e. a clear example of a convergence failure for GBMCMC.
Figure 17: Same as Fig. 13 but for different test configurations of GBMCMC on six frequency segments between 1 and 6 mHz to explore possible causes for the high fraction of low match sources in the GLASS UCB catalog. The dark blue curves are the GLASS results for the test segments. Light blue uses the flat sky prior but a constant PSD model. Dark orange is the same configuration as light blue but includes proposals generated from a 9 month run. Light orange uses the constant PSD model and a prior that prefers sources to be located in the galactic plane. Results are from confident (\(z>0.9\)) detections. In this limited test, using the galaxy prior improves both the catalog purity and the number of high-match sources in the catalog and shows that smaller time steps between global fit runs are advantageous.

Figure 18: Variety of results from targeted analysis of known binaries in the LDC2a-v2 data showing measurement of the GW amplitude and binary inclination as a function of observing time, comparing 3 (green), 6 (orange), and 12 (magenta) month observations. (a) AM CVn is a straightforward example of a strong LISA source properly identified early in the observing campaign with inferences steadily improving over time. (b) HM Cnc shows similar behavior as AM CVn but at higher S/N. (c) UCXB 4U1820-30 shows how a binary will transition from being undetected, with the analysis providing upper limits on the amplitude (green), to a point where the binary parameters will be constrained (orange, magenta). (d) CX0GBS J1751 is an example of improving upper limits with observation time. This binary will require longer observing times to be constrained.

Fig. 19 shows the posterior distribution function for the intrinsic source parameters: \(m_{1}\) and \(m_{2}\) are the masses of the black holes in the binary while \(\chi_{1}\) and \(\chi_{2}\) are their respective dimensionless spin parameters. Recall that the data and MBHB model both currently assume the binaries have BH spin aligned with the orbital angular momentum vector, an assumption that is not valid in nature but made out of convenience at this stage of development for simulations and pipelines. As with other examples, the color indicates observing time and the dashed lines mark the injection values. What is remarkable about Fig. 19 is that the posteriors continue to change over the observing time even though the binary merged in month 2 of the simulated data. The subtle reduction in the width of the posteriors is due to the improved foreground subtraction from the UCB model, which effectively lowers the noise level, and thereby increases the S/N, of the MBHB mergers. Because of the global nature of LISA analysis, inferences from transient sources will continue to improve long after they have left the measurement band.
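A quick way to see why inferences sharpen after merger is through the definition of the S/N, written with the same noise-weighted inner product used for the UCB matches:

\[\rho^{2}=(h|h)=4\int_{0}^{\infty}\frac{|\tilde{h}(f)|^{2}}{S_{n}(f)}\,df,\]

so as the UCB model resolves more binaries and the effective \(S_{n}(f)\) in the confusion band drops, \(\rho\) of an already-merged MBHB increases when the full data set is reanalyzed, and the posteriors narrow accordingly.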
Moving on to the full MBHB population, Fig. 20 shows the posterior distribution function for the mass parameters from each of the 15 MBHBs injected into, and recovered from, the LDC2a-v2 data. The MBHBs are labeled in the GLASS catalog by the merger time (in seconds) relative to the start of observations. The input population covers a wide range of masses and the posteriors are generally well-constrained due to the large number of in-spiral cycles combined with the strength of the merger signal. To compare against the true values from the simulations, the right-hand panel shows the same posteriors but shifted by the injected mass values, so the point (0,0) marks the truth. For visibility, only the \(1\sigma\) contours are shown on the right hand side. All but one of the binaries contain the true value inside the \(2\sigma\) contour. The one source whose injected value is outside of the bulk of the GLASS posterior is the lowest mass binary in the input population. The same bias is seen in other prototype analyses and is suspected of being the result of artifacts in the LDC2a-v2 data from the waveform simulation process [37]. For the final demonstration of the MBHB catalog, Fig. 21 displays the sky map for each MBHB source in the catalog, in order of merger time from top left to bottom right, after the full 12 month analysis. The variety found in the MBHB sky maps is the result of the relative importance for each event of the three different "channels" of localization information for these signals. The first channel is through the GW phase, which is frequency-modulated by the orbital motion of the spacecraft. The second channel of localization information comes from the relative arrival time of the GW signal's wave-fronts at the different spacecraft. The third, and least informative, is from the non-uniform detector response over the sky, which is encoded in the relative amplitudes of the GW signal in the different TDI channels. Sky maps that contain many modes of high probability are typically from higher mass binaries that are shorter lived in the LISA data, missing out on the Doppler modulations induced by the orbital motion of the detector and not reaching high enough frequency to differentiate the signal arrival times at the different spacecraft. Lower mass MBHB signals get the best of both worlds, as they are in the measurement band for long enough to have clearly detectable frequency modulation and they reach short enough wavelengths to benefit from the time of arrival measurements. Note that even for the high mass and short-lived binaries additional information from spin precession and, more importantly, higher harmonics of the waveform will help break degeneracies [38]. See Ref. [39] for a more comprehensive discussion and demonstration of MBHB parameter estimation with LISA.

## V Discussion and Future Work

The global analysis demonstrated here is one important step towards a fully functioning pipeline ready for LISA observational data, but there is still a long way to go. Obviously missing are the other anticipated source types, though the GLASS architecture is designed to seamlessly accommodate additional modules. Extending to other source types is generally expected to be a small perturbation relative to the overall scale of the analysis, which is set by the UCBs. Another obvious direction of development is to reduce the overall computational cost of the analysis. The current version of GLASS used \(\mathcal{O}(10^{3})\) CPUs for \(\mathcal{O}(5)\) days to process the 12 month data, and those processing times will increase roughly linearly as the observation time grows.

Figure 19: Marginalized posterior distributions for the mass \(m\) and dimensionless spin \(\chi\) parameters of the first MBHB to merge in the LDC2a-v2 data, during the second month of the simulated data. The dashed lines mark the parameter values for the simulated signal. Note the parameter estimation improves after the signal has left the LISA band because the galactic foreground decreases as the UCB model resolves more binaries.
The current algorithm will become uncomfortably expensive for multi-year data sets. Reducing the computational cost is of crucial importance for further development because the main source of stress LISA data places on the analysis methods is its scale. Optimal development of the global analysis requires frequent processing of full-scale data sets. The two prongs for reducing the computational cost of the analysis are lowering the cost of each likelihood evaluation with accelerated computational techniques (e.g. Ref. [40]), and reducing the total number of likelihood evaluations by developing a more efficient sampling algorithm through further development of data- and domain-driven proposal distributions. Beyond implementation improvements, there are a number of model assumptions, reflected in the simplicity of the likelihood function, that will need to be relaxed to properly handle observational data. Generally speaking, it is the assumption of stationary noise that is most problematic, though there are different sources of non-stationarity that each deserve their own strategy. As mentioned earlier, there will be periodic (due to the galactic foreground) and secular (due to the instrument) changes in the instrument noise level, which introduce non-zero off-diagonal elements of the frequency-domain noise covariance matrix. We will mitigate these effects by moving away from conducting the analysis in the Fourier domain, in favor of a discrete wavelet basis, which still yields a diagonal noise covariance matrix if the noise is stationary on timescales longer than the duration of the wavelet basis functions. Descriptions of waveform and noise models in the wavelet domain are found in Refs. [30; 35; 41] and are scheduled to be integrated into GLASS. In the wavelet domain the noise model will be a function of both time and frequency to track the slow drift in the instrument and foreground noise levels. The time-frequency approach renders the heterodyning currently used for both the UCB and MBHB likelihood calculations redundant. Wavelet decompositions incorporate a natural compression of GW signals since the likelihood integral only changes along the signal track \(f(t)\), which has length \(\sim\sqrt{N}\) for a data set with \(N\) data points [35; 41]. Wavelet domain likelihoods are typically faster than their heterodyned analogs without requiring a reference waveform or any pre-computation step. Faster likelihood functions allow for more rapid convergence and better mixing between blocks of the MH sampler for the same computational cost. The wavelet domain is also better suited to handling data gaps than frequency domain analyses. In the wavelet domain the basis functions are finite duration with built-in window functions that naturally suppress spectral leakage caused by gaps in the data. Fourier methods require additional data conditioning to deal with gaps, either through windowing or data augmentation [42].
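As an illustration of why the wavelet-domain likelihood described above is cheap, consider the following sketch under the stated assumption of a diagonal noise covariance (the array names and mask construction are our assumptions, not a planned GLASS interface):

```python
# Wavelet-domain log-likelihood restricted to the signal track: only the
# ~sqrt(N) time-frequency pixels the template occupies contribute.
import numpy as np

def loglike_track(data_wdm, template_wdm, psd_wdm, track):
    """data_wdm, template_wdm, psd_wdm: 2-D (time, frequency) wavelet arrays;
    track: boolean mask of pixels along f(t) where the template is
    non-negligible."""
    r = data_wdm[track] - template_wdm[track]
    return -0.5 * np.sum(r**2 / psd_wdm[track])
```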
Another modeling limitation of the current analysis is that it treats the unresolved galactic signals as _noise_, when in reality they are better described as a cyclo-stationary stochastic _signal_. Going forward we will explore using a physically parameterized instrument noise model [43], while treating the unresolved galactic binaries as a separate, time-varying, stochastic signal [30; 43]. Such a treatment requires that we include the (approximately, at low frequency) noise-only TDI "T" channel, and not just the A and E channels as is done in the current version. Modeling of the galactic confusion will be further improved by coupling the resolved UCB population in the global fit to a physical model of the foreground via parameterized priors for the spatial distribution of binaries [36] and the overall number density of sources as a function of frequency. A source of non-stationary noise not well suited for modeling with the power spectral density (or time-frequency equivalent) is short duration noise transients, or "glitches." The path to incorporating a glitch model (and, by corollary, a model for generic GW transients) into GLASS has already been paved by similar work done for LIGO-Virgo data [8] and theoretical demonstrations using simulated LISA data [44]. There is already LDC data that contain a simulated glitch population informed by the LISA _Pathfinder_ observations [45], and the incorporation of a transient noise module into the GLASS framework is a near-term priority.

Figure 20: Mass posteriors for the entire observed MBHB population in the 12 month LDC2a-v2 data. Left: 1 and 2\(\sigma\) contours for the inferred masses. Right: 1\(\sigma\) contours for the binaries shifted by the true values for each simulated source, such that (0,0) marks the injected parameter location.

One final currently planned development direction for GLASS is in the instrument model itself, enabling the global analysis to start with lower level data products than the TDI channels currently used. A data-driven approach to perform self-calibration coupled with the global fit naturally propagates uncertainties at each stage of the signal processing chain to the astrophysical inferences made with the GW signals. Such capabilities may prove to be important for science investigations where control of systematic errors is vital, the most obvious example of which would be testing the nature of gravity with high S/N MBHBs or EMRIs. Already-demonstrated examples of self-calibration methods for LISA include employing UCBs as phase standards [46] and using the phasemeter data to infer the light travel time between spacecraft for cancellation of laser frequency noise and construction of the TDI interferometer combinations [47, 48]. Another possible capability to explore is the removal of noise in the inter- and intra-spacecraft interferometer measurements caused by angular jitter of the test masses masquerading as distance fluctuations, the so-called "tilt to length coupling" inherent in the LISA measurement [49]. An instrument module in GLASS is valuable for quantitatively understanding and, if necessary, mitigating the effect of calibration uncertainties on astrophysical inferences.

Figure 21: Marginalized distributions of MBHB sky location in ecliptic coordinates. The sky maps are ordered by merger time from top left to bottom right. The different morphologies of the sky maps are due to the different maximum frequency and duration of the signals, with lower mass binaries reaching high frequency and spending more time in band leading to precise sky localization. Degeneracies in the sky location improve when higher harmonics are included in the waveforms.

As is clear from the long list of future work, the GLASS architecture described in this paper does not represent a finished design but instead is the scaffolding upon which further development will be built.
Nevertheless, our demonstrated results are an important waypoint on the path towards a fully functional pipeline ready for LISA observational data.

## VI Acknowledgements

_Software:_ Results presented here used v2.0 of ldasoft, a public C library which includes the noise, UCB, VGB, and global fit samplers. The MBHB sampler is managed independently at LISA-Massive-Black-Hole. Postprocessing and visualization tools for the source catalogs are available in the python package lisacattools, which in turn depends on numpy [50], pandas [51, 52], matplotlib [53], astropy [54], seaborn [55], and ChainConsumer [56]. The authors thank K. Gresbach for multithreading the GBMCMC pipeline; J. Baker, K. Lackeos, T. Robson, J. Slutsky, and J.I. Thorpe for their useful discussions and suggestions during the development of the global fit pipeline and the assessment of the results; J.C. Malapert and J.I. Thorpe for their role as co-developers of lisacattools; J.H. Thompson for indispensable help deploying the pipeline on the AWS resources; and the LISA Data Challenge group for providing and supporting the simulated data. T.B. Littenberg is supported by the NASA LISA Study Office. N.J. Cornish appreciates the support of the NASA LISA Preparatory Science Grant 80NSSC19K0320.
2306.06974
A Computational Theory and Semi-Supervised Algorithm for Clustering
A computational theory for clustering and a semi-supervised clustering algorithm is presented. Clustering is defined to be the obtainment of groupings of data such that each group contains no anomalies with respect to a chosen grouping principle and measure; all other examples are considered to be fringe points, isolated anomalies, anomalous clusters or unknown clusters. More precisely, after appropriate modelling under the assumption of uniform random distribution, any example whose expectation of occurrence is <1 with respect to a group is considered an anomaly; otherwise it is assigned a membership of that group. Thus, clustering is conceived as the dual of anomaly detection. The representation of data is taken to be the Euclidean distance of a point to a cluster median. This is due to the robustness properties of the median to outliers, its approximate location of centrality and so that decision boundaries are general purpose. The kernel of the clustering method is Mohammad's anomaly detection algorithm, resulting in a parameter-free, fast, and efficient clustering algorithm. Acknowledging that clustering is an interactive and iterative process, the algorithm relies on a small fraction of known relationships between examples. These relationships serve as seeds to define the user's objectives and guide the clustering process. The algorithm then expands the clusters accordingly, leaving the remaining examples for exploration and subsequent iterations. Results are presented on synthetic and realworld data sets, demonstrating the advantages over the most widely used clustering methods.
Nassir Mohammad
2023-06-12T09:15:58Z
http://arxiv.org/abs/2306.06974v1
# A Computational Theory and Semi-Supervised Algorithm for Clustering ###### Abstract A computational theory for clustering and a semi-supervised clustering algorithm is presented. Clustering is defined to be the obtainment of groupings of data such that each group contains no anomalies with respect to a chosen grouping principle and measure; all other examples are considered to be fringe points, isolated anomalies, anomalous clusters or unknown clusters. More precisely, after appropriate modelling under the assumption of uniform random distribution, any example whose expectation of occurrence is \(<1\) with respect to a group is considered an anomaly; otherwise it is assigned a membership of that group. Thus, clustering is conceived as the dual of anomaly detection. The representation of data is taken to be the Euclidean distance of a point to a cluster median. This is due to the robustness properties of the median to outliers, its approximate location of centrality and so that decision boundaries are general purpose. The kernel of the clustering method is Mohammad's anomaly detection algorithm, resulting in a parameter-free, fast, and efficient clustering algorithm. Acknowledging that clustering is an interactive and iterative process, the algorithm relies on a small fraction of known relationships between examples. These relationships serve as seeds to define the user's objectives and guide the clustering process. The algorithm then expands the clusters accordingly, leaving the remaining examples for exploration and subsequent iterations. Results are presented on synthetic and realworld data sets, demonstrating the advantages over the most widely used clustering methods.

## 1 Introduction

Grouping is a natural process carried out by humans in their interaction with the environment and is particularly considered fundamental when interpreting and understanding visual information. This is exemplified in the theories of human vision, and in particular the Gestalt Theory of Psychology where partial gestalts (local groupings) are taken to repeatedly fuse to culminate in global perceptions [2]. It is thus natural to see that grouping--otherwise known as cluster analysis in the field of machine learning--is ubiquitous and appears in practically all disciplines where data is collected. This includes malware clustering in cybersecurity, topic clustering in document processing, fraud analysis in finance and grouping gene expressions in molecular biology. The wide range of subjects highlights the importance of cluster analysis; specifically for the exploration, representation and understanding of patterns of interest. However, clustering data into meaningful groups is a challenging problem, not least because even defining and framing the problem so as to state precisely what is to be computed is fraught with difficulties. Often the problem is vaguely posed as finding _natural_ groupings of the data where within group points are _similar_ with respect to some measure like distance, density or probability, yet _dissimilar_ from other such formations. However, the problem statement also needs to make explicit that some observations may be noise or anomalies and not belong to any meaningful group, or be altogether a completely unknown group, thus requiring more nuanced decision boundaries and not the forceful lumping of all examples into some groups.
Another factor that leads to challenges is the type of learning encountered in cluster analysis--machine learning algorithms generally fall under the categories of supervised, unsupervised and semi-supervised. In supervised learning labels of the data are all available and the goal may be to learn a classifier or a predictor. In unsupervised learning the data is unlabelled, a scenario most widely encountered in the realworld because acquiring labels can be costly (or even unnecessary). The term unsupervised is also often interpreted to mean that the algorithms run without user assistance and interaction; here an algorithm returns what it deems to be the best result, or a number of different results are made available for the user to select according to their goal. The semi-supervised learning category normally assumes that some small fraction of labelled data is available, or only the labels of a particular class; such information can help guide, validate and improve the methods. The cluster analysis problem usually falls under the category of unsupervised learning and is used to find patterns of interest; however, deciding what is the ideal way to formulate and guide (optimisation) algorithms is difficult, as is the measuring of performance. This has resulted in many heuristic based algorithms for which users are left to supply critical parameters that cannot be reliably inferred automatically from the data. Instead, such parameters may be estimated by data exploration, visualisation, sampling and labelling. Thus, it seems reasonable to question whether these algorithms are _truly unsupervised_; instead placing them into the category of semi-supervised learning where even though labelled data is not used at the outset, the user is providing some additional information to the algorithm that is obtained through human intuition and experience of the specific data or domain. Interestingly, such information can often be interpreted as constraints on the possible data partitions or clusters that can be returned by the algorithm, where the parameters specify the resolution at which the process is to be carried out. In more recent years clustering has also become explicitly semi-supervised through the ideas of constrained clustering where typically samples of labelled data or relationships are leveraged in some modified version of the \(k\)-means algorithm [10]. This form of clustering is practical and interactive since in many scenarios some data or knowledge of clusters is available and supervision can take the goals of the user into consideration at multiple steps. Furthermore, this specification of constraints for clustering is important because without guidance or additional external information (that is unavailable from the data), the algorithm cannot know the criteria by which to cluster, nor the level of resolution that is commensurate with the goal. In the present work both the unsupervised and semi-supervised approaches to clustering are of interest--leaving aside a strict definition. The former has benefits of ease of use while the latter can use explicit prior information from the user to guide or constrain the clustering to solutions that are desired amongst the vast number of possibilities.

### Prior Art

There are several general approaches in unsupervised clustering which include hierarchical, density, spectral, information-theoretic, probability and distance to the centroid based methods.
For each approach, together with a representation of the data, many algorithms have been developed that are usually modifications of the fundamental idea so that certain deficiencies are claimed to be overcome. However, this normally introduces complexities that result in an overall less practical algorithm in terms of additional runtime, storage or parameters. It is thus important to note that even with the flux of new algorithms over the past decades, the most widely used and cited algorithms are still agglomerative clustering, DBSCAN, gaussian mixtures and \(k\)-means. No single algorithm can work best across all data sets, and for every data set that an algorithm particularly excels at, one can construct another where it fails. However, there are still properties an algorithm may have that are generally problematic. These include: the setting of unintuitive and data specific parameters that can be nigh on impossible to specify in advance and difficult to do so even after data exploration; high runtime and storage requirements with increasing number of observations or dimension of the data; the assignment of anomalies to a cluster when they are clearly alien to the group; the inability to predict on new data points given the current clustering; and being unable to produce good clustering results across as many varieties of data sets and domains actually encountered in the realworld as possible. For example, in hierarchical clustering such as the agglomerative method [11], users are required to specify the granularity at which clusters are to be taken. This has the advantage that clustering can be considered at different resolutions or levels but requires nontrivial (and perhaps unintuitive) specification on the user's part. There are also a number of linkage methods to select from that can give different results. The runtime of the algorithms and storage requirements are also prohibitively expensive with increasing number of examples, being \(O(n^{2}\log n)\) and \(O(n^{2})\), respectively. Another important aspect to note is that all merges are final in the building of clusters. This can be problematic when, for example, there are anomalies present, since the grouping cannot adjust or correct later to a better solution. Furthermore, the hierarchical method does not natively associate a score with an observation or predict the cluster label of a new observation given the current clustering; this restricts its utility somewhat unless additional classifiers are applied on top. Density based methods work on the premise that clustering locates regions of high density that are separated from one another by regions of low density. The most popular method is DBSCAN [3], which can cluster arbitrarily shaped data distributions efficiently (\(O(n\log n)\) runtime and \(O(n)\) memory), while also potentially detecting anomalies. However, it requires the specification of parameters that are highly sensitive and difficult to set in practice even with the 'elbow' identification method. Thus, it can fail to give good results in even what appear to be simple data sets if clusters are too close to one another or there are large differences in densities. The method also benefits from post-processing to remove small groups of data that may get incorrectly classed as clusters, and it has been found in private experiments that its anomaly detection results are unsatisfactory. DBSCAN also does not natively provide a cluster association score for every observation or have predictive ability over new observations.
The only way to handle the latter issue is to rerun the clustering again with the new data. Probability based models fit statistical models to the data and estimate parameters, with the mixture of gaussians method being the most popular. In the cases where the assumptions are met it can give very good results, and the data points are not only assigned a grouping but each also has an associated probability score of belonging. The use of distributions enables simple characterisation of the clusters produced by the parameters, and has some general applicability in terms of handling ellipsoidal clusters. However, the method can be particularly slow due to the use of the Expectation Maximisation algorithm for finding the distribution parameters and hence is not practical for large data sets. It also does not work well when there is little data, if the data points are nearly collinear, when the number of groups is assumed incorrectly, or when the gaussian data distribution assumption is not met. Noise and anomalies are also not natively handled since it computes a partitioning. Of all the clustering algorithms mentioned thus far, it is still the simple method of \(k\)-means [5, 4] that is the most widely known and stands out as the algorithm of choice for practitioners--despite it being over 50 years old. It does, however, have several drawbacks, which have resulted in much additional development, most of which has not gained traction for practical use because of the additional complexities introduced without necessarily better results. Due to it being a centroid based method, and because of its widespread appeal and use, comparisons in the present paper will focus more on \(k\)-means since any algorithm that can better or complement it will be a significant contribution to the field of cluster analysis. Hence, this algorithm will be elaborated upon. \(k\)-means has a number of important properties that have made it popular. Foremost is its simplicity to understand and to implement, and it is very fast to compute in practice--both with increasing feature dimensions and the number of examples. Its time complexity is \(O(tknd)\) (\(t\): number of rounds, \(k\): number of clusters, \(n\): number of observations, \(d\): dimension of data) and so it is linear with the number of examples in the data. Its speed also makes it suitable for interactive use, which is important for exploratory work. The space complexity is also low, being \(O((n+k)d)\). \(k\)-means being a centroid based method also benefits from being able to natively score points and predict upon new data by simply using the distance to the centroid of the parent cluster, both of which can be beneficial in applications. Like other algorithms it requires parameters, principally the value \(k\) specifying how many clusters the algorithm is to return. Indeed, too few will result in some groups being clumped together as one, while too many will result in desired clusters being separated into sub-clusters. However, users can intuitively understand this parameter and in specific problems are able to set it in advance, estimate it using the 'elbow' method or gauge it interactively from the clustering results. Thus, although the ideal solution is parameter-free, having only a single intuitive parameter alleviates the burden somewhat and makes \(k\)-means appear relatively straightforward to use compared to other algorithms.
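As a brief illustration of the 'elbow' heuristic mentioned above (a sketch using scikit-learn on synthetic data, not code from the present paper), one runs \(k\)-means for a range of \(k\) and looks for the value where the decrease in the within-cluster sum of squares flattens:

```python
# Elbow heuristic for choosing k with scikit-learn's KMeans.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)
for k in range(1, 10):
    sse = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(sse))  # the 'elbow' appears where the decrease flattens
```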
However, \(k\)-means is sensitive to the initial centroids and can get stuck in local optima, so multiple runs with random initialisation are sometimes recommended. However, this may not necessarily work well depending on the data and number of clusters specified. Thus, another approach includes taking samples of points and clustering them using hierarchical clustering where the centroids of those clusters are used as seeds. This approach may only work well if the sample size is small due to the computational complexity of hierarchical clustering and if \(k\) is relatively small compared to the sample size. Note that the number of clusters still requires specification and its empirical success remains unclear. Other extensions include \(k\)-means++ and bisecting \(k\)-means. The former picks centroids incrementally, choosing each new centroid at random from the data with probability proportional to its squared distance from the closest centroid already chosen. The latter splits the data into two clusters, and repeatedly operates on one of the clusters until \(k\) clusters have been produced. Interestingly this method also yields a hierarchical clustering by recording the sequence of clusterings produced as \(k\)-means bisects each cluster. The optimisation criterion and the computation of centroids using local means make \(k\)-means sensitive to anomalies, which can lead to suboptimal partitions of the data. However, detecting and removing anomalies first is a challenging problem in and of itself. In the case of clustering followed by anomaly removal, the sequential combination requires further analysis and practical investigation due to the distorting effect anomalies can have on the initial clustering. Another couple of methods similar to \(k\)-means, called \(k\)-medoids and \(k\)-medians, have been proposed to overcome the problem of sensitivity to anomalies. The former takes an actual point to represent a cluster rather than the mean, while the latter uses medians as representatives. However, both are generally computationally slower than \(k\)-means, particularly when the data set and \(k\) are very large. Closely related to the problem of anomalies is that \(k\)-means can give undesirable partitions in the presence of clusters with different sizes or densities. Additionally, because it is a partitioning algorithm it will always place an example into a cluster even if it is an anomaly, a cluster of anomalies or in fact belongs to another unknown cluster. Thus, it is important to differentiate between algorithms that return clusters (such as DBSCAN) and partitioning algorithms since the term clustering is overloaded. It is usually the case that when users want to carry out cluster analysis, they desire groups of data that are similar to be returned while excluding that which does not belong to the group. This has implications with regard to _what_ should be computed in the problem of clustering data and whether \(k\)-means is merely approximating what is desired of an algorithm. A criticism often levied at \(k\)-means is its inability to cluster certain complex grouped data since it constructs hyperspherical clusters (or hyperellipsoidal clusters when using the Mahalanobis distance). While this is a point of criticism, it may in fact be a contributing factor for its widespread and general use since it does not follow the detailed distribution of the data and so avoids potential overfitting.
Indeed, it is assumed in this paper that for many realworld data sets, desired cluster groupings are roughly hyperspherical or are separate enough that such a clustering assumption works well in practice. \(k\)-means has also been the basis of many semi-supervised clustering algorithms, with perhaps the most well known of them being \(COPK\)-means [10] and \(PCK\)-means [1]. The former operates by assimilating hard constraints from the user that cannot be violated while carrying out \(k\)-means clustering. Thus, the constraint labelling is assumed noiseless. Similarly, the latter method incorporates constraints from the user, but these are soft constraints where the algorithm suffers a penalty for violation. Both algorithms have been claimed to improve clustering results over the standard unsupervised version. However, although the application domain is important (since oftentimes some labelled data or constraints are available), these methods have received little attention in practice. This is perhaps because the data sets used in the original works are relatively small, the algorithms are slower to run, and because the results are not substantially improved. Furthermore--if it is of concern--most of the practical drawbacks of the \(k\)-means algorithm are also carried over into these semi-supervised implementations.

### Contributions

Data clustering is intended to be carried out by a computer due to the size, number, diversity and speed at which information needs to be processed. However, oftentimes there is a comparison with the human ability to cluster sensory stimuli or data (particularly in low dimensions), which is carried out innately and effortlessly. Thus, to reach an ideal solution it seems pertinent to understand and learn from how humans might address or even solve the problem--particularly the diversity aspect and speed at which it is conducted. Our starting point will be to consider clustering systematically as an information processing task and to address it using David Marr's [6] three levels of analysis--particularly the computational theory and algorithmic levels. This approach disciplines us to ask the correct questions at the appropriate levels of explanation so that we can better understand the overall task. Thus, beginning with the top level of computation we can ask in general and precise terms what is to be computed--the inputs and outputs--and why it is appropriate, and what realworld constraints should be incorporated to make clustering tractable. Then we can consider the next level of processing to derive representations of the data examples, the inputs and outputs, the nature of the ill-posed problem, and how we might bring in additional assumptions to develop an algorithm for carrying out the transformation from input to the desired output. The present work builds a clustering algorithm from the ground up by following the above analyses and using human visual perception as inspiration. This approach was successfully used by Mohammad [8] in his development of anomaly detection, which will form a core component of the clustering method. Indeed, clustering is considered the complement of anomaly detection (sensing distinctness), where clusters are deemed to be sets of data points that do not contain any anomalies with respect to a chosen grouping principle and measure.
Mohammad's definition of an anomaly is used, and data points are represented as the distance to the median of the parent group so that hyperspherical clusters are formed in order to have a solution that is generally applicable and practically useful like the \(k\)-means algorithm (while accepting failure in other and non-hyperspherical distributions). Furthermore, anomaly detection being the core of the approach naturally results in isolated points or clusters being labelled as anomalies so that clustering--and not a partitioning of the data--is carried out. Note that clustering is done to discover meaningful groupings of the data or interesting patterns that may not have been previously known. However, without embedding or conveying to an algorithm what is desired or considered meaningful, an algorithm cannot know which grouping of the data is appropriate. Indeed, any random clustering could be returned and justifiably so since constraints or requirements have not been provided. This resonates with the idea from human vision studies that intelligence identifies the appropriate resolution for the present utility, with perception iconicising the visual input at that appropriate level. Most algorithms implicitly assume some desired properties of the clusters in addition to requiring parameters; these naturally constrain the possible solutions. In the case of \(k\)-means it is that clusters are hyperspherical in nature and of similar size and density, and the data is without noise or anomalies, with a specified number of clusters; the computational problem being to minimise the within partition sum of squared error. For DBSCAN, regions of high density are considered clusters, with the output constrained by two parameters specifying a radius for computing densities and the minimum number of points required in that region. In the semi-supervised algorithms such as \(COPK\)-means, the base \(k\)-means assumptions are held, and in addition the labelled examples or pairwise constraints are provided to help the clustering approach the user's desired outcome. The clustering method presented in this paper is parameter-free but assumes an interactive process between human and machine, where users must provide a small amount of labelled data that approaches a constant with increasing size of the data per cluster, so that seed members of _known_ groups are obtained. This little information is enough to constrain and guide the algorithm to grow meaningful clusters and carry out both local and global anomaly detection. As users interact with the clustering outputs, further observations can be sampled, analysed and discovered, while existing clusters may be modified. The method--being based on Mohammad's anomaly detection algorithm--is fast, intuitive and light on memory usage. Furthermore, because there is no best clustering algorithm in general, a framework is also provided (but not yet developed) for dealing with other useful representations of data in order to tackle different data distributions. To detail the contributions of this work, the rest of the paper is laid out as follows: Section 2 provides a computational theory of clustering and describes the parameter-free, semi-supervised clustering algorithm. Section 3 gives some empirical results on synthetic one-dimensional and two-dimensional data sets to intuitively understand the performance of the algorithm. Additional results on publicly available data sets are also provided to demonstrate realworld applications.
Finally, section 4 concludes the paper and provides some thoughts for future work.

## 2 A Computational Theory for Clustering

The goal of the present work is to draw insights from the human visual process of object grouping and develop an algorithm capable of automating clustering tasks on large datasets. In the case of complete automation we will have an unsupervised solution, whereas when we permit a small amount of human involvement (labelling data) or interaction (iterative evaluation) we will have a semi-supervised solution. Human vision is considered to be an example of complex information processing, for which Marr [6] introduced the idea of a three-level analysis. His core theme was that vision is not governed by a single explanation, equation or level of understanding, but that each visual task requires its own multiple levels of analysis which form a top-down hierarchy. This approach has significantly impacted the field of computational neuroscience because it helps delineate what is to be computed, whether it satisfies assumptions obtained from the real world and what is desired of the task, from how one might carry out the computation and its implementation. This section will describe the levels of analysis as applied to the problem of clustering, and present an algorithm for efficient computation.

### Tri-level Hypothesis

The top level in Marr's analysis is called the _computational_ level and deals with _what_ the computation is, or the goal of the task, and _why_ it is appropriate; this is interpreted at both the general and specific levels. At the general level we answer what information is extracted and why it is useful, while at the specific level we state exactly what is to be computed from input to output, and what real-world constraints or assumptions are generally true of the world and powerful enough to allow a process to be defined and the desired outcome to be achieved. The constraints help decide whether the actual function is the one that should be computed, and why this computation is specified instead of some other for the task at hand. Finding such constraints, which impose themselves naturally and sufficiently to allow a unique solution, is a true discovery of permanent value even if it is not internally verifiable. The second level in the analysis is the _algorithmic_ level and is concerned with _how_ the computations associated with the first level are to be carried out. This is broken into two parts. First is the specification and representation of the data input and output, and second is the actual algorithm that carries out the transformation. Note that many representations can be put forward, and each can make some aspects explicit and relegate others to the background; this has significant bearing on the choice of algorithm. Indeed, for each representation many algorithms can be chosen, and each algorithm may have certain desirable and undesirable characteristics according to the specific problem. The third level of analysis, known as the _implementation_ level, focuses on the physical realisation of the computations. This can involve various systems, ranging from the human brain to specialised mechanical machines, but for our purposes it will be a single computer. Nevertheless, it may be important to consider potential extensions to distributed implementations in scenarios such as big data applications and distributed devices.
### Computational Theory

The clustering problem is generally understood to be the grouping of portions of data that are similar in a particular way, but separate from other such groups. The process has important applications in data mining, where the output can be used, amongst other purposes, for segmentation, summarisation and classification. The input to the problem is assumed to be in a dataframe format, where examples are represented as rows and feature values as columns of numbers. The desired output is a one-dimensional array of labels that identify which cluster each example belongs to, possibly accompanied by a membership score. Note that a specific cluster label is required for the unknown or anomalous group. Unfortunately, there is not enough information available in the input to get a unique desired output directly, and hence _any_ partitioning of the data could be returned. The problem is ill-posed, and we require additional external information to constrain the possible solutions; we derive this from the human experience of grouping data points in low-dimensional spaces. The process of clustering is distinct from partitioning; subsets of data points may be grouped into a cluster, but not necessarily all points should be assigned to a cluster. This then provides our first natural and most important constraint: _similar points should be clustered together, but anomalies, with respect to every grouping and measure, must not belong to any cluster_. The second assumption is that an _object that is part of a cluster has both a membership and a degree of membership_. In other words, even within a single group all observations have a level of belonging; for example, points that are further away from the centroid, or that occupy less dense regions of space, belong to a lesser degree than others. Removing what is anomalous, with respect to some characteristic, from a set of data points leaves us with a group of points that are now deemed similar; hence the following definition of a cluster is made:

**Definition 2.1** (Cluster).: A cluster is a subset of data points that does not contain any anomalies with respect to a chosen grouping principle and measure.

This definition needs to be made precise and requires some additional explanation and support. Assuming that the real world physically imposes natural groupings of interest, the term anomaly is taken to be that defined by Mohammad [8].

**Definition 2.2** (Anomaly).: A grouping of interest represented by a gestalt law is perceived by the Helmholtz principle when it is unexpected to happen (i.e. its expectation of occurrence is \(<1\)) in uniform random noise. Any observation that is unexpected to occur with respect to this grouping is perceived, by the same principle, to be an anomaly.

Anomalies, according to this definition, can be computed precisely, and an algorithm under a particular representation of Euclidean distance from the median is provided by Mohammad [8]. This not only provides us with a classification but also a score defining the degree of anomalousness. Hence clusters, being simply the complement of the set of anomalies in the data, are also computable and have an associated score. This also satisfies a third constraint that is taken as an assumption: _what is computed must have parameters or thresholds that adapt to the data, and are not fixed unless they are universally usable_.
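To make the flavour of Definition 2.2 concrete, consider the following toy sketch: a point is flagged as anomalous when the expected number of uniform-noise points lying at least as far from the centre is below one. This is only one illustrative reading of the unexpectedness criterion under a distance representation, and uses the observed range as the noise model; the actual parameter-free Perception algorithm is specified by Mohammad [8]:

```python
import numpy as np

def unexpected(distances):
    """Toy a-contrario test: flag a distance as anomalous when the
    expected count of uniform-noise points at least that far out
    is below one (expectation of occurrence < 1)."""
    d = np.asarray(distances, dtype=float)
    n, d_max = d.size, d.max()
    expectation = n * (d_max - d) / d_max  # expected count beyond d
    return expectation < 1.0

# 200 points near the centre plus one far-away point:
rng = np.random.default_rng(0)
dists = np.concatenate([rng.uniform(0, 1, 200), [10.0]])
print(unexpected(dists).sum())  # only the distant point is flagged
```

Note that the threshold in this sketch adapts to both the number of points and their spread; nothing is fixed by hand.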
Such adaptivity is reasonable to assume since data sets, and indeed clusters within data sets, can vary in terms of numbers, size, density or shape, and an algorithm needs to be able to adapt to such differences. However, the constraint is unlikely to be universally satisfied across all data sets, due to the range of possible data distributions and the specificity of each adaptation. The clustering process is approached from the perspective of removing anomalies as it provides a more intuitive and appreciable framework. This viewpoint allows for easier understanding and handling of the subject and algorithms, emphasising that points classified as not belonging to any group are considered anomalies. Gestalt groupings play an important role since clustering is often described as finding natural groupings in the data. (Figure 1 illustrates some examples of 'natural' clusters by proximity.) Thus, it can be argued that satisfactory clustering results are not completely arbitrary: though a staggeringly large number of possible cluster outputs can be returned, only relatively few are acceptable. The groupings that we care about and consider natural are therefore limited, and are conjectured to include grouping by the gestalt laws such as proximity, good continuity and similarity. These groupings should agree with what the common man sees and believes to be true. _Clustering is in the eye of the beholder_ is an old adage, but the eye, being human, is constrained to perceive only certain patterns that correspond to prior grouping laws of interest.

### Representation

There are many ways in which humans group things, and many ways in which things can be represented. For example, there is grouping by shape, size, continuity, distance between objects, colour and any combination of these. Thus, the clustering problem also requires users to specify the goal by which data points are desired to be grouped, and it is only the grouping that has some utility that is desired. It is assumed that clustering is an interactive process in that a user does not simply run an algorithm and take the output at face value. Rather, checks are carried out, samples of clusters are taken, and the results are intuitively checked for consistency and the data explored further. Hence, the information by which the grouping is desired is obtained by the assumption that _a user is able to provide cluster labels to a tiny fraction of the data in order to seed the required goals of the clustering_. (This is perhaps akin to how a child may view some unknown objects and is then provided only a handful of labelled examples from which it is able to extrapolate and learn the abstract class.) Note that together with the exclusion of anomalies from a group, and the labelling of some data points that belong to a group from which clusters can be grown, another constraint is realised: _unknown clusters of data that have no samples of labels should not be incorrectly assigned a cluster label of another group._ Unknown groups here should simply be highlighted as anomalies with respect to all other groupings, and require uncovering through the user's iterative and interactive exploration and labelling of the data. Although it may appear that requesting users to supply similar examples is difficult, any clustering algorithm output necessarily requires that the user evaluate the results by looking at samples from the cluster to ascertain whether meaningful groupings have been obtained.
This necessarily requires that the user has some domain expertise or knowledge of the features. This part of the evaluation is brought forward, so that users are asked in advance to provide a small number of examples that likely belong to the same cluster. The labelling does not have to be comprehensive or completely accurate, since a minority of mislabelled examples is tolerated through the ejection of anomalies. For a given representation of the data points, certain groupings of interest are highlighted, and others are hidden. The representations often used in cluster analysis include distance from the centroid, density within a radius of a point, or distance between two examples. (Note that even the notion of distance between two points can be broken down further, into Euclidean, Manhattan or Mahalanobis variants, for example.) Thus, another constraint that is selected by the user is the choice of representation and measure. For the purposes of clustering in this paper, it is assumed that _the Euclidean distance to the centroid (median) of a cluster is an appropriate measure_ by which data points are to be grouped, or equivalently by which anomalies are removed from the set to leave a grouping. This is because data is often transformed such that the Euclidean distance works well, which is one of the reasons why \(k\)-means has wide and general applicability. Furthermore, there is no evidence that another metric is better in general. The median is chosen as the centroid instead of the mean because of its robustness to extreme deviations and because it is a better measure of central tendency.
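This representation is simple enough to state in a few lines of code; the sketch below is a minimal illustration (the helper name is ours, not from [8]):

```python
import numpy as np

def distance_to_median(points):
    """Represent each point by its Euclidean distance to the
    componentwise median (the robust centroid) of its group."""
    points = np.asarray(points, dtype=float)
    centroid = np.median(points, axis=0)
    return np.linalg.norm(points - centroid, axis=1)

# Example: a tight 2D group plus one distant point.
group = np.array([[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0], [5.0, 5.0]])
print(distance_to_median(group))  # the last value stands out
```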
Other measures related to grouping laws, such as density, good continuity, repetition, etc., are also of importance and may be preferred in certain applications. In such cases a suitable representation of the data must be chosen. For example, a plane of best fit may be selected if the grouping of interest is best described by good continuity; any examples that deviate from it by an unexpected amount would then be deemed anomalous with respect to the group.

### Algorithm

Given a representation of the data points, which is defined as the Euclidean distance from the associated median, and a small subsample of labelled data, this section presents an algorithm for transforming the input into the desired output. The primary computational task involves detecting and removing anomalies from a set of data points, while simultaneously expanding clusters by incorporating non-anomalous points. Thus, anomaly detection lies at the core of the philosophy and method employed. Here, we also acknowledge an additional constraint arising from the human ability to quickly perceive clusters in low-dimensional data, as well as the requirement to efficiently compute clusters over large real-world datasets. Namely, the developed _algorithm must be fast to compute and scale efficiently with increasing numbers of examples and dimensions of the data_. Many clustering algorithms that yield satisfactory results with small datasets or low-dimensional data often struggle to meet these requirements, resulting in unacceptably long computation times. To begin with, the clustering process requires users to provide a small subsample of known cluster assignments, possibly obtained randomly, that are largely accurate. These assignments serve as the initial seeds and guides for the clustering, while all other points are initially considered anomalies. Subsequently, the algorithm iterates over the clusters in order of the tightest concentration of points. For each cluster, anomalies are identified and ejected from the cluster, being reassigned to the anomaly category. Following this step, the algorithm examines all the anomalous points to determine whether they should remain anomalies or potentially belong to the cluster. Points that are determined to be non-anomalous have their label assignments changed to match the current cluster, while the labels of anomalies remain unchanged. This process of ejecting anomalies and expanding clusters continues iteratively until no further changes in the clustering occur or a maximum number of iterations has been reached. The algorithm's pseudocode is given below, where Mohammad's anomaly detection algorithm, called Perception, is used to detect anomalies parameter-free:

```
Input  : Data points, subsample of labelled data, max_n_iterations = 1000
Output : Cluster labels and membership scores
Initialisation:
    Initialise all unlabelled points as anomalies with label -1;
    Order the labelled cluster groups by minimum sum of squared error;
    counter = 0;
while cluster changes occur and counter < max_n_iterations do
    foreach cluster do
        // Fit anomaly detector and check for anomalies:
        clf = Perception();
        clf.fit(cluster);
        clf.predict(cluster);
        if anomalies are found then
            Remove anomalies from the cluster and assign them to the anomaly category;
        end
        // Re-fit classifier on cluster without anomalies:
        clf.fit(cluster);
        // Examine all anomalous points:
        foreach anomalous point do
            prediction, score = clf.predict(anomalous point);
            if prediction is non-anomalous then
                Change point label assignment to match the current cluster;
            else
                Leave anomaly label unchanged;
            end
        end
    end
    counter = counter + 1;
end
Re-fit the Perception() model on each cluster and compute scores;
return labels and membership scores of all examples;
```
**Algorithm 1** Semi-supervised clustering
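To make the control flow concrete, a compact Python sketch of Algorithm 1 follows. The `MedianDistanceDetector` below is a crude stand-in for Mohammad's Perception detector [8]: it exposes the same `fit`/`predict` interface as the pseudocode but uses a simple fixed-quantile cutoff instead of the parameter-free Helmholtz criterion, and the ordering of clusters by tightness is omitted for brevity:

```python
import numpy as np

class MedianDistanceDetector:
    """Stand-in for the Perception detector: distance-to-median
    representation with a crude 95th-percentile cutoff (assumption)."""
    def fit(self, X):
        self.median_ = np.median(X, axis=0)
        d = np.linalg.norm(X - self.median_, axis=1)
        self.cutoff_ = np.quantile(d, 0.95)
        return self

    def predict(self, X):
        d = np.linalg.norm(np.atleast_2d(X) - self.median_, axis=1)
        return d > self.cutoff_, d  # (is_anomaly mask, anomaly scores)

def semi_supervised_cluster(X, labels, max_n_iterations=1000):
    labels = labels.copy()  # label -1 marks anomalies / unlabelled points
    for _ in range(max_n_iterations):
        previous = labels.copy()
        for c in np.unique(labels[labels >= 0]):
            members = np.flatnonzero(labels == c)
            clf = MedianDistanceDetector().fit(X[members])
            # Eject anomalies from the cluster ...
            is_anom, _ = clf.predict(X[members])
            labels[members[is_anom]] = -1
            if not np.any(labels == c):
                continue
            # ... re-fit without them, then try to absorb current anomalies.
            clf.fit(X[labels == c])
            anoms = np.flatnonzero(labels == -1)
            is_anom, _ = clf.predict(X[anoms])
            labels[anoms[~is_anom]] = c
        if np.array_equal(labels, previous):  # no cluster changes occurred
            break
    return labels
```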
## 3 Experimental Results

This section provides the results of the clustering algorithm over several data sets: first one-dimensional and two-dimensional synthetic data, and then a selection of publicly available data sets. Although the presented method is semi-supervised, comparisons are not made against COPK-means or PCK-means. This is because their results were remarkably similar to those of standard \(k\)-means, but with longer execution times and the need for additional constraints. Instead, comparisons are made with \(k\)-means (with the given \(k\) used where possible) and DBSCAN (\(mps\) and \(eps\) decided as in the guidelines of the original paper). The implementation of these algorithms is done using the scikit-learn library [9]. These algorithms were chosen due to their widespread appeal and usage when users encounter clustering problems, aiming to demonstrate the effectiveness of our proposed method. Evaluation of clustering algorithms is notoriously difficult because of the large number of measures from which to choose, and because certain measures favour certain algorithms due to the nature of the optimisation problem being solved. Partitioning algorithms will also give very different results to clustering algorithms that report anomalies (such as the one presented in this paper), and may also appear artificially superior. This is because many data sets assign all examples to some cluster, and do not highlight those that are actually anomalies. Hence, even the ground truth labelling may not always be an accurate reflection of reality; for example, in the generation of gaussian clusters it can be argued that there are outlying points with respect to the central mass of points of each cluster (e.g., Figure 9). In addition, it is important to consider that a dataset can be clustered in multiple ways or based on different features. Therefore, even with the availability of labels for evaluation, a clustering method may produce valid and effective clusters which nevertheless do not align with the pre-existing labels. Consequently, this work adopts a qualitative approach to address the clustering evaluation problem.

### One-dimensional Synthetic Data

The first data set presented is composed of three gaussian distributed clusters, each of size \(10000\) examples, with centers \([0,50,100]\) and standard deviations of \([1,3,6]\). Added to this set are a few additional isolated local anomalies, a few mislabelled examples, and a small globally anomalous cluster. Figure 1 illustrates the data distribution, where each of the three clusters is labelled and coloured (black, blue and green) and anomalies are shown in red. Note however that anomalies resulting from the generating gaussian distribution of each cluster are not highlighted as such, since that would require selecting an anomaly detection algorithm which may give results different from another, and such algorithms often require appropriate but unknown parameters. In addition to the labelled scatter plot, a histogram is also provided to show the relative densities, which are otherwise hidden in the plot.

Figure 1: Three labeled gaussian distributed clusters with a few additional mislabelled examples, some isolated anomalies and a small anomalous cluster.
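As a concrete reference, such a data set can be generated in a few lines; the exact positions and sizes of the added anomalies are not specified in the text, so the values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
sizes, centres, stds = [10_000] * 3, [0, 50, 100], [1, 3, 6]
clusters = [rng.normal(c, s, n) for c, s, n in zip(centres, stds, sizes)]
isolated = np.array([-20.0, 25.0, 140.0])   # assumed isolated anomalies
small_cluster = rng.normal(170, 0.5, 30)    # assumed anomalous cluster
x = np.concatenate(clusters + [isolated, small_cluster])
y = np.concatenate([np.full(n, i) for i, n in enumerate(sizes)]
                   + [np.full(isolated.size + small_cluster.size, -1)])
```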
A tiny fraction of the data, approximately 150 examples (0.5%) together with their labels, is randomly chosen from the data (excluding the small anomalous cluster). These labels act as the seeds for the proposed semi-supervised clustering algorithm and are expected to have been provided by the user after familiarising themselves and interacting with the data and domain. Note that the seeds need not cover all clusters, since unrecognised clusters will simply be reported as anomalies and can be analysed in subsequent rounds. The results of the algorithm are shown in Figure 2, where three clusters are identified along with the isolated anomalies and the small anomalous cluster; the latter can be investigated by the user in order to decide whether it satisfies her criteria for being a new cluster. Note that the algorithm importantly flags the points in the outer regions of each cluster as anomalous.

Figure 2: Results of the proposed semi-supervised clustering algorithm. Note that the three clusters are found using the seed labels. Anomalies are also identified as being unexpectedly far from any of the clusters, and the anomalous cluster of points is also not forced into any labelled cluster.

Closer inspection of one of the clusters and its histogram distribution illustrates how this can be desirable, since these points may be considered outliers compared to the main mass of the cluster (Figure 3). Thus, the proposed method carries out a clustering of the data as opposed to partitioning _all_ data points into some group. Indeed, \(k\)-means partitions all data into one of three clusters, including the anomalies (Figure 4), while DBSCAN (\(mps=10,eps=0.114\)) on the other hand detects many anomalies, but also returns many more clusters embedded within the three main groups (Figure 5).

Figure 3: A close up view of cluster 1 from Figure 2 showing the identification of not only anomalies and mislabelled data, but also those that might be considered rare or unusual in a gaussian distributed cluster of points.

Figure 4: The result of applying \(k\)-means where all points are put into one of three groups. It has no notion of outliers in the data that do not belong to any cluster.

Figure 5: DBSCAN results are shown (without the histogram for clarity). Note that it has a concept of anomalies and thus identifies many. However, in this particular example too many additional clusters are returned amongst the three main clusters.

### Two-dimensional Synthetic Data

The next example is a two-dimensional data set composed of eight gaussian distributed clusters at different locations and with standard deviations \([0.6,2,0.2,0.7,3,0.4,0.6,0.6]\) (Figure 6). Added to this are many isolated anomalies, and a relatively small anomalous cluster which could also be considered a newly emerging cluster.

Figure 6: Eight gaussian clusters are shown with differing standard deviations \((0.6,2,0.2,0.7,3,0.4,0.6,0.6)\) and locations. Added to the data is a scattering of anomalies and also a small anomalous cluster at coordinates \((11,20)\).

The total number of points is 10300, and a small random sample of 100 points from the eight clusters is taken as seeds, as illustrated by Figure 7. The number of seeds selected is such that there are at least 10 per cluster, in order to reliably build out the clusters. These would be expected to be provided by a domain expert and help guide the clustering algorithm to desired solutions. Note that the labels can even be a little noisy, since mislabelled examples can be ejected out of the group as anomalies during algorithm execution.

Figure 7: A random sample of points (1% of the data) is shown from the gaussian distributed clusters. These form the seeds of the clustering algorithm.

The clustering result is shown in Figure 8, where the eight clusters are largely identified, some points around the clusters are deemed anomalous, and many isolated points and the small cluster are labelled as anomalies. Note the difficulty of the task in that some clusters are close or overlapping, while others are located inside another. Hence, there are arguably misclassifications along bordering points. Figure 9 also shows a close up view of one cluster to show the proposed method's ability to highlight fringe and far lying points as outliers with respect to the cluster's mass of points.

Figure 8: Results of applying the proposed clustering algorithm. The known clusters are largely recovered while points deemed anomalous with respect to the clusters are highlighted.

Figure 9: A close up view of cluster 7 shows how the proposed clustering algorithm not only detects clusters, but also highlights anomalous points as those being unexpectedly distant from the central mass of points.

Contrasting the results with those of \(k\)-means (\(k=8\)), we see in Figure 10 that it unintuitively partitions the points such that the desired groupings are not reflected. Clusters originally labelled 3 and 6 are grouped together, and the small anomalous group of points is deemed to be a cluster. \(k\)-means also partitions all the data points, resulting in anomalies being assigned to some cluster.

Figure 10: Results of the \(k\)-means algorithm (\(k=8\)). Notable observations are that some clusters are merged, data is unintuitively partitioned and that anomalies are always assigned to some cluster.

The results obtained with DBSCAN (\(mps=15,eps=1.17\)) exhibit interesting characteristics. The algorithm successfully identifies and labels many isolated points as outliers, accurately capturing their anomalous nature. It also considers the small group of points to indeed be a cluster. However, it merges the majority of points into one large cluster, leading to a loss of granularity in the clustering results. It also does not report many of the points around the gaussian clusters as anomalies (as in Figure 9).
This behavior aligns with the algorithm's primary objective, which is to cluster the data rather than to detect anomalies.

Figure 11: Results of the DBSCAN algorithm. Notable observations are that most of the clusters are reported to be a single large cluster and that many outlying points are highlighted as anomalies.

### Multi-dimensional Real-world Examples

This section presents the clustering algorithm's results on two widely-used benchmark datasets: MNIST handwritten digits and the 20 newsgroup corpus. These datasets were selected to showcase different application areas and to be easily interpretable for non-experts. Given the high dimensionality of the raw data in these domains, dimensionality reduction techniques were employed to enhance the clustering outcomes. Specifically, UMAP [7], a method known for its speed and effectiveness with largely default settings, was utilised. The distance metric and the desired number of dimensions to map to were the only explicitly specified parameters: the cosine metric was used to address the datasets' high dimensionality, and the number of dimensions was set to values that depend on the size of the data.

#### 3.3.1 MNIST Handwritten Digits

Figure 12 depicts the application of UMAP to 2 dimensions for visualising the smaller MNIST digits dataset, consisting of 1,797 examples available in sklearn [9]. The plot reveals nine prominent clusters along with several smaller groups scattered throughout. The ground truth labels indicate the presence of ten distinct groups, each corresponding to a specific digit.

Figure 12: This illustrates the 2-dimensional UMAP of the smaller MNIST data set obtained from sklearn, using the cosine metric. Each data point is color-coded based on its corresponding class label obtained from the provided labels.

For the clustering algorithm, a mapping to a 10-dimensional space is utilised, and approximately 6% of the data (107 examples) is randomly sampled to provide around 10 labelled items per group, serving as seeds for the algorithm. The clustering results are shown in Figure 13, exhibiting an accuracy of 0.84 according to the ground truth. However, a closer examination of the results unveils an interesting finding: the algorithm not only successfully captures much of the primary clusters but also assigns points that may be outliers with respect to the clusters to the anomalous group. These anomalous parts of clusters or individual small clusters often represent either uniquely written digits, or slight variations within the same class, such as the digit 1 with or without a horizontal baseline1.

Footnote 1: The interactive plots where points in the graph can be visualised are available in the github project at [https://github.com/M-Nassir/clustering](https://github.com/M-Nassir/clustering)
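A minimal sketch of this embedding-and-seeding step is given below, using the umap-learn package; the purely random seed sampling shown here does not guarantee 10 labels per group, which is a simplification of the procedure described above:

```python
import numpy as np
import umap  # umap-learn package
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)  # the smaller MNIST set, 1,797 examples
embedding = umap.UMAP(n_components=10, metric="cosine").fit_transform(X)

rng = np.random.default_rng(0)
seed_idx = rng.choice(len(X), size=107, replace=False)  # ~6% of the data
labels = np.full(len(X), -1)    # everything starts out anomalous
labels[seed_idx] = y[seed_idx]  # user-provided seed labels
```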
Figure 13: The results of applying the clustering algorithm to the smaller MNIST digits data set obtained from sklearn after performing a UMAP transformation. The actual clustering process involved mapping the data to 10 dimensions. However, for the purpose of visualisation, the results are presented in a 2-dimensional mapping. The accuracy of the clustering according to the ground truth labels is 0.84.

For completeness, Figure 14 shows the algorithm results on the full MNIST digits data set comprising 70,000 examples, with a UMAP reduction to 20 dimensions due to the much larger data set size. Here an accuracy of 0.92 is achieved according to the ground truth labels by leveraging merely 0.5% (350) of labelled examples as seeds. Notably, despite the substantial expansion in the number of examples, only a modest number of additional samples was required for satisfactory results. Anomalies are also found within different segments of the primary clusters, warranting further investigation to discern whether they signify false positives or alternative representations of a handwritten digit.

Figure 14: The results of applying the clustering algorithm to the full MNIST digits data set after performing a UMAP transformation. The actual clustering process involved mapping the data to 20 dimensions since the data set is much larger. However, for the purpose of visualisation, the results are presented in a 2-dimensional mapping. The accuracy of the clustering according to the ground truth labels is 0.92.

#### 3.3.2 20 Newsgroups Corpus

The 20 Newsgroups data set in the scikit-learn library can be used as an example of clustering data in the text document space. It contains over 18,000 news articles that are organised into 20 categories. In this specific example, a subset of 6 categories is used. These were selected for their distinctness, resulting in a total of 5,881 examples. To prepare the data, a standard pre-processing procedure is followed, including lemmatisation, creating a tf-idf matrix, and removing stopwords with a minimum document frequency of five. Due to the high dimensionality of the data, UMAP is employed for dimension reduction, as it effectively preserves the relationships between examples. The resulting reduction to 2 dimensions is illustrated in Figure 15, where each example is coloured according to its ground truth category label. Notably, there is significant overlap between clusters, particularly in the central regions, and a small cluster consisting of examples from all categories can be observed on the far top-left of the plot at approximate coordinates (-8,9).

Figure 15: A 2-dimensional UMAP plot of the 20 Newsgroup data set is shown. Each point is coloured according to its ground truth category.

For clustering purposes, the data is mapped to 10 dimensions and a 2% random sample of labelled examples is included to guide the algorithm. This approach incorporates user input to achieve desired outcomes. The clustering results are shown in Figure 16 (displayed over the 2-dimensional mapping). The algorithm successfully recovers most of the six main clusters, achieving an overall accuracy of 0.74 according to the ground truth labels. Moreover, the algorithm flags many examples located between clusters as anomalies, as well as the small dense cluster on the far top-left. This again highlights that the algorithm performs a clustering as opposed to a partitioning of the data points.

Figure 16: The results of applying the proposed semi-supervised clustering algorithm on 6 of the 20 Newsgroup corpus data set categories after a UMAP reduction to 10 dimensions. The accuracy of the clustering according to the ground truth labels is 0.74.

Table 1 provides the top 10 keywords for each of the six clusters, along with the anomalous group. The keywords align well with the category labels, while the anomalous group consists of seemingly random words that do not align well with any specific category.

\begin{table} \begin{tabular}{|c|l|} \hline **Category** & **Keywords** \\ \hline **Anomalies** & helmet, thanks, edu, just, like, list, mail, com, dog, wa \\ \hline **comp.windows.x** & use, display, application, program, motif, widget, thanks, file, server, window \\ \hline **rec.motorcycles** & rider, road, just, like, riding, dod, motorcycle, ride, wa, bike \\ \hline **rec.sport.hockey** & espn, season, playoff, play, year, player, hockey, wa, team, game \\ \hline **sci.crypt** & escrow, use, nsa, algorithm, phone, government, encryption, clipper, chip, key \\ \hline **soc.religion.christian** & christians, bible, people, sin, christ, christian, church, jesus, wa, god \\ \hline **talk.politics.guns** & just, weapon, government, law, did, right, fbi, people, wa, gun \\ \hline \end{tabular} \end{table} Table 1: Top Cluster Keywords
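Returning to the preprocessing described above, the text pipeline can be sketched as follows. The six category names are those of Table 1; the subset selection and the omission of lemmatisation are simplifying assumptions:

```python
import umap  # umap-learn package
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

categories = ["comp.windows.x", "rec.motorcycles", "rec.sport.hockey",
              "sci.crypt", "soc.religion.christian", "talk.politics.guns"]
news = fetch_20newsgroups(subset="all", categories=categories)

# tf-idf matrix with stopword removal and a minimum document frequency of 5.
tfidf = TfidfVectorizer(min_df=5, stop_words="english").fit_transform(news.data)
embedding = umap.UMAP(n_components=10, metric="cosine").fit_transform(tfidf)
```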
## 4 Conclusion

The present work introduces two significant contributions. Firstly, inspired by David Marr's tri-level hypothesis for information processing, it presents a computable definition of a cluster. This definition is derived by analysing elements of human visual clustering and assuming certain constraints deemed true of the real world. A key observation is that a cluster is a collection of points devoid of any anomalies with respect to a given grouping principle and measure. This sheds light on the dual nature of clustering and anomaly detection, highlighting their interdependence and the role of data representations and measures in achieving desired groupings. Secondly, a novel semi-supervised clustering algorithm is proposed whose kernel is based on the anomaly detection work by Mohammad [8]. That work gives a precise computable definition of what constitutes an anomaly, provides a compelling rationale for accepting such a definition, and additionally supplies an efficient algorithm to compute anomalies based on the Euclidean distance from the median. The clustering algorithm operates on numerical data and utilises a small number of labelled examples for initialisation, guidance, and subsequent expansion of clusters. The algorithm handles noisy labelled data by ejecting distinctly incorrect points from the groups. Instead of aiming for a complete partitioning of all the data, the algorithm focuses on clustering the data, thus ensuring that anomalies and unknown clusters are not forcibly assigned to existing clusters. Such instances are left to be analysed in subsequent steps. The results of the semi-supervised clustering algorithm are compared with those of popular unsupervised clustering algorithms, namely \(k\)-means and DBSCAN. The synthetic data examples showcased the algorithm's ability to build clusters even with a relatively small number of examples per cluster, and to accommodate clusters of different densities. In contrast, \(k\)-means and DBSCAN exhibited undesired results in these examples due to their inherent constraints and assumptions. In particular, \(k\)-means gave unintuitive partitions that split clusters or assigned clearly anomalous points to a cluster.
On the other hand, DBSCAN proved to be sensitive to the supplied parameters, with clusters either merged or split because of its inability to adapt locally. The incorporation of dimension reduction techniques, particularly UMAP, in conjunction with semi-supervised clustering has proven to be highly advantageous. This is exemplified through the analysis of the image and text data examples. The employed workflow, which encompasses dimension reduction for visualisation, sample labelling, and clustering, offers significant benefits in the identification of clusters. Notably, it recognises the importance of not forcibly assigning every example, including anomalies and points from unknown clusters, to specific clusters. Instead, the approach allows for a more nuanced treatment of data points, facilitating the identification of clusters while acknowledging the presence of ungrouped instances. These examples also showed that user interaction and sampling remain important even after UMAP visualisation, as points appearing visually close together may not belong to the same group. An example illustrating this phenomenon can be found in the MNIST digits mapping (Figure 13), where the clusters of the digits 8 and 1 exhibit visual proximity while representing separate clusters. Clustering is an interactive and iterative process that combines exploration and analysis. Merely applying an algorithm without understanding the data and the problem space is not recommended, and results should never be accepted blindly. During the exploratory phase, it is important to gain familiarity with the data and establish certain assumptions that constrain solutions, such as the type and number of clusters, the possibility of clusters forming hyperspherical groupings, algorithm parameters, or the labelling of points for seeding and guiding an algorithm. In the presented semi-supervised clustering algorithm, the explicit constraints are taken to be labelled examples as opposed to parameter specification, and implicit is the choice of grouping law and measure. However, note the difference between knowledge of a few labelled examples per cluster and estimating parameters such as the \(k\) in \(k\)-means or \(mps\) and \(eps\) in DBSCAN. In the former case clusters can be gradually built over the labelled subset, and then expanded upon, without requiring knowledge of samples from all the clusters. Furthermore, being parameter-free relieves users of one of the main challenges commonly encountered in unsupervised learning problems. The presented work highlights how all algorithms constrain solutions, thereby reducing the number of possible groupings that can be returned. However, it is postulated that only relatively few grouping principles are of actual interest. These principles and appropriate measures constrain what groupings are acceptable a priori, particularly when formulated through representation and unexpectedness. Hence, a realist position is adopted, where clustering is deemed an objective process as opposed to a purely subjective experience. In future research, the exploration of additional grouping principles and measures, extending beyond proximity and the Euclidean distance to the median, will be undertaken. This expansion aims to enhance the versatility and effectiveness of the clustering algorithm. Moreover, the inherent speed and nature of the algorithm also make it well-suited for online clustering scenarios.
Consequently, an extension of the algorithm to handle the assignment of new points to existing clusters and the identification of anomalies that may potentially form new clusters will be investigated.
2302.13130
Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting
Predicting how the world can evolve in the future is crucial for motion planning in autonomous systems. Classical methods are limited because they rely on costly human annotations in the form of semantic class labels, bounding boxes, and tracks or HD maps of cities to plan their motion and thus are difficult to scale to large unlabeled datasets. One promising self-supervised task is 3D point cloud forecasting from unannotated LiDAR sequences. We show that this task requires algorithms to implicitly capture (1) sensor extrinsics (i.e., the egomotion of the autonomous vehicle), (2) sensor intrinsics (i.e., the sampling pattern specific to the particular LiDAR sensor), and (3) the shape and motion of other objects in the scene. But autonomous systems should make predictions about the world and not their sensors. To this end, we factor out (1) and (2) by recasting the task as one of spacetime (4D) occupancy forecasting. But because it is expensive to obtain ground-truth 4D occupancy, we render point cloud data from 4D occupancy predictions given sensor extrinsics and intrinsics, allowing one to train and test occupancy algorithms with unannotated LiDAR sequences. This also allows one to evaluate and compare point cloud forecasting algorithms across diverse datasets, sensors, and vehicles.
Tarasha Khurana, Peiyun Hu, David Held, Deva Ramanan
2023-02-25T18:12:37Z
http://arxiv.org/abs/2302.13130v3
# Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting ###### Abstract Predicting how the world can evolve in the future is crucial for motion planning in autonomous systems. Classical methods are limited because they rely on costly human annotations in the form of semantic class labels, bounding boxes, and tracks or HD maps of cities to plan their motion -- and thus are difficult to scale to large unlabeled datasets. One promising self-supervised task is 3D point cloud forecasting [18, 11, 19, 20] from unannotated LiDAR sequences. We show that this task requires algorithms to implicitly capture (1) sensor extrinsics (i.e., the egomotion of the autonomous vehicle), (2) sensor intrinsics (i.e., the sampling pattern specific to the particular LiDAR sensor), and (3) the shape and motion of other objects in the scene. But autonomous systems should make predictions about the world and not their sensors! To this end, we factor out (1) and (2) by recasting the task as one of spacetime (4D) occupancy forecasting. But because it is expensive to obtain ground-truth 4D occupancy, we "render" point cloud data from 4D occupancy predictions given sensor extrinsics and intrinsics, allowing one to train and test occupancy algorithms with unannotated LiDAR sequences. This also allows one to evaluate and compare point cloud forecasting algorithms across diverse datasets, sensors, and vehicles. ## 1 Introduction Motion planning in a dynamic environment requires autonomous agents to predict the motion of other objects. Standard solutions consist of perceptual modules such as mapping, object detection, tracking, and trajectory forecasting. Such solutions often rely on human annotations in the form of HD maps of cities, or semantic class labels, bounding boxes, and object tracks, and therefore are difficult to scale to large unlabeled datasets. One promising _self-supervised_ task is 3D point cloud forecasting [18, 11, 19, 20]. Since points appear where lasers from the sensor and scene intersect, the task of forecasting point clouds requires algorithms to implicitly capture (1) sensor extrinsics (_i.e._, the ego-motion of the autonomous vehicle), (2) sensor intrinsics (_i.e._, the sampling pattern specific to the LiDAR sensor), and (3) the shape and motion of other objects in the scene. This task can be non-trivial even in a static scene (Fig. 2). We argue that autonomous systems should focus on making predictions about the world and not themselves, since an ego-vehicle has access to its future motion plans (extrinsics) and calibrated sensor parameters (intrinsics). We factor out these (1) sensor extrinsics and (2) intrinsics by recasting the task of point cloud forecasting as one of spacetime (4D) occupancy forecasting. This disentangles and simplifies the formulation of point cloud forecasting, which now focuses solely on forecasting the central quantity of interest, the 4D occupancy. Because it is expensive to obtain ground-truth 4D occupancy, we "render" point cloud data from 4D occupancy predictions given sensor extrinsics and intrinsics. In some ways, our approach can be seen as the spacetime analog of novel-view synthesis from volumetric models such as NeRFs [12]; rather than rendering images by querying a volumetric model with rays from a known camera view, we render a LiDAR scan by querying a 4D model with rays from known sensor intrinsics and extrinsics. This allows one to train and test 4D occupancy forecasting algorithms with un-annotated LiDAR sequences. 
This also allows one to evaluate and compare point cloud forecasting algorithms across diverse datasets, sensors, and vehicles. We find that our approach to 4D occupancy forecasting, which can also render point clouds, performs drastically better than SOTAs in point cloud forecasting, both quantitatively (by up to 3.26m L1 error, Tab. 1) and qualitatively (Fig. 6). Our method beats prior art with zero-shot cross-sensor generalization (Tab. 2). To our knowledge, these are the first results that generalize across train/test sensor rigs, illustrating the power of disentangling sensor motion from scene motion.

Figure 2: Points depend on the intersection of rays from the depth sensor and the environment. Therefore, accurately predicting points requires accurately predicting sensor extrinsics (sensor egomotion) and intrinsics (ray sampling pattern). But we want to understand the dynamics of the environment, not our LiDAR sensor!

## 2 Related Work

**Point Cloud Forecasting.** As one of the most promising self-supervised tasks that exploit unannotated LiDAR sequences, point cloud forecasting [11, 18, 19, 20] provides the algorithm past point clouds as input and asks it to predict future point clouds as output. Traditionally, both the input and the output are defined in the sensor coordinate frame, which moves with time. Although this simplifies preprocessing by eliminating the need for a local alignment, it forces the algorithm to implicitly capture (1) sensor extrinsics (i.e., the egomotion of the autonomous vehicle), (2) sensor intrinsics (i.e., the sampling pattern specific to the particular LiDAR sensor), and (3) the shape and motion of other objects in the scene. We argue that autonomous systems should make predictions about the world and not their sensors. In this paper, we reformulate point cloud forecasting by factoring out sensor extrinsics and intrinsics. Concretely, the new setup asks the algorithm to estimate the depth for rays from future timestamps. We show that one could use it as a proxy for training and testing 4D occupancy forecasting algorithms. Moreover, we demonstrate that one can evaluate existing point cloud forecasting methods under this setup, allowing 4D occupancy forecasting algorithms to be compared with point cloud forecasting algorithms.

**Occupancy Forecasting.** Occupancy, as a predictive representation complementary to standard object-centric representations in the context of supporting downstream motion planning, has gained popularity over the last few years due to its efficiency in representing complex scenarios and interactions. Most existing works on occupancy forecasting focus on _semantic_ occupancy grids from a bird's-eye view (BEV) [5, 10, 16]. They choose to focus on 2D for a good reason, since most autonomous driving planners reason in a 2D BEV space. A downside is that it is expensive to obtain ground-truth _semantic_ BEV occupancy for training and testing algorithms. [7] claim that if we reduce our goal from _semantic_ occupancy to _geometric_ occupancy, that is, knowing whether a location is occupied without asking which type of object is occupying it, one could learn to forecast _geometric_ BEV occupancy from unannotated LiDAR sequences. In this paper, we take the idea from [7] and go beyond BEV: we propose an approach to learning to forecast 4D _geometric_ occupancy from unannotated LiDAR sequences. We also propose a scalable evaluation for this task that admits standard point cloud forecasting methods.

**Novel View Synthesis.** We have seen tremendous progress in novel view synthesis in the last few years [9, 12, 13]. At its core, the differentiable nature of volumetric rendering allows one to optimize the underlying 3D structure of the scene by fitting samples of observations with known sensor poses, without explicit 3D supervision.
Our work can be thought of as novel view synthesis, where we try to synthesize depth images from novel views at future timestamps. Thanks to motion sensors (e.g., an IMU), one can assume that the relative LiDAR pose among frames in a log can be reliably estimated. Our work also differs from the common novel view synthesis literature in a few important aspects: (a) we use an efficient feed-forward network to predict the spacetime occupancy volume instead of applying test-time optimization; (b) we optimize an explicit volumetric scene representation (i.e., an occupancy grid) instead of an implicit neural scene representation; (c) our approach relies on shape and motion priors learned across diverse scenarios in order to predict what happens next, instead of reconstructing based on samples only from a specific scenario.

## 3 Method

Figure 3: High-level overview of the approach we follow, closely inspired by a prior work [7]. Instead of directly predicting future point clouds by observing a set of historical point clouds, we take a geometric perspective on this problem and instead forecast a generic intermediate 3D occupancy-like quantity within a bounded volume. Known sensor extrinsics and intrinsics are an input to our method, which is different from how classical point cloud forecasting is formulated. We argue that this factorization is sensible as an autonomous agent plans its own motion and has access to sensor information. Please refer to our appendix for architectural details.

Autonomous fleets log an abundance of unannotated sequences of LiDAR point clouds \(\mathbf{X}_{-T:T}\), where we also estimate the relative sensor location for each frame \(\mathbf{o}_{-T:T}\). Suppose we split such a sequence into a historic part \(\mathbf{X}_{-T:0}\) and \(\mathbf{o}_{-T:0}\) and a future part \(\mathbf{X}_{1:T}\) and \(\mathbf{o}_{1:T}\). Standard point cloud forecasting methods, denoted by function \(g\), take the historical sequence of point clouds \(\mathbf{X}_{-T:0}\) as input and try to predict the future sequence of point clouds \(\hat{\mathbf{X}}_{1:T}\). \[\hat{\mathbf{X}}_{1:T}=g(\mathbf{X}_{-T:0}) \tag{1}\] To introduce our approach, we need to first re-parametrize a point from the future LiDAR point cloud, say \(\mathbf{x}\in\mathbf{X}_{t}\) where \(t=1\dots T\), as a ray that starts from the sensor location \(\mathbf{o}_{t}\), travels along the direction \(\mathbf{d}\), and reaches the end point \(\mathbf{x}\) after a distance of \(\lambda\): \[\mathbf{x}=\mathbf{o}_{t}+\lambda\mathbf{d},\quad\mathbf{x}\in\mathbf{X}_{t} \tag{2}\] Conceptually, our approach, denoted by function \(f\), takes a ray from a future timestamp \(t\), parametrized by its origin and direction \((\mathbf{o}_{t},\mathbf{d})\), and tries to predict the distance \(\hat{\lambda}\) the ray would travel, based on the historic sequence of point clouds \(\mathbf{X}_{-T:0}\) and sensor locations \(\mathbf{o}_{-T:0}\). \[\hat{\lambda}=f(\mathbf{o}_{t},\mathbf{d};\mathbf{X}_{-T:0},\mathbf{o}_{-T:0}) \tag{3}\] Intuitively, Eq. (3) is similar to view synthesis in NeRF [12], except that we are computing expected depth rather than expected color. Below, we introduce how we formulate the differentiable volumetric rendering process and use it for learning to forecast 4D occupancy.

**Spacetime (4D) occupancy.** We define spacetime occupancy as the occupied state of a 3D location at a particular time instance. We use \(\mathbf{z}\) to denote the true spacetime occupancy, which may not be directly observable due to line-of-sight visibility constraints. Consider a bounded spatial-temporal 4D volume, \(\mathcal{V}\), which is discretized into spacetime voxels \(\mathbf{v}\). We can use \[\mathbf{z}[\mathbf{v}]\in\{0,1\},\quad\mathbf{v}=(x,y,z,t),\quad\mathbf{v}\in\mathcal{V} \tag{4}\] to represent the occupancy of voxel \(\mathbf{v}\) in the spacetime voxel grid \(\mathcal{V}\), which can be _occupied_ (1) or _free_ (0). In practice, we learn an occupancy prediction network \(h\) (parametrized by \(\mathbf{w}\)) to predict discretized spacetime 4D occupancy given the historic sequence of point clouds and sensor locations, \[\hat{\mathbf{z}}=h(\mathbf{X}_{-T:0},\mathbf{o}_{-T:0};\mathbf{w}) \tag{5}\] where \[\hat{\mathbf{z}}[\mathbf{v}]\in\mathbb{R}_{[0,1]} \tag{6}\] represents the predicted occupancy of voxel \(\mathbf{v}\) in the spacetime voxel grid \(\mathcal{V}\). Please refer to the appendix for network architecture details.
In practice, we learn an occupancy predictio network \(h\) (parametrized by \(\mathbf{w}\)) to predict discretized spacetime 4D occupancy given historic sequence of point clouds and sensor locations, \[\hat{\mathbf{z}}=h(\mathbf{X}_{-T:0},\mathbf{o}_{-T:0};\mathbf{w}) \tag{5}\] Figure 3: High-level overview of the approach we follow, closely inspired by a prior work [7]. Instead of directly predicting future point clouds by observing a set of historical point clouds, we take a geometric perspective on this problem and instead forecast a generic intermediate 3D occupancy-like quantity within a bounded volume. Known sensor extrinsics and intrinsics are an input to our method, which is different from how classical point cloud forecasting is formulated. We argue that this factorization is sensible as an autonomous agent plans its own motion and has access to sensor information. Please refer to our appendix for architectural details. where \[\hat{\mathbf{z}}[\mathbf{v}]\in\mathbb{R}_{[0,1]} \tag{6}\] represents the predicted occupancy of voxel \(\mathbf{v}\) in the space-time voxel grid \(\mathcal{V}\). Please refer to the appendix for network architecture details. Depth rendering from occupancyGiven a ray query \(\mathbf{x}=\mathbf{o}+\lambda\mathbf{d}\), our goal is to predict \(\hat{\lambda}\) as close to \(\lambda\) as possible. We first compute how it intersects with the occupancy grid by voxel traversal [2] (Fig. 4). Suppose the ray intersects with a list of voxels \(\{\mathbf{v}_{1}\dots\mathbf{v}_{n}\}\). We discretize the ray space by assuming that a ray can only stop at voxel boundaries or infinity. We interpret occupancy of voxel \(\mathbf{v}_{i}\) as the conditional probability that a ray leaving voxel \(\mathbf{v}_{i-1}\) would stop in voxel \(\mathbf{v}_{i}\). We can write \[p_{i}=\prod_{j=1}^{i-1}(1-\hat{\mathbf{z}}[\mathbf{v}_{j}])\hat{\mathbf{z}}[ \mathbf{v}_{i}] \tag{7}\] where \(p_{i}\) represents the probability that a ray stops in voxel \(\mathbf{v}_{i}\). Now we can render the distance by computing the stopping point in expectation. \[\hat{\lambda}=f(\mathbf{o},\mathbf{d})=\sum_{i=1}^{n}p_{i}\hat{\lambda}_{i} \tag{8}\] where \(\hat{\lambda}_{i}\) represents the stopping distance at voxel \(\mathbf{v}_{i}\). You may have noticed that Eq. (8) does not capture the case where the ray stops outside the voxel grid, where the stopping distance is ill-defined (it will stop at infinity). During training, we allow a virtual stopping point outside the grid at the ground-truth location, i.e., \[\hat{\lambda}=f(\mathbf{o},\mathbf{d})=\sum_{i=1}^{n}p_{i}\hat{\lambda}_{i}+ \prod_{i=1}^{n}(1-p_{i})\hat{\lambda}_{n+1} \tag{9}\] where \(\hat{\lambda}_{n+1}=\lambda\). Loss functionWe can train the occupancy prediction network with a simple L1 loss between the rendered depth \(\hat{\lambda}\) and the ground-truth depth \(\lambda\). \[L(\mathbf{w})=\sum_{(\mathbf{o},\lambda,\mathbf{d})\in(X_{1:T},\mathbf{o}_{1 :T})}|\lambda-f(\mathbf{o},\mathbf{d};\mathbf{X}_{-T:0},\mathbf{o}_{-T:0}, \mathbf{w})| \tag{10}\] ## 4 Evaluation The golden standard for evaluating 4D occupancy forecasting would be to compare the predicted occupancy with the ground-truth, but because it is extremely expensive to obtain ground-truth 4D occupancy, we "render" future point clouds from forecasted 4D occupancy with known sensor intrinsics and extrinsics, use the quality of rendered future point clouds as a proxy for that of forecasted 4D occupancy. 
**Loss function.** We can train the occupancy prediction network with a simple L1 loss between the rendered depth \(\hat{\lambda}\) and the ground-truth depth \(\lambda\), \[L(\mathbf{w})=\sum_{(\mathbf{o},\lambda,\mathbf{d})\in(\mathbf{X}_{1:T},\mathbf{o}_{1:T})}|\lambda-f(\mathbf{o},\mathbf{d};\mathbf{X}_{-T:0},\mathbf{o}_{-T:0},\mathbf{w})| \tag{10}\]

## 4 Evaluation

The gold standard for evaluating 4D occupancy forecasting would be to compare the predicted occupancy with the ground truth, but because it is extremely expensive to obtain ground-truth 4D occupancy, we "render" future point clouds from forecasted 4D occupancy with known sensor intrinsics and extrinsics, and use the quality of the rendered future point clouds as a proxy for that of the forecasted 4D occupancy. We introduce a new evaluation, where we factor out sensor intrinsics and extrinsics such that algorithms can be evaluated solely on how well they capture how the scene unfolds. We provide future rays as queries and ask algorithms to provide a depth estimate for each query. Given a query ray \(\overrightarrow{OQ}\), there is a prediction ray \(\overrightarrow{OP}\), where \(O\) represents the origin, \(Q\) represents the ground-truth end point, and \(P\) represents the predicted end point: \[\overrightarrow{OQ}=\mathbf{o}+\lambda\mathbf{d} \tag{11}\] \[\overrightarrow{OP}=\mathbf{o}+\hat{\lambda}\mathbf{d} \tag{12}\] Given such a pair of rays, we define the error \(\varepsilon\): \[\varepsilon=|\overrightarrow{OQ}-\overrightarrow{OP}|=|\overrightarrow{PQ}|=|\lambda-\hat{\lambda}| \tag{13}\]

**Near-field error.** Since LiDAR rays only travel through freespace and terminate when reaching an occupied surface, there is a physical meaning behind the \(\varepsilon\) in Eq. (13). In practice, occupancy and freespace prediction is only relevant in regions that are reachable by the autonomous vehicle within the planning time horizon. To reflect the focus on the reachable regions, we propose an operation to clamp any given ray \(\overrightarrow{XY}\) to the fixed volume \(\mathcal{V}\). We call it _ray clamping_, denoted as \(\phi_{\mathcal{V}}:\overrightarrow{XY}\rightarrow\overrightarrow{X^{\prime}Y^{\prime}}\) and illustrated in Fig. 5.

Figure 5: Ray Clamping. First, we move the origin towards the end point until the origin touches the volume or infinity. Then, we move the end point towards the origin until the end point touches the volume or infinity. At all times, we make sure the end point stays ahead of the origin (like two rings on a string). Being inside the volume counts as touching it.

We define the near-field (bounded by volume \(\mathcal{V}\)) prediction error \(\varepsilon_{\mathcal{V}}\) as \[\varepsilon_{\mathcal{V}}=|\phi_{\mathcal{V}}(\overrightarrow{OQ})-\phi_{\mathcal{V}}(\overrightarrow{OP})|=|\overrightarrow{O^{\prime}Q^{\prime}}-\overrightarrow{O^{\prime}P^{\prime}}|=|\overrightarrow{P^{\prime}Q^{\prime}}| \tag{14}\] Even though this metric penalizes disagreements of predicted depth along query rays within the bounded volume, it does not capture the severity of a prediction error. In the real world, one meter of error close to the AV matters more. To this end, we also propose a relative near-field prediction error \(\varepsilon_{\mathcal{V}}^{rel}\) defined as \[\varepsilon_{\mathcal{V}}^{rel}=\frac{|\phi_{\mathcal{V}}(\overrightarrow{OQ})-\phi_{\mathcal{V}}(\overrightarrow{OP})|}{|\overrightarrow{OQ}|}=\frac{|\overrightarrow{P^{\prime}Q^{\prime}}|}{|\overrightarrow{OQ}|} \tag{15}\]
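Since the query and prediction rays share an origin and direction, both near-field metrics only require clamping the two depths of each ray to the bounded volume. A minimal sketch follows, assuming an axis-aligned volume, a unit direction vector with nonzero components, and a slab intersection test (a simplification of the two-step procedure of Fig. 5):

```python
import numpy as np

def clamp_depth(o, d, lam, vmin, vmax):
    """Depth along unit ray (o, d) after ray clamping to the
    axis-aligned volume [vmin, vmax]; None if the ray misses it."""
    t0, t1 = (vmin - o) / d, (vmax - o) / d
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    if t_near > t_far or t_far < 0:
        return None  # ray never enters the volume
    return float(np.clip(lam, max(t_near, 0.0), t_far))

def near_field_errors(o, d, lam_gt, lam_pred, vmin, vmax):
    q = clamp_depth(o, d, lam_gt, vmin, vmax)
    p = clamp_depth(o, d, lam_pred, vmin, vmax)
    eps = abs(q - p)          # Eq. (14)
    return eps, eps / lam_gt  # Eq. (15): relative near-field error
```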
However, for point cloud forecasting algorithms, the number of predicted points does not necessarily match the number of ground-truth rays, and there is no one-to-one mapping between predicted and ground-truth points. To resolve this discrepancy, we propose to fit a surface to the predicted point clouds, on which we can query each ground-truth ray, find its intersection with the fitted surface, and output the (clamped) ray distance. In practice, we interpolate depth among the spherical projections of predicted rays. We also consider the vanilla chamfer distance \(d\) (16) and the near-field chamfer distance \(d_{\mathcal{V}}\) (17): \[d=\frac{1}{2N}\sum_{\mathbf{x}\in\mathbf{X}}\min_{\mathbf{\hat{x}}\in\mathbf{\hat{X}}}||\mathbf{x}-\mathbf{\hat{x}}||_{2}^{2}+\frac{1}{2M}\sum_{\mathbf{\hat{x}}\in\mathbf{\hat{X}}}\min_{\mathbf{x}\in\mathbf{X}}||\mathbf{x}-\mathbf{\hat{x}}||_{2}^{2} \tag{16}\] where \(\mathbf{X}\), \(\mathbf{\hat{X}}\) represent the ground-truth and predicted point clouds; \(N\) and \(M\) are their respective numbers of points. \[d_{\mathcal{V}}=\frac{1}{2N^{\prime}}\sum_{\mathbf{x}\in\mathbf{X}_{\mathcal{V}}}\min_{\mathbf{\hat{x}}\in\mathbf{\hat{X}}_{\mathcal{V}}}||\mathbf{x}-\mathbf{\hat{x}}||_{2}^{2}+\frac{1}{2M^{\prime}}\sum_{\mathbf{\hat{x}}\in\mathbf{\hat{X}}_{\mathcal{V}}}\min_{\mathbf{x}\in\mathbf{X}_{\mathcal{V}}}||\mathbf{x}-\mathbf{\hat{x}}||_{2}^{2} \tag{17}\] where \(\mathbf{X}_{\mathcal{V}}\), \(\mathbf{\hat{X}}_{\mathcal{V}}\) represent the ground-truth and predicted point clouds within the bounding volume \(\mathcal{V}\); \(N^{\prime}\) and \(M^{\prime}\) are their respective numbers of points. ## 5 Experiments **Datasets.** We perform experiments on nuScenes [4], KITTI-Odometry [3, 6] and ArgoVerse2.0 [20]. nuScenes [4] is a full-suite autonomous driving dataset with a total of 1,000 real-world driving sequences of 15s each. KITTI [6] is also a multi-sensor dataset with 6 hours of diverse driving data across freeways and urban areas. KITTI-Odometry is a subset of this KITTI dataset where sequences have accurate sensor poses. ArgoVerse2.0 [20] contains the largest set of unannotated LiDAR sequences. Please see the appendix for results on ArgoVerse2.0. **Setup.** We consider a bounded area around the autonomous vehicle: -70m to 70m in the x-axis, -70m to 70m in the y-axis and -4.5m to 4.5m in the z-axis in the nuScenes coordinate system. This is our 4D volume \(\mathcal{V}\), described in Sec. 3. We follow the state-of-the-art in point cloud forecasting and evaluate forecasting over a 1 second horizon and a 3 second horizon. We adopt the same setup as prior methods [18, 19]. On nuScenes, for 1s forecasting, we take 2 frames of input and 2 frames of output at 2Hz; for 3s forecasting, we take 6 frames of input and 6 frames of output at 2Hz. For all other datasets, we always take 5 frames of input and 5 frames of output for both 1s and 3s forecasting. **Baselines.** First, we construct an aggregation-based ray-tracing baseline (similar to [11]). Specifically, we populate a binary occupancy grid given the aligned LiDAR point clouds from the past and present timesteps and use it for querying ground-truth rays. In addition to this, we compare our 4D occupancy forecasting approach to state-of-the-arts (SOTAs) in point cloud forecasting, including SPFNet [19] and S2Net [18] on the nuScenes dataset, and ST3DCNN [11] on the KITTI-Odometry dataset. For SPFNet [19] and S2Net [18], we are able to obtain the raw point cloud predictions from the authors and evaluate the results on the new metrics.
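For concreteness, the two chamfer variants of Eqs. (16)-(17) can be sketched in a few lines (a brute-force \(O(NM)\) version for clarity; it assumes the filtered sets are non-empty):

```python
import numpy as np

def chamfer(X, Xh):
    """Symmetric chamfer distance of Eq. (16); X is (N, 3), Xh is (M, 3)."""
    d2 = ((X[:, None, :] - Xh[None, :, :]) ** 2).sum(-1)   # pairwise squared dists
    return 0.5 * d2.min(axis=1).mean() + 0.5 * d2.min(axis=0).mean()

def near_field_chamfer(X, Xh, box_min, box_max):
    """Eq. (17): restrict both clouds to the bounding volume V first."""
    keep = lambda P: P[np.all((P >= box_min) & (P <= box_max), axis=1)]
    return chamfer(keep(X), keep(Xh))

X = np.random.randn(500, 3)
Xh = X + 0.01 * np.random.randn(500, 3)
print(chamfer(X, Xh), near_field_chamfer(X, Xh, -np.ones(3), np.ones(3)))
```

In practice one would use a KD-tree or GPU nearest-neighbor search instead of the dense pairwise matrix.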
For fair comparison, the S2Net results are based on a single sample from their VAE. We retrain ST3DCNN [11] models for 1s and 3s forecasting. In addition, the state-of-the-art approaches (barring ST3DCNN) tend to predict a confidence score for each point, indicating how valid the predicted point is; we evaluate the predicted point clouds both with and without confidence filtering, with a recommended confidence threshold of 0.05 [18, 19]. Quantitative and qualitative results with confidence filtering can be found in the appendix. ### Re-evaluate state-of-the-arts **Qualitative results on nuScenes.** We compare the forecasted point clouds from our 4D occupancy forecasting approach to SOTA on point cloud forecasting in Fig. 6, where we see a drastic difference in how the predicted point clouds look. Our forecasts look significantly more representative of the scene geometry compared to SOTA. This demonstrates the benefit of learning to forecast spacetime 4D occupancy with sensor intrinsics and extrinsics factored out. Surprisingly, we find that aggregation-based raytracing is a competitive baseline, qualitatively better than the SOTA. However, in addition to this _aggregation_, our approach is also able to hallucinate or _spacetime-complete_ both the future motion of dynamic objects and the occluded parts of the static world. We also visualize the 3D forecasted occupancy at corresponding timestamps that our approach predicts "for free". Please refer to the caption for more details. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Horizon} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Distance (\(m^{2}\))} \\ & & & & Near-field & Vanilla \\ \hline \multirow{2}{*}{S2Net [18]} & 1s & 3.49 & 28.38 & 1.70 & 2.75 \\ & 3s & 4.78 & 30.15 & 2.06 & **3.47** \\ \hline \multirow{2}{*}{SPFNet [19]} & 1s & 4.58 & 34.87 & 2.24 & 4.17 \\ & 3s & 5.11 & 32.74 & 2.50 & 4.14 \\ \hline \multirow{2}{*}{Ray tracing} & 1s & 1.50 & 14.73 & **0.54** & **0.90** \\ & 3s & 2.44 & 26.86 & 1.66 & 3.59 \\ \hline \multirow{2}{*}{Ours} & 1s & **1.40** & **10.37** & 1.41 & 2.81 \\ & 3s & **1.71** & **13.48** & **1.40** & 4.31 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on nuScenes [4]. We see that the conclusions made from the proposed metrics are more in line with the qualitative results in Fig. 6. This reiterates the need for metrics that intuitively evaluate the underlying _geometry_ of the scene instead of uncorrelated samples of the scene (e.g., points in space). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}{c} Train \\ set \\ \end{tabular}} & \multirow{2}{*}{Horizon} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Dist. (\(m^{2}\))} \\ & & & & & Near-field & Vanilla \\ \hline \multirow{2}{*}{ST3DCNN [11]} & & 1s & 3.49 & 28.38 & 1.70 & 2.75 \\ & & 3s & 4.78 & 30.15 & 2.06 & **3.47** \\ \hline \multirow{2}{*}{Ours} & & 1s & **1.12** & **9.09** & **0.51** & **0.61** \\ & & 3s & **1.45** & **12.23** & **0.96** & **1.50** \\ \hline \hline \multirow{2}{*}{Ray tracing} & - & 1s & **1.50** & 16.15 & **0.62** & **0.76** \\ & & 3s & 2.82 & 29.67 & **4.01** & 5.92 \\ \hline \multirow{2}{*}{Ours} & & 1s & 1.71 & **14.85** & 2.52 & 3.18 \\ & & 3s & **2.82** & **23.87** & 4.83 & **5.79** \\ \hline \hline \multirow{2}{*}{Ours} & & 1s & 1.25 & 9.69 & 1.95 & 2.27 \\ & & 3s & 1.70 & 14.09 & 4.09 & 5.09 \\ \hline \multirow{2}{*}{Ours} & _AV2_ + & 1s & **1.19** & **9.30** & **0.54** & **0.64** \\ & & 3s & **1.67** & **13.40** & **1.24** & **1.80** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance as a function of the available target dataset (in this case, KITTI-Odometry). With access to all of KITTI-O (**top**), our method outperforms the SOTA. With no access to KITTI-O (_i.e._ zero-shot sensor generalization in the **middle**), our method trained on AV2 outperforms the ray tracing baseline at 3s, though the baseline fares well at 1s. Note that both approaches still beat the SOTA [11] by a large margin. Finally, with access to only 20% of KITTI-O (**bottom**), our method fares quite well, particularly when trained on both AV2 and KITTI-O. Cross-dataset generalization and training is made possible by disentangling sensor intrinsics/extrinsics from scene motion. Figure 6: Qualitative results. We compare the point cloud forecasts of S2Net [18], SPFNet [19] and the raytracing baseline on the nuScenes dataset with our approach on three different sequences at different time horizons. Our forecasts look significantly crisper than the SOTA. This demonstrates the benefit of learning to forecast spacetime 4D occupancy with sensor intrinsics and extrinsics factored out. We also visualize the forecasted 4D occupancy at the corresponding future timestamp. As compared to simple _aggregation_-based raytracing, we are able to _spacetime-complete_ 4D scenes. We highlight some potential applications in Fig. 7 and Fig. 8. We visualize a render of the predicted occupancy and the color encodes height along the z-axis. **Results on nuScenes with new metrics.** We compare our 4D occupancy forecasting to SOTA on point cloud forecasting in terms of depth error along the future rays, following the evaluation protocol outlined in Sec. 4. We find that the 4D occupancy forecasting approach outperforms all baselines by significant margins in both 1s and 3s forecasting, reducing both the L1 and the absolute relative error by more than half compared to the state-of-the-art methods on point cloud forecasting. The improvements here are consistent with the qualitative results in Fig. 6. As noted before, the raytracing baseline performs better than SOTA. **Results on nuScenes with old metrics.** We also evaluate by both the vanilla (16) and near-field chamfer distance (17) following the protocol in Sec. 4. Our approach shines in terms of near-field chamfer distance. One contributing factor could be that our approach is specifically optimized for capturing occupancy evolution in the near field.
In addition, S2Net [18] outperforms us in terms of vanilla chamfer distance, which is not surprising since we are incapable of deciding where rays end outside the predefined voxel grid. **Results on KITTI-Odometry.** Next, we use KITTI-Odometry to test our method in different settings with limited access to the target dataset. This mimics the setting where a next-generation sensor platform may be gradually integrated into fleet operations. Tab. 2 shows that with access to the full target dataset (KITTI-Odometry) for training, our method resoundingly outperforms the SOTA ST3DCNN [11]. Next, if no samples from the target dataset are available, one can employ either a non-learnable method such as our raytracing baseline, or one may pretrain on a (large) dataset with a different sensor platform. To this end, we find that our method trained on ArgoVerse2.0 outperforms the SOTA on KITTI-Odometry, while also outperforming the raytracing baseline for long-horizon (3s) forecasting. Finally, with access to only 20% of KITTI-Odometry, our method pretrained on ArgoVerse2.0 and finetuned on KITTI-Odometry outperforms the alternatives. _To our knowledge, these are the first results in sensor transfer/generalization that illustrate the power of disentangling sensor extrinsics/intrinsics from scene motion._ Please see qualitative results in the appendix. ### Architecture ablations Here, we explore two other variants of our architecture: a _static_ variant that predicts a single voxel grid for all future timesteps, and a _residual_ variant that predicts a single static voxel grid with residual voxel grids for each output timestep. We evaluate these variants on nuScenes. The main observation is that the static variant is a powerful baseline for short-horizon forecasting. This is because a single voxel grid serves as a dense static map of the local region, and since the vast majority of the world remains static, this is expected to be a reasonable baseline for short-horizon forecasting. Note that this variant is still stronger than the ray tracing baseline in Tab. 1 because of its ability to hallucinate occluded parts of the world. On the other hand, the proposed _dynamic_ variant (which predicts one voxel grid per future timestep) performs the best at long-horizon forecasting. With the residual variant, our hope was to separate dynamic scene elements from static regions, but in practice this decomposition fails as there is not enough regularization to force motion-based separation. Since the static variant outperforms the state-of-the-art on 1s forecasting, we analyse these variants further in the appendix by using the segmentation annotations on nuScenes-LiDARSeg [1] and computing the proposed metrics separately on foreground and background points. This helps us understand which regions in the scene contribute the least to the performance of these variants. ### Applications **Generalization across sensors.** In Fig. 7 (captioned as novel intrinsic-view synthesis), we show how one can render point clouds as if they were captured by different LiDAR sensors from the same predicted future occupancy. Typically, different LiDAR sensors exhibit different ray patterns when sensing. For the case shown, the nuScenes LiDAR is an "in-domain" sensor, i.e., the occupancy grid was predicted by a network learned over LiDAR sweeps captured by a nuScenes LiDAR. The KITTI and ArgoVerse LiDARs are "out-of-domain".
We hope that learning such a generic representation allows methods in sensor domain transfer [8, 21] to look at the task from the perspective of spacetime 4D occupancy. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Arch.} & \multirow{2}{*}{Horizon} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Distance (\(m^{2}\))} \\ & & & & Near-field & Vanilla \\ \hline \multirow{2}{*}{S} & 1s & **1.28** & **9.27** & 1.03 & 3.41 \\ & 3s & 1.73 & 13.54 & **1.40** & 3.73 \\ \hline \multirow{2}{*}{D} & 1s & 1.40 & 10.37 & 1.41 & **2.81** \\ & 3s & **1.71** & **13.48** & **1.40** & 4.31 \\ \hline \multirow{2}{*}{S+R} & 1s & 1.34 & 9.73 & **1.00** & 3.20 \\ & 3s & 1.82 & 13.84 & 1.52 & **3.54** \\ \hline \hline \end{tabular} \end{table} Table 3: We evaluate two variants of the proposed dynamic (D) architecture using the geometry forecasting metrics - static (S) and residual (S+R). We find that the static variant is a powerful baseline that beats our dynamic approach for 1s forecasting and by extension, the state-of-the-art. The formulation we have laid out also makes it easy to train across different datasets, making zero-shot cross-dataset transfer possible for LiDARs [14, 15]. In the previous section and in Tab. 2, we highlight the first result in this direction, where our method trained on the ArgoVerse2.0 dataset, when tested on KITTI-Odometry, beats the prior art [11] on KITTI-Odometry. Furthermore, our proposed disentangling also allows for multi-dataset training, for which we point the readers to the appendix. **Novel view synthesis.** In Fig. 8 (captioned as novel extrinsic-view synthesis), we show dense depth maps rendered from our learnt occupancy grid using novel ego-vehicle trajectories or viewpoints. Such dense depth of a scene cannot be obtained from existing LiDAR sensors, which return sparse observations of the world. Although classical depth completion [17] from sparse LiDAR input exists as a single-frame (current timestep) task, here we note that with our representation, it is possible to densify sparse LiDAR point clouds from the _future_, with such rendered depth maps backprojected into 3D. This dense 360\({}^{\circ}\) depth is evaluated on sparse points (with the help of future LiDAR returns) by our proposed ray-based evaluation metrics. ## 6 Conclusion In this paper, we propose looking at point cloud forecasting through the lens of geometric occupancy forecasting, which is an emerging self-supervised task [7], originally set in the birds'-eye-view but extended to full 3D through this work. We advocate that this shift in viewpoint is necessary for two reasons. First, this shift helps algorithms focus on a generic intermediate representation of the world, i.e. its spatiotemporal 4D occupancy, which has great potential for downstream tasks. Second, this "renovates" how we formulate self-supervised LiDAR point cloud forecasting [11, 18, 19] by factoring out sensor extrinsics and intrinsics from the learning of shape and motion of different scene elements. In the end, we reiterate that the two tasks in discussion are surprisingly connected. We propose an evaluation protocol that unifies the two worlds and focuses on a scalable evaluation for predicted geometry. Figure 8: **Novel extrinsic-view synthesis** Dense depth maps rendered from the predicted future 4D occupancy from novel viewpoints. To render these depth maps, we take a novel future trajectory of the egovehicle.
Placing the camera at each of these locations, always facing forward into the voxel grid (shown as the dotted red future trajectory on the left), gives us a camera coordinate system in which we can shoot rays from the camera center to every pixel in the image, and further beyond into the 4D occupancy volume. Every pixel represents the expected depth along its ray. The RGB image at \(t=0s\) is shown as reference and is not used in this rendering. For the depth maps, darker is closer, brighter is farther. Depth on sky regions is untrustworthy as no returns are received for this region from the LiDAR sensor. Figure 7: **Novel intrinsic-view synthesis** We show how to simulate different LiDAR ray patterns on top of the same learned occupancy grid. In this case, the future occupancy is predicted with historic LiDAR data scanned by the nuScenes LiDAR (Velodyne HDL32E). First, we show the rendered point cloud under the native setting. Then, we show the rendered point cloud for the KITTI LiDAR (Velodyne HDL64E, 2x as many beams). Finally, we have the rendered point cloud for the Argoverse 2.0 LiDAR (2 VLP-32C stacked on top of each other). The fact that we can forecast occupancy on top of data captured by one type of sensor and use it to simulate future data for different sensors shows how generic the forecasted occupancy is as a representation. We support this generalization quantitatively in Tab. 2. ## Appendix In this supplement, we extend our discussion of the proposed reformulation of point cloud forecasting into 4D occupancy forecasting. Specifically, we discuss details about the network architecture of the proposed approach in Sec. A, further results on nuScenes, KITTI-Odometry and ArgoVerse2.0 in Sec. B, and the quality of our forecasts separately on the foreground and background points in Sec. C. ## Appendix A Network details **Architecture implementation.** We build on top of the encoder-decoder architecture first proposed by Zeng _et al_. [22] for neural motion planning. We extend the version of this architecture used by Khurana _et al_. [7] for forecasting occupancy in the birds'-eye-view. The only difference between our setup and that used in prior work [7] is that we treat our 4D voxel grid (X × Y × Z × T) as a reshaped 3D voxel grid (X × Y × ZT), where the Z or height dimension is incorporated into the channel dimension of the input, allowing us to still make use of 2D convolutions on a 4D voxel grid. This means that every channel in the input represents a slice of the world through the height and time dimensions. **Differentiable renderer.** We extend the differentiable raycaster developed by Khurana _et al_. [7] to 3D and employ it as the differentiable voxel renderer in our approach. As in the prior work, we define our set of rays using the position of the egovehicle in the global coordinate frame as the origin, and all the LiDAR returns as the end points for the rays. The 4D voxel grid is initialized with three labels (empty, occupied and unknown) based on the returns in the LiDAR sweeps. Each ray is traversed using a fast voxel traversal algorithm [2]. Given all the voxels and their occupancies along a ray, we compute the expected distance the ray travels through the voxel grid. This is the same as volume rendering but in a discretized grid [12]. The gradient of the loss between this expected distance and the ground-truth distance is backpropagated to all the voxels traversed by the ray.
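To make the height-into-channels reshaping described above concrete, here is a minimal PyTorch sketch (illustrative grid sizes and a stand-in two-layer network, not the authors' encoder-decoder):

```python
import torch
import torch.nn as nn

T_IN, T_OUT, Z, X, Y = 2, 2, 16, 128, 128   # illustrative, far smaller than the real grid

class Occ4DForecaster(nn.Module):
    """Folds time (T) and height (Z) into channels so 2D convs act on (X, Y) maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(T_IN * Z, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, T_OUT * Z, 3, padding=1),
        )

    def forward(self, occ_hist):                  # (B, T_IN, Z, X, Y) occupancy history
        b = occ_hist.shape[0]
        x = occ_hist.reshape(b, T_IN * Z, X, Y)   # each channel: one (time, height) slice
        logits = self.net(x)                      # (B, T_OUT * Z, X, Y)
        return torch.sigmoid(logits).reshape(b, T_OUT, Z, X, Y)  # occupancy in [0, 1]

occ = torch.zeros(1, T_IN, Z, X, Y)
print(Occ4DForecaster()(occ).shape)               # torch.Size([1, 2, 16, 128, 128])
```

The rendered expected depths (Sec. 3) are then compared against ground-truth ray lengths, and the L1 loss gradient flows back through the sigmoid occupancies.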
Note that when a ray does not terminate within the voxel grid volume, we put all the probability mass of occupancy at the boundary of the voxel grid, similar to Mildenhall _et al_. [12]. This means that when a ray passes through occupancy regions that are empty (refer to the occupancy visuals in the main draft and summary video), the ray results in a point at the boundary of the voxel grid. **Dataset training and testing splits.** We use the official train and validation splits of nuScenes and ArgoVerse2.0. Only when comparing results on KITTI-Odometry with ST3DCNN [11] do we follow their dataset splits for training and testing. These dataset splits allow us to draw apples-to-apples comparisons with state-of-the-art approaches. ## Appendix B Additional results ### nuScenes **Results with confidence thresholding.** We supplement the results in the main paper by evaluating the point cloud forecasts of SPFNet [19] and S2Net [18] by thresholding points at a recommended confidence threshold of 0.05. Qualitatively in Fig. 9, we observe point clouds from SOTA that only consist of high confidence LiDAR returns close to the ground plane, because of which we perform quantitatively much better than these baselines on our ray-based metrics. We summarise these results in Tab. 4. **Access to ground-truth egoposes during evaluation.** Note that in our proposed formulation of 4D occupancy forecasting, we view the LiDAR point clouds used during training as just another observation of the world, which, in our case, happens to come from the view of the ego-vehicle. In reality, this LiDAR measurement of occupancy could have also come from any other observer in the world. Similarly, during evaluation, the only LiDAR measurement we have access to comes from the view of the ego-vehicle, making this the only datapoint to evaluate our occupancy forecasts against. This creates an apparent advantage for our method when comparing to point cloud forecasting approaches because they do not have access to ground-truth egoposes from the future. To alleviate this concern, first, we use the future ground-truth egoposes to align all point cloud forecasts to a global coordinate frame. Only after doing this are all the reported metrics computed for the baselines. Second, we employ a simple motion planner based on linear dynamics, and use these planned future egoposes for evaluating our own method. We see that the metrics drop marginally, showing that the dependence of our method on ground-truth egoposes from the future is not a concern. This is also true for the ray tracing baseline, results of which are summarised in Tab. 5. Figure 10: Qualitative results on KITTI-Odometry on three different sequences at different time horizons. We compare the point cloud forecasts of ST3DCNN [11] and the ray tracing baseline. We see that this SOTA is qualitatively more geometry-aware than the SOTA on nuScenes. However, our method is still more reflective of the true rigid geometry of the underlying world. We visualize a render of the learnt occupancy and the color encodes height along the z-axis. Figure 9: Qualitative results on the nuScenes dataset on three different sequences at different time horizons. We compare the point cloud forecasts of our approach with the aggregation-based ray tracing baseline and S2Net [18], SPFNet [19] with confidence filtering, after applying a recommended confidence threshold of 0.05 on the point clouds. Our forecasts look significantly crisper than the SOTA; however, we see that the ray tracing baseline is also a strong baseline.
We visualize a render of the learnt occupancy and the color encodes height along the z-axis. We also clarify that we always test cross-sensor generalization between KITTI-Odometry and ArgoVerse2.0, with training on ArgoVerse2.0, because (1) both datasets have the same number of LiDAR beams and point clouds are captured at the same frequency, and (2) ArgoVerse2.0 is a much larger and more diverse dataset than KITTI-Odometry that is suitable for pretraining. ## Appendix C Foreground vs. background query rays In order to further analyse the variants of our architecture, we separate the query rays as belonging to foreground or background regions, using the labels from nuScenes' LiDARSeg [1]. We evaluate both regions using both the new and old metrics in Table 8 and Table 9. **Poor performance on foreground objects.** Our main observation is that all the variants perform poorly on the foreground objects (which include moving or stationary foreground objects) as compared to the background. This is because a large number of rays and voxels (more than 90%) belong to background regions, and thus the foreground objects are downweighted during the training process. Even when the combined evaluation of foreground and background regions is considered (Table 3 in the main draft), we see that the poor performance on the foreground fails to materialize in the metrics. This hints at improving the metrics and methods to focus more on the forecasting of foreground objects, especially those in motion. **Strengths of each variant.** Another observation stemming from the above fact is that even with this disentangled evaluation on the foreground, the _static_ variant is the strongest baseline for short-horizon forecasting (1s). On 3s forecasting, the _dynamic_ variant shines on the ray-based evaluation of background objects (some unseen background regions may only appear at future timesteps) and the _residual_ variant shines on the ray-based evaluation of foreground objects (possibly decoupling the foreground from background regions better). **Comparison to the ray tracing baseline.** Given the strength of the ray tracing baseline, we investigate its performance on foreground and background objects in comparison to our approach in Tab. 7. This time we further divide foreground objects into subcategories of pedestrians and vehicles, while also reporting the metrics on all foreground objects. Note that, according to the vocabulary of nuScenes, apart from different types of pedestrians and vehicles, miscellaneous movable objects like traffic cones and barriers are included in the umbrella category of foreground objects. We have the following findings: 1. For long-horizon forecasting, our method consistently does better than the ray tracing baseline, for both all types of foreground objects and background objects. 2. For short-horizon forecasting, the ray tracing baseline performs on par with our method and sometimes even better (on most types of foreground objects), hence proving to be a strong yet simple and non-learnable approach that does not require any training data.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Horizon} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Distance (\(m^{2}\))} \\ & & & & Near-field & Vanilla \\ \hline Ray tracing & 1s & 2.39 & 15.43 & **0.56** & **1.90** \\ & 3s & 3.72 & 25.24 & 2.50 & **11.59** \\ \hline Ours & 1s & **2.25** & **10.25** & 1.53 & 60.94 \\ & 3s & **2.86** & **14.62** & **2.20** & 69.81 \\ \hline \hline \end{tabular} \end{table} Table 6: Quantitative results on the ArgoVerse2.0 [20] dataset. We compare our method trained on the ArgoVerse2.0 dataset to the ray tracing baseline and find similar trends to nuScenes and KITTI-Odometry. \begin{table} \begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Horizon} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Distance (\(m^{2}\))} \\ & & & & Near-field & Vanilla \\ \hline \hline \multirow{2}{*}{S2Net [18]} & 1s & 2.88 & 20.57 & 4.61 & 11.77 \\ & 3s & 4.97 & 24.79 & 13.10 & 30.95 \\ \hline \multirow{2}{*}{SPFNet [19]} & 1s & 5.30 & 30.12 & 21.24 & 45.12 \\ & 3s & 5.70 & 28.65 & 20.99 & 44.71 \\ \hline \multirow{2}{*}{Ray tracing} & 1s & 1.50 & 14.73 & 0.54 & **0.90** \\ & 3s & 2.44 & 26.86 & 1.66 & **3.59** \\ \hline \multirow{2}{*}{Ours} & 1s & **1.40** & **10.37** & **1.41** & 2.81 \\ & 3s & **1.71** & **13.48** & **1.40** & 4.31 \\ \hline \hline \end{tabular} \end{table} Table 4: Results on nuScenes [4] with confidence filtering on SPFNet and S2Net. As described in the main paper, we threshold the points at a recommended confidence threshold of 0.05. We see that the conclusions made from the proposed metrics are more in line with the qualitative results in Fig. 9. This once again reiterates the need for metrics that intuitively evaluate the underlying _geometry_ of the scene instead of uncorrelated samples of the scene (e.g., points in space). \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{GT Egoposes} & \multirow{2}{*}{L1 (m)} & \multirow{2}{*}{AbsRel (\%)} & \multicolumn{2}{c}{Chamfer Distance (\(m^{2}\))} \\ & & & & Near-field & Vanilla \\ \hline Ray tracing & Yes & 2.44 & 26.86 & 1.66 & 3.59 \\ Ray tracing & No & 2.50 & 26.35 & 1.60 & **3.39** \\ Ours & Yes & **1.71** & **13.48** & **1.40** & 4.31 \\ Ours & No & 1.84 & 13.95 & 1.50 & 4.50 \\ \hline \hline \end{tabular} \end{table} Table 5: We experiment with using a simple linear-dynamics-based motion planner that can replace the ground-truth future egoposes used in our analysis. Our experiments prove that even in the absence of access to ground-truth future egoposes – which are not a concern from the viewpoint of our formulation but only a means to evaluate the occupancy predictions – simple linear dynamics models, such as those based on constant velocity, suffice.
2303.04765
Lorentz and CPT breaking in gamma-ray burst neutrinos from string theory
Previous studies on high-energy gamma-ray burst neutrinos from IceCube suggest a neutrino speed variation at the Lorentz violation~(LV) scale of $\sim 6.4\times 10^{17}$~GeV, with opposite velocity variances between neutrinos and antineutrinos. Within a space-time foam model, inspired by string theory, we develop an approach to describe the suggested neutrino/antineutrino propagation properties with both Lorentz invariance and CPT symmetry breaking. A threshold analysis on the bremsstrahlung of electron-positron pair~($\nu\rightarrow\nu ee^{+}$) for the superluminal~(anti)neutrino is performed. We find that, due to the energy violation caused by the quantum foam, such reaction may be restricted to occur at sufficient high energies and could even be kinematically forbidden. Constraints on neutrino LV from vacuum $ee^{+}$ pair emission are naturally avoided. Future experiments are appealed to test further the CPT violation of cosmic neutrinos and/or neutrino superluminality.
Chengyi Li, Bo-Qiang Ma
2023-03-08T18:02:54Z
http://arxiv.org/abs/2303.04765v2
# Lorentz and CPT breaking in gamma-ray burst neutrinos from string theory ###### Abstract Previous studies on high-energy gamma-ray burst neutrinos from IceCube suggest a neutrino speed variation at the Lorentz violation (LV) scale of \(\sim 6.4\times 10^{17}\) GeV, with opposite velocity variances between neutrinos and antineutrinos. Within a space-time foam model, inspired by string theory, we develop an approach to describe the suggested neutrino/antineutrino propagation properties with both Lorentz invariance and CPT symmetry breaking. A threshold analysis on the bremsstrahlung of electron-positron pairs (\(\nu\rightarrow\nu ee^{+}\)) for the superluminal (anti)neutrino is performed. We find that, due to the energy violation caused by the quantum foam, such a reaction may be restricted to occur at sufficiently high energies and could even be kinematically forbidden. Constraints on neutrino LV from vacuum \(ee^{+}\) pair emission are naturally avoided. Future experiments are appealed to test further the CPT violation of cosmic neutrinos and/or neutrino superluminality. Keywords: Non-Standard Neutrino Properties, Space-Time Symmetries, String and Brane Phenomenology, Violation of Lorentz and/or CPT Symmetry ## 1 Introduction Astrophysical neutrinos are ideal portals to reveal the tiny Lorentz invariance violation (LV) as postulated by some quantum gravity (QG) theories [1; 2; 3]. The IceCube collaboration has reported the discovery of ultrahigh-energy neutrinos of extragalactic origin, including a couple of PeV events [4; 5; 6]. Recent studies [7; 8; 9; 10; 11] of events in the (near-)TeV-PeV range suggest a linearly energy-dependent speed variation of neutrinos through their associations with gamma-ray bursts (GRBs). Analyses lead to a Lorentz violation scale of \(\sim 6\times 10^{17}\) GeV, comparable with that determined from GRB photons [12; 13; 14; 15; 16]. More intriguingly, it is proposed [9] that either neutrinos or antineutrinos travel faster than the constant light speed \(c\),1 whereas the other ones go slower than unity. This can be explained by the CPT-odd feature of the linear Lorentz violation [9; 10; 11], and leads further to the Charge-Parity-Time (CPT) reversal symmetry breaking between neutrinos and antineutrinos, or a matter-antimatter asymmetry [17]. But it is also found that the attempt to interpret such a phenomenological picture with field-theoretic models of LV faces challenges due to the constraints on the superluminal neutrino velocity and the corresponding LV from the kinematically allowed anomalous channels, e.g., vacuum pair emission (\(\nu\rightarrow\nu ee^{+}\)) [17]. Footnote 1: Henceforth natural units in which \(c=\hbar=1\) are adopted. The main objective of the study we are performing here is to indicate that the experimental finding of LV for GRB neutrinos [7; 8; 9; 10; 11] may coincide with the predictions from a certain QG scheme that cannot be cast in an effective field theory (EFT) description, i.e., the quantum (Liouville inspired) space-time foam model from string/D-brane theories. In fact, the main idea has been outlined in a letter [18], and in this paper we provide a thorough account of the calculations and elaborate on more detailed results through in-depth discussions. This framework has also been used previously in explaining light speed variation from analyzing flight times of GRB photons [12; 13; 14; 15; 16] in a consistent way [19].
The prototype idea of the quantum structure of space-time at a microscopic level--"space-time foam" devised by Wheeler [20]--arises from the uncertainties of quanta. For string/brane theory, such nontrivial foamy structures are provided by solitonic defects in some Liouville-string inspired models [21; 22], according to which our Universe lives on a (compactified) D(irichlet)3-brane, roaming in a higher-dimensional bulk space, punctured by a population of D0-branes in type I/IIA strings [21; 22; 23], as we will consider below (or, in IIB superstrings, of wrapped-up D-branes which are effectively pointlike [24]). The D-brane defects ("D-particles") appear to a braneworld observer as flashing-on-and-off vacuum structures when they cross the brane. Their interaction with open-string Standard-Model (SM) excitations, involving a capture/splitting process and subsequent recoil, breaks local Lorentz invariance. Such models, dubbed string/D-defect (space-time) foam, go beyond the local EFT approach to QG, with a variety of applications to study a number of phenomena, such as the so-induced vacuum refraction for photons [25] and fermions [26], the origin of neutrino masses [27] and mixing [28; 29; 30], as well as string cosmologies associated with the dark sector of the Universe [31]. Our aim is to show that the suggested neutrino speed variation can be explained by means of the CPT-breaking aspects of such stringy QG models with linear Lorentz violations. Constraints implied by vacuum pair emission by the superluminal neutrino are addressed and found to be consistent with the findings of Refs. [8; 9; 10; 11] in such a string theoretic context. We also propose several viable ways of testing the CPT violation in the neutrino sector with future (astrophysical) observations. The paper is organized as follows. In Sec. 2, we introduce within the framework of stringy space-time foam a scenario that admits CPT-violating neutrinos and compute the dispersion relation, velocity and traveling times. In Sec. 3 we discuss the phenomenological implications of the results obtained by associating IceCube observations with GRBs on stringy QG. Section 4 is devoted to elaborating on the plausible mechanism permitting a stable propagation _in vacuo_ for the neutrino species against superluminal decays in the model, as reported in [18]. In particular, the ways to escape the threshold constraints are given. To conclude, a summary and discussion of our results is presented in Sec. 5.
## 2 Stringy D(efect)-foam Consider the isotropic D-foam framework, as portrayed by the seminal works [21; 22; 23] in this area: the capture/splitting of a neutral open string, such as a neutrino, by a D-particle causes a recoil motion of the latter, described by a deformed stringy \(\sigma\)-model operator: \[V\ni\int_{\partial\mathcal{W}}\mathrm{d}\tau\,\epsilon\,\mathcal{U}_{\ell}x^{0}\Theta_{\epsilon}(x^{0})\partial_{n}x^{\ell}, \tag{1}\] where \(\partial_{n}\) is the normal derivative on the boundary of the worldsheet \(\partial{\cal W}\), \(\Theta_{\epsilon\to 0^{+}}(t)=\frac{1}{2\pi{\rm i}}\int_{-\infty}^{\infty}\frac{{\rm d}q}{q-{\rm i}\epsilon}{\rm e}^{{\rm i}qt}\), and \({\cal U}\) is the spatial part of the recoil 4-velocity of the D-defect, \({\cal U}_{\ell}={\cal V}_{\ell}(1-{\cal V}^{2})^{-1/2}\equiv{\cal V}_{\ell}\gamma_{\cal V}\), which, for heavy (nonrelativistic) D-particles, reduces to the ordinary 3-velocity that can be identified as \[{\cal V}=M_{s}^{-1}g_{s}\Delta{\bf k}\ \to\ {\cal V}_{\parallel}\simeq M_{s}^{-1}g_{s}\lambda^{(\ell)}k_{\ell}, \tag{2}\] following the (logarithmic) conformal field theory methods [32; 33; 34]. Above, \(\Delta{\bf k}\) is the momentum transfer during a collision, \(g_{s}\ll 1\) is the string coupling, and \(M_{s}\) is the string scale. Suffixes \(\parallel\) denote components along brane longitudinal dimensions, i.e., \(\ell=1,2,3\), and \(\lambda\) is the ratio of \(\Delta k_{\ell}\) with respect to the incoming neutrino momentum, that is, \(\lambda^{(\ell)}=\Delta k_{\ell}/k_{\ell}\), taken to be stochastic Gaussian [30] with the moments: \[\langle\!\langle\lambda^{(\ell)}\rangle\!\rangle=0,\quad\langle\!\langle\lambda^{(\ell)}\lambda^{(m)}\rangle\!\rangle=\mathfrak{d}^{2}\delta^{\ell m}. \tag{3}\] The variances \(\mathfrak{d}^{(\ell)}=\sqrt{\langle\!\langle(\lambda^{(\ell)})^{2}\rangle\!\rangle}\neq 0\) may in general differ from each direction \(\ell\), and \(\langle\!\langle\cdots\rangle\!\rangle\) denotes an average over both (i) statistical collections of D-particles and (ii) quantum stringy fluctuations [34], treated by resummation over worldsheet genera. Liouville dressing of the vertex operator (1) [22] then induces an off-diagonal distortion in the target-space geometry \(\hat{g}\), with \(0\ell\) components [32; 21]: \[g_{0\ell}(x^{0},{\cal V}_{\parallel})\sim\epsilon^{2}{\cal V}_{\ell}t\theta(t){\rm e}^{-2\epsilon t}\sim{\cal V}_{\ell}. \tag{4}\] This results in a local background of Finsler type: \(g_{\alpha\beta}=\eta_{\alpha\beta}+h_{\alpha\beta}\), \(h_{0\ell}=\mathcal{V}_{\parallel\ell}^{A}\sigma_{A}\), where \(\sigma_{A}\), \(A=1,2,3\) are appropriate flavor matrices. This metric deformation (4) then affects the dispersion relation of neutrinos with mass \(m_{\nu}\) via \(k^{\alpha}k^{\beta}g_{\alpha\beta}(k)=-m_{\nu}^{2}\), yielding \[E({\bf k})={\bf k}\cdot\boldsymbol{\mathcal{V}}_{\parallel}\pm|{\bf k}|\bigg(1+\frac{m_{\nu}^{2}}{{\bf k}^{2}}+\Big(\boldsymbol{\mathcal{V}}_{\parallel}\cdot\frac{{\bf k}}{|{\bf k}|}\Big)^{2}\bigg)^{1/2}\simeq{\cal E}_{M}+\frac{g_{s}}{M_{s}}\sum_{\ell}(k^{\ell})^{2}\lambda^{(\ell)}+\frac{{\cal E}_{M}}{2}\frac{g_{s}^{2}}{M_{s}^{2}}\sum_{\ell}(k^{\ell})^{2}(\lambda^{(\ell)})^{2}, \tag{5}\] where \({\cal E}_{M}=\pm\sqrt{k^{\ell}k_{\ell}+m_{\nu}^{2}}\) denotes the Minkowski energy with indefinite signature. The flavor structures have been omitted in the above expressions by taking account of Eq. (3), i.e., \(\langle\!\langle\lambda_{A}^{(\ell)}\lambda_{B}^{(m)}\rangle\!\rangle=\mathfrak{d}^{2}\delta^{\ell m}\delta_{AB}\).
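For completeness, a short consistency check (our own rederivation, using the conventions above and a mostly-plus signature, with spatial indices raised by \(\delta^{\ell m}\)) that the deformed metric indeed yields the first line of Eq. (5):

```latex
\begin{aligned}
0 &= k^{\alpha}k^{\beta}g_{\alpha\beta} + m_{\nu}^{2}
   = -E^{2} + 2E\,(\mathbf{k}\cdot\boldsymbol{\mathcal{V}}_{\parallel})
     + \mathbf{k}^{2} + m_{\nu}^{2}, \\[2pt]
E &= \mathbf{k}\cdot\boldsymbol{\mathcal{V}}_{\parallel}
     \pm \sqrt{(\mathbf{k}\cdot\boldsymbol{\mathcal{V}}_{\parallel})^{2}
     + \mathbf{k}^{2} + m_{\nu}^{2}}
   = \mathbf{k}\cdot\boldsymbol{\mathcal{V}}_{\parallel}
     \pm |\mathbf{k}|\Big(1 + \frac{m_{\nu}^{2}}{\mathbf{k}^{2}}
     + \Big(\boldsymbol{\mathcal{V}}_{\parallel}\cdot
       \frac{\mathbf{k}}{|\mathbf{k}|}\Big)^{2}\Big)^{1/2},
\end{aligned}
```

where only the \(h_{0\ell}\) cross terms of the Finsler-type background contribute at this order; expanding the square root for small recoil velocity and inserting \(\mathcal{V}_{\ell}\simeq(g_{s}/M_{s})\lambda^{(\ell)}k_{\ell}\) recovers, to the accuracy of the stochastic averaging used below, the second line of Eq. (5).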
Since one needs to average (5) over the D-particle populations, one finds \[\langle\!\langle E\rangle\!\rangle\simeq\overline{E}(k)\simeq\Big\langle\!\!\Big\langle\pm\big|{\cal E}_{M}\big|\Big(1+\frac{(\boldsymbol{\mathcal{V}}_{\parallel}\cdot{\bf k})^{2}}{2{\bf k}^{2}}\Big)\Big\rangle\!\!\Big\rangle\simeq\pm\big|{\cal E}_{M}\big|\Big(1+\frac{g_{s}^{2}}{2M_{s}^{2}}\sum_{\ell}\big(\mathfrak{d}^{(\ell)}\big)^{2}(k^{\ell})^{2}\Big). \tag{6}\] On the other hand, the kinematics of the defect/neutrino scattering further yields [35], \[E_{\rm i}=E_{\rm f}+M_{D}(\gamma_{\mathcal{V}}-1)+\delta v, \tag{7}\] \[{\bf k}_{\rm i}={\bf k}_{\rm f}+\Delta{\bf k}={\bf k}_{\rm f}+M_{D}\gamma_{\mathcal{V}}\boldsymbol{\mathcal{V}}_{\parallel}, \tag{8}\] where \((E,{\bf k})_{\rm i/f}\) is the 4-momentum of the initial/final state, \(\delta v\) denotes the fluctuation of the brane vacuum energy during the scattering, and the D-particle mass is \(M_{D}=M_{s}/g_{s}\). Then, the explicit formula for \(\overline{E}_{\rm f}\) arises (on average) from Eqs. (7) and (6), by noting \(\overline{E}_{\rm i}=\overline{E}\), as \[\overline{E}_{\rm f}(\overline{{\bf k}}_{\rm f})=\overline{E}_{\rm i}(\overline{{\bf k}}_{\rm i})-\left\langle\!\left\langle\frac{1}{2}M_{D}\boldsymbol{\mathcal{V}}_{\parallel}^{2}+{\cal O}({\cal V}^{4})\right\rangle\!\right\rangle\simeq\overline{E}_{\rm i}(\overline{{\bf k}}_{\rm f})-\frac{g_{s}}{2M_{s}}\sum_{\ell}^{\prime}\big(\mathfrak{d}^{(\ell)}\big)^{2}(k_{\rm f}^{\ell})^{2}, \tag{9}\] where the momentum conservation [Eq. (8)] is used, i.e., \(\overline{{\bf k}}_{\rm f}=\overline{{\bf k}}_{\rm i}\) given that \(\left\langle\!\left\langle\boldsymbol{\mathcal{V}}_{\parallel}\right\rangle\!\right\rangle=0\), and \(\left\langle\!\left\langle\delta v\right\rangle\!\right\rangle=0\) is assumed. Note that Eq. (9) reflects the total combined effect of the D-foam on the neutrino energy-momentum relation, from both the metric distortion and the capture/splitting process.
Noticeably, antiparticles of spin-\(\frac{1}{2}\) fermions can be regarded as "holes" with negative energies; one thus arrives at the _effectively CPT-violating_ dispersion relations for a neutrino (\(\nu\)) and an antineutrino (\(\bar{\nu}\)) _in vacuo_ (or, for a Majorana neutrino, with different chirality): \[\overline{E}_{\nu}(\overline{{\bf k}})\coloneqq|\overline{E}_{\rm f}^{(+)}(\overline{{\bf k}};M_{D},m_{\nu})|,\qquad\overline{E}_{\bar{\nu}}(\overline{{\bf k}})\coloneqq|\overline{E}_{\rm f}^{(-)}(\overline{{\bf k}};M_{D},m_{\nu})|, \tag{10}\] where \((+)/(-)\) denotes the positive/negative part of Eq. (9), so that a physical particle always has energy \(\overline{E}>0\). It is essential to understand that the difference between \(\nu\)'s and \(\bar{\nu}\)'s in Eq. (10) follows from Dirac's proposal of "hole theory". This applies only to fermions such as neutrinos, but not to bosons, as it is based upon the exclusion principle. In fact, as was elucidated in Refs. [19; 22; 23; 24; 25] and will be mentioned in the next section, neutral bosons like photons are bound to propagate subluminally, independent of their helicities, in this "medium" of (stringy) vacuum defects. The reason for that may be traced back to the fact that the propagator of a fermion is similar to the square root of that of a vector boson. ### Propagation velocities For the discussion of the GRB neutrinos of interest here, we consider an _isotropic_ foam, which further requires \(\lambda^{(\ell)}=\lambda\) for all \(\ell=1,2,3\). In such a case, the asymmetric dispersion relation (10) for high-energy (anti)neutrinos in this D-foam geometry reads \[\overline{E}_{\nu}(\overline{{\bf k}})=\overline{k}-\frac{g_{s}}{2M_{s}}{\mathfrak{d}}_{\nu}^{2}\overline{k}^{2}+{\cal O}(1/M_{D}^{2}), \tag{11}\] \[\overline{E}_{\bar{\nu}}(\overline{{\bf k}})=\overline{k}+\frac{g_{s}}{2M_{s}}{\mathfrak{d}}_{\nu}^{2}\overline{k}^{2}+{\cal O}(1/M_{D}^{2}), \tag{12}\] which reduces to our result in Ref. [18] once higher-order corrections are negligible. Here, \(\left\langle\!\left\langle\lambda^{2}\right\rangle\!\right\rangle=({\mathfrak{d}}^{(\ell)})^{2}|_{\ell=1,2,3}\equiv{\mathfrak{d}}_{\nu}^{2}>0\) can naturally be up to \({\cal O}(1)\). The relativistic limit is used to omit the mass term, \(m_{\nu}\simeq 0\). To get group velocities, one can certainly employ the relation \(v=\partial E/\partial|{\bf k}|\); it is possible, though, that such a velocity law fails in QG, and that [23] the free propagation of particles may even disentangle from their dispersion relations.2 Nevertheless, from a conservative point of view, we shall still assume Hamiltonian dynamics, and as such, the dispersion relation (11) then yields a deformed _subluminal_ neutrino velocity, Footnote 2: For that case, both neutrinos and antineutrinos would be subluminal due to causality-respectful delays experienced by them as interacting with the foam of [23]; whereas there could be superluminal propagation in a type IIB stringy D-foam [24]. \[\overline{v}_{\nu}\coloneqq\frac{\partial\overline{E}}{\partial\overline{k}}=1-g_{s}\frac{\mathfrak{d}_{\nu}^{2}\overline{k}}{M_{s}}\simeq 1-\mathcal{O}\Big(g_{s}\frac{n_{D}\overline{E}}{M_{s}}\Big), \tag{13}\] where we substituted the lowest-order dispersion \(\overline{k}\simeq\overline{E}\). The parameter \(\mathfrak{d}_{\nu}^{2}\) depends on the density of D-particles, \(n_{D}\), which can be essentially arbitrary in the model, as is the (stringy) quantum-gravity mass \(M_{\rm sQG}\coloneqq M_{s}/(\mathfrak{d}^{2}g_{s})\).
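A quick numerical illustration of the split dispersion relations (11)-(12) and the velocity law (13) may be useful (illustrative parameter values only; the antineutrino branch anticipates Eq. (14) below):

```python
# nu / nubar energies E = k -/+ (g_s/2M_s) d^2 k^2 and group velocities dE/dk.
M_D = 6.4e17       # assumed D-particle mass M_s/g_s in GeV (illustrative)
d2 = 1.0           # stochastic variance d^2, taken O(1)

def energy(k, anti=False):
    sign = +1.0 if anti else -1.0          # nubar superluminal, nu subluminal
    return k + sign * 0.5 * d2 * k**2 / M_D

def velocity(k, anti=False, dk=1.0):       # numerical dE/dk
    return (energy(k + dk, anti) - energy(k - dk, anti)) / (2.0 * dk)

for E in (1e3, 1e5, 2e6):                  # 1 TeV, 100 TeV, 2 PeV in GeV
    print(f"E = {E:.0e} GeV: v_nu - 1 = {velocity(E) - 1:+.2e}, "
          f"v_nubar - 1 = {velocity(E, anti=True) - 1:+.2e}")
```

The velocity defects are equal and opposite, \(\mp\mathfrak{d}_{\nu}^{2}\overline{k}/M_{D}\), reaching only a few parts in \(10^{12}\) even at PeV energies, which is why cosmological baselines are needed to expose them.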
Though \(n_{D}=n_{D}(t)\) could in general vary with the cosmological epochs [25], say, it might evolve as time elapses, we introduce a hypothetical _uniform_ foam situation, i.e., \(n_{D}(t_{\rm late})=n_{D}^{\star}\simeq\text{const.}\), at relatively late eras of the Universe (for, e.g., redshifts \(\lesssim 10\)). Similarly, from Eq. (12), the velocity defect, i.e., \(\delta_{v}^{D}\coloneqq\overline{v}-1\), for an antineutrino propagating in a quantum D0-brane foam becomes \[0<\delta_{v}^{D}=\frac{\overline{k}}{M_{\rm sQG}}\propto\frac{n_{D}^{\star}}{M_{s}}\overline{E}, \tag{14}\] which implies that antineutrinos are _superluminal_ particles. That is an important feature of our approach toward D-foam induced LV neutrino propagation, and is crucial for generating the desired phenomenology, which will be discussed shortly. For our purpose, symmetric corrections of \(\mathcal{O}(\mathfrak{d}^{2})\) in the particle and antiparticle sectors are assumed in the discussion, such that the amounts of CPT violation are the same for both \(\nu\)'s and \(\bar{\nu}\)'s. (An asymmetry between the neutrino and antineutrino sectors can be involved once there is a need to reconcile with phenomenological constraints.) ### Lag in travel times of neutrinos Before closing this section, we remark on the relation of the induced phase-space dependent metrics (4) with Finsler geometries, which, over the past few years, are known to play a role in new physics as well as quantum gravity. The Finsler structure depends on both coordinates and momenta, as is precisely the situation encountered in (4). In some sense, the D-particle recoil may be viewed as an example of Finsler geometry in string theory. One may define appropriately the Finsler norm from the metric \(\hat{g}(\mathcal{V})\): \(F=[g_{\alpha\beta}(y(\mathcal{V}))y^{\alpha}y^{\beta}]^{1/2}\) (here \(y\)'s denote "velocities" in the Finslerian frameworks thereof [36]), and discuss geodesics in such space-times as in [36]. However, as we have seen above, the effects of the D-foam go beyond those encoded in a Finsler-like metric. Since one needs to take into account the statistical effects of the quantum-fluctuating D-defects, this leads to a more general structure: stochastic Finsler geometry [31]. In fact, for the isotropic foam, the stochasticity, \(\langle\!\langle\mathcal{V}_{\ell}\rangle\!\rangle=0\) of (3), implies the _restoration_ of Lorentz symmetry _on average_, but with nontrivial (higher-order) fluctuations expressed as correlators by \(\langle\!\langle{\cal V}_{\ell}{\cal V}_{m}\rangle\!\rangle\propto\mathfrak{d}^{2}\delta_{\ell m}\), \(\mathfrak{d}^{2}\neq 0\). The effects of recoil distortions are washed out _statistically_ and the metric will be like a Riemannian metric. So one may deal with particle propagation in an expanding space-time of the Friedmann-Robertson-Walker (FRW) type as usual. The kinematical aspects of the recoil-brane scattering on matter also enter the dispersion relation and velocity. The latter cannot be cast in a Finsler treatment. Based upon the above considerations, we estimate the flight time differences of neutrinos of astrophysical origin in a FRW Universe, where the momentum of the neutrino is redshifted by cosmological expansion.
Given that the scale factor \(a\) is related to the cosmic redshift \(z^{\prime}\) by \(a=\frac{1}{1+z^{\prime}}\), the velocity at the time of \(z^{\prime}\) becomes \[\overline{v}(\overline{k},z^{\prime})\simeq 1\mp g_{s}\mathfrak{d}_{\nu}^{2}\Big(\frac{\overline{k}/a}{M_{s}}\Big)=1\mp\mathfrak{d}_{\nu}^{2}(1+z^{\prime})\Big(\frac{\overline{k}}{M_{D}}\Big). \tag{15}\] The comoving distance \(x_{c}\) between the source of the neutrinos, for example a GRB, and the Earth is \[x_{c}=\int_{t_{0}}^{t_{\rm h/l}}\overline{v}(\overline{k})\frac{{\rm d}t}{a}, \tag{16}\] where \(t_{\rm h/l}\) is the time when the high-/low-energy neutrinos arrive. If they are emitted at the same time \(t_{0}\) at the GRB, the time lag can be obtained: \[\delta t\coloneqq t_{\rm h}-t_{\rm l}\simeq\pm\mathfrak{d}_{\nu}^{2}\frac{\delta\overline{k}}{M_{D}}\int_{0}^{z}\frac{(1+z^{\prime})}{H(z^{\prime})}{\rm d}z^{\prime}, \tag{17}\] where \(\delta\overline{k}=\overline{k}_{\rm h}-\overline{k}_{\rm l}\), and the Hubble parameter \(H(z^{\prime})\) depends on the cosmological model. Since for late eras, at which we assume an approximately constant density of defects, the Hubble expansion on the D3-brane world is not affected [31], the standard Cold-Dark-Matter model with a positive cosmological constant (\(\Lambda>0\)) is a good approximation of reality, as current observation suggests; then we have \(H(z^{\prime})=H_{0}\sqrt{\Omega_{\rm M}^{0}(1+z^{\prime})^{3}+\Omega_{\Lambda}^{0}}\), where \(H_{0}\) is the Hubble constant. Hence, \[\delta t\simeq\pm\mathfrak{d}_{\nu}^{2}H_{0}^{-1}\frac{\delta\overline{E}}{M_{D}}\int_{0}^{z}{\rm d}z^{\prime}\frac{(1+z^{\prime})}{\sqrt{\Omega_{\rm M}^{0}(1+z^{\prime})^{3}+\Omega_{\Lambda}^{0}}}, \tag{18}\] where \(\overline{E}\simeq\overline{k}\) is identified as the neutrino (antineutrino) energy observed on the Earth and \(\delta\overline{E}\simeq E_{\rm h}\). For neutrinos, there is a "delay" in their arrival times (\(\delta t_{\nu}>0\)), i.e., high-energy neutrinos arrive later compared to lower-energy ones; whereas for antineutrinos, \(\delta t_{\bar{\nu}}<0\), corresponding to an arrival time "advance". ## 3 CPT-violating neutrino and phenomenological aspects It has been realized, on the observational side, mainly over the last decade, that highly energetic neutrinos are one of the most promising portals to LV physics [1; 2; 3]. In this respect, Amelino-Camelia et al. developed a strategy of analysis on IceCube data to detect LV-modified laws of propagation for neutrinos in a quantum space-time [7], and recently a Lorentz-violating picture has been suggested by a series of model-independent studies on time-of-flight lags of IceCube neutrino events with respect to the purportedly associated light signals arriving from GRBs [8; 9; 10; 11], which we proceed to review briefly. In the remainder of this section we will show that this may serve as a support to the space-time foam model just presented. ### Speed variation in IceCube GRB neutrinos In Refs. [7; 8; 9; 10; 11], adopting an LV-deformed dispersion relation of a generic form, the authors get the modified propagation velocity for neutrinos as \[v(E)=1-s_{n}\frac{1+n}{2}\Big(\frac{E}{E_{\rm LV,n}}\Big)^{n} \tag{1}\] where \(n=1,2\) corresponds to a linear or quadratic dependence on the energy, \(s_{n}=\pm 1\) is a sign factor of the LV correction, and \(E_{\rm LV,n}\) is the \(n\)th-order LV scale.
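As a numeric sketch of the expected lag, combining the linear (\(n=1\)) case of Eq. (1) with the redshift integral of Eq. (18) (all inputs are illustrative round numbers, not fitted values from Refs. [8; 9; 10; 11]):

```python
# Time lag of Eq. (18) for a neutrino of energy E from redshift z.
from scipy.integrate import quad

H0 = 67.8 / 3.086e19          # Hubble constant: km/s/Mpc -> 1/s
Om, OL = 0.308, 0.692         # assumed matter / Lambda density parameters
E_LV = 6.4e17                 # linear LV scale in GeV, cf. Eq. (2) below

def K(z):
    """Redshift weight: integral of (1+z') / sqrt(Om (1+z')^3 + OL)."""
    return quad(lambda zp: (1 + zp) / (Om * (1 + zp)**3 + OL) ** 0.5, 0.0, z)[0]

def time_lag(E_GeV, z):
    """|delta t| in seconds; the sign distinguishes nu delay from nubar advance."""
    return (E_GeV / E_LV) * K(z) / H0

print(time_lag(1e5, 1.0))     # a 100 TeV neutrino from z = 1: of order 1e5 s
```

The linear growth with energy is what makes TeV-PeV IceCube events, rather than the MeV neutrinos of SN1987A, the sensitive probes here.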
Neutrino mass, which has been constrained within 1 eV [37], can be safely neglected in the velocity formula for neutrinos with energies around or beyond the TeV scale. Supposing the linear term (i.e., \(n=1\)) in (1) dominates, a regularity fitting well with TeV and PeV GRB neutrinos from IceCube (including also near-TeV events [10]) is observed, indicating a _neutrino speed variation_ \(v(E)=1-sE/E_{\rm LV}^{\nu}\) with \(s=\pm 1\), at a scale of \[E_{\rm LV}^{\nu}=(6.4\pm 1.5)\times 10^{17}\ {\rm GeV}, \tag{2}\] which is close to the Planck scale \(E_{\rm Pl}\simeq 1.22\times 10^{19}\) GeV. Such an energy scale is consistent with various time-of-flight constraints, available today, from MeV neutrinos of supernova 1987A [38] and that from higher-energy neutrinos registered at IceCube, e.g., [39]; it is also compatible with the constraints [40; 41; 42] from recent multimessenger observations of blazar TXS 0506+056 coincident with a \(\sim 290\) TeV track-like neutrino event [43]. We mention in passing that a light speed variation of \(v(E)=1-E/E_{\rm LV}^{\gamma}\) with \(E_{\rm LV}^{\gamma}\gtrsim 3.60\times 10^{17}\) GeV has also been suggested from time-of-flight studies on the energetic radiation from GRBs [8; 12; 13; 14; 15; 16] and from flares of active galactic nuclei [44]. This latter LV scale is not the same as the LV scale of neutrinos (2) but of the same order of magnitude. Intriguingly, it is found that there exist both time "delay" (\(s=+1\)) and "advance" (\(s=-1\)) events, which can be interpreted with different propagation properties between neutrinos and antineutrinos [9; 10; 11], implying that neutrinos are superluminal and antineutrinos are subluminal, or vice versa, due to opposite signs of the LV sign factor \(s\). This proposal is at present only a hypothesis, but a reasonable one, because the IceCube detector cannot distinguish between neutrinos and antineutrinos (except via the Glashow resonance, to be mentioned later). It is thus necessary to verify this hypothesis with further experimental tests, and then to check whether the revealed regularity [8; 9; 10; 11] persists with more neutrino events by IceCube (or by other facilities) available in the future. ### D-foam as an explanation for the neutrino in-vacuo dispersion feature Notice that the phenomenological LV picture revisited in the previous subsection is supported by a number of IceCube events spanning more than 4 orders of magnitude in neutrino energy, ranging from a few hundred GeV up to about 2 PeV, with all these GRB neutrino events surprisingly falling on a pair of inclined lines; see Refs. [8; 9; 10; 11] and also [7]. This result yields a strong indication of a _linear energy dependence_ in the propagation speed of cosmic neutrinos, \(|v-1|\sim{\cal O}(E/E_{\rm LV}^{\nu})\), with \(E_{\rm LV}^{\nu}\), characterizing such a linear suppression of the (leading) LV effect, at about \(10^{17}\) GeV, i.e., Eq. (2). On the other hand, the presence of the opposite sign factors between neutrinos and their antiparticle counterparts [9] clearly indicates a _CPT-violating propagation_ of neutrinos in cosmic space. Hence, it is clear that the above two distinct LV properties, as exposed from current time-of-flight data [9; 10; 11], are consistent with the expected behavior for (anti)neutrinos propagating in a gravitational "medium" of D-brane defects.
Although it is not known from the analyses on IceCube data whether neutrinos, or antineutrinos, are superluminal, the D-particle foam scenario predicts that the latter would propagate slightly faster than the constant speed of light, so considering also the constraint that neutrinos and antineutrinos have the same amounts of speed variation, we can relate the D-foam induced velocity defect \(\delta_{v}^{D}\) to the corresponding variation implied by the generic dispersion behind (1). This yields \[\begin{split}\delta_{v}^{D}&\simeq-\frac{E}{E_{\rm LV}^{\nu}},\quad\text{for subluminal}\ \ \nu\text{'s;}\\ \delta_{v}^{D}&\simeq\frac{E}{E_{\rm LV}^{\nu}},\quad\text{for superluminal}\ \ \bar{\nu}\text{'s.}\end{split} \tag{12}\] With the established correspondence (12) for the neutrino velocities, we can then identify a combination of the foam parameters, \(M_{D}/\mathfrak{d}^{2}\simeq M_{\rm sQG}\), i.e., the stringy QG mass scale, as the linear-order Lorentz violation scale \(E_{\rm LV}\) determined by IceCube neutrinos: \[\frac{\mathfrak{d}_{\nu}^{2}g_{s}}{M_{s}}\simeq\frac{1}{E_{\rm LV}^{\nu}}\simeq 1.6\times 10^{-18}\ \text{GeV}^{-1}, \tag{13}\] or, explicitly, one has \[M_{\rm sQG}^{(\nu)}\simeq E_{\rm LV}^{\nu}\simeq 6.4\times 10^{17}\ \text{GeV}. \tag{14}\] This result gives an estimate of the neutrino QG scale \(\sim 10^{17}\) GeV for the string/D-particle foam scenario. This is very much in line with one's intuitive expectation that the scale \(M_{\rm sQG}\) would be comparable to the Planck mass: \(M_{\rm sQG}\sim M_{\rm Pl}\ (\simeq 10^{19}\ \text{GeV})\). Moreover, we note that the string model from the last section likewise exhibits non-trivial optical (\(\gamma\)) properties of the vacuum in terms of a velocity deformation for a photon with energy \(\overline{\omega}\): \[\delta_{v}^{D}=-\frac{\overline{\omega}}{M_{\rm sQG}^{(\gamma)}}<0. \tag{15}\] This arises from the capture/splitting process of photon open strings by D-defects, in much the same way described in the previous section for the neutrino-D-particle interactions. The main difference is that photons are necessarily subluminal; thus, following our proposal [19] of interpreting the light speed variation from GRB photons with (anisotropic) D-foam models, one can do the same thing in the context of the isotropic foam scenario considered here. The arguments are similar to those of Ref. [19] (see also [45]), and will not be repeated here. It is essential to notice that, in that case, an estimate of the photon's QG scale can be obtained as \(M_{\rm sQG}^{(\gamma)}\simeq E_{\rm LV}^{\gamma}\gtrsim 3.6\times 10^{17}\) GeV. Hence, we immediately observe \[M_{\rm sQG}^{(\nu)}\gtrsim M_{\rm sQG}^{(\gamma)}, \tag{10}\] i.e., \(M_{\rm sQG}\) for neutrinos may be slightly higher than that of photons, but of the same order of magnitude.
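As a quick cross-check of Eqs. (13)-(14), and of the estimate for \(\mathfrak{d}_{\nu}^{2}\) given below, one may invert \(M_{\rm sQG}=M_{D}/\mathfrak{d}_{\nu}^{2}\simeq E_{\rm LV}^{\nu}\) for a few candidate D-particle masses (illustrative arithmetic only):

```python
# Given M_sQG = M_D / d^2 ~ E_LV, the required stochastic variance is d^2 = M_D / E_LV.
E_LV = 6.4e17                          # GeV, from IceCube GRB neutrinos, Eq. (2)
for M_D in (6.4e17, 1.0e18, 1.0e15):   # candidate D-particle masses in GeV
    d2 = M_D / E_LV
    print(f"M_D = {M_D:.1e} GeV -> d^2 = {d2:.2e}, d = {d2 ** 0.5:.2e}")
```

For a GUT-scale defect mass \(M_{D}\sim 10^{15}\) GeV this gives \(\mathfrak{d}\approx 0.04\), consistent with the late-era value \(\mathfrak{d}_{\nu}(t_{\rm late})\simeq 0.03\) quoted in Eq. (3.10) below.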
Indeed, the neutrino interaction with the space-time D-defects could be slightly suppressed compared with that of photons because, in such a framework [47], only species that transform as trivial representations of the SM gauge group, i.e., neutral particles, are susceptible to the D-particle foam, in which case the fact that neutrinos are fermions with nontrivial \(\mathrm{SU}(2)_{L}\) properties renders such foam effects somewhat weakened. Nevertheless, neutrinos are at least electrically neutral excitations, so that, invoking a possible background-induced breaking of the \(\mathrm{SU}(2)\) gauge symmetry of the Standard Model, the space-time foam appears not completely transparent to them. Hence the velocities of neutrinos would naturally deviate from the low-energy velocity of light less than those of photons, so that \(|\delta_{v(\nu)}^{D}|\lesssim|\delta_{v(\gamma)}^{D}|\), which corresponds precisely to the observation (3.7) inferred from the findings of speed variation of high-energy cosmic neutrinos [8; 9; 10; 11] and cosmic photons [12; 13; 14; 15; 16; 44]. We should emphasize, in passing, that this also avoids the conflict, indicated elsewhere [49] for models where \(\mathrm{SU}(2)_{L}\) gauge invariance is kept intact, with existing tight bounds on LV for the charged leptons (see, e.g., Refs. [47; 48] and those quoted in [49]). It is essential to notice that, in the D-foam case, Lorentz violation in neutrinos _does not_ imply LV in the charged leptons (which are actually Lorentz invariant for specifically stringy reasons to be mentioned later), or vice versa. Therefore the result (2.11) would _not_ lead to unacceptably large Lorentz-violating effects in the charged-lepton (e.g., electron/positron) sectors, and the difficulty encountered by [49] for the field-theoretic LV-scenario explanation does not arise when exploring a D-foam interpretation. From Eq. (3.4), we can further deduce, \[\mathfrak{d}_{\nu}^{2}\sim 10^{-18}\,\frac{M_{s}}{g_{s}}\ \mathrm{GeV}^{-1}. \tag{3.8}\] This means that the dimensionless variance \(\mathfrak{d}^{2}\), expressing stochastic fluctuations of the D-particle recoil velocity, relates to the value of the mass of the foam defect, \(M_{s}/g_{s}\), which in the modern version of string theory is essentially a free parameter. So, for \(\mathfrak{d}^{2}\sim\mathcal{O}(1)\) for instance, a sub-Planckian D-particle mass \(M_{s}/g_{s}\sim 10^{18}\) GeV is required so that the D-foam generates the phenomenologically suggested neutrino speed variation, by means of a Lorentz-/CPT-violating term \((g_{s}/M_{s})\mathfrak{d}^{2}k^{2}\) between the \(\nu\) and \(\bar{\nu}\) sectors. We should note that this type of CPT symmetry breaking may also lead to the observed baryon asymmetry in a natural way through gravitational leptogenesis in the early Universe [35], which requires \(M_{s}/g_{s}\sim 10^{25}\mathfrak{d}^{2}\) GeV so as to provide the physically observed lepton and, thus, baryon asymmetry. This seems to go beyond our result in Eq. (3.8). However, as mentioned above, in space-time foam situations the density of D-particles \(n_{D}\), upon which the CPT-violating parameter \(\mathfrak{d}^{2}\) (the Gaussian variance of the fraction variable \(\lambda\)) depends, may vary with the cosmological time scale \(t\), in the sense that their bulk distribution may not be constant at all times. We may expect, in general, a time-dependent variance, \(\langle\!\langle\lambda^{2}\rangle\!\rangle(t)\equiv\mathfrak{d}^{2}(t)\propto n_{D}(t)\).
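To make the estimate (3.8) concrete, the following sketch (ours; illustrative masses only) evaluates the recoil-velocity variance for a GUT-scale and a sub-Planckian D-particle mass, anticipating the values quoted in Eq. (3.10) below:

```python
import math

# d_nu^2 ~ 1.6e-18 * (M_s/g_s) GeV^-1, from Eqs. (3.4) and (3.8).
COEFF = 1.6e-18   # GeV^-1

for m_d in (1.0e15, 1.0e18):   # GeV: GUT-scale and sub-Planckian D-particle masses
    d2 = COEFF * m_d
    print(f"M_s/g_s = {m_d:.0e} GeV -> d^2 ~ {d2:.1e}, d ~ {math.sqrt(d2):.2f}")
# For the GUT-scale mass one finds d ~ 0.04, close to d_nu(t_late) ~ 0.03 below.
```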
Thus, it could be possible that at very early eras of the Universe there is a relatively _dilute_, but still statistically significant, population of D-particles, which a propagating neutrino field can encounter, thereby resulting in a somewhat small variance \(\mathfrak{d}\) upon averaging (2.2) over the D-particle populations. As time elapses, the brane Universe, which roams in the bulk space, may however move into a region _densely_ populated with D-particles, so that at late epochs, say redshifts \(z\lesssim\mathcal{O}(10)\), neutrinos coming from cosmologically remote sources, either GRBs or active galaxies, interact with D-particles more frequently as they propagate over a distance in the foamy space-time, yielding stronger stochastic effects \(\mathcal{O}(\mathfrak{d}^{2})\). For a Grand-Unified-Theory scale \(\sim\mathcal{O}(10^{15})\) GeV D-particle mass, for instance, one obtains \[\mathfrak{d}_{\nu}(t_{\rm early})\simeq 10^{-5},\ \ \mathfrak{d}_{\nu}(t_{\rm late })\simeq 0.03, \tag{3.10}\] with the corresponding distribution for the stochastically fluctuating D-particle recoil velocities:3 Footnote 3: We stress here that the modeling of the recoil operator \(\mathcal{V}_{\ell}\) (or the variable \(\lambda\)) by a Gaussian process is not based on an analysis of the underlying string theory, but is a reasonable assumption for characterizing stochasticity in recoiling space-time (D-)defects [30]. \[\mathcal{D}(\lambda;\mathfrak{d}^{2})\sim\exp\biggl{[}-\frac{1}{2}\biggl{(} \frac{\lambda}{\mathfrak{d}(t)}\biggr{)}^{2}\biggr{]}, \tag{3.11}\] which then differs with the time scale \(t\), particularly between early cosmological epochs, \(\mathcal{D}_{\rm early}\), and late eras, \(\mathcal{D}_{\rm late}\), for the reasons explained above. Hence, in this sense, there is no conflict between our result (3.5) (or (3.8)) for explaining the neutrino speed variation from GRBs and the constraint from CPT-violating effects in the early Universe [35]; further, our estimate (3.10) indicates a much denser, but largely uniform, bulk D-particle foam from the late- to current-era Universe (\(z\leq 10\)). In view of the above discussion, we conclude here that the finding of the neutrino speed variation of the form \(v(E)=1\mp E/E_{\rm LV}^{\nu}\) from IceCube observations is compatible with the string/D-foam motivated deformation of the velocity of neutrinos (2.13) (and of antineutrinos (2.14)) with \(M_{\rm sQG}\simeq 6\times 10^{17}\) GeV, approaching the Planck mass, through relating the stringy QG scale to the linear-order LV scale \(E_{\rm LV}^{\nu}\) from GRBs. It is crucial that the propagation properties suggested to differ between neutrinos and antineutrinos are explained by means of CPT-violating aspects of such a D-brane foam model of Lorentz violation. This fact can thus in turn serve as support for this type of string-inspired models.

## 4 Inhibited superluminal neutrino in-vacuo decay

It was claimed however that, assuming the usual conservation laws of energy and momentum, as widely postulated in field-theoretic LV models, superluminal neutrinos would dissipate much of their energies through kinematically allowed anomalous processes in a vacuum [50], such as Cherenkov radiation (\(\nu\to\nu\gamma\)), neutrino splitting (\(\nu\to\nu\nu\bar{\nu}\)), and bremsstrahlung of electron-positron pairs (\(\nu\to\nu ee^{+}\)), among which the last one, also known as vacuum \(ee^{+}\) pair emission (VPE), a neutral-current process, dominates the neutrino energy loss.
Such processes would result in significant depletion of cosmic neutrino fluxes at high energies, beyond which no neutrino should arrive at Earth. This has been used to bound superluminal velocities of high-energy neutrinos from IceCube [4; 5; 6; 43]; in particular, powerful constraints on the LV energy scale of \(\gtrsim(10^{3}-10^{5})\times E_{\rm Pl}\) have been reported [51; 52; 53; 54]. The same constraint applies to antineutrinos, if they are slightly superluminal, through the CP-conjugated processes. However, these constraints become inapplicable, as we shall illustrate in the following, in the sense that the dominant energy-losing channels may be inhibited/forbidden in D-particle backgrounds, owing to the foam-induced deformation of energy-momentum conservation. To see this clearly, we perform here a threshold analysis of pair bremsstrahlung as an illustration. In the case of space-time D-foam, high-energy antineutrinos that are superluminal particles are expected to undergo such processes and to lose energy until they are at or below the threshold. Caution, however, is needed here. Due to the presence of recoiling D-defects near the braneworld, the energy conservation law in the common sense may be violated [55], though, during reactions like \(\bar{\nu}\to\bar{\nu}ee^{+}\), the total energy remains conserved once the kinetic energy of a heavy D-particle, \(E_{D\text{-kin}}=M_{D}(\gamma_{V}-1)\simeq\frac{1}{2}M_{D}|\boldsymbol{\mathcal{V}}|^{2}\), is taken into account. Indeed, this fact has already been used in Eq. (2.7) in deriving the modified relativistic dispersion; in general, one has [55], \[\sum_{i}^{(N)}(\pm)E_{i} \simeq\frac{1}{2}\frac{M_{s}}{g_{s}}\boldsymbol{\mathcal{V}}_{(N )}^{2}, \tag{4.1}\] \[\sum_{i}^{(N)}(\pm)\mathbf{p}_{i} \simeq M_{D}\boldsymbol{\mathcal{V}}_{(N)}, \tag{4.2}\] for a multi-(\(N\)-)particle reaction. Here, \(\boldsymbol{\mathcal{V}}_{(N)}\) is a proper generalization of (2.2), i.e., \(\boldsymbol{\mathcal{V}}=\sum_{i}(\pm)(\mathbf{p}_{i}/M_{D})\), and the notation \(\sum(\pm)\) represents, for example in the case of \(\bar{\nu}\to\bar{\nu}ee^{+}\) as we discuss now: \[\sum_{i}(\pm)E_{i} =E_{\rm in}-\sum E_{\rm out}\] \[=E-(E^{\prime}+E_{e^{-}}+E_{e^{+}}), \tag{4.3}\] where \(E\), \(E^{\prime}\) denote the energies of the incoming \(\bar{\nu}\) and the outgoing \(\bar{\nu}\), respectively. Hence, the sum of the energies of all _observable_ particles is _not_ conserved, due to a recoil-induced loss upon averaging over the foam: \[\sum_{i}^{(N)}(\pm)\overline{E}_{i}\eqqcolon\delta\overline{E}_{D}^{(N)}=\frac{M_ {D}}{2}\langle\!\langle\mathbf{\mathcal{V}}_{(N)}^{2}\rangle\!\rangle, \tag{4.4}\] which is nonzero due to the nontrivial stochastic fluctuations \(\langle\!\langle\mathcal{V}^{\ell}\mathcal{V}_{\ell}\rangle\!\rangle\neq 0\), despite the vanishing \(\langle\!\langle\mathcal{V}_{\ell}\rangle\!\rangle\). We should mention however that the energy lost, \(\delta\overline{E}_{D}\), due to stochastic interactions with the D-defects is relevant mainly for reactions with neutral elementary particles in the initial state, for which there is no obstacle to interacting with the D-particles. In particular, the D-foam is predominantly transparent to (electrically) charged matter on account of charge conservation [46], as justified by the strength of Cherenkov constraints for the electron/positron sectors [47; 48].4 Hence, the anomalous decay processes of the neutrino species, as discussed below, are dominantly affected by the foam effect.
Footnote 4: The Crab Nebula severely limits the electron LV effect with the tightest constraint: \(E_{\text{LV}}^{e}\gtrsim 10^{26}\ \text{GeV}\gg E_{\text{Pl}}\) for \(v_{e}-1\simeq E/E_{\text{LV}}^{e}\)[47; 48], by using PeV \(\gamma\)-rays observed recently by LHAASO. Consider now the superluminal \(\bar{\nu}\) decay via \(\bar{\nu}\to\bar{\nu}ee^{+}\); the averaged energy-momentum (non)conservation that follows from Eqs. (4.3) and (4.4) reads \[\overline{E} =\overline{E}^{\prime}+\overline{E}_{e^{-}}+\overline{E}_{e^{+}} +\delta\overline{E}_{D}^{(4)}, \tag{4.5}\] \[\overline{k} =\overline{k}^{\prime}+\overline{p}_{e^{-}}+\overline{p}_{e^{+}}, \tag{4.6}\] where we denote the 4-momenta of the outgoing \(\bar{\nu}\) and of the emitted \(ee^{+}\) pair as \(k^{\prime\alpha}=(\overline{E}^{\prime},\overline{\mathbf{k}}^{\prime})\) and \(p_{e^{\mp}}^{\alpha}=(\overline{E},\overline{\mathbf{p}})_{e^{\mp}}\), respectively. The spatial 3-momentum is conserved on average, (4.6), due to the zero mean (2.3) of \(\mathbf{\mathcal{V}}\) arising from isotropy. The Lorentz-violating energy defect, \(\delta E\), is \[\overline{E}-\overline{k}\eqqcolon\delta E(\overline{\mathbf{k}})\geq \ (\overline{E}_{e^{-}}-\overline{p}_{e^{-}})\] \[+(\overline{E}_{e^{+}}-\overline{p}_{e^{+}})+\delta\overline{E}_ {D}^{(4)}. \tag{4.7}\] In D-particle foam models, Lorentz violation is absent for the charged-lepton sectors, as explained: \[\overline{E}_{e^{\mp}}=\sqrt{m_{e}^{2}+\overline{p}_{e^{\mp}}^{2}}\simeq \overline{p}_{e^{\mp}}\Big{(}1+\frac{m_{e}^{2}}{2\overline{p}_{e^{\mp}}^{2}} \Big{)}. \tag{4.8}\] Inserting the modified dispersion for (anti)neutrinos (2.12) in Eq. (4.7), and using (4.8), yields \[\delta\overline{E}_{D}^{(4)}-\frac{g_{s}}{2M_{s}}\mathfrak{d}_{\nu}^{2} \overline{k}^{2}\leq-\frac{m_{e}^{2}}{2\overline{p}_{e^{-}}}-\frac{m_{e}^{2}}{ 2\overline{p}_{e^{+}}}, \tag{4.9}\] from which the threshold condition can then be read off, by plugging in \(E_{\text{thr}}\lesssim\overline{E}\simeq\overline{k}=2\overline{p}_{e^{\mp}}\): \[\Big{(}\delta\overline{E}_{D}^{(4)}-\frac{g_{s}}{2M_{s}}\mathfrak{d}_{\nu}^{2} \overline{E}^{2}\Big{)}\overline{E}(\overline{\mathbf{k}})\simeq-2m_{e}^{2}, \tag{4.10}\] where we used the fact that the \(ee^{+}\) pair takes most of the total momentum, so that \(E^{\prime}\simeq 0\) at threshold. Above, the energy violation, \(\delta\overline{E}_{D}\), in a 4-particle interaction follows from Eq. (4.4) for \(N=4\). The amount of such losses, during \(\bar{\nu}\) Cherenkov-like decays in the D-foam backgrounds, can be estimated, to leading order, as \[\delta\overline{E}_{D}^{(4)}\simeq\frac{g_{s}}{M_{s}}2\varsigma_{I}^{\nu} \overline{E}^{2}+\ldots, \tag{4.11}\] where the dimensionless factor \(\varsigma_{I}\), which controls the intensity of the energy violation during reactions in which neutrinos (antineutrinos) are involved, is _a priori_ distinct from the QG parameter \(\mathfrak{d}^{2}\) that appears in the modified dispersions, but is in principle of the same order of magnitude (for details on (4.11), see Appendix A or [55]). The fact that \(\delta\overline{E}_{D}\geq 0\) (or \(\varsigma_{I}>0\)), i.e., that energy is _lost_ during particle interactions in the D-foam, follows from the underlying stringy treatment [34] of the recoil D-brane deformation. The condition that observed neutrinos are near or below the threshold energy then yields \[-\frac{g_{s}}{2M_{s}}(4\varsigma_{I}^{\nu}-\mathfrak{d}_{\nu}^{2})\overline{k} ^{3}\leq 2m_{e}^{2}.
\tag{4.12}\] This inequality means that the stability of high-energy IceCube events can merely provide bounds on a _combined quantity_ of the fundamental parameters of the foam. For example, given the most energetic event, #35 with an energy of 2 PeV [4],5 of the time "advance" type [9], and hence probably a superluminal antineutrino event, we then infer a limit from (4.12): Footnote 5: A higher-energy 2.6 PeV track event ATel #7856 [5] is deduced to be a “delay” event [9], which corresponds to a subluminal neutrino in our case, and hence cannot be used to cast bounds from (4.12). \[|\mathfrak{d}_{\nu}^{2}-4\varsigma_{I}^{\nu}|g_{s}\sqrt{\alpha^{\prime}} \lesssim 1.3\times 10^{-25}\ \text{GeV}^{-1}. \tag{4.13}\] Here \(\alpha^{\prime}\) is the string Regge slope, related to the string mass scale via \(M_{s}=(\alpha^{\prime})^{-1/2}\). It is obvious that the constraint (4.13), set by the threshold effect of \(\bar{\nu}\to\bar{\nu}ee^{+}\) for a superluminal antineutrino with measured energy at \(\sim 2\) PeV, has nothing to do with the value we get for the stringy QG scale in Eq. (3.4) (or (3.5)). To understand this situation from another viewpoint, we invoke the explicit form of the VPE threshold in D-foam cases from Eq. (4.12): \[E_{\text{thr}}^{\text{VPE}}\lesssim\left[\frac{M_{D}m_{e}^{2}}{(\mathfrak{d}_ {\nu}/2)^{2}-\varsigma_{I}^{\nu}}\right]^{1/3},\quad\text{for}\ \ \bar{\nu}\text{'s}. \tag{4.14}\] It now becomes clear that the above kinematical threshold is finite only if \(4\varsigma_{I}<\mathfrak{d}^{2}\); otherwise, it would be either infinite (for \(\mathfrak{d}^{2}=4\varsigma_{I}\)) or imaginary (\(\mathfrak{d}^{2}<4\varsigma_{I}\)), implying that the pair emission is not permitted to happen in a vacuum. Even in the case of \(\mathfrak{d}^{2}>4\varsigma_{I}\), the process could be effectively inhibited if one has \(\varsigma_{I}\approx(\mathfrak{d}/2)^{2}\), in which case the energy threshold would be bumped up to a very high energy scale, for instance of order PeV, so that PeV-scale (anti)neutrinos will not be depleted by \(\bar{\nu}\to\bar{\nu}ee^{+}\). We are then able to observe superluminal events of such energies as detected by IceCube, albeit at the price of a very small difference between \(\varsigma_{I}\) and \(\frac{1}{4}\mathfrak{d}^{2}\); namely, one may expect that (in the late-era Universe) \[\varsigma_{I}^{\nu}\sim 3\times 10^{-19}\,\frac{M_{s}}{g_{s}}\ \text{GeV}^{-1}, \tag{4.15}\] and that \[(\mathfrak{d}_{\nu}^{2}-4\varsigma_{I}^{\nu})\lesssim 8.4\times 10^{-8},\ \ \text{for}\ \ \mathfrak{d}_{\nu}^{2}>4\varsigma_{I}^{\nu}, \tag{4.16}\] which follows from (4.13) and the fact that, for \(\mathfrak{d}^{2}\lesssim\mathcal{O}(1)\), we have \(M_{s}/g_{s}<6.4\times 10^{17}\) GeV from Eq. (3.4). The constraint (4.16), corresponding to a threshold energy around 2 PeV, which is of the desired order for observing PeV IceCube events, as explained, is _not_ unnaturally small in the context of D-particle foam models, where, as we have mentioned, both parameters of the foam, \(\mathfrak{d}^{2}\) and \(\varsigma_{I}\), are in general free to be adjusted. We emphasize that the energy loss in particle reactions, resulting from the nontrivial recoil of defects in the string/D-particle foam, distinguishes our approach from field-theoretic models, where energy remains conserved in the usual sense even though Lorentz invariance is broken.
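The numbers in Eqs. (4.13) and (4.16) can be verified with elementary arithmetic; the following sketch (ours, with assumed variable names) reproduces them from the threshold condition (4.12):

```python
# Check of the VPE threshold constraint, Eqs. (4.12)-(4.16).
M_E = 0.511e-3     # GeV, electron mass
E_EVENT = 2.0e6    # GeV, IceCube event #35 (~2 PeV)

# At threshold, (g_s/2M_s)(d^2 - 4*zeta) E^3 = 2 m_e^2, so stability of the
# 2 PeV event bounds the combination (d^2 - 4*zeta) * g_s/M_s:
combo = 4.0 * M_E**2 / E_EVENT**3
print(f"|d^2 - 4 zeta| g_s sqrt(alpha') < {combo:.1e} GeV^-1")  # ~1.3e-25, Eq. (4.13)

# With d^2 <~ O(1), M_s/g_s < 6.4e17 GeV from Eq. (3.4), giving Eq. (4.16):
print(f"d^2 - 4 zeta < {combo * 6.4e17:.1e}")                   # ~8.4e-8
```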
It is the energy losses caused by the quantum D-brane foam in this approach that raise the thresholds for superluminal neutrino decay, permitting stable propagation. Hence, in this sense, the very tight constraints cast so far by means of pair-production threshold analyses [51; 52; 53; 54], which aim at limiting superluminal velocities within LV frameworks that retain the usual conservation of energy and momentum, are _not_ applicable to our case. This can be understood more clearly by inspecting again the foam-modified VPE threshold (4.14) while adopting the value of \(M_{D}/\mathfrak{d}^{2}\) from Eq. (3.4): \[E_{\text{thr}}^{\text{VPE}}\leq(4m_{e}^{2}\mathcal{E}_{*}^{\nu})^{1/3}, \tag{4.17}\] or approximately, we may write \(E_{\text{thr}}\simeq(4m_{e}^{2}\mathcal{E}_{*}^{\nu})^{1/3}\), corresponding to the emission of a zero-energy antineutrino. Here a new energy scale \(\mathcal{E}_{*}\) is defined as \[\mathcal{E}_{*}^{\nu}\coloneqq\frac{E_{\text{LV}}^{\nu}}{1-4\varsigma_{I}^{ \nu}E_{\text{LV}}^{\nu}/M_{D}}. \tag{4.18}\] This indicates in a clear manner that the threshold limits established in, e.g., Refs. [51; 52; 53; 54] are actually imposed on the scale \(\mathcal{E}_{*}\); they can be evaded if \(\mathcal{E}_{*}\leq 0\), or easily satisfied in the case \(\mathcal{E}_{*}>0\) by a proper assignment of the value of \(\varsigma_{I}\), but they are _not_ imposed upon the _actual_ neutrino Lorentz violation scale \(E_{\text{LV}}^{\nu}\), which corresponds to \(M_{D}/\mathfrak{d}^{2}\), as in Eq. (3.4). The latter, as stressed, may only be limited by the in-vacuo velocity dispersion effect via time-of-flight analyses, such as those performed in Refs. [7; 8; 9; 10; 11], wherein a neutrino speed variation at about \(10^{17}\) GeV is suggested from IceCube neutrinos when associated with GRB candidates, or studies thereof [39; 40; 41; 42], where comparable flight-time bounds are obtained. Here, we end up with a consistent description of the findings of Refs. [9; 10; 11] together with the \(\nu\) decay threshold constraint in the (stringy) QG framework adopted here. We comment briefly here, for completeness, on a previous argument [56] for a possible \(\sim 2\) PeV cutoff in the \(\nu(\bar{\nu})\) spectrum inferred from the initial IceCube data set [4], which could be due to novel LV processes, particularly the neutrino splitting (NS), which, as argued in [56], might play a more important role than pair emission by the neutrinos. Though the claim of this drop-off and its interpretation in terms of neutrino splittings are outdated, and actually disfavored by more recent data (see below for details),6 it may still be useful to provide an additional refutation within our approach, by invoking the kinematical threshold for that LV splitting channel:7 Footnote 7: We postpone the discussion of superluminal \(\bar{\nu}\)-splitting, which is similar to that of vacuum pair emission, to Appendix B. \[E_{\rm thr}^{\rm NS}\simeq\bigg{[}9m_{\nu}^{2}\frac{M_{s}}{g_{s}(\mathfrak{d}_{ \nu}^{2}-2\varsigma_{I}^{\nu})}\bigg{]}^{1/3}. \tag{4.19}\] By adopting again the case in which \(\mathfrak{d}^{2}>4\varsigma_{I}(>2\varsigma_{I})\), and tuning their values, as was implemented in the VPE case, the anomalous decay through splitting into 3 neutrinos would then lead to a drop in the \(\bar{\nu}\) (not \(\nu\)) fluxes above 2 PeV. However, we find that this requires \((\mathfrak{d}^{2}-2\varsigma_{I})<\mathcal{O}(10^{-18}-10^{-19})\), which is inconsistent with (4.16). This implies that there exists some (unnatural?)
cancellation in \(\mathfrak{d}^{2}-2\varsigma_{I}\), indicating again, now from the point of view of the string model, that the cutoff claimed elsewhere [56] appears very suspicious.

## 5 Discussion and conclusion

It is worthwhile to stress again that the finding of the neutrino speed variation, and the consequent Lorentz/CPT violation for cosmic neutrinos as in Refs. [8; 9; 10; 11] and [7], which we have devoted ourselves to understanding in Ref. [18] and in this paper with a string theory model, still needs further comprehensive examination with data. As these results are of fundamental importance, it is necessary to check whether additional IceCube neutrino events can still support the revealed regularity [8; 9; 10; 11], as well as to check the suggested subluminal and superluminal events, which, if they indeed correspond, respectively, to neutrinos and antineutrinos, would be a strong sign of the type of CPT breaking indicated by the theory. At present, as we mentioned previously, IceCube cannot distinguish neutrinos from antineutrinos, so the intriguing correspondence uncovered by [9; 10; 11] is only a conjecture from a phenomenological point of view, though it coincides with the prediction of the D-foam scenario, as explained thoroughly above. There is an exception for electron antineutrinos at 6.3 PeV, at which the induced event rate is enhanced via the \(W^{-}\) Glashow resonance channel [58], as this phenomenon only occurs with \(\bar{\nu}_{e}\)'s. A candidate Glashow resonance event has recently been reported by IceCube [57], indicating the existence of \(\gtrsim 6\) PeV cosmic antineutrinos. This may provide rare opportunities to test the result of [9; 10; 11] and the D-foam models. Indeed, if one can infer, through an analysis following the strategy of [7; 8; 9], that such events correspond to "advance" (i.e., superluminal) events, it would largely favor the interpretation explored here, and would also allow one to tighten (4.16) by a factor of at least \((6/2)^{3}\approx 27\). Together with the track-like event #7856 [5] with a deposited energy of \(\sim 2.6\) PeV (which implies that the true neutrino energy can be of \(\mathcal{O}(10)\) PeV), there appears to be no cutoff in the neutrino spectrum up to \(\sim 10\) PeV--the highest energy for which current data are available. Still, this event (ATel #7856) is inferred to be of the "delay" type according to the prior analysis [9] and hence is probably a subluminal neutrino, which is stable in the quantum D-brane foam. It is then natural to observe such neutrino events from cosmologically remote sources, providing no extra constraint on the string model, as opposed to the case of antineutrinos. Thus, no clearly observable cutoff is produced once VPE and similar processes are strongly inhibited for the superluminal neutrino species by a pertinent choice of the relevant QG parameters, which is what current observations actually indicate. This might suggest that stringy space-time foam provides a possible realistic interpretation for all of these LV phenomenologies. We propose to search for higher-energy (anti)neutrino events, which, if observed by IceCube or other neutrino telescopes in the future, would imply a much smaller difference between \(\mathfrak{d}^{2}\) and \(4\varsigma_{I}\) (or \(2\varsigma_{I}\)) for the superluminal picture.
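To quantify the tuning just mentioned, one can repeat the threshold arithmetic for the splitting channel (4.19) (an illustrative sketch of ours; the neutrino mass value is the upper bound quoted earlier):

```python
# Combination (d^2 - 2 zeta) needed to place the NS threshold, Eq. (4.19), at 2 PeV.
M_NU = 1.0e-9      # GeV, neutrino mass <~ 1 eV
M_D = 6.4e17       # GeV, D-particle mass scale for d^2 ~ O(1), cf. Eq. (3.4)
E_CUT = 2.0e6      # GeV, putative ~2 PeV cutoff

delta = 9.0 * M_NU**2 * M_D / E_CUT**3
print(f"d^2 - 2 zeta ~ {delta:.1e}")   # ~7e-19, i.e. O(1e-18 - 1e-19)
# Incompatible with 0 < d^2 - 4 zeta <~ 8.4e-8, Eq. (4.16): d^2 ~ 2 zeta
# would force d^2 - 4 zeta ~ -2 zeta < 0, hence the suspected fine-tuning.
```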
Of course, one is also encouraged to further examine the \(\nu\)- and \(\bar{\nu}\)-spectra, which, if both extend continuously (or fall smoothly), without any cutoff, may indicate a more natural possibility that no splitting process or pair emission should ever happen for the antineutrino even if it travels _in vacuo_ slightly superluminally--a case that can be naturally realized by \(\varsigma_{I}\geq\frac{1}{2}\mathfrak{d}^{2}\) (\(\Leftrightarrow\widetilde{\mathcal{E}}_{*}\leq 0\)) in the model. For details on this argument, we refer the reader to Appendix B. Before closing, we mention that the stringent constraint provided by LV-deformed patterns in neutrino oscillations [59] can also be accounted for in D-foam situations, by recalling that the recoil and its resulting effects discussed above are essentially geometrical and kinematical, depending only upon the neutrino energy (and the status of CPT). As such, the D-brane foam is _flavor blind_, with \(M_{\rm sQG}\) being the same for all neutrino species, hence with no effects on oscillations.8 Thus the exotic contribution to the flavor oscillation length \(L=2\pi/|\Delta E|\) (additive to the standard \(L_{m}\) from the mass difference), like those arising in some generic quantum gravity models, vanishes in this case (using \(\mathfrak{d}_{A}^{2}-\mathfrak{d}_{B}^{2}=\Delta\mathfrak{d}^{2}\sim 0\)): Footnote 8: Note also that existing neutrino experiments [60] are still far from achieving the required sensitivity [28] to detect QG decoherence effects on neutrino oscillations in D-foam backgrounds. \[E\Delta\mathfrak{d}_{\nu}^{2}L_{\rm sQG}\sim 2\pi\frac{M_{D}}{E}\ \Rightarrow\ L_{\rm sQG}\gg L_{m}, \tag{5.1}\] and \(L-L_{m}=\mathcal{O}(L_{m}/L_{\rm sQG})\), from which it follows that \(1/L\approx 1/L_{m}\), with \(L_{m}=4\pi E/\Delta m_{\nu}^{2}\). So, the present data [59] on neutrino oscillations do not proscribe the theory contemplated in Ref. [18] and in this work, with a universal scale \(M_{\rm sQG}\sim 10^{17}\) GeV. To summarize, a CPT-violating propagation of neutrinos (antineutrinos), whose velocities scale linearly with their energies, can emerge from a string/D-brane theory model of space-time foam that is _isotropic_, Lorentz invariant _on average_, but _stochastically fluctuating_ in a Gaussian manner. Given that high-energy astrophysical neutrinos have gained a prominent role in probing such novel quantum-gravitational effects, we argue that the finding of the neutrino speed variation \(v=1\mp E/E_{\rm LV}^{\nu}\) at about \(10^{17}\) GeV from IceCube GRB neutrino events [9; 10; 11] can serve as support for such a theory. The existence of both time "delay" and "advance" events with equal amounts of speed variation is well accounted for by means of the CPT-breaking aspects of such a scenario, indicating that neutrinos are subluminal while antineutrinos travel at speeds in excess of the light speed \(c\). For superluminal antineutrinos, we have shown that the novel energy (non)conservation implied by the recoiling D-particle foam may offer a viable mechanism for coping with the challenge posed by the anomalous decay channels such as electron-positron pair production. In this respect, the threshold constraints upon the superluminal velocities can be naturally avoided, remaining consistent with the findings of [8; 9; 10; 11] in this string-theory-inspired context. We call for further theoretical and experimental tests of the superluminality, Lorentz invariance violation and CPT violation of cosmic neutrinos.
One may anticipate that more evidence will be observed as more neutrino data are accrued, enabling the stringy foam interpretation endorsed here to be either verified or falsified. Such efforts would contribute to fundamental physics as well as to resolving important issues concerning the nature of space-time. This work was supported by the National Natural Science Foundation of China (Grant No. 12075003).

## Appendix A Energy violation in reactions

We briefly review the issue of energy nonconservation during interactions in a recoiling D-foam. The treatment essentially follows [55], whose result is then applied to \(\bar{\nu}\) decays via \(ee^{+}\) pair emission for our purposes. Following the theorem highlighted in Ref. [55], which states that during particle reactions the energy is in general violated but conserved for the _complete system_ (i.e., matter plus the recoiling D-particle, inaccessible to a low-energy braneworld observer), cf. Eq. (4.1), the energy violation in the simplest case, namely a \(1\to 2\)-body decay (or a 3-particle vertex), is \[\overline{E}_{1}-\overline{E}_{2}-\overline{E}_{3}=\frac{1}{2}\frac{M_{s}}{g _{s}}\langle\!\langle\mathbf{\mathcal{V}}_{(3)}^{2}\rangle\!\rangle, \tag{A.1}\] where \(\mathbf{\mathcal{V}}_{(3)}=[\mathbf{p}_{1}-(\mathbf{p}_{2}+\mathbf{p}_{3})]/M_{D}\) is the recoil velocity of a D-particle. We expect the average \(\langle\!\langle\cdots\rangle\!\rangle\) over effects of the foam medium to be proportional to: \[\langle\!\langle\mathcal{V}^{2}\rangle\!\rangle\sim\mathcal{O}(p^{2})(\varsigma _{I}/M_{D}^{2}). \tag{A.2}\] Here \(\varsigma_{I}\), as is made clear below, parametrizes the energy loss during reactions, and may in general differ from the variance \(\mathfrak{d}^{2}\) appearing in the particle dispersion relations. To estimate the above quantity, one may use the conservation of the (average) 3-momenta, \(\langle\!\langle M_{D}\mathbf{\mathcal{V}}\rangle\!\rangle=0\), incorporating also the recoil of the background D-particle. This has been performed in [55], to which we refer the reader for details. Eq. (A.1) then becomes \[\delta\overline{E}_{D}^{(3)}=\frac{\varsigma_{I}}{M_{D}}\overline{p}_{2}^{2}+ \frac{\varsigma_{I}}{2M_{D}}\sum_{i}\delta p_{i}^{2}+\ldots, \tag{A.3}\] where \(\overline{p}_{2}\) is the momentum of a particle emitted during the decay, say, particle 2, while particle 1 is converted into particle 3; \(\delta p^{2}\equiv\langle\!\langle\mathbf{p}^{2}\rangle\!\rangle-\overline{p}^ {2}\) is the minute variance of \(\overline{p}^{2}\), and the \(\ldots\) denote stochastic quantum uncertainties. The latter two are purportedly negligible compared to \(\overline{p}_{2}^{2}/M_{D}\), and hence can never cancel the leading term. We stress that, to leading order in \(1/M_{D}\sim M_{\rm Pl}^{-1}\), the matrix element for any particle reaction should in principle agree with that in low-energy relativistic field theory, while the "recoil" gravitational background that induces a violation of energy conservation would modify the usual _kinematics_ of the quantum field-theoretic result for (threshold) reactions. Multi-particle processes with \(N\geq 4\) may be factorized as products of 3-point vertices, and \(\delta\overline{E}_{D}\) is thus determined by summing up the corresponding violations in every 3-body interaction (A.3), independently of the particle species considered. In this case, \(\overline{p}_{2}\) may denote a typical momentum of a mediator that may be exchanged in, e.g., a 4-particle interaction.
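As a minimal illustration of Eq. (A.3) (a sketch of ours, keeping only the leading term and using the factorization into 3-point vertices described above; the parameter values are the illustrative ones quoted in the main text):

```python
# Leading recoil-induced energy loss per 3-point vertex, Eq. (A.3),
# and its N = 4 estimate obtained by summing two such vertices.
def delta_E_3pt(p2_gev, zeta_i, m_d_gev):
    """Leading term zeta_I * p2^2 / M_D of Eq. (A.3), in GeV."""
    return zeta_i * p2_gev**2 / m_d_gev

def delta_E_4pt(k_gev, zeta_i, m_d_gev):
    """Two 3-point vertices with mediator momentum p2 ~ k."""
    return 2.0 * delta_E_3pt(k_gev, zeta_i, m_d_gev)

M_D = 6.4e17               # GeV, cf. Eq. (3.4)
ZETA_I = 3e-19 * M_D       # dimensionless, cf. Eq. (4.15)
print(f"{delta_E_4pt(2.0e6, ZETA_I, M_D):.1e} GeV")  # loss in a 2 PeV reaction
```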
For the pair emission, i.e., \(\bar{\nu}\to\bar{\nu}Z^{*}\to\bar{\nu}ee^{+}\), whose 4-point amplitude can be viewed as a product of two fundamental leptonic vertices mediated by the exchange of an off-shell \(Z^{0}\) boson with momentum \(\overline{p}_{2}\), as in standard electroweak theory, the outgoing \(\bar{\nu}\) carrying energy \(\simeq 0\) at threshold, \(\overline{k}\simeq\overline{p}_{2}=2\overline{p}_{e}\), then yields, \[\delta\overline{E}_{D}^{(4)}\simeq\frac{2\varsigma_{I}^{\nu}}{M_{D}}\overline {k}^{2}+\ldots\gtrsim\frac{g_{s}\varsigma_{I}^{\nu}}{M_{s}}2E_{\rm thr}^{2},\] (A.4) where \(E_{\rm thr}\) is the threshold for the vacuum pair emission, as given by (4.17), and the foam parameter \(\varsigma_{I}\) characterizes the energy violation during processes involving the neutrino sector (indicated by the superscript \(\nu\)).

## Appendix B Neutrino splitting in D-foam

To understand how a superluminal antineutrino splits into three (\(\bar{\nu}\to\bar{\nu}\nu\bar{\nu}\)) in D-foam situations, it suffices to calculate its threshold energy. Let \(\overline{E},\overline{k}\) be the (average) energy and momentum of the incoming antineutrino, and \((\overline{E}^{\prime},\overline{\bf k}^{\prime}),(\overline{E},\overline{\bf k })_{\nu/\bar{\nu}}\) be the 4-momenta carried respectively by the outgoing \(\bar{\nu}\) and by the emitted \(\nu\bar{\nu}\) pair. The D-foam modified conservation law reads \[\overline{E} =\overline{E}^{\prime}+\overline{E}_{\nu}+\overline{E}_{\bar{\nu} }+\delta\overline{E}_{D}^{(4)},\] \[\overline{k} =\overline{k}^{\prime}+\overline{k}_{\nu}+\overline{k}_{\bar{\nu}},\] (B.1) where the fact that all momenta are collinear at threshold is taken into account. For the deformed dispersion relations with QG term \((\mp 1/2)\,\mathfrak{d}^{2}\overline{k}^{2}/M_{D}\), i.e., Eqs. (2.11) and (2.12) but with the mass-related term \(\langle\!\langle m_{\nu}^{2}/(2|{\bf k}|)\rangle\!\rangle\simeq m_{\nu}^{2}/(2 \overline{k})\), the threshold equation can be derived as \[\frac{m_{\nu}^{2}}{\overline{k}} -\frac{m_{\nu}^{2}}{\overline{k}^{\prime}}-\frac{m_{\nu}^{2}}{ \overline{k}_{\nu}}-\frac{m_{\nu}^{2}}{\overline{k}_{\bar{\nu}}}\] \[=2\times\delta\overline{E}_{D}^{(4)}+\frac{g_{s}\mathfrak{d}_{\nu}^{2}}{M _{s}}\big{(}(\overline{k}^{\prime})^{2}-\overline{k}^{2}-\overline{k}_{\nu}^{2 }+\overline{k}_{\bar{\nu}}^{2}\big{)}.\] (B.2) Note that the minute neutrino mass is _not_ negligible here, as this reaction exclusively involves the neutrino sector. By putting \(\overline{k}\simeq E_{\rm thr}\simeq 3\overline{k}^{\prime}=3\overline{k}_{\nu/ \bar{\nu}}\), one therefore finds \[4m_{\nu}^{2}=-\overline{k}\Big{(}\delta\overline{E}_{D}^{(4)}-\frac{g_{s}}{9M_ {s}}4\mathfrak{d}_{\nu}^{2}\overline{k}^{2}\Big{)}.\] (B.3) The amount of energy violation, \(\delta\overline{E}_{D}^{(4)}\), in this process is again given by the sum of the violations in each of the two 3-point interactions (A.3), assuming factorization of the relevant scattering amplitudes, via virtual \(Z\) exchange, as \(\bar{\nu}\)-splitting can be viewed as a "rotation" of the neutrino-neutrino scattering process. Hence, \[\delta\overline{E}_{D}^{(4)}\simeq\frac{2\varsigma_{I}^{\nu}}{M_{D}}\Big{(} \frac{2\overline{k}}{3}\Big{)}^{2}=\frac{8\varsigma_{I}^{\nu}}{9M_{D}} \overline{k}^{2}, \tag{B.4}\] where we assume that the three outgoing \(\nu\) (or \(\bar{\nu}\)) each carry off approximately 1/3 of the energy of the incoming \(\bar{\nu}\), i.e., \(\overline{k}=\overline{p}_{2}+\overline{k}^{\prime}(=\overline{k}/3)\), as used above.
We end up with some remarks below, based on the following kinematical threshold: \[E_{\rm thr}^{\rm NS}\simeq\left[\frac{9M_{D}m_{\nu}^{2}}{\mathfrak{d}_{\nu}^ {2}-2\varsigma_{I}^{\nu}}\right]^{1/3}=(9m_{\nu}^{2}\widetilde{\mathcal{E}}_{*}^{\nu})^{ 1/3}, \tag{B.5}\] with \[\widetilde{\mathcal{E}}_{*}^{\nu}\coloneqq\frac{E_{\rm LV}^{\nu}}{1-2 \varsigma_{I}^{\nu}E_{\rm LV}^{\nu}/M_{D}}. \tag{B.6}\] The splitting process opens up only if \(\mathfrak{d}^{2}>2\varsigma_{I}\) and is otherwise forbidden, with the special case \((\mathfrak{d},\varsigma_{I})=(0,0)\) yielding an infinite \(E_{\rm thr}\), as in the SM. If \(\sqrt{2\varsigma_{I}}<\mathfrak{d}\leq 2\sqrt{\varsigma_{I}}\), the decay channel through emitting \(ee^{+}\) pairs is not allowed, though \(\bar{\nu}\)-splitting can still happen. This latter reaction can be suppressed at the kinematical level in the case \(\varsigma_{I}\approx\frac{1}{2}\mathfrak{d}^{2}\), thereby making the antineutrino practically stable _in vacuo_ against any splitting effect at high energies. The threshold (B.5) scales with the neutrino mass, of order \(m_{\nu}\lesssim 1\) eV [37]. To push \(E_{\rm thr}\) to a sufficiently high scale, e.g., greater than 2 PeV, such that we are always below threshold when comparing with the IceCube data, an exceedingly small value of \((\mathfrak{d}^{2}-2\varsigma_{I})\) would be required. A more natural solution to that could be \(\varsigma_{I}\geq\frac{1}{2}\mathfrak{d}^{2}\), so that antineutrinos would never decay in D-foam geometries, despite traveling faster than light, and there is thus no need to fine-tune the model parameters.
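The two forms of the threshold in Eq. (B.5) can be checked to agree once \(\widetilde{\mathcal{E}}_{*}\) is given by Eq. (B.6); a small sketch (ours, with an arbitrary illustrative parameter point):

```python
# Equivalence of the two forms of the splitting threshold, Eqs. (B.5)-(B.6).
M_D, D2, ZETA = 6.4e17, 1.0, 0.4   # GeV; illustrative parameter choice
M_NU = 1.0e-9                      # GeV

E_LV = M_D / D2
E_TILDE = E_LV / (1.0 - 2.0 * ZETA * E_LV / M_D)   # Eq. (B.6)

thr_direct = (9.0 * M_D * M_NU**2 / (D2 - 2.0 * ZETA)) ** (1.0 / 3.0)
thr_via_E = (9.0 * M_NU**2 * E_TILDE) ** (1.0 / 3.0)
print(thr_direct, thr_via_E)       # identical, as they must be
```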
2305.11958
Running effects on QCD axion phenomenology
We study the impact of renormalization group effects on QCD axion phenomenology. Focusing on the DFSZ model, we argue that the relevance of running effects for the axion couplings crucially depends on the scale where the heavier Higgs scalars are integrated out. We study the impact of these effects on astrophysical and cosmological bounds as well as on the sensitivity of helioscope experiments such as IAXO and XENONnT, showing that they can be sizable even in the most conservative case in which the two Higgs doublets remain as light as the TeV scale. We provide simple analytical expressions that accurately fit the numerical solutions of the renormalization group equations as a function of the mass scale of the heavy scalars.
Luca Di Luzio, Maurizio Giannotti, Federico Mescia, Enrico Nardi, Shohei Okawa, Gioacchino Piazza
2023-05-19T18:45:35Z
http://arxiv.org/abs/2305.11958v2
# Running into QCD axion phenomenology ###### Abstract We study the impact of renormalization group effects on QCD axion phenomenology. Focusing on the DFSZ model, we argue that the relevance of running effects for the axion couplings crucially depends on the scale where the heavier Higgs doublet, charged under the Peccei-Quinn symmetry, is integrated out. We study the impact of these effects on astrophysical and cosmological bounds as well as on the sensitivity of helioscope experiments such as IAXO and XENONnT, showing that they can be sizable even in the most conservative case in which the two Higgs doublets remain as light as the TeV scale. We provide simple analytical expressions that accurately fit the numerical solutions of the renormalization group equations as a function of the mass scale of the heavy scalars. ###### Contents * 1 Introduction * 2 Running QCD axion couplings * 2.1 Analytical understanding of RG running effects * 2.2 DFSZ axion couplings beyond tree level * 3 RG effects on QCD axion phenomenology * 3.1 Astrophysical constraints * 3.2 Thermal axion cosmology * 3.3 Helioscope experiments * 4 Conclusions * Appendix A Numerical fit to RG effects ## 1 Introduction Axions are an intrinsic prediction of the Peccei-Quinn (PQ) mechanism [1; 2], which remains, after over four decades, the most appealing solution to the strong CP problem. This problem arises because Quantum Chromodynamics (QCD) predicts CP violating effects that are not observed experimentally. The PQ mechanism involves a new global chiral symmetry U(1)\({}_{\rm PQ}\), which is anomalous under QCD and spontaneously broken at a large energy scale \(f_{a}\) (PQ scale). The axion is the Nambu-Goldstone boson associated with the spontaneous breaking of this symmetry [3; 4], and is characterised by the fact that all its interactions are inversely proportional to \(f_{a}\). Although the original Weinberg-Wilczek model [3; 4], in which the scale of PQ breaking coincides with the electroweak (EW) symmetry breaking scale, was quickly ruled out, new viable models emerged early on, in which the PQ scale can be arbitrarily high so that all axion interactions can be sufficiently suppressed, yielding the so-called _invisible axion_. Two examples are particularly appealing for their simplicity: the KSVZ or hadronic axion [5; 6] and the DFSZ axion [7; 8]. The main difference between KSVZ and DFSZ-type axions is that the former do not couple to ordinary quarks and leptons at the tree level. Though many other possible axion models have been considered in the literature (see Ref. [9] for a comprehensive overview), the two above-mentioned models are by far the most studied ones and are universally regarded as benchmark QCD axion models. In recent years continuous progress in experimental technologies has brought within reach the possibility of detecting in terrestrial experiments the invisible axions arising in these models. This has stimulated a tremendous interest in this field, with several new theoretical and phenomenological studies, as well as a wealth of new experimental proposals (see [10; 11] for recent reviews). In the meantime, ongoing experiments have already started to probe the benchmark KSVZ/DFSZ axion models, and in the coming decades they will dig deep into the relevant parameter space region. From the theory side, this calls for the development of "precision axion physics", which will turn out to be crucial in the case of an axion discovery.
Indeed, from a given determination of the low-energy axion couplings to photons and other matter fields (such as electrons and nucleons) one would like to infer the structure of the high-energy theory, that is the ultraviolet (UV) completion of the axion effective field theory (EFT). This step is highly non-trivial, since it entails a large separation of scales, from the typical low-energy scale of axion experiments up to the PQ scale, \(f_{a}\gtrsim 10^{8}\) GeV. Hence, axion related physical quantities, as for example the axion couplings to Standard Model (SM) fermions, are potentially affected by large radiative corrections, which can induce large deviations from the tree-level expressions. In the case of the KSVZ model, it was pointed out long ago [12; 13] that although the axion coupling to electrons is zero at tree level, a non-zero electron coupling can be sourced via loop corrections by the axion-photon coupling and, more recently, it was shown that the leading correction to this coupling is generated at even higher orders via the anomalous axion coupling to gluons [14]. Nowadays, the full one-loop anomalous dimensions for the \(d=5\) axion effective Lagrangian have been computed [15; 16; 17], while running effects have been investigated for the benchmark DFSZ/KSVZ axion models in Ref. [14], and for the so-called astrophobic axion models (which feature non-universal PQ charges [18; 19]) in Ref. [20]. The purpose of this work is to study QCD axion phenomenology in light of renormalization group (RG) effects, focussing for definiteness on the mass window \(m_{a}\in[\mathrm{meV},\mathrm{eV}]\). This region of parameter space shows a remarkable complementarity among the existing bounds on the different axion couplings, namely to photons, electrons, nucleons and pions, stemming from helioscope searches, as well as from astrophysics and cosmology. It is therefore an ideal playground in which to investigate the consequences of running effects for QCD axion phenomenology. In particular, we will focus on the large corrections induced by the top Yukawa coupling, which apply to a large class of axion models where the SM fermions are charged under the U(1)\({}_{\mathrm{PQ}}\) symmetry. A paradigmatic example is the universal DFSZ model, which features two Higgs doublets and one SM singlet scalar, and whose axion parameter space at tree level depends solely on \(m_{a}\) and \(\tan\beta\). However, top-Yukawa radiative corrections induce a logarithmic dependence of the effective axion couplings on the mass scale of the heavy scalar degrees of freedom of the two Higgs doublet model (2HDM), which can range from about 1 TeV up to the PQ scale, \(f_{a}\). As we shall see, these corrections are often large and may skew the parameter space region that is effectively probed by terrestrial experiments and by astrophysical/cosmological observations. The paper is organized as follows. In Sect. 2 we discuss the structure of top-Yukawa radiative corrections, and we provide approximate analytical expressions for the dependence of the axion couplings on these corrections. Sect. 3 is devoted to studying the impact of running effects on QCD axion phenomenology, including the consequences for astrophysical and cosmological limits as well as for the sensitivity of future axion experiments. We conclude in Sect. 4. Details on the solutions to the RG equations are provided in Appendix A.
## 2 Running QCD axion couplings Of central interest for axion phenomenology are the axion couplings to photons and matter fields (electrons, nucleons, as well as other hadrons relevant for axion production). They are defined by the following interaction Lagrangian: \[\mathcal{L}_{a} =C_{\gamma}\frac{\alpha}{8\pi}\frac{a}{f_{a}}F^{\mu\nu}\tilde{F}_{\mu \nu}+\sum_{f=p,\,n,\,e}C_{f}\frac{\partial_{\mu}a}{2f_{a}}\bar{f}\gamma^{\mu} \gamma_{5}f\] \[+C_{\pi}\frac{\partial_{\mu}a}{f_{a}f_{\pi}}(2\partial^{\mu} \pi^{0}\pi^{+}\pi^{-}-\pi^{0}\partial^{\mu}\pi^{+}\pi^{-}-\pi^{0}\pi^{+} \partial^{\mu}\pi^{-})\] \[+C_{\pi N}\frac{\partial_{\mu}a}{2f_{a}f_{\pi}}(i\pi^{+}\overline {p}\gamma^{\mu}n-i\pi^{-}\overline{n}\gamma^{\mu}p)\] \[+C_{N\Delta}\frac{\partial^{\mu}a}{2f_{a}}\!\!\left(\overline{p} \,\Delta_{\mu}^{+}+\overline{\Delta_{\mu}^{+}}\,p+\overline{n}\,\Delta_{\mu} ^{0}+\overline{\Delta_{\mu}^{0}}\,n\right)+\ldots\,, \tag{1}\] where \(F_{\mu\nu}\) denotes the electromagnetic field strength, \(\tilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}F^{\rho\sigma}\) (with \(\epsilon^{0123}=-1\)) its dual, \(f=p,n,e\) runs over low-energy matter fields, and \(C_{\gamma},\,C_{f},\,C_{\pi},\,C_{\pi N},\,C_{N\Delta}\) are \(O(1)\) dimensionless coefficients. The axion-pion coupling in the second line of Eq. (1) (with \(f_{\pi}=92.1(8)\) MeV [21] the pion decay constant) is of phenomenological relevance for thermal axion production in the early Universe, while the axion contact interactions with pions and nucleons (third line) and with \(\Delta\)-resonances (fourth line) are important for axion production in Supernovae (SNe). The ellipses stand for other possible axion interaction terms which will not be considered in this paper. In the context of axion phenomenology, one usually employs the dimensional couplings \(g_{a\gamma}=\frac{\alpha}{2\pi}\,C_{\gamma}/f_{a}\) and \(g_{af}=C_{f}\,m_{f}/f_{a}\). In particular, \(C_{\gamma}=E/N-1.92(4)\), where \(E/N\) is the ratio between the electromagnetic and QCD anomalies of the PQ current (for typical values in concrete axion models, see e.g. [22; 23]). Note that the anomalous axion-photon coupling is insensitive to running effects, being one-loop exact. Moreover, threshold corrections to \(g_{a\gamma}\) are safely negligible for \(m_{a}\ll m_{e}\) since they scale as \((m_{a}/m_{e})^{2}\) - see e.g. Ref. [24]. Hence, in the following, we will only focus on radiative corrections to the axion couplings to electrons and hadrons. Axion-hadron interactions can be expressed in terms of the model-independent axion gluon coupling (which fixes the absolute normalization in terms of \(f_{a}\)) and the axion couplings to quark fields, \(q=u,d,s,c,b,t\), defined via the Lagrangian term1 Footnote 1: In this work we focus on universal axion models, so that the axial-vector currents in Eq. (2) are flavor diagonal. However, as long as the PQ charges of different generations are not hierarchical, most of the considerations related to top-Yukawa running effects, to be discussed below, apply as well to non-universal axion models (see e.g. Ref. [20]). \[C_{q}\frac{\partial_{\mu}a}{2f_{a}}\overline{q}\gamma^{\mu} \gamma_{5}q\,. \tag{2}\] In terms of the latter, the axion couplings to hadrons defined in Eq. (1) read (see e.g.
[25; 26; 27; 9]) \[C_{p} =C_{u}\Delta_{u}+C_{d}\Delta_{d}+C_{s}\Delta_{s}-\left(\frac{ \Delta_{u}}{1+z}+\frac{z\,\Delta_{d}}{1+z}\right), \tag{3}\] \[C_{n} =C_{d}\Delta_{u}+C_{u}\Delta_{d}+C_{s}\Delta_{s}-\left(\frac{z\, \Delta_{u}}{1+z}+\frac{\Delta_{d}}{1+z}\right),\] (4) \[C_{\pi} =-\frac{1}{3}\!\left(C_{u}-C_{d}-\frac{1-z}{1+z}\right),\] (5) \[C_{\pi N} =-\frac{3}{\sqrt{2}}C_{\pi}\,,\qquad C_{N\Delta}=\frac{3\,\sqrt{3 }}{2}C_{\pi}\,g_{A}\,, \tag{6}\] where \(C_{u,\,d,\,s}=C_{u,\,d,\,s}(2\,{\rm GeV})\) are low-energy couplings evaluated at the scale \(\mu=2\,{\rm GeV}\) by numerically solving the RG equations from the boundary values \(C_{u,\,d,\,s}(f_{a})\) (see below), and \(\Delta_{u,\,d,\,s}\) denote the nucleon matrix elements of the quark axial-vector currents. In particular, \(g_{A}\equiv\Delta_{u}-\Delta_{d}=1.2754(13)\) from \(\beta\)-decays [21], \(\Delta_{u}=0.847(18)(32)\), \(\Delta_{d}=-0.407(16)(18)\) and \(\Delta_{s}=-0.035(6)(7)\) (at 2 GeV in the \(\overline{\rm MS}\) scheme) are the \(N_{f}=2+1\) FLAG averages [28], which are dominated by the results of [29], and \(z=m_{u}(2\,{\rm GeV})/m_{d}(2\,{\rm GeV})=0.49(2)\)[30]. Combining lattice values with the high-precision determination of \(g_{A}\), we obtain the weighted averages \(\Delta_{u}=0.858(22)\), \(\Delta_{d}=-0.418(22)\) and \(\Delta_{s}=-0.035(9)\). Running effects on the low-energy couplings of the axion to first generation SM fermions can be parametrized as2 (see e.g. [14]) Footnote 2: In universal axion models the same corrections apply to each generation. \[C_{u}(2\,{\rm GeV}) =C_{u}(f_{a})+\Delta C_{u}\,, \tag{7}\] \[C_{d}(2\,{\rm GeV}) =C_{d}(f_{a})+\Delta C_{d}\,,\] (8) \[C_{e}(m_{e}) =C_{e}(f_{a})+\Delta C_{e}\,, \tag{9}\] with \[\Delta C_{\Psi}\simeq r_{\Psi}^{\prime}(m_{\rm BSM})\;C_{t}(f_{a})\,, \tag{10}\] and \(\Psi=u,d,\,e\). The parameter \(r_{\Psi}^{\prime}(m_{\rm BSM})\) encodes the RG correction, approximated by keeping only the top-Yukawa contribution, and depends logarithmically on the parameter \(m_{\rm BSM}\simeq m_{H,\,A,\,H^{\pm}}\), which denotes collectively the mass scale of the heavy scalar degrees of freedom (we implicitly assume for the heavy modes of the scalar doublets the decoupling limit [31], in which all the heavy masses are approximately degenerate). The \(m_{\rm BSM}\) scale depends on the structure of the DFSZ scalar potential (see e.g. [32; 33]) and we take it to range from about 1 TeV (the approximate lower bound set by LHC searches for new heavy scalars) up to \(f_{a}\). Note that as long as the couplings are considered at a renormalization scale \(\mu\) above \(m_{\rm BSM}\) there are no top-Yukawa running effects. This is because in this regime the axion couplings to the SM fermions correspond to the global charges of the PQ current, which is classically conserved, and thus they do not renormalize. For \(\mu<m_{\rm BSM}\) we enter a different regime, in which the Higgs doublets with different PQ charges mix to give rise to heavy scalars (which are integrated out) and to the light Higgs, which has no well-defined charge. In this effective theory there is no longer a conserved PQ current, and running effects for the axion-fermion couplings can kick in. This is the reason why the largest RG effects appear when the BSM scale is taken at the largest possible scale \(m_{\rm BSM}\sim f_{a}\). On the contrary, when the 2HDM structure holds all the way down to the TeV scale, running effects are much less sizeable.
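Returning to Eqs. (3)-(4), the nucleon couplings follow from the quark couplings by simple arithmetic; the sketch below (ours, using the weighted averages quoted above) evaluates the model-independent piece obtained for vanishing \(C_q\):

```python
# Nucleon couplings from quark couplings, Eqs. (3)-(4); illustrative sketch.
D_U, D_D, D_S = 0.858, -0.418, -0.035   # weighted averages quoted in the text
Z = 0.49                                 # m_u/m_d at 2 GeV

def nucleon_couplings(c_u, c_d, c_s):
    c_p = c_u*D_U + c_d*D_D + c_s*D_S - (D_U + Z*D_D) / (1.0 + Z)
    c_n = c_d*D_U + c_u*D_D + c_s*D_S - (Z*D_U + D_D) / (1.0 + Z)
    return c_p, c_n

# Model-independent piece (all C_q = 0):
print(nucleon_couplings(0.0, 0.0, 0.0))   # ~(-0.44, -0.002)
```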
In Appendix A we provide a fit to \(r_{\Psi}^{\prime}(m_{\rm BSM})\) obtained by interpolating the numerical solution to the RG equations (cf. Eqs. (A.7)-(A.8) and Tab. A.4). Taking, for instance, \(m_{\rm BSM}=f_{a}=10^{10}\) GeV one finds \[C_{u}(2\,{\rm GeV}) \simeq C_{u}(f_{a})-0.264\,C_{t}(f_{a})\,, \tag{11}\] \[C_{d}(2\,{\rm GeV}) \simeq C_{d}(f_{a})+0.266\,C_{t}(f_{a})\,,\] (12) \[C_{e}(m_{e}) \simeq C_{e}(f_{a})+0.265\,C_{t}(f_{a})\,. \tag{13}\] ### Analytical understanding of RG running effects To understand the phenomenological impact of the RG corrections to the axion couplings, and to compare it with the current experimental sensitivity, it is convenient to provide some analytical approximations. To this aim, it is advantageous to introduce the iso-scalar (\(C_{0}\)) and iso-vector (\(C_{3}\)) nuclear couplings (see also [20]), defined as follows: \[C_{0} =\frac{1}{2}\big{(}C_{p}+C_{n}\big{)}=\frac{1}{2}(\Delta_{u}+ \Delta_{d})(C_{u}+C_{d}-1)+\Delta_{s}C_{s}\,, \tag{14}\] \[C_{3} =\frac{1}{2}(C_{p}-C_{n})=\frac{g_{A}}{2}\bigg{(}C_{u}-C_{d}- \frac{1-z}{1+z}\bigg{)}\,, \tag{15}\] where the right-hand sides are obtained from the expressions for \(C_{p,n}\) given in Eqs. (3)-(4). From Eqs. (5)-(6) we see that all the other couplings are proportional to the iso-vector combination \(C_{3}\): \(C_{\pi}=-\frac{2}{3}\,g_{A}^{-1}C_{3}\), \(C_{\pi N}=\sqrt{2}\,g_{A}^{-1}\,C_{3}\), \(C_{N\Delta}=-\sqrt{3}\,C_{3}\). The RG correction to the iso-vector combination, \(\Delta C_{3}\simeq 0.64\,C_{t}(f_{a})\,(r_{u}^{\prime}-r_{d}^{\prime})\), may be sizeable, with the exact value depending on the \(m_{\rm BSM}\) scale. An excellent fit to the combination \(r_{u}^{\prime}-r_{d}^{\prime}\), for \(m_{\rm BSM}\) in the range 1 TeV to \(10^{18}\) GeV, is given by \[r_{u}^{\prime}-r_{d}^{\prime}\approx-0.54\ln\Bigl{(}\sqrt{x}-0.52\Bigr{)}\,, \tag{16}\] with \(x=\log_{10}(m_{\rm BSM}/{\rm GeV})\). This expression reproduces our numerical results with a precision better than 2% (see Appendix A). Then, in the relevant range for \(m_{\rm BSM}\), we have \(0.3\lesssim|r_{u}^{\prime}-r_{d}^{\prime}|\lesssim 1\). Since in universal axion models we expect \(C_{3}\sim\,C_{t}\) (the exact relation depending on the model parameters), we can conclude that \(\Delta C_{3}/C_{3}\) can be of the order of a few 10%, and even larger. For example, in the case of the DFSZ axion (to be discussed below), we find \[\left|\frac{\Delta C_{3}}{C_{3}}\right|_{\rm DFSZ}\simeq\frac{0.5}{\tan\beta} \left(r_{u}^{\prime}-r_{d}^{\prime}\right)+O\Bigl{(}(r_{d}^{\prime}-r_{u}^{ \prime})^{2}\Bigr{)}\,, \tag{17}\] which can become quite significant at small \(\tan\beta\). On the other hand, the RG correction to the iso-scalar coupling \(C_{0}\) is, in general, very small. From Eq. (14), we see that this coupling combination gets contributions from \((r_{u}^{\prime}+r_{d}^{\prime})\) and from \(\Delta_{s}C_{s}\). As was pointed out in Ref. [20], in the leading approximation in which only the contribution of the top-quark Yukawa coupling is kept, the combination \((r_{u}^{\prime}+r_{d}^{\prime})\) is characterized by a strong cancellation, see for example Eqs. (11)-(12), and is numerically very small \(\sim 0.2\%\).3 Hence, eventually the leading correction to \(C_{0}\) comes from the RG correction \(\Delta_{s}\Delta C_{s}\) to the last term in Eq. (14). It is easy to estimate this contribution from our general results, using \(r_{s}^{\prime}=r_{d}^{\prime}\), which holds for universal models.
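The fit (16) is straightforward to evaluate; the following sketch (ours) reproduces the benchmark running corrections of Eqs. (11)-(12):

```python
import math

# Fit to r_u' - r_d', Eq. (16), and the iso-vector correction Delta C_3.
def r_u_minus_r_d(m_bsm_gev):
    x = math.log10(m_bsm_gev)
    return -0.54 * math.log(math.sqrt(x) - 0.52)

print(f"{r_u_minus_r_d(1e10):.3f}")   # ~-0.525, vs -0.264 - 0.266 from (11)-(12)

def delta_C3(m_bsm_gev, c_t_fa):
    """Delta C_3 ~ 0.64 * C_t(f_a) * (r_u' - r_d')."""
    return 0.64 * c_t_fa * r_u_minus_r_d(m_bsm_gev)
```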
In the end, we find that the RG corrections to \(C_{0}\) are only about 3% of the corresponding corrections to \(C_{3}\), and hence this combination of couplings (and the corresponding iso-scalar axion coupling to nucleons, \(g_{aN,0}=C_{0}m_{N}/f_{a}\)) is practically unaffected by RG running effects. ### DFSZ axion couplings beyond tree level The scalar sector of DFSZ models [7; 8] features a SM singlet complex scalar \(\Phi\) and two Higgs doublets \(H_{u,d}\) that couple respectively to up- and down-type quarks in a generation-independent way. Under SU(3)\({}_{C}\times\) SU(2)\({}_{L}\times\) U(1)\({}_{Y}\) they transform as \(\Phi\sim(1,1,0)\), \(H_{u}\sim(1,2,1/2)\) and \(H_{d}\sim(1,2,-1/2)\). The threefold re-phasing symmetry of the scalar sector, U(1)\({}_{\Phi}\times\) U(1)\({}_{H_{u}}\times\) U(1)\({}_{H_{d}}\), is broken to U(1)\({}_{\rm PQ}\times\) U(1)\({}_{Y}\) by a renormalizable non-Hermitian operator that can be chosen as \(H_{u}H_{d}\Phi^{\dagger 2}\) or \(H_{u}H_{d}\Phi^{\dagger}\).4 There are two possible variants of the model, depending on whether the lepton sector couples to \(H_{d}\) (DFSZ1) or to \(\tilde{H}_{u}=i\sigma^{2}H_{u}^{*}\) (DFSZ2). For a review see Sec. 2.7.2 in Ref. [9]. The Yukawa sector of the DFSZ1 model contains the following operators Footnote 4: The first possibility yields a number of domain walls \(N_{\rm DW}=6\) while the second \(N_{\rm DW}=3\), but they remain otherwise indistinguishable from the point of view of low-energy phenomenology. \[\overline{q}_{i}u_{j}H_{u}\,,\ \ \overline{q}_{i}d_{j}H_{d}\,,\ \ \overline{\ell}_{i}e_{j}H_{d}\,, \tag{18}\] where a sum over generation indices \(i,j=1,2,3\) is left understood, and \(q_{i}\), \(\ell_{i}\) denote the quark and lepton SU(2)\({}_{L}\) left-handed (LH) doublets while \(u_{j}\), \(d_{j}\), \(e_{j}\) denote the right-handed (RH) singlets. The corresponding coefficients for the axion couplings at the UV scale \(f_{a}\) are \[\frac{E}{N}=\frac{8}{3}\,,\ \ C_{u,c,t}(f_{a})=\frac{c_{\beta}^{2}}{3}\,,\ \ C_{d,s,b}(f_{a})=C_{e,\mu,\tau}(f_{a})=\frac{s_{\beta}^{2}}{3}\,, \tag{19}\] with \(c_{\beta}\equiv\cos\beta\), \(s_{\beta}\equiv\sin\beta\) and \(\tan\beta=\langle H_{u}\rangle/\langle H_{d}\rangle\equiv v_{u}/v_{d}\). The domain in which \(\tan\beta\) is allowed to vary is obtained by requiring that the DFSZ Yukawas remain perturbative up to scales of \(\mathcal{O}(f_{a})\). This corresponds to imposing perturbative unitarity on Higgs-mediated \(2\to 2\) SM fermion scatterings (see e.g. [34]) up to \(f_{a}\). The perturbative domain is evaluated by evolving the values of the gauge couplings and of the SM Yukawa couplings at \(m_{Z}\) given in Ref. [35] up to the scale \(m_{\rm BSM}\), employing two-loop RG equations. For \(m_{\rm BSM}\sim f_{a}\) the SM Yukawa couplings are RG-evolved from \(m_{Z}\) to \(f_{a}\) and matched to the DFSZ couplings \(Y_{t}^{\rm DFSZ}=Y_{t}(f_{a})/s_{\beta}\) and \(Y_{b}^{\rm DFSZ}=Y_{b}(f_{a})/c_{\beta}\), requiring \(Y_{t,b}^{\rm DFSZ}<\sqrt{16\pi/3}\)[9]. This yields the perturbative domain \[\tan\beta\in[0.14,500]\qquad(m_{\rm BSM}\sim f_{a}\sim 10^{9}\,{\rm GeV})\,. \tag{20}\] On the other hand, for \(m_{\rm BSM}\ll f_{a}\) one should require a stronger perturbativity constraint on \(Y_{t,b}^{\rm DFSZ}\), since running effects from \(m_{\rm BSM}\) to \(f_{a}\) would tend to develop Landau poles in the DFSZ Yukawa couplings below \(f_{a}\).
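A rough illustration of how the lower end of Eq. (20) arises is the following sketch (ours); the RG-evolved value \(Y_t(f_a)\approx 0.5\) is an assumption made here purely for illustration:

```python
import math

# Rough estimate of the low-tan(beta) perturbativity bound, Eq. (20).
Y_T_FA = 0.5                                # ASSUMED Y_t(f_a), illustrative only
Y_MAX = math.sqrt(16.0 * math.pi / 3.0)     # perturbative-unitarity cap ~4.1

# Y_t^DFSZ = Y_t(f_a)/sin(beta) < Y_MAX  =>  sin(beta) > Y_t(f_a)/Y_MAX
s_min = Y_T_FA / Y_MAX
print(f"tan(beta) > {s_min / math.sqrt(1.0 - s_min**2):.2f}")  # ~0.12, cf. 0.14
```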
In this case \(Y_{t,b}\) are evolved from \(m_{Z}\) to \(m_{\rm BSM}\) within the SM, and after matching with the DFSZ couplings \(Y_{t}^{\rm DFSZ}(m_{\rm BSM})=Y_{t}(m_{\rm BSM})/s_{\beta}\) and \(Y_{b}^{\rm DFSZ}(m_{\rm BSM})=Y_{b}(m_{\rm BSM})/c_{\beta}\), the running of \(Y_{t,b}^{\rm DFSZ}\) from \(m_{\rm BSM}\) to \(f_{a}\) is computed in the 2HDM. When \(m_{\rm BSM}\sim 1\) TeV, perturbative unitarity up to \(f_{a}\sim 10^{9}\,{\rm GeV}\) translates into the following interval \[\tan\beta\in[0.70,100]\qquad(m_{\rm BSM}\sim 1\,{\rm TeV})\,. \tag{21}\] Note that the perturbative domain of \(\tan\beta\) has a mild (logarithmic) dependence on the PQ scale \(f_{a}\). This is shown in Fig. 2 for the low \(\tan\beta\) region (a similar dependence is present also for the large \(\tan\beta\) region, where running effects are however less important). The Yukawa sector of the DFSZ2 model contains instead the following operators \[\overline{q}_{i}u_{j}H_{u}\,,\ \ \overline{q}_{i}d_{j}H_{d}\,,\ \ \overline{\ell}_{i}e_{j}\tilde{H}_{u}\,, \tag{22}\] and the corresponding axion coupling coefficients are \[\frac{E}{N}=\frac{2}{3}\,,\ \ C_{u,c,t}(f_{a})=-C_{e,\mu,\tau}(f_{a})=\frac{c_{ \beta}^{2}}{3}\,,\ \ C_{d,\,s,b}(f_{a})=\frac{s_{\beta}^{2}}{3}\,, \tag{23}\] with \(\tan\beta\) defined in the same perturbative domain as in DFSZ1.

Let us now proceed to discuss the impact of running effects in the DFSZ model. Approximate RG corrections to the axion couplings are collected in Tab. 1. Note that in the case of DFSZ1 the iso-vector combination \(C_{3}\), as well as the axion-electron coupling \(C_{e}\), receive large corrections at small \(\tan\beta\), that is, when the tree-level coupling vanishes.5 On the other hand, the RG corrections to the iso-scalar combination \(C_{0}\) remain small in the whole \(\tan\beta\) range. This is also displayed in Fig. 1, where the dashed line corresponds to the tree-level result, while the red band encodes the range of RG corrections obtained by varying \(m_{\rm BSM}\) between \(f_{a}=10^{9}\,{\rm GeV}\) (lower border of the red region) and \(1\,\mathrm{TeV}\) (upper border of the red region). We see that although for \(m_{\mathrm{BSM}}=1\,\mathrm{TeV}\) the running couplings trace closely the tree-level couplings as long as \(\tan\beta>1\), also in this case RG corrections become non-negligible at small \(\tan\beta\).

\begin{table}
\begin{tabular}{|l|l|l|}
\hline Coupling (DFSZ1) & Coupling (DFSZ2) & Approx. Correction \\
\hline \(C_{0}\simeq-0.20\) & \(C_{0}\simeq-0.20\) & \(\Delta C_{0}\approx 0\) \\
\(C_{3}\simeq-0.43\,\sin^{2}\beta\) & \(C_{3}\simeq-0.43\,\sin^{2}\beta\) & \(\Delta C_{3}\simeq-0.12\,l(x)\,\cos^{2}\beta\) \\
\(C_{e}=\frac{1}{3}\sin^{2}\beta\) & \(C_{e}=-\frac{1}{3}\cos^{2}\beta\) & \(\Delta C_{e}\simeq 0.094\,l(x)\,\cos^{2}\beta\) \\
\(C_{\gamma}=\frac{8}{3}-1.92\) & \(C_{\gamma}=\frac{2}{3}-1.92\) & \(\Delta C_{\gamma}=0\) \\
\hline
\end{tabular}
\end{table}
Table 1: Simplified expressions for the RG corrections in the DFSZ1 and DFSZ2 models, given in terms of \(\beta\) and of \(l(x)=\ln\left(\sqrt{x}-0.52\right)\), where \(x=\log_{10}(m_{\rm BSM}/{\rm GeV})\) parameterizes the new physics scale. While (in this simplified approximation) the corrections in DFSZ1/2 are the same, for the leptons the relative corrections differ: \(\Delta C_{e}/C_{e}\propto\cot^{2}\beta\) (DFSZ1) and \(\Delta C_{e}/C_{e}\simeq{\rm const.}\) (DFSZ2).
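The tree-level couplings of Eqs. (19)/(23) and the approximate corrections of Tab. 1 are simple enough to be combined in a few lines of code. The following Python sketch (ours; the function names are ours) returns the RG-corrected DFSZ coupling combinations as a function of \(\tan\beta\) and \(m_{\rm BSM}\):

```python
import math

def l(x):
    return math.log(math.sqrt(x) - 0.52)   # x = log10(m_BSM / GeV)

def dfsz_couplings(tan_beta, m_bsm, model="DFSZ1"):
    """Tree-level DFSZ couplings (Eqs. (19)/(23)) combined with the
    approximate RG corrections of Tab. 1 (top-Yukawa-only approximation)."""
    beta = math.atan(tan_beta)
    c2, s2 = math.cos(beta)**2, math.sin(beta)**2
    lx = l(math.log10(m_bsm))
    C0 = -0.20                                    # Delta C0 ~ 0
    C3 = -0.43 * s2 - 0.12 * lx * c2              # tree level + Delta C3
    Ce_tree = s2 / 3 if model == "DFSZ1" else -c2 / 3
    Ce = Ce_tree + 0.094 * lx * c2                # tree level + Delta Ce
    Cgam = (8 / 3 if model == "DFSZ1" else 2 / 3) - 1.92
    return {"C0": C0, "C3": C3, "Ce": Ce,
            "Cp": C0 + C3, "Cn": C0 - C3, "Cgamma": Cgam}

# At small tan(beta) the RG pieces dominate C3 and Ce (DFSZ1):
print(dfsz_couplings(0.2, 1e10))
print(dfsz_couplings(0.2, 1e10, model="DFSZ2"))
```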
## 3 RG effects on QCD axion phenomenology

In this section, we discuss the phenomenological impact of the RG corrections. In particular, we will see how these affect axion astrophysical and cosmological bounds, as well as the sensitivity of terrestrial experimental searches. For clarity, we will refer to the DFSZ axion models, even though our results can be applied to other axion models. Several observables depend dominantly (or entirely) on \(C_{3}\) and are thus, as discussed in Sect. 2, subject to large RG-induced modifications. These include for example the coupling to pions, which is responsible for the axion thermalization in the early Universe (via \(\pi\pi\leftrightarrow\pi a\)) and controls the hot dark matter (HDM) bound discussed in Sect. 3.2. More recently, it has also been shown that pion-nucleon scattering may be responsible for a large contribution to the axion emission rates in dense media, particularly in SNe [36; 37; 38]. Finally, the iso-vector coupling \(C_{3}\) is entirely responsible for the nuclear reaction process \(p+d\to\,^{3}\mathrm{He}+a\), which is one of the most efficient and widely studied production mechanisms of axion-like particles from solar nuclear reactions [39; 40; 41; 42; 43]. On the other hand, the nucleon coupling to \({}^{57}\)Fe, relevant for axion production through nuclear de-excitations in the Sun [44; 40; 39], turns out to be less sensitive to RG corrections (see Tab. 2).

Figure 1: Running axion coupling combinations (\(C_{3}\) and \(C_{0}\)) in DFSZ as a function of \(\tan\beta\). The red band encompasses the range of the corrections for \(m_{\mathrm{BSM}}/\mathrm{GeV}\in[10^{3},10^{6}]\). Perturbativity bounds on \(\tan\beta\) also depend on \(m_{\mathrm{BSM}}\), with the thick (thin) grey line corresponding to \(m_{\mathrm{BSM}}=10^{6}\) GeV (\(1\) TeV).

Figure 2: \(f_{a}\) dependence of the perturbative unitarity bounds on \(\tan\beta\) at small \(\tan\beta\) values.

The axion-electron coupling \(C_{e}\), which plays a significant role in astrophysics (see Section 3.1) as well as in terrestrial experimental searches (see Section 3.3), is also subject to large RG corrections. In fact, \(r_{e}^{\prime}\approx r_{d}^{\prime}\).6 Thus, using \(r_{u}^{\prime}+r_{d}^{\prime}\approx 0\) and Eq. (16), we find

Footnote 6: This conclusion is based on the same arguments presented above, where it was argued that \(r_{0}^{\prime}\ll r_{3}^{\prime}\), i.e. \(r_{u}^{\prime}+r_{d}^{\prime}\simeq 0\). Deviations from these relations are due to subleading contributions of Yukawa terms other than \(Y_{t}\). A detailed discussion can be found in Section 3 of Ref. [20].

\[r_{e}^{\prime}\simeq-\frac{1}{2}\Big{(}r_{u}^{\prime}-r_{d}^{\prime}\Big{)}\approx 0.27\ln\left(\sqrt{x}-0.52\right), \tag{24}\]

where \(x=\log_{10}(m_{\mathrm{BSM}}/\mathrm{GeV})\) parameterizes the new physics scale.7 Therefore the corrections to \(C_{e}\) can also be expressed in terms of \(\Delta C_{3}\). Specifically, from our previous results we see that \(\Delta C_{e}=-0.5\,C_{t}(f_{a})\,(r_{u}^{\prime}-r_{d}^{\prime})=-0.78\,\Delta C_{3}\). Thus, at our level of approximation all running effects can be expressed in terms of \(\Delta C_{3}\), which for DFSZ axions is \(\propto\cos^{2}\beta\).
A general consequence of this observation is that, in the case of DFSZ axions, the running effects are mostly relevant only at small \(\tan\beta\), something that is apparent from our numerical analysis.8

Footnote 7: This result should not be surprising, since it holds in the limit of \(|r_{u}^{\prime}+r_{d}^{\prime}|\approx 0\) and, as discussed above, \(|r_{u}^{\prime}+r_{d}^{\prime}|/|r_{u}^{\prime}-r_{d}^{\prime}|\lesssim 0.5\%\).

Footnote 8: The only exception is in cases where the \(\cos\beta\) dependence cancels, as for \(\Delta C_{e}/C_{e}\) in the DFSZ2 model (see the Red Giant bound on the DFSZ2 axion in the right panel of Fig. 4).

Before moving to the phenomenological study, it is instructive to anticipate how RG corrections will modify the usual DFSZ bands, obtained by varying the value of \(\tan\beta\) within the perturbative unitarity limits, for the \(g_{ae}\), \(g_{ap}\), \(g_{an}\) couplings (see for example the section on axions in the Review of Particle Physics [45]). This can be easily estimated by considering the RG corrections to the electron and to the nucleon couplings \(C_{p,n}=C_{0}\pm C_{3}\) given in Table 1. The results are shown in Fig. 3, where we have taken \(m_{\rm BSM}=f_{a}\propto 1/m_{a}\) to maximize the RG effects. In each panel the bands correspond to varying \(\tan\beta\) in the interval \([0.14,100]\). The first panel shows the modification of the usual \(g_{ae}\) band in DFSZ1. For this case we obtain the most dramatic effect, namely a marked reduction of the width of the band, which, after including RG corrections, shrinks down to the green hatched region. This is due to the fact that the tree-level suppression of \(g_{ae}^{\rm DFSZ1}\) in the limit \(\sin\beta\to 0\) is cut off at small \(\tan\beta\) by the RG correction proportional to \(\cos^{2}\beta\). By contrast, there is no such dramatic effect for \(g_{ae}^{\rm DFSZ2}\), since both the tree-level coupling and the RG correction are proportional to \(\cos^{2}\beta\). In the second panel in Fig. 3 we show the RG effect on the band for \(g_{ap}\) (which is the same in DFSZ1 and in DFSZ2). In this case we see that the shrinking of the allowed band is much less pronounced. Finally, the third panel shows the RG effects on the \(g_{an}\) band. In this case the allowed band is sizeably widened; this is due to a cancellation in the \(\tan\beta\)-independent part of the coupling, which enhances the overall dependence on this parameter. The most important phenomenological consequences of the RG corrections to the axion couplings will be analyzed in the following sections.

### Astrophysical constraints

In this Section, we discuss the impact of RG corrections on astrophysical observables. For reference, we will mostly focus on the DFSZ1 axion model. The analysis for DFSZ2 goes along similar lines. Axions can be copiously produced in stars, mostly due to thermal processes (see Ref. [50] for a recent review). Here we will not consider astrophysical bounds on the axion-photon coupling [57; 58; 59; 60], since \(C_{\gamma}\) does not receive any relevant RG correction. We focus instead on the axion-electron and on the axion-nucleon couplings. The most stringent astrophysical bound on the axion-electron coupling is derived from observations of the tip of the red giant branch (RGB) in globular clusters. The production of axions during the RGB evolution cools the core, playing a role similar to that of neutrinos, and thus delays the helium ignition.
The delay leads to a larger helium core and, consequently, to a higher stellar luminosity. Thus, a comparison between observations and predictions for the luminosity of the RGB tip (the brightest stars in the RGB) is an efficient way to test anomalous channels of stellar cooling. The most recent analysis has set the constraint \(|g_{ae}|\leq 1.48\times 10^{-13}\) (at 95% C.L.) [61]. From the definition of this coupling, \(g_{ae}=C_{e}m_{e}/f_{a}\), the RGB bound translates into \[|C_{e}|\leq 1.65\times 10^{-3}\Big{(}\frac{m_{a}}{\rm eV}\Big{)}^{-1}\,. \tag{25}\] This relation provides an upper bound for the axion mass at any given value of \(\tan\beta\) and \(x=\log_{10}(m_{\rm BSM}/{\rm GeV})\). The full numerical results are shown in Fig. 4, where the red-shaded bands incorporate the range fixed by the possible values of \(m_{\rm BSM}\in[1~{}{\rm TeV},f_{a}]\).

Figure 3: Redefinition of the predicted bands for the DFSZ couplings \(g_{ae}^{\rm DFSZ1}\) (left), \(g_{ap}^{\rm DFSZ1/2}\) (middle), and \(g_{an}^{\rm DFSZ1/2}\) (right) induced by RG effects (hatched green regions). The size of the bands corresponds to the perturbative unitarity bounds on \(\tan\beta\) (see text). The \(m_{\rm BSM}\) new physics scale is set to the maximal value \(m_{\rm BSM}=f_{a}\).

We can gain some intuition about these effects using our approximate results, shown in Tab. 1. In the case of the DFSZ1 model (left panel in Fig. 4) we can conveniently rewrite the RGB bound on the axion mass as follows9

Footnote 9: Notice that in the range of \(m_{\rm BSM}\) we are considering here, the absolute value in Eq. (26) is unnecessary.

\[\left(\frac{m_{a}}{\rm eV}\right)\leq\frac{1.65\times 10^{-3}}{\left|\left( \frac{1}{3}-0.094\,l(x)\right)\sin^{2}\beta+0.094\,l(x)\right|}\,, \tag{26}\]

where \(l(x)=\ln\left(\sqrt{x}-0.52\right)\). The first important observation is that in the limit \(l(x)\to 0\), that is, ignoring the RG corrections, the RGB bound on the mass is a monotonic function of \(\tan\beta\) and disappears in the limit of small \(\tan\beta\). This result is modified by the RG corrections, which in the limit \(\tan\beta\to 0\) still provide a useful limit on the axion mass, \(m_{a}\leq 0.018\,{\rm eV}/l(x)\). From these considerations, we can conclude that the RG correction to the RGB bound becomes particularly important in the low \(\tan\beta\) limit, a result confirmed by the full numerical results shown in Fig. 4. The most conservative value for the RGB bound corresponds to \(m_{\rm BSM}=1\,{\rm TeV}\), for which we obtain, in our approximation, \(m_{a}\leq 0.018\,{\rm eV}/l(3)\approx 8.75\times 10^{-2}\,{\rm eV}\), which agrees well with the complete numerical result shown in the left panel of Fig. 4. In the case of the DFSZ2 model, instead, there is no such striking feature, because both the coupling \(g_{ae}\) and its RG correction depend on \(\cos\beta\). The RGB bound for this case is shown in the right panel of Fig. 4.

Let us now move to the axion-nucleon coupling and analyze the axion bound from SN1987A.10 This is a considerably more complex problem, since axion production in a SN environment, at temperatures of order \(\sim 30\,{\rm MeV}\) and densities in excess of \(\sim 10^{14}\,{\rm g}/{\rm cm}^{3}\), gets contributions also from pions [36; 37; 26] and from \(\Delta\) baryon resonances [27]. Following the notation of Ref. [38], the effective low-energy axion-nucleon interaction relevant for the axion production processes in a SN environment is given in Eq. (1).
The second term in the first line describes the usual axion-nucleon interactions. The third line contains the axion-pion interactions. The four-particle interaction vertex in the third line accounts for the pion-nucleon contact term recently discussed in Ref. [26], while the last line accounts for the axion couplings to the \(\Delta\)-resonances, whose contribution to the axion emissivity has been recently calculated in Ref. [27]. The interaction Lagrangian in Eq. (1) can be used to compute the axion emissivity due to nucleon-nucleon (\(NN\)) bremsstrahlung, \(NN\to NNa\), as well as the Compton-like pion scattering processes, \(\pi^{-}p\to na\), including also the contribution from the \(\Delta\) resonances (see Ref. [38] for an updated overview).

Footnote 10: SN1987A is not the only astrophysical probe of the axion-nucleon coupling. Neutron stars also provide strong bounds (see, e.g., Ref. [62]). However, the bound from SN1987A is the most discussed in the literature, and thus it provides a good example for the impact of RG corrections.

In general, the axion luminosity from a SN, \(L_{a}\), depends only on a particular combination of \(C_{0}\) and \(C_{3}\), and thus only this combination can be constrained. The luminosity can be expressed as [38] \[L_{a}=\epsilon_{0}\left(\frac{m_{N}}{f_{a}}\right)^{2}C_{\rm SN}^{2}\times 1 0^{70}\,{\rm erg}/{\rm s}\,, \tag{27}\] where \(\epsilon_{0}\) is a numerical factor and \[C_{\rm SN}=a\left(C_{0}^{2}+b\,C_{3}^{2}+c\,C_{0}C_{3}\right)^{1/2}\,. \tag{28}\]

\begin{table}
\begin{tabular}{|l|l|l|}
\hline Coupling & Approx. Correction & Processes \\
\hline \(C_{0}\) & \(\Delta C_{0}\approx 0\) & \\
\(C_{3}\) & \(\Delta C_{3}\simeq 0.64\,C_{t}(f_{a})\,(r_{u}^{\prime}-r_{d}^{\prime})\) & Axion thermalization: \(\pi\pi\leftrightarrow\pi a\) [49; 13; 46; 47; 48] \\
\(C_{p}=C_{0}+C_{3}\) & \(\Delta C_{p}\approx\Delta C_{3}\) & Deuteron processes: \(p+n\leftrightarrow d+a\) [41; 42; 43] \\
\(C_{n}=C_{0}-C_{3}\) & \(\Delta C_{n}\approx-\Delta C_{3}\) & Astrophysics/experiments [10; 11; 50] \\
\(C_{\rm SN}\simeq 1.4\left(C_{0}^{2}+0.11\,C_{0}C_{3}+1.3\,C_{3}^{2}\right)^{1/2}\) & \(\Delta C_{\rm SN}\simeq(2.5\,C_{3}+0.11\,C_{0})\frac{\Delta C_{3}}{C_{\rm SN}}\) & SN 1987A bound [38] \\
\(C_{\rm Fe}=C_{0}-0.77\,C_{3}\) & \(\Delta C_{\rm Fe}=-0.77\,\Delta C_{3}\) & Axion production/detection in \({}^{57}\)Fe [40; 44; 39] \\
\(C_{e}\) & \(\Delta C_{e}=-0.78\,\Delta C_{3}\) & Axion production in stars [50] \\
\(C_{\gamma}\) & \(\Delta C_{\gamma}=0\) & Axion detection (Xenon etc.) [53; 54; 55] \\
\(C_{\rm hel}=\left[C_{\gamma}^{2}\left(C_{\gamma}^{2}+(37C_{e})^{2}\right)\right]^{1/4}\) & \(\Delta C_{\rm hel}=\frac{C_{\rm hel}}{2}\left(\frac{\Delta C_{e}/C_{e}}{1+\left(C_{\gamma}/(37C_{e})\right)^{2}}\right)\) & Axion coupling for Sikivie helioscopes [9; 56] \\
\hline
\end{tabular}
\end{table}
Table 2: RG corrections to axion couplings, in the approximation of keeping only the contribution from \(Y_{t}\). Note that in this approximation all the various corrections can be expressed just in terms of \(\Delta C_{3}\), given in the second line, with \(r_{u}^{\prime}-r_{d}^{\prime}\) given in Eq. (16).

The numerical values of the coefficients \(\epsilon_{0}\), \(a\) and \(b\) can be found in Table 3. In the table we present, in the first line, the results obtained by considering only the purely \(NN\) bremsstrahlung production, which corresponds to the first line of Eq. (1).
The results for the total emission rate are given in the second line (we remind the reader that, until very recently, \(NN\) bremsstrahlung was the only process considered for estimating the SN axion emission rate). Notice that, as evident from Table 3, the addition of the pion-induced scatterings increases the relative importance of \(C_{3}\) (controlled by the coefficients \(b\) and \(c\)) and thus enhances the effects of the RG corrections. More specifically, from Eq. (28), and assuming \(\Delta C_{0}\approx 0\), we find \[\Delta C_{\rm SN}\simeq a^{2}\Big{(}b\,C_{3}+\frac{c}{2}\,C_{0}\Big{)}\frac{ \Delta C_{3}}{C_{\rm SN}}\,. \tag{29}\] As expected, the RG effects are reduced in the case of purely \(NN\)-bremsstrahlung production, due to the partial cancellation between the \(b\) and \(c\) terms.11 Imposing \(L_{a}\leq L_{\nu}=3\times 10^{52}\,{\rm erg/s}\) [38], we find the bound on the axion mass \[m_{a}\leq\frac{\overline{m}}{C_{\rm SN}}\,,\,\,\,\,{\rm with}\,\,\,\,\,\,\,\overline{m}=\frac{9.9\,{\rm meV}}{\sqrt{\epsilon_{0}}}\,. \tag{30}\] For DFSZ axions, this bound is shown in Fig. 4. Specializing to the DFSZ axion case, we immediately find from Tab. 1 \[C_{\rm SN}^{\rm DFSZ}=0.2\,a\,\sqrt{1+2.15\,c\sin^{2}\beta+4.6\,b\,\sin^{4} \beta}\,, \tag{31}\] and \[\left(\frac{\Delta C_{\rm SN}}{C_{\rm SN}}\right)^{\rm DFSZ}=\left[\frac{\Big{(} 0.30\,c+1.3\,b\sin^{2}\beta\Big{)}\cos^{2}\beta}{1+2.15\,c\sin^{2}\beta+4.6\,b \,\sin^{4}\beta}\right]l(x)\,. \tag{32}\] Note that the above expression is never larger than about 10%-15%. Thus, for the SN bound, RG effects are somewhat less prominent than in the case of the RGB bound. For comparison, ignoring the contribution from the pion scatterings gives the combination \(C_{\rm SN}\simeq 1.5\sqrt{C_{0}^{2}+0.50\,C_{3}^{2}-0.36\,C_{0}C_{3}}\). As discussed above, the two results have a significantly different dependence on \(C_{0,3}\): the addition of the pion-induced scatterings increases the relative importance of \(C_{3}\) and thus enhances the dependence on the RG corrections.

Figure 4: RG effects on astrophysical axion bounds from Red Giants (red bands) and SN1987A (blue bands) for the DFSZ1 (left panel) and DFSZ2 (right panel) models, compared to the tree-level results (black dashed lines). The gray line corresponds to the perturbative unitarity bound on \(\tan\beta\) for \(m_{\rm BSM}=f_{a}\).

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
 & \(\epsilon_{0}\) & \(a\) & \(b\) & \(c\) & \(\overline{m}\) [meV] \\
\hline \(NN\) & 2.42 & 1.5 & 0.5 & -0.36 & 6.4 \\
Total & 3.86 & 1.4 & 1.3 & 0.11 & 5.0 \\
\hline
\end{tabular}
\end{table}
Table 3: Parameters for the axion luminosity from a SN entering Eqs. (27) and (28). The coefficients are calculated at a post-bounce time of 1 s (see Ref. [38]). The first row refers to the \(NN\) bremsstrahlung contribution only [63], ignoring the pion scattering processes and the \(\Delta\) resonance contribution. The second row gives the total contribution, calculated from the results in Ref. [38]. The mass parameter \(\overline{m}\) is defined in Eq. (30).
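The two astrophysical mass bounds discussed above can be evaluated directly from Eqs. (26), (28) and (30) with the Tab. 3 coefficients. A minimal Python sketch (ours; function names are ours, tree-level DFSZ inputs used for illustration):

```python
import math

def l(x):
    return math.log(math.sqrt(x) - 0.52)   # x = log10(m_BSM / GeV)

def m_a_rgb_dfsz1(tan_beta, m_bsm):
    """RGB-tip bound of Eq. (26) on m_a [eV] for DFSZ1 (approximate couplings)."""
    s2 = math.sin(math.atan(tan_beta))**2
    lx = l(math.log10(m_bsm))
    Ce = (1/3 - 0.094 * lx) * s2 + 0.094 * lx
    return 1.65e-3 / abs(Ce)

def m_a_sn(C0, C3, total=True):
    """SN1987A bound m_a <= m_bar / C_SN (Eqs. (28) and (30)), with the
    Tab. 3 coefficients (total rate, or NN bremsstrahlung only)."""
    eps0, a, b, c = (3.86, 1.4, 1.3, 0.11) if total else (2.42, 1.5, 0.5, -0.36)
    C_SN = a * math.sqrt(C0**2 + b * C3**2 + c * C0 * C3)
    return (9.9e-3 / math.sqrt(eps0)) / C_SN     # in eV

# tan(beta) = 1, m_BSM = 1 TeV; tree-level DFSZ inputs: C0 = -0.20, C3 = -0.215
print(m_a_rgb_dfsz1(1.0, 1e3))       # ~ 9e-3 eV
print(m_a_sn(-0.20, -0.215))         # ~ 1.1e-2 eV
```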
### Thermal axion cosmology

If axions are in thermal equilibrium until below the quark-hadron phase transition (which can occur for \(m_{a}\gtrsim 0.1\,\)eV), the axion thermal population will give a sizeable contribution to the effective number of extra relativistic degrees of freedom [64], \(\Delta N_{\rm eff}\), which is constrained by Big Bang Nucleosynthesis (BBN) [65] and cosmic microwave background (CMB) observations [66; 67]. The highest attainable axion mass from such cosmological constraints is also known as the Hot Dark Matter (HDM) bound. The forecast sensitivity of the planned CMB-S4 [68] and Simons Observatory (SO) [69] surveys will fully cover the mass range in which the axion decouples below or across the QCD crossover; thus a precise determination of the axion-pion thermalization rate, including running effects, would be necessary to set definite targets.12

Footnote 11: We should be cautious, however, since this expression is valid only in the limit of \(\Delta C_{3}/C_{\rm SN}\ll 1\), and this condition is not always met. In particular, it is violated at low \(\tan\beta\) and high \(m_{\rm BSM}\).

Footnote 12: In this paper we will refrain from assessing the impact of CMB-S4 and SO projections on \(\Delta N_{\rm eff}\), since these involve an extrapolation of the axion thermalization rate beyond the QCD crossover, which is plagued by large non-perturbative uncertainties [48], and is still a matter of investigation.

For \(T\lesssim T_{c}\), where \(T_{c}\simeq 155\,\)MeV is the QCD deconfinement temperature, the dominant thermalization channel is \(a\pi\leftrightarrow\pi\pi\) [13; 46]. It has been recently shown, however, that the standard computation of this process, which is based on chiral perturbation theory (ChPT), breaks down for \(T\gtrsim 70\) MeV [47; 49]. Phenomenological extensions of the validity of the chiral expansion, based on unitarization methods, have been proposed in Refs. [48; 49]. In the following, we will consider the unitarized thermal rate based on the Inverse Amplitude Method (IAM), recently discussed in Ref. [49], which gives the thermal scattering rate: \[\Gamma_{a}^{\rm IAM}(T)=\left(\frac{C_{\pi}}{f_{a}f_{\pi}}\right)^{2}0.150\ T^{ 5}h_{\rm IAM}(m_{\pi}/T)\,, \tag{33}\] with \(C_{\pi}\) given in Eq. (5) and \(m_{\pi}=137\) MeV representing the average neutral/charged pion mass. The numerical function \(h_{\rm IAM}\) is provided in Ref. [49] (cf. Fig. 3 of this reference) and is normalized to \(h_{\rm IAM}(m_{\pi}/T_{c})=1\). We will estimate the impact of RG effects on the HDM bound relying for simplicity on the instantaneous decoupling condition \(\Gamma_{a}(T_{D})\simeq H(T_{D})\), with \(\Gamma_{a}(T)\) the axion-pion scattering rate given in Eq. (33) and \(H(T)=\sqrt{4\pi^{3}g_{\star}(T)/45}\,T^{2}/m_{\rm pl}\) the Hubble rate, where \(m_{\rm pl}=1.22\times 10^{19}\) GeV is the Planck mass and \(g_{\star}(T)\) the effective number of relativistic degrees of freedom.13

Footnote 13: For a more refined treatment of the cosmological aspects of axion thermal decoupling see e.g. Refs. [48; 70; 71].

The axion contribution to the effective number of extra relativistic degrees of freedom is given by [64] \[\Delta N_{\rm eff}\simeq\frac{4}{7}\!\left(\frac{T_{a}}{T_{\nu}}\right)^{4}= \frac{4}{7}\!\left(\frac{43}{4g_{S}(T_{D})}\right)^{4/3}\simeq 0.027\left( \frac{106.75}{g_{S}(T_{D})}\right)^{4/3}, \tag{34}\] with \(T_{a}/T_{\nu}\) the ratio of the axion to neutrino temperature at \(T\ll 1\,\)MeV (i.e. well after \(\nu\)-decoupling) and \(g_{S}(T_{D})\) the number of entropy degrees of freedom at axion decoupling, which in the last relation has been normalised to the total number of SM degrees of freedom \(g_{S}(T>m_{t})=106.75\). We then confront Eq. (34) with the bound on \(\Delta N_{\rm eff}\) from Planck's 2018 data [66; 67], and from this we extract a bound on the axion mass.
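The instantaneous-decoupling estimate is easy to prototype. The sketch below (ours; not the paper's code) implements Eqs. (33)-(34); note that \(h_{\rm IAM}\) is replaced here by a constant placeholder and \(g_{\star}\) near the crossover by a rough assumed value, so this is only an order-of-magnitude illustration:

```python
import math

M_PL = 1.22e19   # Planck mass [GeV]

def hubble(T, g_star):
    # H(T) = sqrt(4 pi^3 g_*(T)/45) T^2 / m_pl, with T in GeV
    return math.sqrt(4 * math.pi**3 * g_star / 45) * T**2 / M_PL

def gamma_axion(T, C_pi, f_a, h_iam=1.0):
    # Eq. (33); f_pi ~ 0.092 GeV. h_iam stands in for the tabulated
    # function of Ref. [49] (normalized to 1 at T = T_c ~ 155 MeV).
    f_pi = 0.092
    return (C_pi / (f_a * f_pi))**2 * 0.150 * T**5 * h_iam

def delta_neff(g_s_at_decoupling):
    # Eq. (34)
    return 0.027 * (106.75 / g_s_at_decoupling)**(4 / 3)

def decoupling_T(C_pi, f_a, g_star=20.0):
    """Instantaneous-decoupling estimate: scan T upward until Gamma_a >= H.
    g_star near the QCD crossover is a crude placeholder, and a realistic
    study would use the tabulated h_IAM(m_pi/T)."""
    T = 0.05
    while T < 0.155 and gamma_axion(T, C_pi, f_a) < hubble(T, g_star):
        T *= 1.01
    return T
```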
Our results for the HDM bound in the DFSZ1/2 models are summarised in Fig. 5, where we compare the tree-level results with those including RG corrections. Again we see that RG effects are especially important at small \(\tan\beta\). In Fig. 5 the DFSZ1 and DFSZ2 cases coincide because the (subleading) effects of scattering off leptons have been neglected. In Ref. [72] it was argued that thermalization channels involving axion scattering off leptons can become relevant in DFSZ2 at small \(\tan\beta\). However, since RG corrections keep the axion-pion coupling sizeable also in this regime, the effect of lepton scattering becomes accordingly less important.

### Helioscope experiments

One of the most appealing results of the RG correction analysis is the implication for the next generation of experiments hunting for solar axions. The main reason is that the solar flux is strongly dependent on the axion-electron coupling and, as we have seen, this can receive large RG corrections. As a consequence, helioscope sensitivities to DFSZ axions, which have so far been estimated using tree-level axion-electron couplings, have been underestimated.

Figure 5: HDM bound in the DFSZ1/2 models. The red region shows the effect of RG corrections, for \(m_{\rm BSM}\) ranging from \(f_{a}\) (left border) to \(1\,\)TeV (right border). The gray line corresponds to the perturbative unitarity bound on \(\tan\beta\) for \(m_{\rm BSM}=f_{a}\).

Here, we focus mostly on the Sikivie type of axion helioscopes [56]. This kind of experiment is designed to detect solar axions by converting them into X-ray photons using a large laboratory magnetic field. The importance of the axion-electron coupling for Sikivie's helioscope sensitivity to solar axions is expressed by the following relation [9] \[g_{\gamma 10}^{2}\Big{(}g_{\gamma 10}^{2}+0.7\,g_{e12}^{2}\Big{)}>\overline{g} _{\gamma 10}^{4}\,, \tag{35}\] where \(g_{\gamma 10}=g_{a\gamma}/10^{-10}\,\text{GeV}^{-1}\), \(g_{e12}=g_{ae}/10^{-12}\), and \(\overline{g}_{\gamma 10}\) is the helioscope sensitivity to \(g_{a\gamma}\) (again, in units of \(10^{-10}\,\text{GeV}^{-1}\)). Notice that \(\overline{g}_{a\gamma}\) is, in general, a function of the axion mass. Defining the effective coupling \[C_{\text{hel}}=\left[C_{\gamma}^{2}\Big{(}C_{\gamma}^{2}+(37C_{e})^{2}\Big{)} \right]^{1/4}, \tag{36}\] the above expression leads to the following helioscope sensitivity relation \[C_{\text{hel}}\gtrsim\frac{0.49\,\overline{g}_{\gamma 10}}{(m_{a}/\text{eV})}\,, \tag{37}\] which, in the case of the DFSZ axion, can be readily translated into a limit on the \(\tan\beta\) accessible to the helioscope as a function of the axion mass. Notice that, according to this expression, the DFSZ sensitivity to \(\tan\beta\) (which enters only through \(C_{e}\)) should disappear for \(C_{\gamma}\gg 37C_{e}\), which is fulfilled for \(\tan\beta\ll 0.25\). In general, if the helioscope sensitivity is good enough, there could be mass regions where the entire range of \(\tan\beta\) is accessible.
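Eqs. (36)-(37) are a convenient pair to code up; the following sketch (ours) decides whether a given coupling point is within reach of a helioscope, using the BabyIAXO benchmark sensitivity quoted in the example that follows:

```python
import math

def C_hel(C_gamma, C_e):
    # Eq. (36)
    return (C_gamma**2 * (C_gamma**2 + (37 * C_e)**2))**0.25

def within_reach(C_gamma, C_e, m_a_ev, gbar_gamma10):
    # Eq. (37): reachable if C_hel >~ 0.49 * gbar_gamma10 / (m_a / eV)
    return C_hel(C_gamma, C_e) >= 0.49 * gbar_gamma10 / m_a_ev

# DFSZ1 at tree level, tan(beta) = 1: C_gamma = 8/3 - 1.92, C_e = 1/6.
# BabyIAXO benchmark: gbar_gamma10 = 0.33 at m_a = 0.1 eV.
print(within_reach(8/3 - 1.92, 1/6, 0.1, 0.33))   # True
```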
To give an example of an application of Eq. (37), let us consider the case of BabyIAXO, a next-generation axion helioscope presently under construction [73]. Its sensitivity at \(m_{a}=0.1\,\text{eV}\) is expected to reach \(\overline{g}_{\gamma 10}=0.33\). Using this value and \(C_{\gamma}=8/3-1.92\) for the DFSZ1 model, if RG effects are ignored, one would conclude that at this mass value BabyIAXO could be sensitive to the region \(\tan\beta\gtrsim 0.62\). The results of our complete numerical analysis for all values of the axion mass are plotted in the left panel of Fig. 6, where the dashed contours correspond to the estimated BabyIAXO sensitivity if RG effects are ignored. The reach of the more advanced helioscope experiment IAXO [74] is also shown in the left panel of Fig. 6. In this case we see that there is a mass region for which the experiment is sensitive to all values of \(\tan\beta\). When RG corrections are ignored, this region extends to masses between \(\sim 50\) meV and \(\sim 200\) meV. The reason is that IAXO is sensitive enough to see the solar axion flux even in models in which axions are only coupled to the photon and not to the electron.

Let us now consider the effects of RG corrections on the projected sensitivities. As shown in Tab. 2, the RG correction to the effective helioscope coupling is \[\frac{\Delta C_{\text{hel}}}{C_{\text{hel}}}=\frac{1}{2}\frac{\Delta C_{e}/C_{e }}{1+\left(C_{\gamma}/(37C_{e})\right)^{2}}\,, \tag{38}\] which is valid in the limit of \(\Delta C_{e}/C_{e}\ll 1\). This condition is always verified in DFSZ2, while for DFSZ1 it holds for \(\tan\beta\gg 0.5\,l(x)^{1/2}\) (cf. Tab. 1). Since in the case of BabyIAXO the expected sensitivity is not sufficient to detect DFSZ axions unless \(\tan\beta\gtrsim 0.6\) (see Fig. 6), which implies \(C_{\gamma}/(37C_{e})\ll 1\), we can simplify the correction to the effective coupling to \[\left(\frac{\Delta C_{\text{hel}}}{C_{\text{hel}}}\right)^{\text{BabyIAXO}} \simeq\frac{\Delta C_{e}}{2C_{e}}\,, \tag{39}\] which can be readily estimated using our results from Tab. 1. Notice that this correction can be quite sizable, and it implies that the reach of BabyIAXO to small electron couplings (low \(\tan\beta\) values) can be pushed down significantly, as is shown by the blue region in the left panel of Fig. 6.

Figure 6: Experimental sensitivity to DFSZ1 axions, including RG effects. _Left Panel_: IAXO and BabyIAXO. _Right Panel_: XENONnT. In both plots the gray line corresponds to the perturbative unitarity bound on \(\tan\beta\) for \(m_{\text{BSM}}=f_{a}\).

The impact of RG corrections on the IAXO sensitivity to DFSZ1 axions is also shown in the left panel of Fig. 6, and corresponds to the red regions. In this case we notice an interesting effect, namely that the IAXO reach in the region of small \(\tan\beta\) is sizeably enlarged for all values of \(m_{\text{BSM}}\), since the solar axion flux is necessarily larger than predicted when RG corrections are ignored. As a result, the mass region for which IAXO is sensitive to the entire range of \(\tan\beta\) is extended. Finally, the correction to the axion-electron coupling also has an obvious impact on experiments which detect axions through the axio-electric effect. Such experiments include large underground detectors such as Panda-X [54], LUX [53], or XENONnT [55], originally designed for dark matter searches. The RG modification of the axion-electron coupling extends the potential of these experiments to explore the DFSZ parameter space.
Our numerical results in the case of XENONnT are shown in the right panel of Fig. 6. A fundamental difference with respect to the previous results is that, because of the RG-induced corrections, in principle XENONnT could be sensitive to DFSZ1 axions for any value of \(\tan\beta\). However, the current experimental sensitivity is insufficient to reach inside the mass region allowed by the RGB bound discussed in Sect. 3.1 (see the left panel in Fig. 4).

## 4 Conclusions

In this paper, we have studied the impact of RG effects on QCD axion phenomenology, focusing on DFSZ models. We have shown that running effects on axion couplings depend crucially on the scale at which the heavy Higgs states are integrated out, and the 2HDM effectively reduces to the SM with a single light Higgs. We have discussed the implications of running axion couplings for astrophysical and cosmological bounds, as well as the sensitivity of helioscope experiments such as (Baby)IAXO. We have found that running effects are sizable even in the most conservative case in which the 2HDM structure holds down to the TeV scale, and thus they can never be neglected. We have also provided simple analytic expressions fitted to reproduce the numerical solutions of the RG equations, which can be a useful tool for studying the implications of running axion couplings. In the case of an axion discovery, running effects might prove to be crucial in order to reconstruct the axion UV completion.

## Acknowledgments

We thank Kiwoon Choi and Giovanni Villadoro for useful discussions. The work of L.D.L. is supported by the project "CPV-Axion" under the Supporting TAlent in Research@University of Padova (STARS@UNIPD) and the INFN Iniziative Specifica APINE. The work of L.D.L. and G.P. is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDEN. The work of E.N. was supported by the Estonian Research Council grant PRG1884 and by the INFN "Iniziativa Specifica" Theoretical Astroparticle Physics (TASP-LNF). S.O. and F.M. acknowledge financial support from a Maria Zambrano fellowship and the State Agency for Research of the Spanish Ministry of Science and Innovation through the Unit of Excellence Maria de Maeztu 2020-2023 award to the Institute of Cosmos Sciences (CEX2019-000918-M) and from PID2019-105614GB-C21, 2017-SGR-929 and 2021-SGR-249 grants. This article is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology). We thank the Galileo Galilei Institute for Theoretical Physics for hospitality during the completion of this work.

## Appendix A Numerical fit to RG effects

Running of axion couplings is examined in detail in Refs. [14; 15; 17; 75], where a complete set of one-loop (and partially two-loop) anomalous dimensions is derived, including matching corrections at the EW scale [17]. The leading contribution to the running axion couplings arises from top loop diagrams induced by the axion-top coupling \(C_{t}\). The RG evolved couplings at \(\mu=2\,\text{GeV}\) are thus expressed to a good approximation by \[C_{\Psi}(2\,\text{GeV})\simeq C_{\Psi}(f_{a})+r^{\prime}_{\Psi}(m_{\text{BSM} })\,C_{t}(f_{a}), \tag{A.1}\] where \(\Psi=u,d,e\).
Note that the running occurs below the heavy Higgs scale \(m_{\text{BSM}}\simeq m_{H,A,H^{\pm}}\), where in the decoupling limit the heavy scalars are assumed to be approximately degenerate, and \(r^{\prime}_{\Psi}(m_{\text{BSM}})\) is a function only of \(m_{\text{BSM}}\). Keeping only the top Yukawa and the strong gauge coupling, the running of \(C_{\Psi}\) below \(\mu=m_{\text{BSM}}\) is governed by [14; 15; 17; 75] \[\frac{dC_{\Psi}}{d\ln\mu}\simeq-T_{3,\Psi}\frac{3Y_{t}^{2}}{4\pi^{2}}\,C_{t} \,\Theta(\mu-\mu_{w})-a_{\Psi}\frac{\alpha_{s}^{2}}{\pi^{2}}\widetilde{c}_{G }\,, \tag{A.2}\] where \(T_{3,\Psi}\) is the weak isospin of \(\Psi\), \(a_{\Psi}=1\) for quarks and 0 for leptons, \(\mu_{w}=\mathcal{O}(m_{Z})\) is a matching scale at which the weak gauge bosons, the Higgs boson and the top quark are integrated out, and \[\widetilde{c}_{G}=1-\sum_{q}C_{q}(\mu)\,\Theta(\mu-m_{q})\,. \tag{A.3}\] We see from Eq. (A.2) that the RG corrections to the axion couplings consist of a one-loop iso-vector contribution, proportional to the weak isospin \(T_{3,\Psi}\), and a two-loop iso-scalar contribution generated by \(\widetilde{c}_{G}\),14 and can be expressed in the form

Footnote 14: In the DFSZ models, \(\widetilde{c}_{G}=0\) at \(\mu=m_{\text{BSM}}\) and it develops a nonzero value because of the running of \(C_{q}\). This means that the running effects from \(\widetilde{c}_{G}\) are also proportional to \(C_{t}(f_{a})\), allowing us to parametrise this iso-scalar contribution in the form of Eq. (A.1).

\[r^{\prime}_{\Psi}(m_{\text{BSM}})\simeq T_{3,\Psi}\,r^{\prime}_{3}(m_{\text{BSM }})+\frac{a_{\Psi}}{2}\,r^{\prime}_{0}(m_{\text{BSM}})\,, \tag{A.4}\]

which, for the combinations relevant to \(C_{3,0}\), yields \[r_{3}^{\prime} \simeq r_{u}^{\prime}-r_{d}^{\prime}\simeq-2r_{e}^{\prime}\,, \tag{A.5}\] \[r_{0}^{\prime} \simeq r_{u}^{\prime}+r_{d}^{\prime}\,. \tag{A.6}\] Note that the decomposition of Eq. (A.4), with \(\Psi\)-independent coefficients \(r^{\prime}_{3,0}\), holds to a good precision, even after including the threshold corrections at the EW scale, which turn out to be iso-vector (numerically \(\left|(r_{0}^{\prime})_{\rm th}/(r_{3}^{\prime})_{\rm th}\right|\sim 10^{-6}\)).

Let us now derive approximate formulae for \(r_{3,0}^{\prime}(m_{\rm BSM})\). To this end, we first evaluate the running effects by numerically solving the full set of RG equations, including the threshold corrections at the EW scale [17]. In the calculation the two-loop running of the SM gauge and Yukawa couplings is implemented, with their input values at \(\mu_{w}=m_{Z}\) taken from Ref. [35]. A set of numerical values for \(r_{3,0}^{\prime}(m_{\rm BSM})\) is tabulated in Tab. A.4. These values are accurately reproduced by the following fitting functions: \[r_{3}^{\prime}(m_{\rm BSM}) \simeq r_{u}^{\prime}-r_{d}^{\prime}\simeq-0.54\ln\left(\sqrt{x}-0.52\right), \tag{A.7}\] \[r_{0}^{\prime}(m_{\rm BSM}) \simeq r_{u}^{\prime}+r_{d}^{\prime}\simeq 3.8\times 10^{-4}\ln^{2}(x-1.25)\,, \tag{A.8}\] with \(x=\log_{10}(m_{\rm BSM}/\,{\rm GeV})\). Eq. (A.7) agrees with the numerical results within 2% accuracy in the \(1\,{\rm TeV}\leq m_{\rm BSM}\leq 10^{18}\,{\rm GeV}\) range. The precision of Eq. (A.8) is better than 6%. However, since \(|r_{0}^{\prime}/r_{3}^{\prime}|\lesssim 0.5\%\), this function has a negligible numerical impact on \(r_{\Psi}^{\prime}\).
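As a closing cross-check of the appendix fits, the size of \(r_{3}^{\prime}\) can be recovered at the order-of-magnitude level from the one-loop top-Yukawa term alone. The following sketch (ours) makes several simplifying assumptions, all flagged in the comments, and is not a substitute for the full two-loop solution:

```python
import math

def r3_estimate(m_bsm, yt_eff=0.55, mu_w=91.19):
    """Order-of-magnitude estimate of the iso-vector coefficient r_3' from
    the one-loop top-Yukawa term of Eq. (A.2).  Assumptions (ours): a single
    effective, RG-averaged Yt; C_t frozen at its UV value; the alpha_s^2
    iso-scalar term dropped; running stopped at mu_w per the Theta function;
    the overall sign fixed to the fitted r_3' < 0."""
    k = 3 * yt_eff**2 / (4 * math.pi**2)
    return -k * math.log(m_bsm / mu_w)

print(r3_estimate(1e10))                         # ~ -0.43
print(-0.54 * math.log(math.sqrt(10) - 0.52))    # fitted value, Eq. (A.7): ~ -0.52
```

The crude estimate lands in the same ballpark as the fit, confirming that the top-Yukawa iso-vector term dominates the running.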
2310.13321
Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting
Recent studies have revealed that grammatical error correction methods in the sequence-to-sequence paradigm are vulnerable to adversarial attack, and simply utilizing adversarial examples in the pre-training or post-training process can significantly enhance the robustness of GEC models to certain types of attack without suffering too much performance loss on clean data. In this paper, we further conduct a thorough robustness evaluation of cutting-edge GEC methods for four different types of adversarial attacks and propose a simple yet very effective Cycle Self-Augmenting (CSA) method accordingly. By leveraging the augmenting data from the GEC models themselves in the post-training process and introducing regularization data for cycle training, our proposed method can effectively improve the model robustness of well-trained GEC models with only a few more training epochs as an extra cost. More concretely, further training on the regularization data can prevent the GEC models from over-fitting on easy-to-learn samples and thus can improve the generalization capability and robustness towards unseen data (adversarial noise/samples). Meanwhile, the self-augmented data can provide more high-quality pseudo pairs to improve model performance on the original testing data. Experiments on four benchmark datasets and seven strong models indicate that our proposed training method can significantly enhance the robustness of four types of attacks without using purposely built adversarial examples in training. Evaluation results on clean data further confirm that our proposed CSA method significantly improves the performance of four baselines and yields nearly comparable results with other state-of-the-art models. Our code is available at https://github.com/ZetangForward/CSA-GEC.
Zecheng Tang, Kaifeng Qi, Juntao Li, Min Zhang
2023-10-20T07:31:23Z
http://arxiv.org/abs/2310.13321v2
# Beyond Hard Samples: Robust and Effective Grammatical Error Correction with Cycle Self-Augmenting

###### Abstract

Recent studies have revealed that grammatical error correction methods in the sequence-to-sequence paradigm are vulnerable to adversarial attack, and simply utilizing adversarial examples in the pre-training or post-training process can significantly enhance the robustness of GEC models to certain types of attack without suffering too much performance loss on clean data. In this paper, we further conduct a thorough robustness evaluation of cutting-edge GEC methods for four different types of adversarial attacks and propose a simple yet very effective Cycle Self-Augmenting (CSA) method accordingly. By leveraging the augmenting data from the GEC models themselves in the post-training process and introducing regularization data for cycle training, our proposed method can effectively improve the model robustness of well-trained GEC models with only a few more training epochs as an extra cost. More concretely, further training on the regularization data can prevent the GEC models from over-fitting on easy-to-learn samples and thus can improve the generalization capability and robustness towards unseen data (adversarial noise/samples). Meanwhile, the self-augmented data can provide more high-quality pseudo pairs to improve model performance on the original testing data. Experiments on four benchmark datasets and seven strong models indicate that our proposed training method can significantly enhance the robustness of four types of attacks without using purposely built adversarial examples in training. Evaluation results on clean data further confirm that our proposed CSA method significantly improves the performance of four baselines and yields nearly comparable results with other state-of-the-art models. Our code is available at [https://github.com/ZetangForward/CSA-GEC](https://github.com/ZetangForward/CSA-GEC).

## 1 Introduction

Grammatical error correction (GEC) is one of the most essential application tasks in the NLP community for its crucial value in many scenarios including, but not limited to, writing assistance (Napoles et al., 2019; Fitria, 2021), automatic speech recognition (Karat et al., 1999; Namazifar et al., 2021; Zhao et al., 2021; Wang et al., 2021; Zhang et al., 2021), and information retrieval (Gao et al., 2010; Duan and Hsu, 2011; Hagen et al., 2017; Zhuang and Zuccon, 2021). It mainly aims to detect and correct various textual errors, such as spelling, punctuation, grammatical, word choice, and other article mistakes (Wang et al., 2020). Existing solutions to tackle this task can be roughly divided into two categories, i.e., sequence-to-sequence generation (_Seq2Seq_) (Ji et al., 2017; Chollampatt and Ng, 2018) and sequence-to-editing (_Seq2Edits_) (Stahlberg and Kumar, 2020; Awasthi et al., 2019; Li and Shi, 2021). The former group performs the translation from ungrammatical sentences to the corresponding error-free sentences, while the latter introduces tagging or sequence labeling to merely edit a small proportion of the input sentences, leaving the rest unchanged. With the well-tested encoder-decoder framework (Sutskever et al., 2014; Vaswani et al., 2017) as the backbone, GEC methods in the _Seq2Seq_ paradigm can achieve promising performance but are sensitive to the quality and scale of training data.
Thus, many recent works have studied the problem of automatically obtaining high-quality paired data to compensate for the lack of human-labeled data pairs (Zhao et al., 2019; Kiyono et al., 2019; Yasunaga et al., 2021). As for the _Seq2Edits_ group, it generally achieves a faster inference speed than _Seq2Seq_ methods by decoding the target text in parallel, and meanwhile obtains very competitive performance (Awasthi et al., 2019; Omelianchuk et al., 2020)1. Existing literature has also revealed that incorporating large-scale pre-trained language models (PLMs) can enhance the GEC performance of both _Seq2Seq_ (Kaneko et al., 2020) and _Seq2Edits_ (Malmi et al., 2019; Omelianchuk et al., 2020) methods. However, recent studies have disclosed that _Seq2Seq_ GEC models (even with data augmentation) are vulnerable to adversarial examples, which are purposely constructed to confuse a converged model into generating wrong predictions by perturbing its inputs (Wang and Zheng, 2020). Fooling a model by perturbing its inputs, which is also called an adversarial attack, has become an essential means of exploring model vulnerabilities. Studies on other classification tasks and PLMs further hint at the possible vulnerability of PLM-based GEC methods (Li et al., 2021). In view of the above-mentioned facts, it is imperative to conduct a systematic evaluation of the robustness of existing GEC methods to adversarial attacks, especially for the under-explored _Seq2Edits_ paradigm and PLM-based models.

Footnote 1: According to the leaderboard of the CoNLL-14 shared task, three of the top-5 best performing GEC systems belong to the Seq2Edits paradigm. The link is: [http://nlpprogress.com/english/grammatical_error_correction.html](http://nlpprogress.com/english/grammatical_error_correction.html)

To fill this gap, we propose to evaluate the robustness of cutting-edge GEC models to different adversarial attacks. More concretely, we introduce three discrete adversarial attack strategies and one continuous attack method to obtain adversarial examples. The three discrete variations are motivated by an existing GEC work (Wang and Zheng, 2020), which detects the vulnerable tokens/positions that are most likely to cause the failure of the GEC models once they are substituted with grammatical errors people may make. In this paper, we implement three different substitution strategies, i.e., rule-based perturbations that follow the rules of Wang and Zheng (2020), and two relatively imperceptible strategies that substitute synonyms and antonyms based on WordNet2. As for the continuous adversarial noise, back-translation (Sennrich et al., 2016) was widely used for neural GEC models before the era of large-size pre-trained language models (PLMs), and we leverage this type of perturbation to test whether GEC models built on powerful PLMs are robust enough to such simple adversarial noise. We also propose an evaluation metric, _Recovery Rate_, with an associated attack set which contains a fixed number of attacks per sentence. Resembling the previous observation of _Seq2Seq_ methods attacked by Mapping & Rules, cutting-edge GEC models are also susceptible to the introduced attacks. Taking the BART-based method (Lewis et al., 2020; Katsumata and Komachi, 2020) for example, its performance (\(F_{0.5}\)) on CoNLL-2014 (Ng et al., 2014) decreases sharply, from 62.6 to 36.8.
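The paper does not spell out the Recovery Rate computation at this point, but one plausible reading (ours, clearly an assumption) is the fraction of injected perturbations that a model restores on the attack set:

```python
def recovery_rate(model, attacked_refs):
    """A sketch of one possible Recovery Rate definition (our assumption, not
    the authors' code): given an attack set with a fixed number of perturbed
    positions per sentence, count the fraction of perturbed positions whose
    gold token reappears in the model output.  `model` maps a sentence to its
    correction; `attacked_refs` yields (attacked_sentence, [(pos, gold_token), ...])."""
    recovered, total = 0, 0
    for attacked, perturbations in attacked_refs:
        out_tokens = set(model(attacked).split())
        for _, gold_token in perturbations:
            recovered += gold_token in out_tokens
            total += 1
    return recovered / max(total, 1)
```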
Intuitively, the dramatic performance decline can be mitigated by pre-training or post-training with a great number of adversarial examples for a certain type of attack (Wang and Zheng, 2020). However, such methods require preparing considerable data for each attack type in advance, which is infeasible for real-world scenarios. Another minor flaw of these methods is that the significant improvement in robustness is possibly accompanied by a performance decrease on the original testing data.

Footnote 2: [https://wordnet.princeton.edu/](https://wordnet.princeton.edu/)

To avoid these problems, we introduce the concept of regularization data, which is a strict variant of hard samples, and propose a very effective cycle self-augmenting (CSA) method. Concretely, our proposed CSA is only introduced in the post-training process of a converged GEC model and merely needs the original training data. The self-augmented data can provide more high-quality pseudo pairs to improve model performance on the original testing data, and meanwhile, further training on the regularization data can prevent the GEC models from over-fitting on easy-to-learn samples. Thus, our proposed CSA method can significantly improve model robustness with only a few more training epochs as the extra cost. Since our CSA no longer requires well-crafted adversarial examples for model training, it is more feasible in applications and can generalize well to different GEC frameworks. Experimental results on seven strong models (e.g., BERT-fuse, BART, RoBERTa, XLNET) and four benchmark datasets (i.e., BEA, CoNLL, FCE, JFLEG) demonstrate the effectiveness of our proposed CSA method. Our CSA method achieves significant robustness improvements in all settings and, at the same time, yields meaningful performance improvements on the original testing data (four out of seven tested models), with nearly comparable results for the remaining three SOTA baselines. By further analyzing the effects of hard samples and regularization data, we observe the advantage of the regularization data. Besides, we also find that the trade-off between robustness to attacks and performance on the original testing data is associated with the regularization examples: more regularization pairs in training lead to better robustness but a performance decline on the original testing data, and vice versa. Furthermore, we find a deeper relationship between the performance of the GEC model and data characteristics by conducting quantitative experiments on the two fine-grained data components. As for the anomaly of only slight improvements on the original testing data or attack sets for some models, we conduct a thorough analysis of this phenomenon and, in the end, summarize a paradigm for utilizing regularization data with different model architectures. It is worth noting that the recent work (Zhang et al., 2023) also explores the robustness of GEC systems when nuanced modifications irrelevant to errors are introduced by users, but our work is more inclined towards using CSA methods to enhance the robustness of the GEC system.

## 2 Preliminary

In this section, we summarize the key components of cutting-edge GEC methods and present a few representative works correlated with the robustness of GEC models against adversarial attacks. We first give the definition of the grammatical error correction (GEC) task.
Then, we review some typical methods for obtaining synthetic data and introduce the two most popular GEC model architectures in existing literature, i.e., _Seq2Seq_ and _Seq2Edits_. In the end, we present the definition of textual adversarial attacks in NLP, followed by the pilot studies of adversarial attacks in GEC and a few widely-used attack methods for other NLP tasks.

### Task Definition

Grammatical error correction (GEC) is the task of converting an ungrammatical written text into a grammatical one. Specifically, given a text \(x_{1},\dots,x_{n}\in\mathcal{X}\), the task is to build a system \(\mathcal{F}\) which can detect and correct the ungrammatical content without changing the meaning of the original text, and finally return the grammatical text \(y_{1},\dots,y_{m}\in\mathcal{Y}\) (\(m\) may not be equal to \(n\)). From a linguistic point of view, errors existing in a text can be classified into five levels (Kukich, 1992), which are listed with their explanations below; Table 1 provides some cases of these five error types.

* **Lexical errors** refer to misspelling a word into a non-existent one.
* **Syntactic errors** violate some syntactic rules, e.g., subject-verb agreement, which causes a mismatch with the context or the grammatical rules.
* **Semantic errors** are caused by contextual spelling mistakes. Although no syntactic error exists, the sentence's meaning may be changed, or there are collocation/co-occurrence errors.
* **Discourse errors** break inherent coherence relations in a text, which may cause temporal or other conflicts in or among the sentences.
* **Pragmatic errors** correspond to violations of common sense in the text.

\begin{table}
\begin{tabular}{l|l}
\hline \hline
**Level** & **Example** \\
\hline Lexical & The best place for young people in our aree is without doubt the lake. \\
 & The best place for young people in our area is without doubt the lake. \\
\hline Syntactic & I don’t recommend it to children lower than thirteen years old. \\
 & I don’t recommend it to children under thirteen years old. \\
\hline Semantic & I had a big conversation yesterday in that house. \\
 & I had a long conversation yesterday in that house. \\
\hline Discourse & Water is made up of one elements, hydrogen and oxygen. \\
 & Water is made up of two elements, hydrogen and oxygen. \\
\hline Pragmatic & I need sunscreen because it rains so hard. \\
 & I need an umbrella because it rains so hard. \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Different levels of errors existing in the text. Texts colored in **red** are wrongly written and corrected into the **blue** ones.

In application, GEC systems mainly focus on the first three types of errors, because correcting errors from the last two types requires auxiliary tasks, e.g., discourse analysis.

### Data Expansion

The recent success of GEC models highly relies on the availability of massive training data pairs (Awasthi et al., 2019; Kiyono et al., 2019; Omelianchuk et al., 2020; Rothe et al., 2021). Considering that human-labeled pairs are expensive to obtain, many efforts have been devoted to exploring the generation of pseudo data pairs for GEC (Lichtarge et al., 2019; Grundkiewicz et al., 2019; Naplava and Straka, 2019), and the combination of synthetically generated data has almost been indispensable for recently proposed GEC models (Kiyono et al., 2019).
Furthermore, in addition to generating and utilizing a large amount of synthetic data for improving the model performance under the supervised setting, how to efficiently use different data variations for probing model capability or improving model performance under an unsupervised setting during the fine-tuning stage has also attracted much research interest. We classify the prevalent operations of synthesizing data in GEC tasks into two categories, i.e., vocabulary-based perturbation and generation-based perturbation, which are detailed below.

**Vocabulary-Based Perturbation.** Representative vocabulary-based perturbation methods are based on three basic noising operations: deletion, insertion, and replacement. The first two operations can be applied directly to the original grammatical sentences, while the replacement operation requires a confusion set that contains words that people often make mistakes with (Bryant and Briscoe, 2018; Rozovskaya et al., 2014). We introduce a few of them as follows:

* With the convenient access to large-scale general-domain or out-of-domain corpora, e.g., Wikipedia, some works (Lichtarge et al., 2018; Awasthi et al., 2019; Katsumata and Komachi, 2020) apply vocabulary-based perturbation to these datasets to acquire massive data of relatively low quality for pre-training a GEC model, in which the pre-training process can be further enhanced by strategies from language model pre-training, e.g., the masking operation (Kiyono et al., 2019).
* Many other works explore constructing high-quality data samples. For instance, Yasunaga et al. (2021); Naplava et al. (2022); Mita and Yanaka (2021); Zhao et al. (2019) focus on how to build better confusion sets to further improve the performance or probe the characteristics of the model by manually annotating or utilizing linguistic rules such as POS tagging.

**Generation-Based Perturbation.** Although vocabulary-based perturbation is convenient and straightforward, some unrealistic error patterns that do not resemble those observed in the actual data may occur in the results (Koyama et al., 2021). Instead, some researchers utilize neural networks to learn error distributions and automatically generate errors:

* Ge et al. (2018) propose a fluency metric to evaluate the quality and correctness of sentences. For each ungrammatical-corrected sentence pair, they utilize a sequence-to-sequence error generation model to create n-best pseudo pairs during inference, which are further sorted in ascending order of fluency. Then, different fluency boost learning strategies are introduced to enhance the training process.
* Wan et al. (2020) perform data augmentation in a controllable manner. Specifically, they train a model which can reconstruct a grammatical sentence into an error-specific one based on the given error type.
* Back-translation (Sennrich et al., 2016; Lample et al., 2018) is an effective method to improve neural machine translation with monolingual data by augmenting the parallel training corpus with target-language sentences. It can also be utilized for the GEC task, as we can expand the training corpus with the back-translations of grammatical texts (Lichtarge et al., 2019) or texts in other languages (Zhou et al., 2020). Different from the above two methods, the back-translation method is more direct and typically introduced in the post-processing procedure, i.e., by modifying the decoding strategy (Xie et al., 2018; Kasewa et al., 2018).
* To improve the performance of GEC systems under the unsupervised setting, Yasunaga et al. (2021) apply the BIFI framework (Yasunaga and Liang, 2021) to the GEC task, which contains a critic module to evaluate the outputs from the back-translation model to acquire more realistic error distributions.

Some works also pursue improving the data quality in addition to expanding the data quantity. Rothe et al. (2021) release a CLANG-8 dataset by using a multilingual model (mT5 (Xue et al., 2021)) to automatically clean and relabel the original LANG-8 corpus (Mizumoto et al., 2011; Tajiri et al., 2012). Wan and Wan (2021) propose a syntax-guided model to make use of the syntactic knowledge from raw data, which can achieve comparable performance without the enhancement of pre-trained models.

### Adversarial Attack

#### 2.3.1 Textual Adversarial Attacks Definition

The core of building textual adversarial examples is to confuse the NLP models. As mentioned in Section 2.1, a GEC system \(\mathcal{F}\) can map an ungrammatical sentence \(x\in\mathcal{X}\) to a correct sentence \(y\in\mathcal{Y}\), while an adversarial example \(x^{\prime}=x^{\prime}_{1},\dots,x^{\prime}_{n}\notin\mathcal{X}\) is built to satisfy the following paradigm: \[\mathcal{F}(x^{\prime})\neq y,\quad sim(x,x^{\prime})\geq\delta, \tag{1}\] where \(sim\) represents a metric function that calculates the similarity between the original \(x\) and the associated adversarial example \(x^{\prime}\), and \(\delta\) is the minimum similarity.

#### 2.3.2 Application of Adversarial Attack

In actual application scenarios, there exist different types of noise in text, and recent studies on the GEC task have revealed that existing _Seq2Seq_ methods are quite vulnerable to adversarial examples under the white-box setting (Wang and Zheng, 2020). In other words, a well-trained off-the-shelf model may collapse when facing texts which happen to be adversarial examples. As a result, building a GEC system with robustness and generalization capability over noisy examples is critical. Inspired by adversarial attack and defense in NLP (Jia and Liang, 2017; Zhao et al., 2018), we explore the model performance on both adversarial attack sets and regular benchmarking datasets of the GEC task. To obtain adversarial examples, Wang and Zheng (2020) propose to first identify the weak spots of a model and then replace the vulnerable tokens with two different strategies. One is to create a correct-to-error mapping from the GEC training set. Another is to apply a series of substitution rules if there is no candidate in the mapping. Hereafter, we denote this method as Mapping & Rules for short. There are also other popular adversarial example construction methods for PLMs and other tasks that are less explored in GEC, e.g., word substitutions (Ma, 2019; Dong et al., 2021; Li et al., 2021).
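To make the WordNet-based substitution attacks described earlier concrete, the following sketch (ours, not the authors' code) generates synonym or antonym replacements for given token positions; the white-box search for vulnerable positions is deliberately omitted:

```python
import random
from nltk.corpus import wordnet  # requires a one-time nltk.download("wordnet")

def wordnet_substitutes(word, use_antonyms=False):
    """Candidate replacements for one token, in the spirit of the
    synonym/antonym attacks described above (a sketch, not the authors' code)."""
    candidates = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            if use_antonyms:
                candidates.update(a.name() for a in lemma.antonyms())
            else:
                candidates.add(lemma.name())
    candidates.discard(word)
    return [c.replace("_", " ") for c in candidates]

def perturb(sentence, positions, use_antonyms=False, seed=0):
    """Replace the tokens at the given (assumed vulnerable) positions.
    How those positions are found (the white-box weak-spot search of
    Wang and Zheng, 2020) is omitted here."""
    rng = random.Random(seed)
    tokens = sentence.split()
    for i in positions:
        subs = wordnet_substitutes(tokens[i], use_antonyms)
        if subs:
            tokens[i] = rng.choice(subs)
    return " ".join(tokens)
```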
### Model Architecture As mentioned above, the goal of the GEC task is to map an ungrammatical piece (\(x_{1},\ldots,x_{n}\)) into a grammatical one (\(y_{1},\ldots,y_{m}\)) (\(n\) may differ from \(m\)) from the dataset \(\mathcal{D}\) using a GEC system \(\mathcal{F}\). As illustrated in Figure 1, there are two main model architectures, i.e., the _Seq2Seq_ framework (Sutskever et al., 2014; Vaswani et al., 2017; Lewis et al., 2020) and the _Seq2Edits_ framework (Awasthi et al., 2019; Devlin et al., 2019). #### 2.4.1 _Seq2Seq_ framework Many works (Xie et al., 2016; Yuan and Briscoe, 2016; Xie et al., 2018; Junczys-Dowmunt et al., 2018; Zhao et al., 2019; Sun et al., 2021; Kaneko et al., 2020) formulate GEC as a natural language generation (NLG) task and utilize an encoder-decoder structure to complete the sequence-to-sequence (_Seq2Seq_) generation task. Given an input sentence \(\mathbf{x}\) of \(n\) tokens, the encoder first encodes it into the hidden representation \(h^{s}_{1:n}\), and then the decoder outputs each token in an auto-regressive fashion. The output distribution over the vocabulary at the \(k\)-th decoding step is conditioned on \(h^{s}_{1:n}\) from the encoder and the summarized representation \(h^{t}_{1:k-1}\) of the previously generated \(k{-}1\) tokens from the decoder, formulated as \(\text{Pr}(y_{k}|\mathbf{y}_{<k},\mathbf{x})=\text{Pr}(y_{k}|h^{t}_{1:k-1},h^{s}_{1:n})\). The training objective of the _Seq2Seq_ architecture is the negative log-likelihood, written as \[\mathcal{L}(\theta)=-\frac{1}{|\mathcal{D}|}\sum_{(x,y)\in\mathcal{D}}\log p(y|x), \tag{2}\] where \(\theta\) refers to the trainable model parameters. To obtain the optimal output, beam search decoding (Yuan and Briscoe, 2016; Chollampatt and Ng, 2018) and its variations are also utilized (Sun et al., 2021). This architecture can achieve promising performance with massive data but sacrifices inference efficiency due to the iterative decoding. #### 2.4.2 _Seq2Edits_ framework To alleviate the slow inference and large decoding space problems of the _Seq2Seq_ model architecture, _Seq2Edits_ provides an alternative that casts GEC into a tagging problem (Awasthi et al., 2019; Omelianchuk et al., 2020; Malmi et al., 2019) along with non-autoregressive sequence prediction (Li and Shi, 2021). Instead of directly predicting the tokens, the _Seq2Edits_ architecture first predicts the edit operation type \(e_{i}\in E\) for each input token \(x_{i}\) and then performs a series of transformation operations based on the predicted edits to realize the grammatical output. The training objective of tagging is formulated as: \[\mathcal{C}(\phi)=-\frac{1}{|\mathcal{D}|}\sum_{(x,e)\in\mathcal{D}}\log p(e|x), \tag{3}\] where \(\phi\) corresponds to the model parameters to be trained. This architecture can achieve competitive performance and faster inference speed with limited data but requires heuristic priors and human effort to obtain labeled data for the tagging task. ## 3 Cycle Self-Augmenting Method In this section, we introduce our Cycle Self-Augmenting method (**CSA**). Specifically, we elaborate on how Self-Augmenting and Cycle Training work under our settings in Section 3.1 and Section 3.2 respectively, followed by a comparison between our method and conventional data augmentation methods in Section 3.3. In the cycle training process, we will present the concept of regularization data for GEC, which is the key to robustness against adversarial attacks.
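As a preliminary, note that the further training rounds of CSA (described next) reuse the baseline training settings (Section 5.2), i.e., the negative log-likelihood of equation (2), only on different data pairs. The following is a minimal PyTorch-style sketch of that objective for one batch; the helper name and tensor layout are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def seq2seq_nll(logits: torch.Tensor, targets: torch.Tensor, pad_id: int) -> torch.Tensor:
    """Token-averaged negative log-likelihood, a practical stand-in for equation (2).

    logits:  (batch, tgt_len, vocab) decoder outputs.
    targets: (batch, tgt_len) gold token ids; positions equal to pad_id are ignored.
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * tgt_len, vocab)
        targets.reshape(-1),                  # (batch * tgt_len,)
        ignore_index=pad_id,
    )
```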
### Self-Augmenting To enhance model robustness, existing works mainly utilize numerous well-crafted adversarial examples for certain types of adversarial attacks in the pre-training and/or post-training stages (Wang and Zheng, 2020; Li et al., 2021). Instead of elaborately designing adversarial example generation strategies for each type of attack, we make the GEC model itself perform self-augmenting to defend against various types of attack, which is more efficient and adaptive to varied GEC models. To better exploit the capability of GEC models, we introduce the self-augmenting mechanism for post-training.

Figure 2: The overall framework of our proposed Cycle Self-Augmenting (CSA). The Self-Augmenting mechanism corresponds to steps ①–④.

Concretely, the crux of Self-Augmenting is to obtain augmenting data pairs \(\mathcal{D}_{Aug}\), which consist of extra pairs \(\mathcal{D}_{self}\) constructed by the model itself. The detailed process is illustrated in Figure 2. Given a well-trained GEC model \(f(\cdot)\) and the original training dataset \(\mathcal{D}=\{(x^{(i)},y^{(i)})\}_{i=1\cdots|\mathcal{D}|}\), we feed each input \(x\) into \(f(\cdot)\) to obtain the corresponding output \(y^{\prime}\) (steps ①–② in Figure 2), and compare \(y^{\prime}\) with the golden \(y\) word for word (step ③). If there is any difference between \(y^{\prime}\) and \(y\), we add \((y^{\prime},y)\) to \(\mathcal{D}_{self}\) and utilize these extra data for post-training (step ④). One merit of this design is that it enables the model to iteratively refine its outputs, a popular paradigm in many other situations, e.g., the non-autoregressive machine translation task (Lee et al., 2018; Geng et al., 2021; Huang et al., 2021), and it is also consistent with the GEC task. However, it may cause a dramatic forgetting problem. Thus, we also collect another augmenting data variant \(\mathcal{D}_{hard}\) as \(\mathcal{D}_{Aug}\), which consists of hard samples directly extracted from the original training set. The construction of \(\mathcal{D}_{hard}\) is similar to that of \(\mathcal{D}_{self}\), except that we insert each original training pair \((x,y)\) into \(\mathcal{D}_{hard}\). To better distinguish the hard samples from the original training set, we denote pairs in \(\mathcal{D}_{hard}\) as \((x_{h},y)\) in this paper. We compare these two strategies in our experiments (Section 5). Besides, one could collect \((x,y^{\prime})\) for the self-distillation proposed by Kim et al., a regularization method that mitigates over-fitting when the model capacity is mismatched with the amount of data. However, such a method is unsuitable for the GEC task, since the quantity of training data is sufficient and the target of the GEC model should be grammatical texts. Instead, utilizing \(D_{hard}\) or \(D_{self}\) for post-training provides more feasible training pairs and tallies better with the GEC task, i.e., only part of the input sentence is edited. We show the superiority of our self-augmenting in Section 6.3. ### Cycle Training To effectively utilize the augmenting pairs from the above self-augmenting process, we further present a cycle training strategy, which is sketched out in step ⑤ of Figure 2. Specifically, we use the aforementioned self-augmenting strategy to construct a new dataset \(\mathcal{D}^{k}_{Aug}\) in each cycle \(k\), where \(0\leq k\leq\epsilon\).
Thus, we can leverage \(\epsilon\) augmenting datasets (\(\mathcal{D}^{1}_{Aug},\dots,\mathcal{D}^{k}_{Aug},\dots,\mathcal{D}^{\epsilon}_{Aug}\)) in cycle training, where these datasets are utilized differently in two training stages. In **Stage I**, the obtained augmenting datasets contain many data pairs unseen in the original training dataset, which can simply be used for further training to improve both model performance and robustness. Accordingly, we adopt the following training process for each cycle at the early stage, i.e., when \(0\leq k\leq\mathcal{P}\): * Perform training on \(\mathcal{D}^{k}_{Aug}\) until convergence. * Conduct further tuning on a small high-quality GEC dataset \(\mathcal{D}_{tune}\) to prevent over-fitting on the augmenting dataset. Note that the improvement in performance and robustness is not caused by merely using the small dataset, as discussed in Section 6.4. As training proceeds, there are fewer and fewer unseen data pairs in the augmenting datasets. Simply utilizing the augmenting dataset in each cycle for model training might yield over-fitting on these datasets. Thus, we turn our focus to the hard-to-learn data, i.e., those data pairs that have not been learned after \(\mathcal{P}\) cycles. Inspired by previous work (Zhou et al., 2021), which names certain samples that are negatively associated with the performance of knowledge distillation regularization examples, we treat these hard-to-learn data as **Regularization Data** for the GEC task. When \(\mathcal{P}\leq k\leq\epsilon\), the regularization data of the \(k\)-th cycle is obtained as \(\mathcal{D}^{k}_{Reg}=\mathcal{D}^{k-\mathcal{P}+1}_{Aug}\cap\dots\cap\mathcal{D}^{k}_{Aug}\). In this stage (**Stage II**), the trained GEC model from **Stage I** is further trained as below: * Perform training on \(\mathcal{D}^{k}_{Reg}\) until convergence. * Conduct further tuning on the small high-quality GEC dataset \(\mathcal{D}_{tune}\). We summarize the whole procedure of the **CSA** method in Algorithm 1. The benefits of launching further training on regularization data are four-fold: 1) it prevents overfitting on the easy-to-learn data pairs in the augmenting datasets; 2) it can reduce model capacity to improve generalization ability and robustness; 3) it gives the model more opportunities to address hard-to-learn pairs; 4) it accelerates each training cycle by using fewer data pairs. More analysis of regularization data is given in Section 6.5. ### Comparison with Previous Works Our CSA method bears a certain degree of similarity to the **Fluency Boost Learning** (Ge et al., 2018) methods, and one may view our method as a traditional data augmentation method. We elaborate on the advancement of our method from two aspects: (1) continual improvement on a well-trained model; (2) an efficient way to filter regularization data. Continual Improvement. As shown in Figure 3, unlike the previous work (Ge et al., 2018), which utilizes a dual learning method3 to improve the GEC model, our method mainly focuses on self-iterative refinement of one well-trained model. From the implementation perspective, we do not modify the original model structure (including the decoding strategy) and do not add an auxiliary module. This plug-and-play property is significant, especially in environments where the model architecture must remain unchanged or there is a strict requirement on the number of model parameters.
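To make this self-iterative refinement concrete, the following is a minimal sketch of the self-augmenting and cycle-training loop from Sections 3.1 and 3.2. The function names are our own: `correct` stands for the model's inference call, and `train`/`tune` stand for the user's fine-tuning loops; none of this is the paper's released code.

```python
from typing import Callable, List, Set, Tuple

Pair = Tuple[str, str]

def self_augment(correct: Callable[[str], str], data: List[Pair]) -> Set[Pair]:
    """D_self: (y', y) pairs whose model output y' still differs from gold y
    (steps 1-4 of Figure 2)."""
    d_self: Set[Pair] = set()
    for x, y in data:
        y_pred = correct(x)
        if y_pred != y:                        # word-for-word mismatch
            d_self.add((y_pred, y))
    return d_self

def cycle_training(correct: Callable[[str], str],
                   train: Callable[[List[Pair]], None],
                   tune: Callable[[], None],
                   data: List[Pair],
                   epsilon: int = 5, patience: int = 2) -> None:
    """Cycle training (Section 3.2); epsilon and patience default to the Section 5.2 setting."""
    history: List[Set[Pair]] = []              # D_Aug^1, ..., D_Aug^k
    for k in range(1, epsilon + 1):
        history.append(self_augment(correct, data))
        if k <= patience:                      # Stage I: train on the full augmenting set
            train(list(history[-1]))
        else:                                  # Stage II: D_Reg^k = intersection of last P sets
            train(list(set.intersection(*history[-patience:])))
        tune()                                 # small high-quality D_tune against over-fitting
```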
Footnote 3: There are actually three methods proposed in their work: self-boost, back-boost and dual-boost; the dual-boost method is the one finally put into implementation.

Regularization Data Extraction. A critical merit of our method is that it selects regularization data that simultaneously improves model performance and robustness within the cyclic procedure. Following the aforementioned data augmentation paradigm shown in equation (1) for improving robustness, the regularization data can improve both the model's robustness and its performance on the original testing data, as it contains two main components: * \(X_{unl}\): a set of data pairs on which the model keeps generating the same errors or is unable to edit correctly; it mainly contributes to improving the model's robustness on attack sets. * \(X_{unc}\): a set of data pairs on which the model keeps generating different errors or is unable to edit correctly; it mainly contributes to improving the model's performance on the original testing data. We analyze the influence brought by regularization data in Section 6.5, and illustrate the procedure of generating attack sets in Section 4. Moreover, we select different variants of augmenting pairs for post-tuning, i.e., \((x_{h},y)\) and \((y^{\prime},y)\), rather than simply filtering one type of data like the fluency boost learning method. ## 4 Adversarial Data Since the adversarial data sets are the cornerstone of testing the defense ability of the GEC model, we introduce four textual adversarial attack methods to construct different variants for each original test set, including back-translation (Xie et al., 2018), mapping & rules (Wang and Zheng, 2020), antonym substitution (Ma, 2019) and synonym substitution (Li et al., 2021). We divide these four ways of constructing adversarial data into two categories: discrete attack and continuous attack, which are described in Section 4.1 and 4.2, respectively. In Section 4.3, we introduce the _Recovery Rate_, an evaluation metric that tests the model's robustness.

Figure 3: Comparison between our **CSA** and the previous data augmentation method. Picture (a) shows the framework of the **fluency boost learning** method proposed by Ge et al., which contains a generator that produces error data and a GEC model trained with both real data and the data produced by the generator. The fluency score is introduced to sample the fluency boost pairs output by the models. Picture (b) illustrates our **CSA** method, which contains \(k\) cycles; in each cycle, the GEC model is trained with both real data and the regularization data. The regularization data is selected among the cycles.

### Discrete Attack Inspired by the previous work (Wan et al., 2020), which identifies the vulnerable tokens according to positional scores, we locate the weak spots with a probability distribution. We formally describe the implementation details of the identification operation for the _Seq2Seq_ model and the _Seq2Edits_ model in Section 4.1.1 and 4.1.2, respectively. Then, we introduce three discrete types of substitution operations in Section 4.1.3. #### 4.1.1 Constructing data for the _Seq2Seq_ model architecture For the _Seq2Seq_ model architecture, we use the beam search (Tillmann & Ney, 2003) decoding strategy and set the beam size to 5. After the original ungrammatical sentence \(X=\{x_{1},\ldots,x_{n}\}\) passes through the well-trained cutting-edge GEC system \(\mathcal{F}\), we can obtain 5 candidate sentences, as well as the token-level prediction probability \(p_{i}^{t}\) for the \(i\)-th token.
Then, we select the sentence with the highest generation probability and use the alignment function implemented in the fairseq toolkit (Ott et al., 2019) to align the source and target sentences. We average all the token-level prediction probabilities to get a threshold \(\epsilon\), and take \(x_{k}\) as a candidate to be perturbed if its prediction probability \(p_{k}^{t}\) is less than the threshold \(\epsilon\). #### 4.1.2 Constructing data for the _Seq2Edits_ model architecture For the _Seq2Edits_ model architecture, there is a token-level prediction probability for each editing tag. Since the **Tagger** module (as shown in Figure 1(b)) often consists of several linear layers with softmax layers on top (Omelianchuk et al., 2020; Awasthi et al., 2019; Malmi et al., 2019), each token-level tag prediction probability \(p_{i}^{t}\) is obtained by applying an argmax over the encoder logits. Owing to the natural one-to-one correspondence between the original ungrammatical sentence and the tags, we can calculate the threshold \(\epsilon\) and obtain the vulnerable candidates directly, without the aligning operation. #### 4.1.3 Substitution Operation After detecting the vulnerable spots of each sentence, a substitution operation follows. Details of the three common attack strategies, which we call mapping & rules, antonym substitution, and synonym substitution, are listed below.

Figure 4: Comparison between our **CSA** and the adversarial training method on four different attack sets.

Mapping & Rules. We first build a \(Good\mapsto Poor\) mapping \(\mathcal{M}\) from the training datasets. Specifically, given one ungrammatical sentence and the corresponding correct sentence, we can get the alignment between the two sentences by utilizing the ERRor ANnotation Toolkit4 (Bryant et al., 2017; Felice et al., 2016). Footnote 4: [https://github.com/chrisjbryant/errant](https://github.com/chrisjbryant/errant) Then, we compare the aligned pieces from the source sentence and the golden sentence, respectively, and add the inconsistent piece pairs to the mapping set \(\mathcal{M}\). We extend this operation to the scope of all publicly annotated corpora (as shown in Table 2). After acquiring the mapping \(\mathcal{M}\), we apply a weighted random sampling (Efraimidis and Spirakis, 2006) method (Equation 4) to select the perturbations for the vulnerable spots in the original sentence step by step: \[e\sim(\mathbb{U}_{i})^{1/c_{i}}, \tag{4}\] where \(\mathbb{U}_{i}\) is a random number between 0 and 1 for the \(i\)-th token, and \(c_{i}\) is calculated by Equation 5: \[c_{i}=\begin{cases}sim(s_{i},s^{\prime}_{i})&sim(s_{i},s^{\prime}_{i}){>}\zeta\\ \zeta&sim(s_{i},s^{\prime}_{i})\leq\zeta\end{cases}, \tag{5}\] where \(s_{i}\) is the original ungrammatical sentence and \(s^{\prime}_{i}\) is its corresponding perturbation. Following the previous work (Wang and Zheng, 2020), we select edit distance (Ristad and Yianilos, 1998) as the \(sim\) function here, and \(\zeta\) is a threshold which controls the similarity between two sentences. In order to keep the semantic consistency between the original text and the adversarial text as far as possible, we stipulate that if the edit distance \(sim(s_{i},s^{\prime}_{i})\) stays greater than \(\zeta\) for three consecutive perturbation steps (Wang and Zheng, 2020), this strategy ends. We set \(\zeta\) to 0.1 by default.
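A minimal sketch of the weighted random sampling in equations (4) and (5); for illustration we treat \(sim\) as a precomputed per-candidate score, and the function name and interface are our own assumptions.

```python
import random

def sample_perturbation(candidates, sim_scores, zeta=0.1):
    """Pick one perturbation for a vulnerable spot via Eqs. (4)-(5).

    candidates: candidate perturbations drawn from the mapping M.
    sim_scores: sim(s_i, s'_i) for the sentence variant each candidate induces.
    """
    def key(score):
        c = max(score, zeta)                    # Eq. (5): floor the weight at zeta
        return random.random() ** (1.0 / c)     # Eq. (4): e ~ U^(1/c)
    # Efraimidis-Spirakis weighted sampling: keep the candidate with the largest key.
    return max(zip(candidates, sim_scores), key=lambda pair: key(pair[1]))[0]
```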
Antonym and Synonym Substitutions. We detect the vulnerable tokens by using the method proposed above, and simply use the open-source tool NLPAug (Ma, 2019)5 to substitute a word with one of opposite meaning according to WordNet antonyms (Miller, 1995), or with similar words according to WordNet/PPDB (of Hertfordshire, 2007) synonyms. The sampling method and default settings of these two perturbation strategies follow the _Mapping & Rules_ strategy. Footnote 5: [https://github.com/makcedward/nlpaug](https://github.com/makcedward/nlpaug) ### Continuous Attack Besides, we also want to explore the robustness of the GEC models by perturbing the inputs in the representation space. We utilize the back-translation method (Sennrich et al., 2016), and the implementation details are illustrated below. Back-Translation. Inspired by previous works (Sennrich et al., 2016; Edunov et al., 2018), back-translation plays an important role in enlarging the monolingual data and boosting fluency for phrase-based statistical machine translation, as well as for GEC tasks. We reverse the examples from the pre-training datasets to train a back-translation model which can generate errorful examples from a clean corpus. However, this method generates too few errors when using a traditional decoding strategy, and the outputs cannot confuse the model easily, as it samples the most likely token in each decoding step. We therefore implement a variant back-translation method (Xie et al., 2018) which adds \(r\beta_{random}\) to penalize every hypothesis during the beam search step, where \(r\) is drawn uniformly from the interval [0, 1] and \(\beta_{random}\) is a scalar hyper-parameter. We follow the previous work (Kiyono et al., 2019) and set \(\beta_{random}=6\). ### Recovery Rate Since the current prevalent evaluation metrics of the GEC task mainly focus on the global performance over the post-editing sentences, a specific metric for evaluating partial changes is missing. Following the previous work (Wang and Zheng, 2020), we propose an evaluation metric, Recovery Rate, to measure the correction rate of the attacked positions. The entire procedure is described in Algorithm 2, and the following notations are defined for a clearer illustration: * \(P\) represents the attacked positions of each sentence, which can be automatically generated or manually defined. For discrete attacks, we regard the vulnerable spots of the original text as \(P\), while for continuous attacks, \(P\) is automatically generated by aligning the attacked texts and the original texts. * \(r\) represents the number of recovered sentences. It is worth noting that a sentence is judged as recovered only if all of its attack positions are recovered. * \(e\) represents the number of unrecovered sentences. If any attack position fails to be recovered, the sentence is judged to be unrecovered. * **Align(\(X\),\(Y\))** is an aligning function implemented with the ERRor ANnotation Toolkit which produces the editing operation sequence between two sentences, i.e., sentence \(X\) can be transformed into sentence \(Y\) by following the editing operation sequence. * **getPos(\(editSeq\),\(pos\))** locates the recovery position by simulating the operations in the editing operation sequence \(editSeq\) until reaching the attack position \(pos\) and calculating the length of the simulation result.
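Using this notation, the sentence- and token-level recovery rates defined next can be computed as in the following minimal sketch; here the aligning and position-tracking of Align/getPos are abstracted into precomputed sets of recovered positions, which is a simplifying assumption.

```python
def recovery_rates(recovered, attacked):
    """Token-level (TR) and sentence-level (SR) recovery rates, cf. Algorithm 2.

    recovered: per sentence, the set of attacked positions the model corrected
               (obtained via Align/getPos in the full procedure).
    attacked:  per sentence, the set of attacked positions P.
    """
    rec_sents = rec_tokens = total_tokens = 0
    for rec, P in zip(recovered, attacked):
        hits = P & rec                 # attacked positions that were corrected
        rec_tokens += len(hits)
        total_tokens += len(P)
        if hits == P:                  # every attacked position was recovered
            rec_sents += 1
    tr = rec_tokens / total_tokens     # token-level recovery rate
    sr = rec_sents / len(attacked)     # sentence-level recovery rate
    return tr, sr
```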
We can see that, in this form, the _Recovery Rate_ metric dictates that all the attack positions of a sentence need to be recovered while ignoring the number of attacks, which reflects the global modification information; we abbreviate it as the _SR_ (**S**entence-level **R**ecovery) rate. As we are also concerned with local modifications, we take each recovery position into consideration, i.e., we calculate the _Recovery Rate_ of each attacked token rather than of the fully recovered sentence, and we abbreviate this metric as the _TR_ (**T**oken-level **R**ecovery) rate. To unify the evaluation settings, we also build corresponding datasets which contain a fixed number of errors per sentence. Specifically, based on the _Mapping & Rules_ substitution operation mentioned in Section 4.1.3, we perturb a fixed number of positions in each sentence by ranking the generation probabilities, selecting the tokens with lower scores and recording the positions \(P\). ## 5 Experiments We conduct experiments on both the original testing sets and the attack sets. We first present the necessary details about the training sets and evaluation metrics, followed by a description of the baselines and the concrete implementation settings of all models. We then give the evaluation results on the original testing data and the attack sets. ### Datasets and Evaluations In this section, we summarize the training data in Section 5.1.1, introduce the construction details for the attack data in Section 5.1.2, and describe our evaluation metrics in Section 5.1.3. #### 5.1.1 Training Datasets The first and second groups of Table 2 describe all the datasets that are utilized in model training. Following the previous study (Omelianchuk et al., 2020), we leverage these datasets in two different training phases: Pre-Training. In this phase, we use 9M pseudo parallel sentences with synthetic errors (Awasthi et al., 2019)6 and pseudo-generated data (Kiyono et al., 2019) for pre-training. Specifically, for the pseudo-generated data, we follow the operation mentioned in (Kiyono et al., 2019) and use an unlabeled corpus from Gigaword7 (Graff et al., 2003) as the seed corpus. The setting of (\(\mu_{mask}\), \(\mu_{deletion}\), \(\mu_{insertion}\), \(\mu_{keep}\)) is (0.5, 0.15, 0.15, 0.2). We randomly sample 10000 sentences from the training data as the development set and pre-train the model for 15 epochs with a batch size of 9012 tokens. Based on the results on the development set, we select the best checkpoint for the following stages. Footnote 6: [https://drive.google.com/open?id=lbl5reJ-XhPFEEaPjv045M7w0yN-0XGOA](https://drive.google.com/open?id=lbl5reJ-XhPFEEaPjv045M7w0yN-0XGOA) \begin{table} \begin{tabular}{l c c c} \hline \hline **Dataset** & **\#Sentences** & **Errors (\%)** & **Usage** \\ \hline PIE-synthetic & 9,000,000 & 100.0 \% & Pre-training \\ Pseudo-generate & 15,000,000 & 100.0 \% & Pre-training \\ \hline Lang-8 & 1,102,868 & 51.1 \% & Fine-tuning \({}^{\dagger}\) \\ FCE & 34,490 & 62.6 \% & Fine-tuning \({}^{\dagger}\) \\ NUCLE & 57,151 & 38.2 \% & Fine-tuning \({}^{\dagger}\) \\ W&I+LOCNESS & 34,308 & 66.3 \% & Fine-tuning \({}^{\ddagger}\) \\ \hline W\&I-Dev (BEA-test) & 4,384 & - & Evaluation \\ CoNLL-2014 (test) & 1,312 & - & Evaluation \\ FCE-test & 2,695 & - & Evaluation \\ JFLEG & 1,951 & - & Evaluation \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of datasets used in our experiments. The top group shows the data for pre-training. The second group shows the data for fine-tuning. The third group shows the data for evaluation.
\(``\dagger"\) denotes data used in fine-tuning stage I, while \(``\dagger"\) refers to data used in both fine-tuning stage I and II. Fine-TuningDuring the fine-tuning phase, we use the official corpora from BEA-2019 shared task (Bryant et al., 2019)8 for fine-tuning, which comprises four datasets, i.e., Lang-8 Corpus of Learner English (Lang-8) (Mizumoto et al., 2011; Tajiri et al., 2012), National University of Singapore Corpus of Learner English (NUCLE) (Dahlmeier et al., 2013), the First Certificate in English (FCE) (Yannakoudakis et al., 2011), and Cambridge English Write & Improve + LOCNESS Corpus (W&I+LOCNESS) (Granger, 1998; Yannakoudakis et al., 2018). We split out validation data by random sampling from the official training corpora with a ratio of 2/98 and decompose the fine-tuning phase into two stages. In stage I, the model is fine-tuned on errorful-only sentences. In stage II, the model is tuned on a high-quality and more realistic dataset as in (Kiyono et al., 2019; Omelianchuk et al., 2020). In each stage, we set the max training epoch as 20 and select the best checkpoint according to the result on the validation set. Footnote 8: [https://www.cl.cam.ac.uk/research/nl/bea2019st/](https://www.cl.cam.ac.uk/research/nl/bea2019st/) #### 5.1.2 Attack Datasets To evaluate the defense capability of different models, we utilize four perturbation methods mentioned in Section 4.1.3 for the official evaluation data (the third group in Table 2) to generate corresponding attack sets and exploit the obtained attack sets for testing. Specifically, for the back-translation strategy, we utilize the PIE-synthetic dataset for training the model and the training details are listed in Table 3. Moreover, to report the _Recovery Rate_, i.e., _TR_ and _SR_, we stipulate the number of errors in each sentence from one to three and build three corresponding variants of CoNLL-2014 evaluation set 9 by applying the method mentioned in Section 4.3. The datasets created for calculating the _Recovery Rate_ are marked as \(\mathrm{ATK}_{i}\), where \(i\in\{1,2,3\}\) represents for the number of attack positions per sentence. Footnote 9: We ignore the sentences in the test set which are shorter than two, e.g., one single quotation mark, both in the perturbation and the calculation procedure. #### 5.1.3 Evaluations For official evaluation data (original testing sets) and attack datasets build by ourselves (attack datasets), we utilize different evaluation metrics shown below: Original Testing SetsWe report results on the official evaluation datasets of BEA, CoNLL-2014 (Ng et al., 2014), FCE, and JFLEG (Napoles et al., 2017). We measure the results of CoNLL-2014 and FCE by \(M^{2}\) scorer (Dahlmeier & Ng, 2012). For JFLEG results, we use the GLEU metric (Napoles et al., 2015, 2016). We report the scores measured by ERRANT (Bryant et al., 2017; Felice et al., 2016) for BEA-test. As the reference of the BEA-test is unavailable, we report results from CoaLab10. Footnote 10: [https://competitions.codalab.org/competitions/20228](https://competitions.codalab.org/competitions/20228) \begin{table} \begin{tabular}{l l} \hline \hline Model architecture & transformer\_vaswani\_wmt\_en\_de\_big Vaswani et al. 
(2017) \\ Number of epochs & 25 \\ Batch Size (Tokens) & 9012 \\ Max Sentence Length & 512 \\ Optimizer & Adam Kingma \& Ba (2014) \\ Adam setting & (\(\beta_{1}\)=0.9, \(\beta_{2}\)=0.98) \\ Learning rate & 3e-4 \\ Lr-Scheduler & inverse\_sqrt \\ Warm-up & 4000 \\ Dropout & 0.1 \\ Loss function & label smoothed cross-entropy Szegedy et al. (2016) \\ Label Smoothing & 0.1 \\ \hline \hline \end{tabular} \end{table} Table 3: Hyper-parameters of the back-translation model. The whole training strategy follows the pre-training and fine-tuning paradigm: we utilize the PIE-synthetic dataset in the pre-training stage and the Lang-8 dataset in the fine-tuning stage. We split off validation data with a ratio of 2/98 for model selection. Attack Sets. As each variant of the attack set is constructed from the original test set, we leverage the same metrics to calibrate model robustness, i.e., the \(M^{2}\) scorer, the GLEU metric, and ERRANT. We also report the _TR_ and _SR_ scores for the \(\mathrm{ATK}_{i}\) (\(i\in\{1,2,3\}\)) sets. ### Baselines and Settings Note that our proposed CSA method is a post-training strategy, which can be utilized upon any neural GEC model. We leverage seven cutting-edge models as our baselines to conduct experiments under the supervised setting. To make sure all the baselines are well fine-tuned, if there exists a publicly available checkpoint for a baseline model, we use it directly; otherwise, we follow the original settings to train the model ourselves.11 As for the gradient-based adversarial attack, we set \(\tau\) as 1 and \(\gamma\) as 0.5. Specifically, we carry out experiments on four _Seq2Seq_ models, including **Transformer** (Kiyono et al., 2019)12, **BERT-fuse** (Kaneko et al., 2020)13, **BART**-large (Katsumata and Komachi, 2020)14, the **LM-Critic** method (Yasunaga et al., 2021)15, and three _Seq2Edit_ model variants (**RoBERTa, BERT, XLNet**) based on large-scale pre-trained language models in GECToR (Omelianchuk et al., 2020)16. Footnote 11: To ensure that the baseline models are fully fine-tuned, we conduct experiments on the original testing datasets and compare with the results reported in the original papers. Detailed results are presented in Section 5.3.1. Footnote 12: [https://github.com/butsuigiri/gec-pseudodata](https://github.com/butsuigiri/gec-pseudodata) Footnote 13: [https://github.com/bert-nmt/bert-nmt](https://github.com/bert-nmt/bert-nmt) Footnote 14: [https://github.com/Katsumata420/generic-pretrained-GEC](https://github.com/Katsumata420/generic-pretrained-GEC) Footnote 15: We apply LM-Critic under the unsupervised setting, and the basic model architecture is the Transformer. The implementation is available at [https://github.com/michiyasunaga/LM-Critic](https://github.com/michiyasunaga/LM-Critic) Footnote 16: [https://github.com/grammarly/gector](https://github.com/grammarly/gector) Besides, to show the superiority of our CSA method, we utilize one gradient-based defence method (Yasunaga et al., 2017) for comparison. Following the previous work, we inject noise into the embeddings of each model during the training stage. More details are shown in Appendix D. As for the CSA method, we set the maximum number of cycles \(\epsilon=5\) and the patience \(\mathcal{P}=2\). If the model performance does not improve over two consecutive cycles, the training process is stopped.
Note that we use the same official development set throughout the fine-tuning process of the baselines and the cycle training process of our CSA method17 for checking training convergence. During the post-training stage, all hyper-parameter settings are the same as for the baselines. Footnote 17: [https://www.cl.cam.ac.uk/research/nl/bea2019st/](https://www.cl.cam.ac.uk/research/nl/bea2019st/) ### Main Results We report three sets of results, including the performance on the original testing sets (Section 5.3.1), the performance on the attack sets (Section 5.3.2), and the _Recovery Rate_ values on the \(\mathrm{ATK}_{i}\) (\(i\in\{1,2,3\}\)) sets (Section 5.3.3). For each baseline, we compare the performance with two CSA variants, i.e., \(y^{\prime}\mapsto y\) and \(x_{h}\mapsto y\), and one gradient-based method (\(\nabla_{ATK}\)). Moreover, to explore whether our proposed method can be adapted to non-English GEC tasks, we also conduct some experiments on the Chinese GEC task and report the results in Appendix D. #### 5.3.1 GEC Results We first present the experimental results on the four original testing sets to calibrate the influence of our proposed CSA on the baseline models; the detailed numbers are shown in Table 4. It can be seen that with several additional cycles, our proposed CSA method yields impressive performance improvements on four baselines, i.e., Transformer, BERT-fuse, BART, and LM-Critic. Specifically, the improvement is about two points for the supervised setting (Transformer, BERT-fuse, BART). For the unsupervised setting (LM-Critic), the improvement is dramatic, e.g., 19.2 points for the BEA test set. For the remaining three strong baselines, our proposed method achieves nearly comparable results18. Footnote 18: Our method achieves better performance on six out of twelve settings, where each setting refers to the combination of a specific baseline and a dataset, e.g., CSA achieves a higher F.0.5 score than the BERT model on the JFLEG dataset. #### 5.3.2 Attack Results The detailed evaluation results on the four different attack sets are given in Appendix A. In Table 5, we report the average evaluation scores on the attack sets. Recall that we construct four variants of attack sets with different methods for each original test set; to better show the effectiveness of our proposed CSA method, we report the results averaged over the four attack sets for each original test set. It can be observed that our proposed CSA method yields robustness improvements on all baseline methods. In particular, our CSA method leads to improvements on all attack sets, and there is a dramatic increase of the average score under the \(y^{\prime}\mapsto y\) setting, e.g., improvements of 4.9 (\(F_{0.5}\)) points over BERT-fuse and 5.1 (\(F_{0.5}\)) points over the BART model on the CoNLL-2014 attack sets. However, it is worth noting that the original _Seq2Edit_ models all perform better on the attack sets than the _Seq2Seq_ models, and can even compete with some _Seq2Seq_ models that are reinforced by the CSA method.
\begin{table} \begin{tabular}{l|c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**BEA (ERRANT)**} & \multicolumn{3}{c|}{**CoNLL-2014** (\(M^{2}\))} & \multicolumn{3}{c|}{**FCE (\(M^{2}\))**} & **JELEG** \\ \cline{2-11} & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **GLEU** \\ \hline Transformer & 65.5 & 59.4 & 64.2 & 68.9 & 43.9 & 61.8 & 59.4\({}^{*}\) & 39.5\({}^{*}\) & 54.0\({}^{*}\) & 59.7 \\ \(y^{{}^{\prime}}\mapsto y\) (4 Cycles) & 69.6 & 64.7 & **68.6** & 69.5 & 49.5 & **64.3** & 63.2 & 43.3 & 57.9 & **62.7** \\ \(x_{h}\mapsto y\) (3 Cycles) & 67.9 & 64.6 & 67.2 & 69.0 & 50.1 & 64.2 & 63.0 & 43.9 & **58.0** & 61.8 \\ \(\nabla_{ATK}\) (2 Cycles) & 66.4 & 62.4 & 65.6 & 69.0 & 48.7 & 63.7 & 61.1 & 42.5 & 56.2 & 61.2 \\ \hline BERT-fuse & 67.1 & 60.1 & 65.6 & 69.2 & 45.6 & 62.6 & 59.8 & 46.9 & 56.7 & 61.3 \\ \(y^{{}^{\prime}}\mapsto y\) (3 Cycles) & 68.9 & 64.5 & **68.0** & 69.4 & 49.8 & **64.4** & 64.4 & 46.6 & **59.9** & **62.5** \\ \(x_{h}\mapsto y\) (2 Cycles) & 68.1 & 64.0 & 67.2 & 67.5 & 49.4 & **62.9** & 63.7 & 46.9 & 59.4 & 62.0 \\ \(\nabla_{ATK}\) (1 Cycle) & 68.1 & 62.6 & 66.9 & 68.1 & 50.4 & 63.6 & 61.4 & 43.1 & 56.6 & 61.4 \\ \hline BART & 68.3 & 57.1 & 65.6 & 69.3 & 45.0 & 62.6 & 59.6\({}^{*}\) & 40.3\({}^{*}\) & 54.4\({}^{*}\) & 57.3 \\ \(y^{{}^{\prime}}\mapsto y\) (2 Cycles) & 70.9 & 61.9 & **68.9** & 70.4 & 46.7 & **63.9** & 65.2 & 34.4 & **55.3** & **59.4** \\ \(x_{h}\mapsto y\) (1 Cycle) & 66.8 & 61.2 & 65.6 & 66.3 & 45.7 & 60.8 & 60.1 & 39.8 & 54.5 & 58.6 \\ \(\nabla_{ATK}\) (2 Cycles) & 73.4 & 56.5 & 69.2 & 71.6 & 42.6 & 63.0 & 62.9 & 34.7 & 54.1 & 57.4 \\ \hline LM-Critic & 51.6 & 24.7 & 42.4 & 64.4 & 35.6 & 55.5 & 49.6\({}^{*}\) & 24.6\({}^{*}\) & 41.2\({}^{*}\) & 51.4\({}^{*}\) \\ \(y^{{}^{\prime}}\mapsto y\) (2 Cycles) & 68.4 & 61.6 & **66.9** & 65.7 & 47.4 & **61.0** & 58.0 & 39.6 & **53.1** & **59.1** \\ \(x_{h}\mapsto y\) (1 Cycle) & 69.2 & 49.6 & 64.2 & 64.8 & 36.4 & 56.1 & 59.9 & 30.7 & 50.3 & 55.1 \\ \(\nabla_{ATK}\) (1 Cycle) & 63.2 & 56.0 & 61.6 & 61.9 & 42.4 & 56.7 & 57.0 & 35.3 & 50.7 & 56.6 \\ \hline BERT & 71.5 & 55.7 & **67.6** & 72.1 & 42.0 & **63.0** & 66.2\({}^{*}\) & 42.0\({}^{*}\) & **59.4\({}^{*}\)** & 57.5\({}^{*}\) \\ \(y^{{}^{\prime}}\mapsto y\) (1 Cycle) & 67.7 & 57.2 & 65.3 & 70.0 & 44.3 & 62.3 & 64.0 & 43.1 & 58.3 & 57.8 \\ \(x_{h}\mapsto y\) (1 Cycle) & 66.4 & 56.0 & 64.0 & 68.7 & 42.8 & 61.3 & 63.8 & 42.8 & 58.1 & **58.4** \\ \(\nabla_{ATK}\) (1 Cycle) & 66.1 & 57.8 & 64.3 & 66.3 & 44.5 & 60.4 & 57.9 & 36.5 & 51.8 & 57.5 \\ \hline RoBERTa & 68.4 & 60.8 & 66.8 & 68.7 & 47.2 & **62.9** & 61.6\({}^{*}\) & 45.3\({}^{*}\) & 57.5\({}^{*}\) & 59.1\({}^{*}\) \\ \(y^{{}^{\prime}}\mapsto y\) (1 Cycle) & 68.8 & 60.3 & **66.9** & 68.0 & 46.9 & 62.4 & 62.7 & 44.8 & **58.0** & 58.6 \\ \(x_{h}\mapsto y\) (1 Cycle) & 66.2 & 60.4 & 64.9 & 66.3 & 47.7 & 61.5 & 61.4 & 44.8 & 57.2 & **59.2** \\ \(\nabla_{ATK}\) (1 Cycle) & 64.2 & 61.7 & 63.7 & 64.4 & 49.1 & 60.6 & 56.2 & 40.1 & 52.0 & 59.0 \\ \hline XLNet & 79.2 & 53.9 & **72.4** & 77.5 & 40.1 & **65.3** & 71.9\({}^{*}\) & 41.3\({}^{*}\) & 62.7\({}^{*}\) & 56.0\({}^{*}\) \\ \(y^{{}^{\prime}}\mapsto y\) (1 Cycle) & 77.8 & 55.0 & 71.8 & 75.3 & 41.6 & 64.8 & 71.5 & 42.7 & **63.1** & 56.5 \\ \(x_{h}\mapsto y\) (1 Cycle) & 65.9 & 62.7 & 65.3 & 66.7 & 48.6 & 62.1 & 64.5 & 51.1 & 61.3 & **60.1** \\ \(\nabla_{ATK}\) (1 Cycle) & 67.9 & 60.0 & 66.1 & 67.2 & 47.5 & 62.0 & 59.4 & 38.5 & 53.6 & 59.0 \\ \hline \hline 
\end{tabular} \end{table} Table 4: Evaluation results on the original testing data. The numbers labeled with “*” refer to the results tested by ourselves with the released checkpoints from the original papers, while all the other numbers are copied from the original papers. We report the performance of two variants of regularization data for each model and the corresponding best cycle times, where \(\mapsto\) represents the direction of data flow. The bold fonts mark the best performance in each comparison. Note that all the non-CSA baselines here are fine-tuned on the high-quality fine-tuning set (except for LM-Critic, which is an unsupervised method). CSA and its variants are trained from the same checkpoints as the baselines. #### 5.3.3 Recovery Rate Results The results of the _Recovery Rate_ are shown in Table 6. We report the _TR_ (token-level recovery rate) and _SR_ (sentence-level recovery rate) for each model. It can be observed that the performance of the _Seq2Seq_ models (Transformer, BERT-fuse, BART and LM-Critic) is obviously better than that of the _Seq2Edit_ models (i.e., RoBERTa, BERT, and XLNet). We can also see that, with an increasing number of attack positions, the results of both _TR_ and _SR_ decrease considerably, and the improvement of the CSA method diminishes. Although the performance degradation of the _Seq2Edit_ models is not extensive in magnitude, their slight improvement and unstable performance attract our interest, and we dig into the reasons behind this subsequently. ## 6 Analysis and Discussion In this section, we conduct extensive studies from different perspectives to better understand our **CSA** method and the regularization data. Since the most important merit of our CSA method is its simplicity and effectiveness in improving robustness against multiple types of attack, we first compare the defense capability of **CSA** with a recently proposed defense method for the GEC task (Wan et al., 2020) in Section 6.2. Then, we conduct experiments to study the effects of self-augmenting and hard samples in Section 6.3, followed by hyper-parameter analysis in Section 6.4.
Afterward, we conduct \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**BEA** (ERRANT)} & \multicolumn{2}{c|}{**CoNLL-2014** (\(M^{2}\))} & \multicolumn{2}{c|}{**FCE (\(M^{2}\))} & \multicolumn{2}{c}{**JFLEG**} \\ \cline{2-11} & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **GLEU** \\ \hline Transformer & 21.0 & 48.0 & 23.4 & 34.1 & 39.7 & 34.9 & 29.7 & 34.2 & 30.3 & 45.4 \\ \(y^{\prime}\mapsto y\) (4 Cycles) & 23.9 & 53.3 & 26.6 & 37.6 & 45.5 & 38.8 & 32.5 & 38.8 & 33.4 & 46.4 \\ \(x_{h}\mapsto y\) (3 Cycles) & 23.8 & 53.6 & **26.5** & 38.2 & 45.9 & **39.4** & 32.5 & 38.8 & **33.8** & **47.2** \\ \(\nabla_{ATK}\) (2 Cycles) & 22.9 & 51.8 & 25.6 & 34.9 & 39.7 & 35.7 & 30.1 & 35.2 & 30.6 & 44.8 \\ \hline BERT-fuse & 20.4 & 46.1 & 22.6 & 33.5 & 38.2 & 34.1 & 31.0 & 34.5 & 31.4 & 45.4 \\ \(y\mapsto y\) (3 Cycles) & 23.9 & 53.7 & 26.6 & 37.9 & 45.4 & 39.0 & 33.7 & 39.9 & 34.6 & 47.0 \\ \(x_{h}\mapsto y\) (2 Cycles) & 23.8 & 53.6 & 26.5 & 37.4 & 45.5 & 38.7 & 33.9 & 40.6 & **34.9** & **47.2** \\ \(\nabla_{ATK}\) (1 Cycles) & 24.5 & 54.8 & 27.3 & 38.6 & 46.1 & 39.7 & 33.1 & 39.5 & 33.9 & 47.3 \\ \hline BART & 20.9 & 44.7 & 23.0 & 34.5 & 38.8 & 35.0 & 30.1 & 31.5 & 30.0 & 43.8 \\ \(y^{\prime}\mapsto y\) (2 Cycles) & 25.0 & 53.9 & 27.7 & 39.1 & 46.0 & 40.1 & 32.1 & 37.5 & 32.9 & 45.8 \\ \(x_{h}\mapsto y\) (1 Cycle) & 23.4 & 51.4 & 25.9 & 36.7 & 43.4 & 37.7 & 31.8 & 36.3 & 32.3 & 45.1 \\ \(\nabla_{ATK}\) (2 Cycles) & 23.8 & 50.8 & 26.2 & 36.5 & 43.3 & 37.6 & 31.0 & 32.6 & 30.8 & 44.6 \\ \hline LM-Critic & 18.6 & 39.0 & 20.5 & 34.5 & 35.9 & 34.5 & 23.6 & 24.7 & 23.5 & 41.1 \\ \(y\mapsto y\) (2 Cycles) & 24.5 & 52.1 & 27.1 & 41.1 & 46.0 & 41.8 & 31.9 & 36.3 & 32.5 & 46.1 \\ \(x_{h}\mapsto y\) (1 Cycle) & 20.6 & 43.3 & 22.6 & 31.7 & 35.3 & 32.1 & 28.2 & 29.4 & 28.1 & 42.9 \\ \(\nabla_{ATK}\) (1 Cycles) & 22.8 & 41.3 & 24.5 & 36.3 & 37.7 & 36.3 & 30.0 & 32.7 & 30.2 & 44.1 \\ \hline BERT & 23.1 & 50.2 & 25.7 & 35.6 & 42.9 & 37.4 & 33.2 & 39.4 & 34.2 & 45.7 \\ \(y\mapsto y\) (1 Cycle) & 23.4 & 51.3 & 26.1 & 37.4 & 41.9 & 38.0 & 33.7 & 40.1 & 34.6 & 45.8 \\ \(x_{h}\mapsto y\) (1 Cycle) & 22.0 & 47.8 & 24.4 & 35.2 & 40.4 & 35.9 & 32.5 & 37.5 & 33.2 & 45.3 \\ \(\nabla_{ATK}\) (1 Cycles) & 23.8 & 50.8 & 26.3 & 36.8 & 43.0 & 37.7 & 30.2 & 35.2 & 31.0 & 45.6 \\ \hline RoBERTa & 24.8 & 52.4 & 27.4 & 38.2 & 44.1 & 39.0 & 33.9 & 39.9 & 34.8 & 46.3 \\ \(y^{\prime}\mapsto y\) (1 Cycle) & 25.0 & 52.7 & 27.6 & 38.7 & 44.6 & 39.5 & 34.3 & 40.3 & 35.1 & 46.5 \\ \(x_{h}\mapsto y\) (1 Cycle) & 24.7 & 52.6 & 27.3 & 38.5 & 45.0 & 39.4 & 34.0 & 40.2 & 34.9 & 46.4 \\ \(\nabla_{ATK}\) (3 Cycles) & 25.0 & 52.5 & 27.6 & 38.2 & 44.6 & 39.6 & 32.2 & 37.0 & 32.9 & 46.3 \\ \hline XLNet & 25.7 & 54.6 & 28.4 & 38.9 & 46.6 & 40.1 & 36.5 & 44.9 & 37.7 & 47.3 \\ \(y^{\prime}\mapsto y\) (2 Cycles) & 25.8 & 54.8 & 28.5 & 39 & 46.6 & 40.1 & 36.6 & 44.9 & 37.9 & 47.5 \\ \(x_{h}\mapsto y\) (1 Cycle) & 25.0 & 53.0 & 27.6 & 38.1 & 45.0 & 39.1 & 36.3 & 44.0 & 37.4 & 47.3 \\ \(\nabla_{ATK}\) (1 Cycles) & 27.2 & 50.0 & 28.3 & 38.9 & 45.8 & 39.8 & 32.5 & 36.5 & 33.1 & 46.5 \\ \hline \hline \end{tabular} \end{table} Table 5: The average of evaluation results on four attack sets and each test set corresponds to four variants for attack, including _Mapping & Rules_, _Synonyms_, _Back-Translation_ and _Antonym_. 
We report the performance of two variants of regularization data for each model and the corresponding best cycle times, where \(\mapsto\) represents the direction of data flow. The bold fonts indicate the optimal performance in each comparison. a case study of regularization data, elaborate on the influence brought by the two data components \(X_{unl}\) and \(X_{unc}\) decomposed from the regularization data, and try different variations of our **CSA** method in Section 6.5. Finally, we analyze the reason behind the slight improvement of the _Seq2Edits_ framework in Section 6.5.3. These studies are mainly conducted on the CoNLL-2014 dataset, with the attack set constructed by the Mapping & Rules method unless otherwise stated, and all the experiments are launched on the Transformer. ### Large Language Models Robustness We evaluate the robustness of GPT-3.5 (text-davinci-003) using the original CoNLL-2014 testing data and three types of its attack sets. Given that the format and the exact wording of GPT-3.5's prompts can significantly impact task performance, we adopt the zero-shot and few-shot prompt settings proposed by previous work (Coyne & Sakaguchi, 2023), which perform best in prompt engineering experiments. The results, listed in Table 7, indicate that GPT-3.5 suffers from a significant decrease in performance on the attack sets, e.g., from 34.8 to 28.4 on the Vector-Based set in the few-shot setting. Despite being trained on a vast amount of data, GPT-3.5 is still susceptible to a reduction in its robustness. ### Defense Capability Comparison We compare our CSA with the previous adversarial training method (Wan et al., 2020), which adds Mapping & Rules attack samples into the training set. Figure 5 presents the evaluation results on four test sets under the four types of adversarial attack mentioned above.
Our CSA has better defense capability than the baseline model under three types of attack and achieves comparable results under the Mapping & Rules attack. In other words, our CSA can achieve competitive results with the previous defense method, which uses many well-crafted adversarial examples for training, while for the other attacks our CSA achieves much better model robustness without specifically designed adversarial training examples. These results demonstrate the effectiveness and generalization ability of our method. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**ATK 1 (\(\uparrow\))**} & \multicolumn{2}{c|}{**ATK 2 (\(\uparrow\))**} & \multicolumn{2}{c}{**ATK 3 (\(\uparrow\))**} \\ \cline{2-7} & _TR(\%)_ & _SR(\%)_ & _TR(\%)_ & _SR(\%)_ & _TR(\%)_ & _SR(\%)_ \\ \hline Transformer & 32.6 & 32.6 & 23.5 & 13.3 & 16.1 & 4.1 \\ \(y^{\prime}\mapsto y\) (4 Cycles) & **37.7** & **37.7** & 26.7 & 15.9 & **18.7** & **5.4** \\ \(x_{h}\mapsto y\) (4 Cycles) & 35.6 & 35.6 & **28.2** & **17.6** & 18.2 & 5.4 \\ \hline BERT-fuse & 33.6 & 33.6 & 25.0 & 15.0 & 15.1 & 3.7 \\ \(y^{\prime}\mapsto y\) (3 Cycles) & **36.0** & **36.0** & **26.6** & **15.4** & **17.9** & **4.9** \\ \(x_{h}\mapsto y\) (2 Cycles) & 35.1 & 35.1 & 26.5 & 15.4 & 17.8 & 4.9 \\ \hline BART & 38.2 & 38.2 & 27.2 & 16.6 & 18.1 & 5.3 \\ \(y^{\prime}\mapsto y\) (2 Cycles) & **38.5** & **38.5** & **28.6** & **18.1** & **18.6** & **5.3** \\ \(x_{h}\mapsto y\) (2 Cycles) & 38.2 & 38.2 & 27.0 & 16.9 & 17.8 & 5.2 \\ \hline LM-critic & 34.5 & 34.5 & 23.5 & 12.9 & 15.8 & 4.6 \\ \(y^{\prime}\mapsto y\) (2 Cycles) & **34.9** & **34.9** & 24.4 & 14.4 & **16.4** & **5.0** \\ \(x_{h}\mapsto y\) (1 Cycle) & 34.5 & 34.5 & **26.0** & **15.8** & 16.1 & 4.2 \\ \hline RoBERTa & 36.9 & 36.9 & 26.5 & 16.1 & 16.9 & 4.9 \\ \(y^{\prime}\mapsto y\) (1 Cycle) & 36.9 & 36.9 & 25.3 & 15.1 & 16.6 & 4.9 \\ \(x_{h}\mapsto y\) (1 Cycle) & **37.1** & **37.1** & **26.9** & **16.3** & **17.9** & **6.0** \\ \hline BERT & **33.4** & **33.4** & 23.2 & 13.4 & 15.9 & 4.1 \\ \(y^{\prime}\mapsto y\) (1 Cycle) & 33.3 & 33.3 & **23.3** & **13.7** & **16.7** & **4.5** \\ \(x_{h}\mapsto y\) (1 Cycle) & 31.7 & 31.7 & 22.8 & 13.3 & 15.8 & 4.2 \\ \hline XLNet & 35.5 & 35.5 & 25.8 & 15.4 & **17.7** & **5.1** \\ \(y^{\prime}\mapsto y\) (2 Cycles) & **36.3** & **36.3** & **26.3** & **15.7** & 8.1 & 5.0 \\ \(x_{h}\mapsto y\) (2 Cycles) & 35.7 & 35.7 & 25.4 & 15.4 & 17.7 & 4.9 \\ \hline \hline \end{tabular} \end{table} Table 6: Recovery Rate for the \(\mathrm{ATK}_{i}\) (\(i\in\{1,2,3\}\)) sets. We report both the token-level recovery rate (_TR_) and the sentence-level recovery rate (_SR_); the cycle times are correlated with the results on the clean data (Table 4). We report the performance of two variants of regularization data for each model and the corresponding best cycle times, where \(\mapsto\) represents the direction of data flow. The bold fonts indicate the optimal performance in each comparison. ### The Effect of Self-Augmenting and Hard Samples As mentioned before, there are different strategies during the self-augmenting process. One is to follow the self-distillation method, which combines each incorrect output \(y^{\prime}\) generated by the GEC model with the corresponding source data \(x\) from the original training datasets to post-train the model. The other strategy is to utilize \(D_{self}\) or \(D_{hard}\) as the new training set.
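The \(D_{hard}\) variant can be sketched in one line on top of the earlier helpers; as before, `correct` is an assumed stand-in for the model's inference call.

```python
def hard_samples(correct, data):
    """D_hard: original (x, y) pairs the current model still fails on (Section 3.1)."""
    return [(x, y) for x, y in data if correct(x) != y]
```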
To illustrate the advancement of our CSA method, we compare these training strategies by reporting the results on the original testing data and the number of augmenting pairs per cycle in Table 8. Afterward, to figure out the effect of the regularization data, we utilize the augmenting pairs \(D_{Aug}\) to retrain the model directly by applying only **Stage I** of the CSA method, and report the results in Table 9.

\begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Data**} & \multicolumn{3}{c|}{**Transformer**} & \multicolumn{3}{c|}{**Davinci-003 \(\dagger\)**} & \multicolumn{3}{c}{**Davinci-003 \(\ddagger\)**} \\ \cline{2-10} & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** \\ \hline Origin & 68.9 & 43.9 & 61.8 & 31.5 & 52.3 & 34.2 & 32.1 & 52.3 & 34.8 \\ Mapping \& Rules & 36.5 & 37.8 & 36.7 & 27.6 & 48.2 & 28.1 & 28.0 & 48.0 & 30.6 \\ Vector-Based & 26.5 & 36.0 & 28.0 & 27.6 & 48.2 & 30.2 & 25.9 & 46.6 & 28.4 \\ Back-Translation & 33.0 & 48.9 & 35.3 & 26.4 & 50.0 & 29.1 & 26.5 & 49.7 & 29.2 \\ \hline \hline \end{tabular} \end{table} Table 7: Experimental results on the CoNLL-2014 testing data and its correlated attack sets, where \(\dagger\) represents the zero-shot setting and \(\ddagger\) represents the few-shot setting.

Figure 5: Comparison between our **CSA** and the adversarial training method on four different attack sets. In each sub-figure, the orange squares represent the traditional defence method, while the blue ones represent our **CSA** method.

\begin{table} \begin{tabular}{c|c|c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Strategies**} & \multirow{2}{*}{**\#Cycles**} & \multicolumn{3}{c|}{**\#Pairs**} & \multicolumn{3}{c}{**CoNLL-2014 (\(M^{2}\))**} \\ \cline{3-10} & & & & & **Prec.** & **Rec.** & **F.0.5** \\ \hline \multirow{3}{*}{\(x\mapsto y^{{}^{\prime}}\)} & 0 & 625,467 & 68.9 & 43.9 & 61.8 \\ & 1 & 511,006 & 66.5 & 51.1 & 62.6 \\ & 2 & 436,229 & 65.2 & 46.3 & 60.3 \\ \hline \multirow{3}{*}{\(y^{{}^{\prime}}\mapsto y\)} & 0 & 625,467 & 68.9 & 43.9 & 61.8 \\ & 1 & 506,572 & 67.2 & 49.4 & 62.6 \\ & 2 & 263,993 & 68.3 & 50.3 & 63.8 \\ \hline \multirow{3}{*}{\(x_{h}\mapsto y\)} & 0 & 625,467 & 68.9 & 43.9 & 61.8 \\ & 1 & 24,213 & 68.0 & 50.9 & 63.7 \\ \cline{1-1} & 2 & 466,295 & 68.6 & 50.1 & 63.9 \\ \hline \hline \end{tabular} \end{table} Table 8: Results of two strategies for self-augmenting. The first group refers to the strategy of self-distillation, using failed pairs (\(x\mapsto y^{\prime}\)) to re-train the model. The second and third groups correspond to our strategy of using the model outputs to construct (\(y^{\prime}\mapsto y\)) pairs and using hard samples to construct (\(x_{h}\mapsto y\)) pairs.
\begin{table} \begin{tabular}{l|c c|c c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**CoNLL (CLN)**} & \multicolumn{3}{c|}{**CoNLL (ATK)**} & \multicolumn{3}{c|}{**ATK 1**} & \multicolumn{3}{c|}{**ATK 2**} & \multicolumn{3}{c}{**ATK 3**} \\ \cline{2-13} & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** & **TR** & **SR** & **TR** & **SR** & **TR** & **SR** \\ \hline Transformer & 68.9 & 43.9 & 61.8 & 34.1 & 39.7 & 34.9 & 32.6 & 32.6 & 23.5 & 13.3 & 16.1 & 4.1 \\ \(y\mapsto y\) (CSA) & 69.5 & 49.5 & 64.3 & 37.7 & 45.5 & 38.9 & **37.7** & **37.7** & 26.7 & 15.9 & 18.7 & 5.4 \\ \(x_{h}\mapsto y\) (CSA) & 69.0 & 50.1 & 64.2 & 38.1 & 45.6 & 39.2 & 35.6 & 35.6 & **28.2** & **17.6** & 18.2 & 5.4 \\ \(x_{h}\mapsto y\) (DIR) & 69.4 & 50.8 & **64.6** & 38.6 & 46.1 & **39.7** & 35.9 & 35.9 & 27.2 & 16.1 & **18.3** & **5.8** \\ \hline BERT-fuse & 69.2 & 45.6 & 62.6 & 33.5 & 38.2 & 34.1 & 33.6 & 33.6 & 25.0 & 15.0 & 15.1 & 3.7 \\ \(y\mapsto y\) (CSA) & 69.4 & 49.8 & **64.4** & 37.9 & 45.5 & **39.0** & **36.0** & **36.0** & 26.6 & 15.4 & 17.9 & 4.9 \\ \(x_{h}\mapsto y\) (CSA) & 67.5 & 49.4 & 62.9 & 37.4 & 45.5 & 38.7 & 35.1 & 35.1 & 26.5 & 15.4 & 17.8 & 4.9 \\ \(x_{h}\mapsto y\) (DIR) & 67.2 & 50.2 & 62.9 & 37.7 & 45.9 & 38.9 & 35.2 & 35.2 & **26.9** & **15.7** & **18.3** & **5.1** \\ \hline BART & 69.3 & 45.0 & 62.6 & 34.5 & 38.8 & 35.0 & 38.2 & 38.2 & 27.2 & 16.6 & 18.1 & 5.3 \\ \(y^{\prime}\mapsto y\) (CSA) & 70.5 & 46.7 & **64.0** & 39.1 & 46.1 & **40.1** & **38.5** & **38.5** & **28.6** & **18.1** & 18.6 & 5.3 \\ \(x_{h}\mapsto y\) (CSA) & 66.3 & 45.7 & 60.8 & 36.7 & 43.4 & 37.7 & 38.2 & 38.2 & 27.0 & 16.9 & 17.8 & 5.2 \\ \(x_{h}\mapsto y\) (DIR) & 65.1 & 47.4 & 60.6 & 37.6 & 45.1 & 37.5 & 39.2 & 39.2 & 27.8 & 17.7 & **19.2** & **5.9** \\ \hline \hline \multirow{2}{*}{\(y^{{}^{\prime}}\)-critic} & 64.4 & 35.6 & 55.5 & 34.5 & 35.9 & 34.5 & 34.5 & 34.5 & 23.5 & 12.9 & 15.8 & 4.6 \\ \(y\mapsto y\) (CSA) & 65.7 & 47.4 & **61.0** & 41.1 & 46.1 & **41.8** & 34.9 & **34.9** & 24.4 & 14.4 & 16.4 & **5.0** \\ \(x_{h}\mapsto y\) (CSA) & 64.8 & 36.4 & 56.1 & 31.7 & 35.3 & 32.1 & 34.5 & 34.5 & 26.0 & **15.8** & 16.1 & 4.2 \\ \(x_{h}\mapsto y\) (DIR) & 58.5 & 35.8 & 51.9 & 31.0 & 35.1 & 31.5 & 34.3 & 34.3 & 24.4 & 14.4 & 16.1 & 5.0 \\ \hline BERT & 72.1 & 42.0 & **63.0** & 35.6 & 42.9 & 37.4 & **33.4** & **33.4** & **23.2** & 13.4 & 15.9 & 4.1 \\ \(y^{{}^{\prime}}\mapsto y\) (CSA) & 70.0 & 44.3 & 62.3 & 37.3 & 41.8 & **38.3** & 33.3 & 33.3 & **23.3** & **13.7** & **16.7** & **4.5** \\ \(x_{h}\mapsto y\) (CSA) & 68.7 & 42.8 & 61.3 & 35.2 & 40.4 & 35.9 & 31.7 & 31.7 & 22.8 & 13.3 & 15.8 & 4.2 \\ \(x_{h}\mapsto y\) (DIR) & 68.0 & 43.6 & 61.2 & 35.6 & 41.2 & 36.3 & 32.6 & 32.6 & 23.4 & 13.5 & 16.2 & 4.4 \\ \hline RoBERTa & 68.7 & 47.2 & **62.9** & 38.2 & 44.1 & 39.0 & 36.9 & 36.9 & 26.5 & 16.1 & 16.9 & 4.9 \\ \(y^{{}^{\prime}}\mapsto y\) (CSA) & 68.0 & 46.9 & 62.4 & 38.7 & 44.5 & **39.5** & **36.9** & **36.9** & 25.3 & 15.1 & 16.6 & 4.9 \\ \(x_{h}\mapsto y\) (CSA) & 66.3 & 47.7 & 61.5 & 38.5 & 45.0 & 39.4 & 37.1 & 37.1 & **26.9** & **16.3** & 17.9 & 6.0 \\ \(x_{h}\mapsto y\) (DIR) & 66.4 & 47.5 & 61.5 & 38.5 & 44.8 & 39.4 & 36.5 & 36.5 & 26.8 & 15.9 & **18.1** & **6.0** #### 6.3.1 Effect of Self-Augmenting We find that the performance of the baseline model can be improved in the first training cycle but will decrease after the second cycle under the self-distillation setting. 
As for our introduced strategy in the self-augmenting process, the model performance rises continuously over two training cycles, with fewer training pairs. This is because the GEC model is sensitive to the golden data, and incorrect targets will make the model believe that the syntactically incorrect parts are "correct". This also confirms the above statement that the goal of the self-distillation method is contrary to the GEC task. #### 6.3.2 Effect of Regularization Data We can conclude that the over-fitting problem of the GEC model hardly comes up when simply post-training the model with hard samples, since the quantity of data is sufficient. Specifically, for the Transformer model, we can observe that the hard samples can simultaneously improve the performance on the original testing sets and the attack sets. Three out of five experiments (**CoNLL(CLN)**, **CoNLL(ATK)**, **ATK 3**) indicate that the improvement brought by directly utilizing hard samples for training can even surpass the CSA method. However, in terms of the overall results, the effects brought by regularization data are more prominent. In short, on the one hand, we can infer that the success of our CSA method derives from the hard samples to some extent; on the other hand, the effects of regularization data go beyond hard samples. ### Hyper-Parameter Analysis In the aforementioned experiments, we have not yet analyzed whether the improvement comes from the small high-quality data \(\mathcal{D}_{tune}\) or the regularization data \(\mathcal{D}_{Reg}\), and the influence of the hyper-parameters \(\epsilon\) and \(\mathcal{P}\) remains to be discovered. As a result, we train the model by simply utilizing the original training data or the regularization data, and compare the performance under different settings of \(\epsilon\) in Section 6.4.1. Meanwhile, we strictly control the training steps and the data selection strategy to study the influence of the hyper-parameter \(\mathcal{P}\) in Section 6.4.2. In the end, we gradually reduce the proportion of the regularization data by controlling the reserving rate \(\nabla\) during the training step, and compare the performance of the GEC model on the original testing set and the attack test set simultaneously in Section 6.4.3. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline **\#Cycles** & **0** & **1** & **2** & **3** & **4** & **5** \\ \hline Ori Set & 61.8 & 56.2 & 57.0 & 56.5 & 57.1 & 55.2 \\ **+ Ours** & 61.8 & 62.5 & 62.6 & 62.7 & 62.9 & 62.7 \\ \hline Attack Set & 36.7 & 37.4 & 37.0 & 36.6 & 37.0 & 37.1 \\ **+ Ours** & 36.7 & 42.2 & 41.3 & 41.6 & 41.8 & 42.2 \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison with re-training the GEC model on \(\mathcal{D}\cup\mathcal{D}_{tune}\) for the same number of epochs as our CSA, where Ori Set means the original testing set. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{\(\mathcal{P}\)} & \multicolumn{3}{c|}{**CoNLL-2014**} & \multicolumn{3}{c}{**CoNLL-2014**\((ATK)\)} \\ \cline{2-7} & **Prec.** & **Rec.** & **F.0.5** & **Prec.** & **Rec.** & **F.0.5** \\ \hline - & 67.9 & 44.1 & 61.3 & 34.1 & 39.7 & 34.9 \\ 2 & 67.2 & 49.4 & 62.6 & 40.6 & 44.3 & 41.3 \\ 3 & 68.1 & 48.5 & 63.2 & 40.3 & 43.6 & 40.9 \\ 4 & 68.2 & 48.9 & 63.2 & 40.6 & 43.7 & 41.2 \\ 5 & 68.6 & 48.6 & 63.4 & 40.0 & 43.4 & 40.7 \\ 6 & 66.2 & 48.9 & 61.8 & 40.2 & 43.2 & 40.8 \\ \hline \hline \end{tabular} \end{table} Table 11: The influence of \(\mathcal{P}\) on model performance. “-” denotes the baseline, and \((ATK)\) refers to the attack set.
#### 6.4.1 The Influence of Threshold \(\epsilon\)

Table 10 presents the results of the model over different numbers of training cycles, which correlate with the setting of the threshold \(\epsilon\). With the increase of training cycles, we can observe that our CSA method surpasses the baseline in each cycle and brings a significant improvement on the attack set, i.e., around 5 points of improvement in Cycle 1, while also improving on the original testing set (except for a slight dip in Cycle 5). To explore whether the improvement in cycle training comes from the introduced small dataset \(\mathcal{D}_{tune}\), we train the baseline model on \(\mathcal{D}\cup\mathcal{D}_{tune}\) with the same training epochs as our CSA method. The dramatic decrease of performance along the training cycles proves that the improvement of robustness and of performance on the original testing set is not simply brought by \(\mathcal{D}_{tune}\) in the cycle training process.

#### 6.4.2 The Influence of Patience \(\mathcal{P}\)

Table 11 shows the performance of the model under different settings of the patience \(\mathcal{P}\). With the increase of \(\mathcal{P}\), the model achieves better performance on the original testing set over five cycles, and the robustness is unstable but much better than the baseline. It can be seen that \(\mathcal{P}=2\) is sufficient to achieve competitive performance on the original testing set and the best performance on the attack set, and it is also used as the standard setting in our implementations.

#### 6.4.3 The Influence of Reserving Rate

We launch a preliminary experiment to show the relationship between the quantity of regularization data and the model's performance. Table 12 presents the experimental results of removing different proportions of regularization data in the last training cycle. It can be seen that more regularization data improves the model robustness but suffers from decreased performance on the original testing data. This seemingly incompatible phenomenon corroborates previous work (Li et al., 2021) showing that there is a trade-off between the performance on the original data and the robustness of the model, and it also suggests the possibility of utilizing less augmented data to achieve better results.

### The Influence of Regularization Data

In this section, we conduct a thorough analysis of the impact of the two components \(X_{unl}\) and \(X_{unc}\) in the regularization data \(\mathcal{D}_{Reg}\). We select some cases of regularization examples and make a case study in Section 6.5.1. To address the question raised in Section 3.3 of why regularization data can improve the performance on both the attack set and the original testing set, we utilize \(X_{unl}\) and \(X_{unc}\) to fine-tune a plug-and-play GEC pre-trained model (transformer-big) to probe the latent capacity brought by the two data components in Section 6.5.2. Since there are some unexpected phenomena of the _Seq2Edit_ models, i.e., performing badly after post-training with regularization data, we analyze such phenomena and propose a set of paradigms for different models about how to utilize regularization data efficiently in Sections 6.5.3 and 6.5.4, respectively.

#### 6.5.1 Case Study

As mentioned in Section 3.3, we decompose the regularization data into two components \(X_{unl}\) and \(X_{unc}\), and utilize those datasets to fine-tune the GEC model directly.
Similar to the process of building \(\mathcal{D}_{Reg}\), we take the intersection among the regularization data sets of different cycles to extract \(X_{unl}\), which can be formally written as \(X^{k}_{unl}=\mathcal{D}^{1}_{Reg}\bigcap\mathcal{D}^{2}_{Reg}\cdots\bigcap\mathcal{D}^{k}_{Reg}\) (\(k\in\{2,3,4\}\)). As for \(X^{k}_{unc}\) of the \(k\)-th cycle, we first combine all the regularization data of the \(k\) cycles and subtract the corresponding \(X^{k}_{unl}\) set, which can be formally written as \(X^{k}_{unc}=\mathcal{D}^{1}_{Reg}\bigcup\mathcal{D}^{2}_{Reg}\cdots\bigcup\mathcal{D}^{k}_{Reg}-X^{k}_{unl}\) (\(k\in\{2,3,4\}\)); a code sketch of this decomposition is given in Section 6.5.4 below.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline **Reserving** & \multicolumn{3}{c|}{**CoNLL-2014**\((M^{2})\)} & \multicolumn{3}{c}{**CoNLL-2014**\((ATK)\)} \\ \cline{2-7} **Rates (\%)** & **P** & **R** & **F.0.5** & **P** & **R** & **F.0.5** \\ \hline 0 \% & 68.6 & 48.6 & 63.4 & 40.0 & 43.4 & 40.7 \\ 25 \% & 68.5 & 48.6 & 63.3 & 40.1 & 43.3 & 40.7 \\ 50 \% & 68.3 & 48.8 & 63.2 & 40.3 & 43.7 & 40.9 \\ 75 \% & 68.3 & 48.5 & 63.1 & 40.5 & 43.8 & 41.1 \\ 100 \% & 67.5 & 49.4 & 62.9 & 40.9 & 44.5 & 41.6 \\ \hline \hline \end{tabular} \end{table} Table 12: The influence of the amount of regularization data on model performance and defence capability.

Afterward, we select some representative cases from \(X_{unl}\) and \(X_{unc}\), and present them in Table 13, where the first three cases are \(X_{unl}\) cases, and the others are \(X_{unc}\) cases. We summarize five main types of regularization data and show some examples below:

1. The intermediate results are syntactically and semantically correct, while the edited positions are replaced by synonymous text. For example, in the first case, the regularization sentences have no syntactic error, and the two phrases "follow my suggestion" and "act on my suggestion" have the same meaning.
2. The target sentence is mislabeled, as in the second case, which converts "it" to "It". Such mislabeled samples impair the correction ability of the GEC model as they confuse the optimization goal, i.e., turning \(\mathit{Poor}\rightarrow\mathit{Good}\) into \(\mathit{Good}\rightarrow\mathit{Poor}\).
3. The intermediate results are syntactically correct but semantically incorrect. For example, in the third case, the goal is to correct the spelling of the word "frightened", but the regularization sentence changes the meaning of the original sentence from "being frightened" to "being free".

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{l}{**Examples of Regularization Data**} \\ \hline _Poor_: & I hope you ’ll attend my suggestions. \\ _Reg 1,2,3,4_: & I hope you ’ll follow my suggestions. \\ _Good_: & I hope you ’ll act on my suggestions. \\ \hline _Poor_: & So It was very detry. \\ _Reg 1,2,3,4_: & So it was very dirty. \\ _Good_: & So It was very dirty. \\ \hline _Poor_: & I was very frethend, but I knew, that I should do something. \\ _Reg 1,2,3,4_: & I was very free, but I knew that I should do something. \\ _Good_: & I was very frightened, but I knew that I had to do something. \\ \hline _Poor_: & But I do n’t stop to walk. \\ _Reg_: & But I did n’t stop walking. \\ _Reg_: & But I do n’t stop walking. \\ _Good_: & But I did n’t stop walking. \\ \hline _Poor_: & Today, It ’s fist time to entry my message. \\ _Reg_: & Today is my first time to write a message. \\ _Reg_: & Today, It ’s my first time to write a message. \\ _Good_: & Today is the first time for me to enter a message.
\\ \hline _Poor_: & I thank you about my normal day. \\ _Reg 1_: & Thank you for my normal day. \\ _Reg 2,3_: & I thank you for my normal day. \\ _Reg 4_: & I am thankful for my normal day. \\ _Good_: & I thank you for my normal day. \\ \hline _Poor_: & Hiroshima Carp lost the game, but it was very exciting game. \\ _Reg 1_: & Hiroshima Carp lost the game, but it was a very exciting game. \\ _Reg 2,3_: & The Hiroshima Carp lost the game, but it was a very exciting game. \\ _Reg 4_: & Hiroshima Carp lost the game, but it was a very exciting game. \\ _Good_: & Hiroshima Carp lose the game, but it was very exciting game. \\ \hline \hline \end{tabular} \end{table} Table 13: Examples of regularization data, where _Poor_, _Reg_ \(k\) and _Good_ denote the ungrammatical sentence, the regularization data from the \(k\)-th cycle and the grammatical sentence, respectively. We annotate the differing positions of each sentence for the three kinds of data by utilizing red, blue and olive color. We sample the sentences from four cycles of regularization data generated by the _Seq2Seq_ model. To save space in the table, if multiple cycles of regularization produce the same data, we put them on the same row.

4. Because of the lack of a prompt message, the intermediate results may have syntactic errors. For example, in the fourth case, the original sentence (_Poor_) is in the simple present tense and the target sentence is in the simple past tense, but there is no obvious hint, which causes the change of the generated sentence's tense.
5. For sentences which do not require modification, the intermediate results are excessively modified, as shown in the last case.

From the above classification and explanation, we can summarize that the intermediate results are mostly syntactically correct but semantically incorrect, which impairs the performance. From the perspective of data quality, the reasons behind this are the lack of enough context for _Seq2Seq_ models, and the fact that some mislabeled data interfere with the optimization objectives. From another aspect, we also think the shortcomings inherent in the Causal Language Model (CLM) account for this problem. Specifically, the _Seq2Seq_ model suffers from the degeneration (Li et al., 2016) and over-reasoning problems19, and that is why it generates different variants of sentences even when there is no need to modify the _Poor_ sentences. To verify this opinion, we collect the regularization data among different cycles and plot the recall and precision of editing operations at the operation tier (Bryant, 2019) in Figure 6, where the top three sub-figures describe the recall and the bottom three describe the precision of the editing operations, respectively. As we can infer from the recall rate, the editing results (intermediate results) of the _Seq2Seq_ model change considerably with the increase of the number of cycles, especially for the Unnecessary and Missing error types, which indicates that the model tends to add or delete different tokens when generating the results. Compared with the recall for each error type, the precision tends to be stable, which means many editing operations are redundant. In general, the editing ability is enhanced by the addition of regularization data, which accounts for the improving performance of the GEC model.

Figure 6: The trend of the error type distribution along with the number of cycles. The top three figures describe the recall (R) of each error type, while the bottom three figures describe the precision (P) of each error type.
The error types are classified in terms of the edit operation tier, i.e., whether tokens are missing (M), replaced (R) or unnecessary (U).

#### 6.5.2 Comparison between \(X_{unl}\) and \(X_{unc}\)

To figure out the effect of the two data components \(X_{unl}\) and \(X_{unc}\), we utilize those data sets respectively to fine-tune the plug-and-play GEC pre-trained model (Kiyono et al., 2019)20 and compare the model's robustness and performance on the original testing set. We divide the experiments into two groups: one is conducted on **ATK** \(i\) (\(i\in\{1,2,3\}\)) to compare the robustness, and the other is conducted on the original testing set and the standard CoNLL-2014 attack set. Considering that the \(X_{unc}^{k}\) set always contains more data than the \(X_{unl}^{k}\) set, we count the number of pairs in the \(X_{unl}^{k}\) set and randomly sample the same number of sentences from the \(X_{unc}^{k}\) set. In order to reduce the randomness of sampling, we perform the above sampling operation five times with different random seeds and report the average result.

Footnote 20: [https://gec-pseudo-data.s3-ap-northeast-1.amazonaws.com/ldc_giga.pret.checkpoint_last.pt](https://gec-pseudo-data.s3-ap-northeast-1.amazonaws.com/ldc_giga.pret.checkpoint_last.pt)

To calibrate the data quality, we also propose an evaluation metric **#IPS** (**I**mprovement **P**er **S**core), which can be formally written as \(\#\text{IPS}=\frac{\#N_{sen}}{(score^{t}-score^{i})/\gamma}\), where \(\#N_{sen}\) denotes the number of sentences, \(score^{t}-score^{i}\) denotes the difference between the fine-tuned and the pre-trained scores, and \(\gamma\) denotes the zoom factor. We set \(\gamma\) to 0.1 and utilize the #IPS score to calibrate the quality of GEC data. The results are shown in Tables 14 and 15. We report the performance of the plug-and-play GEC model in the first group of each table. We can see that the pre-trained GEC model has surprisingly strong robustness, as it has the highest _Recovery Rate_ on the **ATK 1** and **ATK 2** sets.

\begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**ATK 1** (\(\uparrow\))} & \multicolumn{2}{c|}{**ATK 2** (\(\uparrow\))} & \multicolumn{2}{c}{**ATK 3** (\(\uparrow\))} \\ \cline{2-7} & **TR(\%)** & **SR(\%)** & **TR(\%)** & **SR(\%)** & **TR(\%)** & **SR(\%)** \\ \hline Transformer (pre-train) & **35.4** & **35.4** & **27.5** & **16.1** & **19.2** & **6.1** \\ Transformer (benchmark) & 32.6 & 32.6 & 23.5 & 13.3 & 16.1 & 4.1 \\ \hline +\(X_{unl}\) (2 Cycles) & 34.4 & 34.4 & 26.8 & 15.7 & 17.4 & 4.4 \\ +\(X_{unl}\) (3 Cycles) & 34.0 & 34.0 & **27.2** & **16.1** & 17.2 & 4.4 \\ +\(X_{unl}\) (4 Cycles) & **33.4** & **33.4** & 26.7 & 15.3 & **17.2** & **4.6** \\ \hline +\(X_{unl}\) (2 Cycles / reverse) & 34.5 & 34.5 & **27.7** & **17.2** & 17.8 & 4.9 \\ +\(X_{unl}\) (3 Cycles / reverse) & 34.4 & 34.4 & 27.4 & 16.9 & 17.1 & 4.0 \\ +\(X_{unl}\) (4 Cycles / reverse) & **34.6** & **34.6** & 27.1 & 16.5 & **18.1** & **5.1** \\ \hline +\(X_{unc}\) (2 Cycles / seed 1) & 34.9 & 34.9 & 27.0 & 16.2 & 18.2 & 5.0 \\ +\(X_{unc}\) (2 Cycles / seed 2) & 35.1 & 35.1 & 27.4 & 16.5 & 18.9 & 5.4 \\ +\(X_{unc}\) (2 Cycles / seed 3) & 34.6 & 34.6 & 27.5 & 16.8 & 18.0 & 5.0 \\ +\(X_{unc}\) (2 Cycles / seed 4) & 35.1 & 35.1 & 27.5 & 16.7 & 18.4 & 5.3 \\ +\(X_{unc}\) (2 Cycles / seed 5) & 34.0 & 34.0 & 27.2 & 16.3 & 18.2 & 5.3 \\ +\(X_{unc}\) (2 Cycles / Avg.
) & **34.7** & **34.7** & **27.3** & **16.5** & **18.3** & **5.2** \\ \hline +\(X_{unc}\) (3 Cycles / seed 1) & 35.0 & 35.0 & 27.3 & 16.7 & 18.4 & 5.4 \\ +\(X_{unc}\) (3 Cycles / seed 2) & 35.1 & 35.1 & 26.8 & 16.0 & 18.7 & 5.5 \\ +\(X_{unc}\) (3 Cycles / seed 3) & 34.7 & 34.7 & 27.5 & 16.9 & 18.0 & 5.0 \\ +\(X_{unc}\) (3 Cycles / seed 4) & 34.7 & 34.7 & 27.2 & 16.3 & 18.2 & 5.0 \\ +\(X_{unc}\) (3 Cycles / seed 5) & 34.7 & 34.7 & 27.8 & 17.1 & 18.8 & 5.5 \\ +\(X_{unc}\) (3 Cycles / Avg. ) & **34.8** & **34.8** & **27.3** & **16.6** & **18.4** & **5.3** \\ \hline +\(X_{unc}\) (4 Cycles / seed 1) & 35.0 & 35.0 & 27.6 & 17.3 & 18.7 & 5.4 \\ +\(X_{unc}\) (4 Cycles / seed 2) & 35.1 & 35.1 & 26.8 & 16.1 & 18.2 & 5.3 \\ +\(X_{unc}\) (4 Cycles / seed 3) & 34.0 & 34.0 & 27.3 & 16.8 & 17.7 & 4.8 \\ +\(X_{unc}\) (4 Cycles / seed 4) & 33.9 & 33.9 & 27.3 & 16.8 & 18.4 & 5.2 \\ +\(X_{unc}\) (4 Cycles / seed 5) & 34.4 & 34.4 & 26.4 & 15.9 & 18.2 & 5.1 \\ +\(X_{unc}\) (4 Cycles / Avg. ) & **34.5** & **34.5** & **27.1** & **16.6** & **18.2** & **5.2** \\ \hline \hline \end{tabular} \end{table} Table 14: Comparison of the recovery rate among the regularization data variants. **ATK** \(i\) (\(i\in\{1,2,3\}\)) denotes the evaluation sets with a fixed number of attack positions per sentence, i.e., \(i\) denotes the number of attack positions. We report the _TR_ and _SR_ score for each model, and indicate the cycle number \(k\) of the filtered regularization data. The bold fonts indicate the optimal performance of each comparison or the average score of five seeds.

Although the performance of the fine-tuned GEC model on the original testing data is much higher than that of the pre-trained one, it has weak resistance to the attack data and even gets a negative #IPS score.

**The Influence of \(X_{unl}\).** The results of \(X_{unl}\) are shown in the second and third group of each table. With the increase of \(k\) (cycles), the quality of \(X_{unl}\) becomes higher, since the model achieves a relatively high _Recovery Rate_ and the #IPS score is the lowest in the fourth cycle compared with the other datasets, i.e., we can utilize less data to achieve a comparable performance. Moreover, as mentioned above, the type of errors in \(X_{unl}\) is generally semantic but not syntactic, so we reverse the data pairs and use these data to fine-tune the pre-trained GEC model. The results are shown in the third group of the table. We surprisingly find that using reversed \(X_{unl}\) can also improve the model robustness and the performance on the original testing data, and it shows the same trend as the original \(X_{unl}\) with the increase of \(k\). One explanation for this phenomenon is that most of the source sentences (_Poor_) in the \(X_{unl}\) set are grammatical at the syntactic level, and the reversed pairs can also make up for the mislabeling problem mentioned in Section 6.5.1.

**The Influence of \(X_{unc}\).** The results of \(X_{unc}\) are shown in the fourth to sixth group of each table. Each group shows the results of five seeds and one average at the bottom.
We can infer that \(X_{unc}\) is the main component for improving the model robustness, as it achieves better _Recovery Rate_ scores than \(X_{unl}\) on the **ATK** sets.

\begin{table} \begin{tabular}{l|c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**\#Pairs**} & \multicolumn{2}{c|}{**CoNLL-2014 (CLN)**} & \multicolumn{2}{c}{**CoNLL-2014 (ATK)**} \\ \cline{3-6} & & **F.0.5 (\(\uparrow\))** & **\#IPS (\(\downarrow\))** & **F.0.5 (\(\uparrow\))** & **\#IPS (\(\downarrow\))** \\ \hline Transformer (pre-train) & 9,000,000 & 50.1 & - & 36.0 & - \\ Transformer (benchmark) & 659,775 & 61.8 & 5,658.4 & 34.9 & -59,979.5 \\ \hline +\(X_{unl}\) (2 Cycles) & 298,301 & 62.7 & 2,375.0 & 38.9 & 10,198.3 \\ +\(X_{unl}\) (3 Cycles) & 271,499 & 62.6 & 2,179.0 & 38.8 & 9,696.4 \\ +\(X_{unl}\) (4 Cycles) & **259,780** & **62.8** & **2,052.0** & **38.8** & **9,195.8** \\ \hline +\(X_{unl}\) (2 Cycles / reverse) & 298,301 & 62.5 & 2,413.4 & 38.6 & 11,584.5 \\ +\(X_{unl}\) (3 Cycles / reverse) & 271,499 & **62.9** & **2,127.7** & 38.7 & 10,245.3 \\ +\(X_{unl}\) (4 Cycles / reverse) & **259,780** & 62.7 & 2,068.3 & **38.7** & **9,711.4** \\ \hline +\(X_{unc}\) (2 Cycles / seed 1) & 298,301 & 62.4 & 2,433.1 & 38.8 & 10,847.3 \\ +\(X_{unc}\) (2 Cycles / seed 2) & 298,301 & 62.8 & 2,356.2 & 38.9 & 10,286.2 \\ +\(X_{unc}\) (2 Cycles / seed 3) & 298,301 & 62.7 & 2,375.0 & 38.8 & 10,653.6 \\ +\(X_{unc}\) (2 Cycles / seed 4) & 298,301 & 62.6 & 2,394.1 & 38.7 & 10,946.8 \\ +\(X_{unc}\) (2 Cycles / seed 5) & 298,301 & 62.6 & 2,394.1 & 38.7 & 11,151.4 \\ +\(X_{unc}\) (2 Cycles / Avg. ) & **298,301** & **62.6** & **2,390.5** & **38.8** & **10,777.1** \\ \hline +\(X_{unc}\) (3 Cycles / seed 1) & 271,499 & 62.4 & 2,214.5 & 38.5 & 10,752.4 \\ +\(X_{unc}\) (3 Cycles / seed 2) & 271,499 & 62.8 & 2,144.5 & 38.8 & 9,783.7 \\ +\(X_{unc}\) (3 Cycles / seed 3) & 271,499 & 61.9 & 2,308.7 & 38.5 & 10,860.0 \\ +\(X_{unc}\) (3 Cycles / seed 4) & 271,499 & 62.7 & 2,161.6 & 38.7 & 10,245.3 \\ +\(X_{unc}\) (3 Cycles / seed 5) & 271,499 & 62.8 & 2,144.5 & 38.8 & 9,872.7 \\ +\(X_{unc}\) (3 Cycles / Avg. ) & **271,499** & **62.5** & **2,144.8** & **38.6** & **10,302.8** \\ \hline +\(X_{unc}\) (4 Cycles / seed 1) & 259,780 & 62.6 & 2,084.9 & 38.7 & 9,533.2 \\ +\(X_{unc}\) (4 Cycles / seed 2) & 259,780 & 63.0 & 2,020.1 & 38.7 & 9,621.5 \\ +\(X_{unc}\) (4 Cycles / seed 3) & 259,780 & 62.9 & 2,035.9 & 38.8 & 9,631.4 \\ +\(X_{unc}\) (4 Cycles / seed 4) & 259,780 & 62.3 & 2,136.3 & 38.6 & 9,991.5 \\ +\(X_{unc}\) (4 Cycles / seed 5) & 259,780 & 62.3 & 2,136.3 & 38.5 & 10,603.3 \\ +\(X_{unc}\) (4 Cycles / Avg. ) & **259,780** & **62.6** & **2,082.7** & **38.7** & **9,822.2** \\ \hline \hline \end{tabular} \end{table} Table 15: Comparison of model performance among the regularization data variants. **CLN** and **ATK** denote the original test data and the standard attack set of the CoNLL-2014 corpus, respectively. We report the **F.0.5** score and the corresponding **#IPS** value. We regard the F.0.5 of the Transformer (pre-train) model as \(score^{i}\) for calculating **#IPS**. The bold fonts indicate the optimal performance of each comparison or the average score of five seeds. It is worth noting that we keep two decimal places in the actual calculation, but only one decimal place in this report.
Moreover, although the F_0.5 scores on the standard CoNLL-2014 (ATK) set of \(X_{unc}^{k}\) and of the corresponding \(X_{unl}^{k}\) are similar, the _Recovery Rate_ of \(X_{unc}^{k}\) on the **ATK** \(i\) sets is higher than that of \(X_{unl}^{k}\), which means the GEC model trained with \(X_{unc}^{k}\) is more robust to local attacks. In other words, improving robustness to local attacks requires a larger quantity of data, as the #IPS score of \(X_{unc}^{k}\) on the CoNLL-2014 (ATK) set in Table 15 is much higher than the #IPS of the corresponding \(X_{unl}^{k}\). As for the model performance on the original testing data, utilizing \(X_{unc}\) does improve it, and the #IPS score for each \(X_{unc}^{k}\) is close to the #IPS of the corresponding \(X_{unl}^{k}\), as shown in Table 15, e.g., 2,082.7 for \(X_{unc}^{4}\) and 2,052.0 for \(X_{unl}^{4}\).

#### 6.5.3 Analysis of Anomalies on _Seq2Edit_ Models

In order to figure out the slight improvement and unstable performance of the _Seq2Edit_ model in the previous experiments, we first compare the generated intermediate data between the _Seq2Edit_ model and the _Seq2Seq_ model, utilizing BERT and Transformer as representatives. We select some representative cases from the intermediate data of each model after two cycles of our CSA method and present them in Table 16. Combined with the five main types of regularization data generated by the _Seq2Seq_ model (Section 6.5.1), we can find some characteristics of the results generated by the _Seq2Edit_ model:

* The _Seq2Edit_ model tends to modify more for syntactic errors than for the semantics of the _Poor_ sentences. For example, in the first case, the _Seq2Seq_ model changes the original semantics while _Seq2Edit_ just corrects the wrong spelling of 'frightened'.
* Compared with the _Seq2Seq_ model, the _Seq2Edit_ model seems to be less context-dependent when correcting syntax errors. For example, in the second case, both 'serviced' and 'provided' could be used in this sentence if there were no context, while _Seq2Edit_ chooses the latter, which matches the _Good_ sentence.
* The over-reasoning problem is mitigated in the _Seq2Edit_ model, and there are generally fewer editing positions than with the _Seq2Seq_ model. In the third case, the outcome of the _Seq2Edit_ model only edits one position while the _Seq2Seq_ model generates another piece of content, as is the fourth case.

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{l}{**Examples of Regularization Data**} \\ \hline _Poor_: & I was very frethend, but I knew, that I should do something. \\ _BERT_: & I was very frightened, but I knew that I should do something. \\ _Transformer_: & I was very free, but I knew that I should do something. \\ _Good_: & I was very frightened, but I knew that I had to do something. \\ \hline _Poor_: & It is a free soft wear which is serviced by Google. \\ _BERT_: & It is a free software which is provided by Google. \\ _Transformer_: & It is a free software which is serviced by Google. \\ _Good_: & It is a free software provided by Google. \\ \hline _Poor_: & I thank you about my normal day. \\ _BERT_: & I thank you for my normal day. \\ _Transformer_: & I am thankful for my normal day. \\ _Good_: & I thank you for my normal day. \\ \hline _Poor_: & I have every day, holding a pillow in bed. \\ _BERT_: & I have every day, holding a pillow in bed. \\ _Transformer_: & I have a pillow every day in bed. \\ _Good_: & I have the whole day to hold a pillow in my bed.
\\ \hline \hline \end{tabular} \end{table} Table 16: Comparison of regularization data between the _Seq2Edit_ model and the _Seq2Seq_ model, where _Poor_ and _Good_ denote the ungrammatical and the grammatical sentence, respectively. We use BERT and Transformer as the representatives of the _Seq2Edit_ and _Seq2Seq_ model architectures. We annotate the differing positions by utilizing red and blue colors for the _Poor_ and _Good_ sentences. As for regularization data belonging to different model architectures, we utilize orange and brown colors to annotate the differences.

From the above case study, we can infer that _Seq2Edit_ tends to keep the structure and semantics of the original sentence. Although this editing-based method is highly accurate, controllable, and interpretable, it to some extent goes against our CSA method, since the regularization data generated by the _Seq2Edit_ model is not as informative as that of the _Seq2Seq_ model. Moreover, since the set of tags is generally not too large for the _Seq2Edit_ model, it works better on closed sets, i.e., the official test datasets, but the complexities and long texts of open sets are challenging to deal with. In other words, the limitation of tags, on the one hand, can improve the accuracy of editing operations on what the model has seen. On the other hand, it hurts the improvement in robustness, since some co-occurring tags never appear in the dictionary, and the addition of regularization data cannot change the distribution of tags. Even worse, the released checkpoints of such models (_Seq2Edit_) have already been meticulously trained on existing data, and any further post-training may hurt their performance. To verify the above conjecture, we investigate the correction results for each error type. Specifically, we use ERRANT (Felice et al., 2016; Bryant et al., 2017) to measure the precision and recall of the model for each error type. We compare the original plug-and-play GEC models with their enhanced versions, which are reinforced with our CSA method. The results are shown in Table 17. As we can see from the table, the original _Seq2Edit_ model does much better than the original _Seq2Seq_ model on the missing and unnecessary error types, which correspond to insert and delete operations, and the editing-based operation also exceeds the generating-based operation on four out of the top-5 frequent types of errors. However, after the CSA method, we can, surprisingly, find a considerable improvement of the _Seq2Seq_ model on the top-5 frequent types of errors, and it exceeds the _Seq2Edit_ model on all the error types. We can also find that the performance of the _Seq2Edit_ model gets worse with the addition of the regularization data.

#### 6.5.4 Paradigms for Utilizing Regularization Data

From the above experiments and analysis, we find different effects of the \(X_{unc}\) and \(X_{unl}\) sets on improving the performance, and we also compare the influence of regularization data between the _Seq2Seq_ model and the _Seq2Edit_ model. In this section, we summarize the use of regularization data for different model architectures. For the _Seq2Seq_ model, the \(X_{unc}\) set in the regularization data contains much information which can make the model more robust and generalized, since such models tend to generate diverse outcomes due to the decoding strategies, while the \(X_{unl}\) set serves to compensate for the model's natural shortcomings, because this data component forces the model to reduce the diversity and reinforces the distributions which are not fully learned.
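Before turning to the _Seq2Edit_ case, we make the decomposition from Section 6.5.1 and the #IPS metric from Section 6.5.2 concrete in code. This is a minimal sketch under the assumption that each cycle's regularization set is stored as a list of (poor, reg) sentence pairs; all function names are our own hypothetical stand-ins.

```python
from functools import reduce

def split_components(reg_sets):
    """Split per-cycle regularization sets D_Reg^1..D_Reg^k (lists of
    (poor, reg) sentence pairs) into the two components of Section 6.5.1:
    X_unl = pairs produced identically in every cycle (intersection),
    X_unc = all remaining, cycle-dependent pairs (union minus X_unl)."""
    sets = [set(s) for s in reg_sets]
    x_unl = reduce(set.intersection, sets)
    x_unc = reduce(set.union, sets) - x_unl
    return x_unl, x_unc

def ips(n_sentences, score_tuned, score_pretrained, gamma=0.1):
    """#IPS from Section 6.5.2: sentences consumed per `gamma` points of
    F0.5 improvement over the pre-trained model (lower is better)."""
    return n_sentences / ((score_tuned - score_pretrained) / gamma)

# Sanity check against Table 15 (4-cycle X_unl row): 259,780 pairs lifting
# F0.5 from 50.1 to 62.8 gives about 2,045; the reported 2,052.0 differs
# slightly because the paper computes with two-decimal scores.
print(round(ips(259_780, 62.8, 50.1), 1))
```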
For the _Seq2Edit_ model, the lack of information in the regularization data and the limitation of tags result in side effects of the regularization data on performance and robustness. However, there is no need to worry about this situation, as the original _Seq2Edit_ models work well on closed sets of evaluation data due to the powerful encoder of PLMs.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Error Type**} & \multicolumn{2}{c|}{**Transformer**} & \multicolumn{2}{c|}{**+ CSA (2 cycles)**} & \multicolumn{2}{c|}{**BERT**} & \multicolumn{2}{c}{**+ CSA (2 cycles)**} \\ \cline{2-9} & **Prec.** & **Rec.** & **Prec.** & **Rec.** & **Prec.** & **Rec.** & **Prec.** & **Rec.** \\ \hline **Missing** & 50.3 & 30.2 & **55.8** & **51.4** & **57.1** & **50.2** & 55.4 & 47.5 \\ **Replacing** & 53.9 & 35.5 & **53.9** & **40.3** & 51.9 & 33.2 & **54.2** & **35.5** \\ **Unnecessary** & 44.7 & 40.1 & **48.4** & **44.0** & **49.6** & **38.9** & 45.4 & 36.3 \\ \hline **PUNCT** & 61.5 & 25.5 & **61.7** & **56.5** & **60.3** & **55.6** & 58.8 & 51.6 \\ **OTHER** & 31.1 & 15.6 & **30.9** & **17.5** & **29.7** & **12.0** & 24.1 & 11.5 \\ **DET** & 52.8 & 46.6 & **55.8** & **53.1** & **57.2** & **53.1** & 56.1 & 48.7 \\ **PREP** & 52.8 & 38.9 & **52.3** & **45.4** & **52.3** & **41.6** & 51.7 & 38.0 \\ **VERB:TENSE** & 49.8 & 33.0 & **52.9** & **40.4** & **51.9** & **37.0** & 50.7 & 31.9 \\ \hline \hline \end{tabular} \end{table} Table 17: Comparison of the _Seq2Seq_ model and the _Seq2Edit_ model on the main error types, including the top-5 most frequent error types, on the W&I-Dev evaluation set. We report the precision and recall for each error type. The first group represents the operation-tier error types, and the second group shows the top-5 frequent types of errors. The bold fonts indicate the optimal performance of each comparison.

## 7 Conclusion

Recent works have revealed that _Seq2Seq_ GEC models (even with data augmentation) are vulnerable to adversarial examples (Wan et al., 2020). Nevertheless, those previous works mainly rested on simple attack sets and tedious defense methods, i.e., training with adversarial samples, and the scope of the evaluation was narrow. In this paper, we further explore the robustness of GEC models by thoroughly evaluating various types of adversarial attacks and implementing an effective cycle self-augmenting method to improve model robustness. Specifically, with our method, the model can be improved by 1.0 point (F_0.5) on the original testing set and 3.9 points (F_0.5) on the attack set by utilizing only about \(39.4\%\) of the original data, without requiring well-crafted adversarial examples at scale for a specific type of adversarial attack. Experimental results on **seven** strong baselines, **four** benchmark test sets, **five** types of adversarial attacks, and **two** newly introduced evaluation metrics confirm the effectiveness of our proposed method, which can generalize well to various GEC models with only a few more training epochs as the extra cost. Moreover, we also show that the improvement can be attributed to two data components in the regularization data, where one component contributes to improving the correction ability while the other focuses on improving the robustness. In the future, we will explore how to improve the performance of the _Seq2Edit_ model on open attack sets, as well as more efficient ways to use regularization data.
## Acknowledgements

We would like to thank the anonymous reviewers for their efforts in improving this paper.
2307.06485
Orbifold completion of 3-categories
We develop a general theory of 3-dimensional ``orbifold completion'', to describe (generalised) orbifolds of topological quantum field theories as well as all their defects. Given a semistrict 3-category $\mathcal{T}$ with adjoints for all 1- and 2-morphisms (more precisely, a Gray category with duals), we construct the 3-category $\mathcal{T}_{\textrm{orb}}$ as a Morita category of certain $E_1$-algebras in $\mathcal{T}$ which encode triangulation invariance. We prove that in $\mathcal{T}_{\textrm{orb}}$ again all 1- and 2-morphisms have adjoints, that it contains $\mathcal{T}$ as a full subcategory, and we argue, but do not prove, that it satisfies a universal property which implies $(\mathcal{T}_{\textrm{orb}})_{\textrm{orb}} \cong \mathcal{T}_{\textrm{orb}}$. This is a categorification of the work in [CR]. Orbifold completion by design allows us to lift the orbifold construction from closed TQFT to the much richer world of defect TQFTs. We illustrate this by constructing a universal 3-dimensional state sum model with all defects from first principles, and we explain how recent work on defects between Witt equivalent Reshetikhin--Turaev theories naturally appears as a special case of orbifold completion.
Nils Carqueville, Lukas Müller
2023-07-12T23:12:06Z
http://arxiv.org/abs/2307.06485v2
# Orbifold completion of 3-categories

###### Abstract

We develop a general theory of 3-dimensional "orbifold completion", to describe (generalised) orbifolds of topological quantum field theories as well as all their defects. Given a semistrict 3-category \({\cal T}\) with adjoints for all 1- and 2-morphisms (more precisely, a Gray category with duals), we construct the 3-category \({\cal T}_{\text{orb}}\) as a Morita category of certain \(E_{1}\)-algebras in \({\cal T}\) which encode triangulation invariance. We prove that in \({\cal T}_{\text{orb}}\) again all 1- and 2-morphisms have adjoints, that it contains \({\cal T}\) as a full subcategory, and we argue that it satisfies a universal property which implies \(\left({\cal T}_{\text{orb}}\right)_{\text{orb}}\cong{\cal T}_{\text{orb}}\). This is a categorification of the work in [CR]. Orbifold completion by design allows us to lift the orbifold construction from closed TQFT to the much richer world of defect TQFTs. We illustrate this by constructing a universal 3-dimensional state sum model with all defects from first principles, and we explain how recent work on defects between Witt equivalent Reshetikhin-Turaev theories naturally appears as a special case of orbifold completion.

###### Contents

* 1 Introduction and summary
* 2 Background
* 2.1 Pivotal 2-categories and 2-dimensional orbifold completion
* 2.2 3-categories with adjoints
* 3 Morita categories of \(E_{1}\)-algebras
* 4 Orbifold completion of 3-categories
* 4.1 3-category of orbifold data
* 4.2 Properties
* 4.2.1 Adjoints
* 4.2.2 Universal property
* 5 Examples of completions
* 5.1 State sum models
* 5.1.1 Recollection on 2-dimensional state sum models
* 5.1.2 Oriented Eilenberg-Watts theorem
* 5.1.3 3-dimensional state sum models
* 5.2 Domain walls between Reshetikhin-Turaev theories
* 6 Orbifold construction of defect TQFTs
* 6.1 Defect TQFTs
* 6.2 Defect TQFTs associated to orbifold completion
* A 2-(co)limits

## 1 Introduction and summary

Topological quantum field theories are symmetric monoidal functors on bordism categories. The latter may or may not come with labelled stratifications, and the former may or may not take values in higher categories. Among the ordinary (1-)categories of labelled stratified bordisms, the oriented case has so far received the most attention, and leads to \(n\)-dimensional defect TQFTs \({\cal Z}\colon\operatorname{Bord}_{n,n-1}^{\text{defect}}({\rm D})\longrightarrow{\cal C}\), cf. [1, 1]. Here \({\cal C}\) is some prescribed target category, e. g. of (super) vector spaces, and \({\rm D}\) consists of prescribed label sets \(D_{j}\) for \(j\)-dimensional strata as well as adjacency rules on how they can meet. A sketch of a defect morphism for \(n=2\) is (1.1) where the labels \(u_{i}\in D_{2}\) correspond to "bulk theories" or closed TQFTs, \(X_{j}\in D_{1}\) describe "line defects", and \(\varphi_{k}\in D_{0}\) are "point defects". By restricting \({\cal Z}\) to trivially stratified bordisms with only a single label, one recovers the closed TQFTs of [At]. We refer to [Ca] for a review of 2-dimensional defect theories, and note that recently topological defects have been used to discuss "non-invertible symmetries" of full QFTs (see [CDIS] for a recent review).
This notion of generalised symmetries is intimately related to the theory of (generalised) orbifolds of [FFRS, CR, CRS1], where the qualifier "generalised" covers all non-invertible defects; if instead of orientations one considers framings, one arrives at the "condensation monads" of [GJF]. In arbitrary dimension \(n\), the orbifold construction \({\cal Z}\longmapsto{\cal Z}_{\cal A}\) produces (new) closed TQFTs \({\cal Z}_{\cal A}\) from (old) defect TQFTs \({\cal Z}\), with state sum models and gaugings of (finite) symmetry group actions as particular examples.1 Besides \({\cal Z}\), the construction needs a collection of special defect labels \({\cal A}\) as input. Then in a nutshell, \({\cal Z}_{\cal A}\) is evaluated on any bordism by choosing a triangulation, labelling the Poincare dual stratification with \({\cal A}\) (the possibility of which imposes a first set of conditions on these labels), evaluating with \({\cal Z}\), and taking a colimit. Independence of the choice of triangulation imposes further defining conditions on the orbifold datum \({\cal A}\), see [1, Sect. 3] for a detailed discussion. Footnote 1: The name “(generalised) orbifold” draws from the case of sigma models whose target manifolds \(X\) come with a \(G\)-action; then the orbifold of the sigma model \(X\) should be a sigma model whose target is the orbifold \(X/G\). It is generally expected that an orbifold datum \({\cal A}\) is an \(E_{1}\)-algebra with extra structure in the \(n\)-category associated to the defect TQFT \({\cal Z}\). For \(n\in\{2,3\}\), this has been made precise in [CR, CMS, CRS1]. For \(n=2\), \({\cal A}\) amounts to a \(\Delta\)-separable symmetric Frobenius algebra with defining relations (1.2) in the pivotal 2-category \({\cal B}_{\cal Z}\) associated to \({\cal Z}\) (cf. [DKR]). For \(n=3\), an orbifold datum \({\cal A}\) is a generalisation of spherical fusion categories internal to the 3-category with coherent adjoints \({\cal T}_{\cal Z}\) (more precisely, a "Gray category with duals"2 as defined in [BMS], cf. Section 2.2) constructed in [CMS]; we give the precise definition of 3-dimensional orbifold data in Definition 4.1. Footnote 2: The name is slightly unfortunate, because it does not only involve the existence of duals (or adjoints) of morphisms, but also coherent identifications between left and right adjoints. It is also expected that by considering \({\cal Z}_{\cal A}\) for all orbifold data \({\cal A}\) at once, the orbifold construction should lift to a map \({\cal Z}\longmapsto{\cal Z}_{\text{orb}}\) from defect TQFTs to defect TQFTs. This amounts to identifying all defects between the closed TQFTs \({\cal Z}_{\cal A}\), all defects between them, etc. Algebraically, this leads to a higher Morita category of the algebras \({\cal A}\) with compatibility with adjoints (coming from orientation-reversal of strata) built in. For \(n=2\), this was made precise in [CR] by associating to any pivotal 2-category \({\cal B}\) its orbifold completion, i. e. the 2-category \(\mathcal{B}_{\mathrm{orb}}\) of \(\Delta\)-separable symmetric Frobenius algebras \(\mathcal{A}\) in \(\mathcal{B}\), with 1- and 2-morphisms given by bimodules and bimodule maps. Then one shows that \(\mathcal{B}_{\mathrm{orb}}\) has a natural pivotal structure, and that it is complete in the sense that \((\mathcal{B}_{\mathrm{orb}})_{\mathrm{orb}}\cong\mathcal{B}_{\mathrm{orb}}\). 
This can be thought of as an "oriented" variant of 2-idempotent completion where the symmetry condition on \(\mathcal{A}\in\mathcal{B}_{\mathrm{orb}}\) is the property that distinguishes it from the 2-idempotent construction of [DR, GJF] in the context of framed TQFTs; the symmetry condition can also be thought of as a potential obstruction for "gauging" the generalised symmetry \(\mathcal{A}\). Via application to TQFTs, the results of [CR] lead to the discovery of new relations between closed TQFTs and singularity theory [CRCR, RW], and a general description of "non-abelian quantum symmetry" [BCP]. In the present paper, we develop a 3-categorical version of orbifold completion, and we study some of its properties and applications. More precisely, given a Gray category with duals \(\mathcal{T}\), we construct a 3-category \(\mathcal{T}_{\mathrm{orb}}\) whose objects are orbifold data \(\mathcal{A}\) in \(\mathcal{T}\), i. e. \(E_{1}\)-algebras with additional properties listed in Figure 4.1 (using the graphical calculus reviewed in Section 2). These properties categorify the relations in (1.2) and were shown in [CRS1] to encode invariance under 3-dimensional oriented Pachner moves. In particular, \(\mathcal{A}\) involves a 1-endomorphism \(A\colon a\longrightarrow a\), a multiplication 2-morphism \(\mu\), and an associator \(\alpha\): (1.3) Among the defining relations in Figure 4.1 is the condition that quantum dimensions of \(\mu\) are identities, as well as pentagon and invertibility axioms like (1.4) 1-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) are bimodules \(\mathcal{M}\) over the underlying \(E_{1}\)-algebras (cf. Definition 3.2) that satisfy "coloured" versions of the conditions for orbifold data \(\mathcal{A}\) such as (1.5) We give all details in Definition 4.3, which also explicitly presents 2- and 3-morphisms of \(\mathcal{T}_{\mathrm{orb}}\) as those in the Morita 3-category of \(E_{1}\)-algebras in \(\mathcal{T}\) (reviewed in Section 3) which are compatible with all the orbifold structure. These conditions had previously appeared in [12, 13], here we explain how they naturally arise from the perspective of higher representation theory. Our first main result is that \(\mathcal{T}_{\mathrm{orb}}\) has the desired duality properties: **Theorem 1.1**.: Let \(\mathcal{T}\) be a Gray category with duals (satisfying a mild condition on the existence of certain (co)limits, cf. Section 4.1). Then \(\mathcal{T}_{\mathrm{orb}}\) is a \(3\)-category that admits adjoints for all \(1\)- and \(2\)-morphisms. The main ingredients of the proof are a description of the composition of \(1\)-morphisms in terms of the splitting of \(2\)-idempotents (Definition 4.7 and Proposition 4.8) as well as explicit constructions of all adjoints (Lemma 4.10 and Proposition 4.12). The verification of coherence axioms is considerably facilitated by the graphical calculus of [10]. To fully justify the term orbifold completion also in three dimensions, one expects an equivalence of \(3\)-categories \(\left(\mathcal{T}_{\mathrm{orb}}\right)_{\mathrm{orb}}\cong\mathcal{T}_{ \mathrm{orb}}\). We leave this for future work, but we do propose (in Section 4.2.2) a universal property for orbifold completion in arbitrary dimension, and we prove that the \(2\)-dimensional orbifold completion \(\mathcal{B}\longmapsto\mathcal{B}_{\mathrm{orb}}\) of [11] does satisfy this property (cf. Proposition 4.16). 
The completion property \(\left(\mathcal{B}_{\mathrm{orb}}\right)_{\mathrm{orb}}\cong\mathcal{B}_{ \mathrm{orb}}\) is a direct corollary of this more conceptual description, and we expect similar relations in higher dimensions. The applications of Theorem 1.1 that we cover in the present paper (in Sections 5 and 6) concern \(3\)-dimensional state sum models with defects, and more generally Reshetikhin-Turaev theory. To summarise these results, we shall first briefly discuss that our original motivation for \(3\)-dimensional orbifold completion was to construct \(4\)-dimensional state sum models from first principles, and re-discover the \(4\)-manifold invariants of Douglas and Reutter [12] as part of a closed orbifold TQFT. This application will appear in joint work with Vincentas Mulevicius in [13]. A key idea, in part inspired by [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 222, 213, 214, 215, 216, 217, 218, 219, 224, 217, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 323, 334, 335, 34, 351, 352, 36, 371, 38, 39, 311, 33, 33, 34, 36, 39, 32, 34, 36, 38, 39, 32, 35, 36, 39, 37, 38, 39, 30, 31, 33, 38, 39, 32, 39, 33, 34, 38, 39, 34, 39, 35, 36, 37, 39, 38, 39, 30, 31, 39, 32, 33, 35, 39, 36, 38, 39, 37, 39, 38, 39, 39, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 54, 52, 59, 61, 62, 63, 64, 65, 66, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 81, 83, 85, 89, 91, 84, 86, 88, 89, 92, 87, 88, 89, 93, 94, 95, 96, 97, 98, 99, 100, 99, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 109, 111, 112, 113, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 142, 132, 133, 134, 135, 136, 137, 138, 139, 143, 144, 145, 146, 147, 148, 149, 150, 161, 176, 186, 187, 188, 199, 200, 203, 204, 205, 206, 207, 208, 209, 210, 212, 213, 214, 215, 216, 217, 218, 219, 232, 233, 234, 235, 236, 237, 238, 239, 24, 240, 241, 24, 248, 249, 250, 261, 273, 274, 275, 276, 278, 289, 292, 293, 289, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 309, 310, 311, 323, 334, 309, 320, 333, 34, 351, 352, 36, 371, 38, 39, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 56, 57, 58, 59, 60, 51, 50, 53, 59, 61, 
50, 54, 50, 55, 57, 59, 62, 50, 57, 59, 63, 52, 50, 59, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 93, 94, 95, 96, 97, 98, 99, 100, 99, 11 Frobenius algebras in Vect. In particular, we construct (cf. Proposition 5.9) a pivotal equivalence between ssFrob and a 2-category CYcat\({}^{\text{s}}\) of semisimple Calabi-Yau categories, and we prove: **Theorem 5.18**.: The 3-category sFus of spherical fusion categories, bimodule categories with trace, bimodule functors, and bimodule natural transformations defined in [Sc2] is a subcategory of \(E(\text{BCYcat}^{\text{s}})_{\text{orb}}\), or equivalently of \(E(\text{BssFrob})_{\text{orb}}\). A direct consequence of this is a new proof of the fact, first proved in [DSPS], that spherical fusion categories are 3-dualisable in the 3-category \(\text{Alg}_{1}(2\text{Vect})\). In Section 6 we briefly review 3-dimensional defect TQFTs and then apply our algebraic theory of orbifold completion to them to show: **Theorem 6.4**.: Let \(\mathcal{Z}\colon\text{Bord}^{\text{def}}_{3,2}(\text{D})\longrightarrow\text{Vect}\) be a defect TQFT. The orbifold construction gives a defect TQFT \(\mathcal{Z}_{\text{orb}}\colon\text{Bord}^{\text{def}}_{3,2}(\text{D}_{\text{ orb}})\longrightarrow\text{Vect}\). Applying this to the special case of the trivial defect TQFT, we obtain the 3-dimensional defect state sum model \(\mathcal{Z}_{3}^{\text{ss}}\) along the lines outlined for arbitrary dimension above. It restricts to Turaev-Viro-Barrett-Westbury theory on trivially stratified bordisms, and we expect \(\mathcal{Z}_{3}^{\text{ss}}\) to also generalise the work of [Me] to arbitrary defect bordisms. To make the connection to general TQFTs of Reshetikhin-Turaev type we fix any modular fusion category \(\mathcal{M}\) and consider the 2-category \(\text{ssFrob}(\mathcal{M})\). In [KMRS] a connection was made between the general analysis [FSV] of semisimple defect theories in three dimensions and the general orbifold theory of [CRS1, CRS3]. A key step in [KMRS] was the clever ad hoc introduction of the notion of Frobenius algebras \(F\) over a pair \((A,B)\) of commutative \(\Delta\)-separable Frobenius algebras to describe surface defects. In Section 5.2 we show that this notion as well as the related higher maps naturally occur as morphisms in the orbifold completion of \(\text{BssFrob}(\mathcal{M})\): **Theorem 5.21**.: Let \(\mathcal{M}\) be a modular fusion category. The 3-category in which * objects are commutative \(\Delta\)-separable Frobenius algebras in \(\mathcal{M}\), * 1-morphisms from \(A\) to \(B\) are \(\Delta\)-separable symmetric Frobenius algebras \(F\) over \((A,B)\), * 2-morphisms from \(F\) to \(G\) are \(G\)-\(F\)-bimodules \(M\) over \((A,B)\), and * 3-morphisms are bimodule maps is a subcategory of \((\text{B}\Delta\text{ssFrob}(\mathcal{M}))_{\text{orb}}\). Combining this with Theorem 6.4 above we recover the defect TQFT of [KMRS] from the universal orbifold construction. **Acknowledgements.** We thank Theo Johnson-Freyd, Christopher Lieberum, Vincentas Mulevicius, David Reutter, Claudia Scheimbauer, and Lukas Woike for helpful discussions and comments. N. C. is supported by the DFG Heisenberg Programme. L. M. gratefully acknowledges support by the Max Planck Institute for Mathematics in Bonn, where part of this work was carried out, and the Simons Collaboration on Global Categorical Symmetries. 
Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. The Perimeter Institute is in the Haldimand Tract, land promised to the Six Nations.

## 2 Background

In this section we collect some background on 2- and 3-categories with adjoints and their graphical calculus. In Section 2.1 we review pivotal 2-categories \(\mathcal{B}\) as well as their orbifold completions, i. e. the representation theory of \(\Delta\)-separable symmetric Frobenius algebras internal to \(\mathcal{B}\). In Section 2.2 we discuss a semistrict version of 3-categories with compatible adjoints, namely the "Gray categories with duals" introduced and studied in [BMS], and we recall some of the coherence results on these structures.

### Pivotal 2-categories and 2-dimensional orbifold completion

For the basic definitions and theory of 2-categories (which we do not assume to be strict) we refer to [JY]. We use "\(\cdot\)" or mere concatenation to denote vertical composition, and "\(\circ\)" for horizontal composition. Our conventions for the graphical calculus are to read diagrams from bottom to top and from right to left. Hence for objects \(u,v,w\) as well as suitably composable 1-morphisms \(X,Y,Z,P,Q\) and 2-morphisms \(\varphi,\psi,\zeta\) in some 2-category, we have the corresponding string diagram conventions [diagrams omitted].
[...]

3. A 2-functor \(F\colon\mathcal{B}\longrightarrow\mathcal{B}^{\prime}\) between pivotal 2-categories is pivotal if for all 1-morphisms \(X\) in \(\mathcal{B}\) the diagram commutes, where the vertical arrows are induced by the fact that 2-functors preserve adjoints, and the uniqueness property of adjoints.
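Explicitly, one standard way to spell out this condition is the following; this formulation is our own illustration (fixing the convention that the pivotal structures are 2-isomorphisms \(\delta_{X}\colon X\Rightarrow X^{\vee\vee}\)), not a verbatim quotation: the square \[\begin{CD}F(X)@>{F(\delta_{X})}>>F\big{(}X^{\vee\vee}\big{)}\\ @V{\delta_{F(X)}}VV@VV{\cong}V\\ F(X)^{\vee\vee}@=F(X)^{\vee\vee}\end{CD}\] is required to commute, where the right vertical 2-isomorphism \(F(X^{\vee\vee})\cong F(X^{\vee})^{\vee}\cong F(X)^{\vee\vee}\) is the canonical one induced by the preservation of adjoints.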
Examples of pivotal 2-categories are naturally obtained from 2-dimensional defect TQFTs, see [DKR] or the review [Ca]: objects are labels for 2-strata (or "bulk theories"), 1-morphisms are lists of composable labels for 1-strata (or (fusion products of) "line defects"), while 2-morphisms and their compositions are extracted from the TQFT itself. Adjunctions and pivotality arise from orientation reversal, which is strictly involutive. In particular, 2-dimensional defect state sum models give rise to a pivotal structure on the 2-category of separable symmetric Frobenius algebras, or equivalently of semisimple Calabi-Yau categories, as we review in Section 5.1 below.

It is not true that every pivotal 2-functor that is an equivalence also admits a pivotal inverse. To show this we give a general Whitehead-style theorem identifying conditions for a pivotal inverse to exist, for which we first introduce the following:

**Definition 2.2**.: Let \(\mathcal{B}\) be a pivotal 2-category with idempotent complete morphism categories. A 1-morphism \(X\colon u\longrightarrow v\) is a pivotal equivalence if \(X^{\vee}\colon v\longrightarrow u\) is a weak inverse to \(X\) whose witnessing 2-isomorphisms to and from the identity are given in terms of the left and right adjunction data for \(X^{\vee}\).

Note that a 1-morphism is a pivotal equivalence if and only if its quantum dimensions are trivial.

**Proposition 2.3** (Whitehead theorem for pivotal 2-categories).: Let \(\mathcal{F}\colon\mathcal{B}\longrightarrow\mathcal{B}^{\prime}\) be a pivotal 2-functor. There exists a pivotal functor \(\mathcal{F}^{-1}\colon\mathcal{B}^{\prime}\longrightarrow\mathcal{B}\) such that \(\mathcal{F}\circ\mathcal{F}^{-1}\cong\operatorname{id}_{\mathcal{B}^{\prime}}\) and \(\mathcal{F}^{-1}\circ\mathcal{F}\cong\operatorname{id}_{\mathcal{B}}\) if every object \(b^{\prime}\in\mathcal{B}^{\prime}\) is pivotally equivalent to \(\mathcal{F}(b)\) for some \(b\in\mathcal{B}\), and \(\mathcal{F}\) is fully faithful on morphism categories.

Proof.: It is standard that we can construct an inverse functor to \(\mathcal{F}\) as follows: For every object \(b^{\prime}\in\mathcal{B}^{\prime}\) we pick an object \(\mathcal{F}^{-1}(b^{\prime})\in\mathcal{B}\) together with an equivalence \(\gamma_{b^{\prime}}\colon\mathcal{F}(\mathcal{F}^{-1}(b^{\prime}))\longrightarrow b^{\prime}\). The value on a 1-morphism \(f\colon a^{\prime}\longrightarrow b^{\prime}\) can be constructed by choosing a morphism \(\mathcal{F}^{-1}(f)\colon\mathcal{F}^{-1}(a^{\prime})\longrightarrow\mathcal{F}^{-1}(b^{\prime})\) such that \(\mathcal{F}(\mathcal{F}^{-1}(f))\) is 2-isomorphic to \(\gamma_{b^{\prime}}^{-1}\circ f\circ\gamma_{a^{\prime}}\). The value on 2-morphisms can then be constructed by using that \(\mathcal{F}\) is fully faithful on Hom categories; this is however not important for the proof. To conclude that \(\mathcal{F}^{-1}\) is pivotal we can use that \(\mathcal{F}\) is pivotal if we know that the adjunction data on \(\mathcal{F}\mathcal{F}^{-1}(f^{\vee})\cong\gamma_{a^{\prime}}^{-1}f^{\vee}\gamma_{b^{\prime}}\) with respect to \(\mathcal{F}\mathcal{F}^{-1}(f)\cong\gamma_{b^{\prime}}^{-1}\circ f\circ\gamma_{a^{\prime}}\) are those coming from \(\mathcal{B}^{\prime}\). The adjoint coming from \(\mathcal{B}\) is \(\gamma_{a^{\prime}}^{\vee}\circ f^{\vee}\circ\gamma_{b^{\prime}}^{-1\vee}\). We see that the relation we want to check holds exactly when \(\gamma_{-}\) is a pivotal equivalence. But by assumption we can choose all the \(\gamma_{-}\) to be pivotal equivalences.
A Frobenius algebra (on an object \(a\) of a 2-category \(\mathcal{B}\)) is a 1-morphism \(A\in\mathcal{B}(a,a)\) together with (co)multiplication and (co)unit 2-morphisms that satisfy the usual (co)associativity, (co)unitality and Frobenius relations (2.7) (string diagrams not reproduced here); here and below we show no labels and colourings if they can be suppressed at no relevant cost. The algebra is separable if there exists a section for the multiplication as a bimodule map, and \(\Delta\)-separable if this section is the comultiplication, i. e. if the comultiplication post-composed with the multiplication is the identity 2-morphism on \(A\) (2.8). If \(\mathcal{B}\) is pivotal, then a Frobenius algebra is symmetric if the two 2-morphisms \(A\longrightarrow A^{\vee}\) induced by the pairing (counit composed with multiplication) via the left and right adjunction data agree (string diagram not reproduced here).
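It may be useful to record these conditions in formulas rather than string diagrams. Writing \(\mu,\Delta\) for the (co)multiplication and \(\eta,\varepsilon\) for the (co)unit (our labels, used only in this aside), denoting horizontal composition by "\(\circ\)" and vertical composition by juxtaposition, and suppressing coherence isomorphisms of \(\mathcal{B}\), the relations read
\[
\mu\,(\mu\circ 1_{A})=\mu\,(1_{A}\circ\mu)\,,\qquad \mu\,(\eta\circ 1_{A})=1_{A}=\mu\,(1_{A}\circ\eta)\,,
\]
\[
(1_{A}\circ\mu)\,(\Delta\circ 1_{A})=\Delta\,\mu=(\mu\circ 1_{A})\,(1_{A}\circ\Delta)\,,\qquad \mu\,\Delta=1_{A}\ \ (\Delta\text{-separability})\,,
\]
together with the duals of the first line for \(\Delta,\varepsilon\).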
### Gray categories with duals

As spelled out in [BMS] and more concisely in [CMS, Sect. 3.1.2], a Gray category \(\mathcal{T}\) is a 3-categorical structure whose Hom 2-categories (with horizontal and vertical compositions "\(\circ\)" and "\(\cdot\)", respectively) are strict, as are the composition 2-functors
\[u\,\square\,(-)\colon\mathcal{T}(c,a)\longrightarrow\mathcal{T}(c,b)\,,\quad(-)\,\square\,u\colon\mathcal{T}(b,c)\longrightarrow\mathcal{T}(a,c)\tag{2.12}\]
for all 1-morphisms \(u\in\mathcal{T}(a,b)\). The only not necessarily strict aspect of \(\mathcal{T}\) is the tensorator, i. e. a natural family of 3-isomorphisms
\[\sigma_{X,Y}\colon\bigl(X\,\square\,1_{u^{\prime}}\bigr)\circ\bigl(1_{v}\,\square\,Y\bigr)\stackrel{{\cong}}{{\longrightarrow}}\bigl(1_{v^{\prime}}\,\square\,Y\bigr)\circ\bigl(X\,\square\,1_{u}\bigr)\tag{2.13}\]
for all \(\square\)-composable 2-morphisms \(Y\colon u\longrightarrow u^{\prime}\) and \(X\colon v\longrightarrow v^{\prime}\). The tensorator is subject to axioms which are manifest in the graphical calculus developed in [Tr, BMS], which we discuss next. Our conventions deviate from those in [BMS], but they are in line with those in [CMS, Sect. 3], to which we refer for more details and further illustrations. Our graphical conventions for the compositions "\(\circ\)" and "\(\cdot\)" are as in Section 2.1. The additional composition "\(\square\)" is read "from front to back". Hence for example the two sides of (2.13) are depicted by the two possible height orderings of \(X\) and \(Y\) in the front-to-back direction. (Several graphical illustrations, as well as the review of the duality structure of a Gray category with duals, namely duals for 1- and 2-morphisms together with fold and cusp 3-morphisms, are not recoverable from the source; see [BMS, CMS].) In particular, in a Gray category with duals there are canonical 3-isomorphisms \(\Phi_{Y,X}\) identifying \(X^{\#}\circ Y^{\#}\) with \((Y\circ X)^{\#}\) for composable 2-morphisms \(Y\colon v\longrightarrow w\) and \(X\colon u\longrightarrow v\), cf. [BMS, Fig. 23 c)]. Moreover, there is also a canonical 3-isomorphism
\[\Theta_{X}\colon X^{\#\#}\longrightarrow X\tag{2.18}\]
which is constructed from cusp, tensorator and adjunction 3-morphisms, cf. [BMS, Eq. (24) & Fig. 32]. These maps are compatible in the sense that
\[\begin{CD}Y^{\#\#}\circ X^{\#\#}@>{\Phi_{Y^{\#},X^{\#}}}>{}>\bigl{(}X^{\#}\circ Y^{\#}\bigr{)}^{\#}\\ @V{\Theta_{Y}\circ\Theta_{X}}V{}V@V{}V{}V\\ Y\circ X@>{}>{\Theta_{Y\circ X}^{-1}}>\bigl{(}Y\circ X\bigr{)}^{\#\#}\end{CD}\tag{2.19}\]
commutes, which means that \(\Theta_{Y}\circ\Theta_{X}\) and \(\Theta_{Y\circ X}\) are equal up to tensorator and cusp isomorphisms. Given a monoidal 2-category \(\mathcal{C}\), we denote its delooping 3-category by \(\mathrm{B}\mathcal{C}\), which by definition has only a single object whose End 2-category is \(\mathcal{C}\). Clearly \(\mathrm{B}\mathcal{C}\) is a Gray category with duals if \(\mathcal{C}\) is a pivotal monoidal 2-category with compatible duals for objects.
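For orientation we make the tensorator explicit in the delooping case (a sketch only; we suppress identity labels and coherence data of \(\mathcal{C}\)): for 2-morphisms \(X\colon v\longrightarrow v'\) and \(Y\colon u\longrightarrow u'\) of \(\mathrm{B}\mathcal{C}\), i. e. 1-morphisms of \(\mathcal{C}\), the tensorator is the interchanger
\[
\sigma_{X,Y}\colon(X\otimes 1_{u'})\circ(1_{v}\otimes Y)\stackrel{{\cong}}{{\Longrightarrow}}(1_{v'}\otimes Y)\circ(X\otimes 1_{u})\,,
\]
so the two ways of organising "\(X\otimes Y\)" agree only up to this coherent isomorphism. Naturality in both arguments,
\[
\sigma_{X',Y'}\cdot\bigl((\varphi\,\square\,1)\circ(1\,\square\,\psi)\bigr)=\bigl((1\,\square\,\psi)\circ(\varphi\,\square\,1)\bigr)\cdot\sigma_{X,Y}
\]
for 3-morphisms \(\varphi\colon X\longrightarrow X'\), \(\psi\colon Y\longrightarrow Y'\), and triviality of \(\sigma_{X,Y}\) when either argument is an identity 2-morphism are part of the Gray category axioms.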
Further examples of Gray categories with duals are naturally obtained from 3-dimensional defect TQFTs, see [CMS]: objects are labels for 3-strata, 1- and 2-morphisms are lists of labels for 2- and 1-strata, respectively, and only 3-morphisms are constructed from the TQFT itself. In particular, 3-dimensional defect state sum models, i. e. Turaev-Viro-Barrett-Westbury models with arbitrary defects, should provide 3-categories that are equivalent to Gray categories with duals. Work in this direction was carried out in [CRS1, CRS2, CRS3, Me], and it is brought to a conclusion by the main results of the present paper, see Section 6 below.

## 3 Morita categories of \(E_{1}\)-algebras

Let \(\mathcal{T}\) be a 3-category such that all Hom 2-categories admit finite sifted 2-colimits that commute with composition. (We refer to Appendix A for some background on 2-colimits; a diagram 2-category \(\mathcal{D}\) is sifted if 2-colimits of shape \(\mathcal{D}\) commute with finite products in Cat.) In this section we define a new 3-category \(\mathcal{E}(\mathcal{T})\) which locally is a version of the 3-category \(\mathrm{Alg}_{1}(\mathcal{B})\) of \(E_{1}\)-algebras in a monoidal 2-category \(\mathcal{B}\), see e. g. [JFS], in the sense that if \(\mathcal{T}\) has a single object \(*\), then \(\mathcal{E}(\mathcal{T})=\mathrm{Alg}_{1}(\mathcal{T}(*,*))\).

**Definition 3.1** (Objects).: An object \(\mathcal{A}\) of \(\mathcal{E}(\mathcal{T})\) consists of an object \(a\in\mathcal{T}\) and a unital \(E_{1}\)-algebra in the monoidal 2-category \(\mathcal{T}(a,a)\). Explicitly (the defining string diagrams are not recoverable from the source), this consists of a 1-morphism \(A\colon a\longrightarrow a\) together with 2-morphisms \(\mu\colon A\,\square\,A\longrightarrow A\) and \(u\colon 1_{a}\longrightarrow A\) as well as 3-isomorphisms
\(\alpha\colon\mu\circ(1_{A}\,\square\,\mu)\longrightarrow\mu\circ(\mu\,\square\,1_{A})\), \(u^{\mathrm{l}}\colon\mu\circ(u\,\square\,1_{A})\longrightarrow 1_{A}\) and \(u^{\mathrm{r}}\colon\mu\circ(1_{A}\,\square\,u)\longrightarrow 1_{A}\), such that the coherence diagrams (3.2) commute; these express the pentagon identity for \(\alpha\) and its compatibility with the unitors \(u^{\mathrm{l}},u^{\mathrm{r}}\).

**Definition 3.2** (1-morphisms).: Let \(\mathcal{A}=(a,A,\mu_{A},\alpha_{A},u_{A},u_{A}^{\mathrm{l}},u_{A}^{\mathrm{r}})\) and \(\mathcal{B}=(b,B,\mu_{B},\alpha_{B},u_{B},u_{B}^{\mathrm{l}},u_{B}^{\mathrm{r}})\) be objects of \(\mathcal{E}(\mathcal{T})\). A 1-morphism \(\mathcal{M}\colon\mathcal{A}\longrightarrow\mathcal{B}\) consists of a 1-morphism \(M\colon a\longrightarrow b\) together with 2-morphisms
\[\triangleright_{M}\colon B\,\square\,M\longrightarrow M\,,\qquad\triangleleft_{M}\colon M\,\square\,A\longrightarrow M\tag{3.3}\]
and 3-isomorphisms
\[u^{\mathrm{l}}_{M}\colon\triangleright_{M}\circ(u_{B}\,\square\,1_{M})\longrightarrow 1_{M}\,,\qquad u^{\mathrm{r}}_{M}\colon\triangleleft_{M}\circ(1_{M}\,\square\,u_{A})\longrightarrow 1_{M}\,,\]
\[\alpha^{\mathrm{l}}_{M}\colon\triangleright_{M}\circ(1_{B}\,\square\,\triangleright_{M})\longrightarrow\triangleright_{M}\circ(\mu_{B}\,\square\,1_{M})\,,\qquad\alpha^{\mathrm{m}}_{M}\colon\triangleright_{M}\circ(1_{B}\,\square\,\triangleleft_{M})\longrightarrow\triangleleft_{M}\circ(\triangleright_{M}\,\square\,1_{A})\,,\]
\[\alpha^{\mathrm{r}}_{M}\colon\triangleleft_{M}\circ(1_{M}\,\square\,\mu_{A})\longrightarrow\triangleleft_{M}\circ(\triangleleft_{M}\,\square\,1_{A})\tag{3.4}\]
in \(\mathcal{T}\) such that the coherence diagrams (3.5)-(3.8) commute (string diagrams not reproduced here).

**Definition 3.3** (2-morphisms).: A 2-morphism
\[\mathcal{F}\colon\mathcal{M}=\bigl(M,\triangleright_{M},\triangleleft_{M},u^{\mathrm{l}}_{M},u^{\mathrm{r}}_{M},\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M}\bigr)\longrightarrow\mathcal{M}^{\prime}=\bigl(M^{\prime},\triangleright_{M^{\prime}},\triangleleft_{M^{\prime}},u^{\mathrm{l}}_{M^{\prime}},u^{\mathrm{r}}_{M^{\prime}},\alpha^{\mathrm{l}}_{M^{\prime}},\alpha^{\mathrm{m}}_{M^{\prime}},\alpha^{\mathrm{r}}_{M^{\prime}}\bigr)\tag{3.11}\]
between 1-morphisms \(\mathcal{M},\mathcal{M}^{\prime}\colon\mathcal{A}\longrightarrow\mathcal{B}\) in \(\mathcal{E}(\mathcal{T})\) consists of a 2-morphism \(F\colon M\longrightarrow M^{\prime}\) together with 3-morphisms \(\triangleright_{F},\triangleleft_{F}\) intertwining \(F\) with the respective actions (3.12), such that the diagrams (3.13) and (3.14) commute (string diagrams not reproduced here). For later use we note that the inverses of the 3-morphisms in (3.12) are presented as in (3.15) and (3.16).

**Definition 3.4** (3-morphisms).: A 3-morphism \(\xi\colon(F,\triangleleft_{F},\triangleright_{F})\longrightarrow(G,\triangleleft_{G},\triangleright_{G})\) between 2-morphisms in \(\mathcal{E}(\mathcal{T})\) is a 3-morphism \(\xi\colon F\longrightarrow G\) in \(\mathcal{T}\) such that the compatibility relations (3.17) hold.

The 2-category structure on \(\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\) is induced from the composition in \(\mathcal{T}\) in a straightforward way.
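Before turning to composition, we note a guiding example (only meant as orientation, with the usual size and strictness caveats): for \(\mathcal{T}=\mathrm{B}\mathsf{Cat}\), the delooping of the monoidal 2-category of categories with Cartesian product, Definitions 3.1-3.4 unpack to familiar notions,
\[
\text{objects}\;\rightsquigarrow\;\text{monoidal categories}\,,\qquad\text{1-morphisms}\;\rightsquigarrow\;\text{bimodule categories}\,,
\]
\[
\text{2-morphisms}\;\rightsquigarrow\;\text{bimodule functors}\,,\qquad\text{3-morphisms}\;\rightsquigarrow\;\text{bimodule natural transformations}\,.
\]
This is an instance of the identification \(\mathcal{E}(\mathrm{B}\mathcal{C})=\mathrm{Alg}_{1}(\mathcal{C})\) mentioned above.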
The composition of 1-morphisms is given in terms of relative box products. Here is where our assumption on the existence of certain 2-colimits enters, as we discuss next. Our guide for the description below is the exposition of tricategories in [3, Sect. A.4]. Let \(\mathcal{A},\mathcal{B},\mathcal{C}\in\mathcal{E}(\mathcal{T})\). The definition of the composition \(\square_{\mathcal{B}}\colon\mathcal{E}(\mathcal{T})(\mathcal{B},\mathcal{C})\times\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\longrightarrow\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{C})\) can be split into two steps, where we refer to Appendix A for details on 2-colimits:

* First we define a 2-functor \(\square^{\Delta}_{\mathcal{B}}\colon\mathcal{E}(\mathcal{T})(\mathcal{B},\mathcal{C})\times\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\longrightarrow\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{C})^{\tau_{\leqslant 2}\Delta^{\mathrm{op}}}\) sending a pair of objects \((\mathcal{M},\mathcal{N})\) to the truncated simplicial object
\[M\,\square\,B\,\square\,B\,\square\,N\;\Rrightarrow\;M\,\square\,B\,\square\,N\;\rightrightarrows\;M\,\square\,N\,,\]
whose face maps are built from \(\triangleleft_{M}\), \(\mu_{B}\) and \(\triangleright_{N}\), and whose degeneracies come from the unit \(u_{B}\). (Some of the displayed diagrams of this step are not recoverable from the source.)
* Then \(\square_{\mathcal{B}}\) is obtained by composing \(\square^{\Delta}_{\mathcal{B}}\) with the 2-colimit 2-functor; the 2-colimit exists by our standing assumption, and, as recalled in Appendix A, it is characterised by the property that for every parallel object the canonical functor from morphisms out of the 2-colimit to cocones over the diagram is an equivalence.

The definition of the composition in terms of colimits allows us to construct coherence isomorphisms by using the universal property. This will automatically ensure that the necessary coherence conditions are satisfied. We begin with the associator, which is given by (3.23), where the upper rhombus commutes strictly (Footnote 5), and the lower subdiagrams commute because of the compatibility of the colimit with composition. Footnote 5: This is the case because the composition in \(\mathcal{T}\) is strict; otherwise its associator would enter here. The unit 1-morphisms are given by the 2-functor
\[1_{\mathcal{A}}\colon*\longrightarrow\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{A})\,,\qquad*\longmapsto\bigl(A,\mu_{A},\mu_{A},u_{A}^{\mathrm{l}},u_{A}^{\mathrm{r}},\alpha_{A},\alpha_{A},\alpha_{A}\bigr)\,.\tag{3.24}\]
Next we construct the unitors \(l_{\mathcal{A},\mathcal{B}}\) and \(r_{\mathcal{A},\mathcal{B}}\). For this consider the natural transformation (3.25), where \(c_{-}\) denotes the functor sending an object to the constant diagram, and \(\tilde{l}\) is the natural transformation with component at \(\mathcal{N}\in\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\) given by (3.26), where the vertical arrows denote the action of \(B\) on \(N\) and the coherence data for the action can be used to fill the squares. It is straightforward to see that the map \(\triangleright\colon B\,\square\,N\longrightarrow N\) defines a universal cocone, which implies that the 2-colimit of the upper diagram is \(N\), and hence that the map induced by \(\tilde{l}\) on the level of 2-colimits is an equivalence. This allows us to define \(l\) by (3.27), where the lower triangle is filled by a natural isomorphism which exists because taking 2-colimits of constant diagrams over our sifted index 2-category is isomorphic to the identity functor (this is not true for 2-colimits of constant diagrams in general). The unitor \(r\) is defined in a completely analogous way. To give the component of the pentagonator at \((\mathcal{M},\mathcal{N},\mathcal{O},\mathcal{P})\in\mathcal{E}(\mathcal{T})(\mathcal{D},\mathcal{E})\times\mathcal{E}(\mathcal{T})(\mathcal{C},\mathcal{D})\times\mathcal{E}(\mathcal{T})(\mathcal{B},\mathcal{C})\times\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\), we use the following notational convention: if we write a composite like \(\mathcal{M}\,\square_{\mathcal{D}}\,\mathcal{N}\,\square_{\mathcal{C}}\,\mathcal{O}\) without any brackets we mean the colimit over the truncated bisimplicial object featuring also in (3.23). We adopt analogous conventions for situations involving more bimodules. In this notation the associator is then the map
\[\alpha_{\mathcal{M},\mathcal{N},\mathcal{O}}\colon(\mathcal{M}\,\square_{\mathcal{D}}\,\mathcal{N})\,\square_{\mathcal{C}}\,\mathcal{O}\longrightarrow\mathcal{M}\,\square_{\mathcal{D}}\,\mathcal{N}\,\square_{\mathcal{C}}\,\mathcal{O}\longrightarrow\mathcal{M}\,\square_{\mathcal{D}}\,(\mathcal{N}\,\square_{\mathcal{C}}\,\mathcal{O})\tag{3.28}\]
where the morphisms are induced from the compatibility of composition with colimits and the Fubini theorem for colimits, cf. Appendix A.
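The 1-categorical shadow of this two-step construction may serve as orientation (an analogy only): for ordinary algebras and bimodules in \(\mathsf{Vect}\), the relative tensor product is computed by the coequaliser
\[
M\otimes_{B}N=\operatorname{coeq}\bigl(M\otimes B\otimes N\rightrightarrows M\otimes N\bigr)\,,
\]
with the two maps given by acting on \(M\) and on \(N\), respectively. One categorical level up, the coequaliser is replaced by a 2-colimit, and one more simplicial level is needed to encode coherence data.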
The pentagonator is now given by (3.29) (suppressing the symbols for relative box products; diagram not reproduced here), where the triangles commute by definition and the squares commute because of the coherence of the Fubini theorem. Finally, there are three modifications \(\lambda,\mu,\rho\) encoding the compatibility between the unit constraints \(l,r\) and the associator \(\alpha\). The one most complicated to construct is \(\mu\); the other two follow directly from the universal property of 2-colimits. The component of \(\mu\) at a pair of 1-morphisms \((\mathcal{M},\mathcal{N})\in\mathcal{E}(\mathcal{T})(\mathcal{B},\mathcal{C})\times\mathcal{E}(\mathcal{T})(\mathcal{A},\mathcal{B})\) is a 2-morphism (3.30), which we define to be (3.31), where \(r^{\prime},l^{\prime}\) are the maps induced by acting with \(B\) to the left and right in the truncated bisimplicial diagram, composed with the projection to the relative tensor product. Since the projection is balanced we get an induced natural transformation between the two cocones, which induces \(\mu^{\prime}\). The upper triangle in (3.31) commutes strictly, the outer ones by the universal property of \(2\)-colimits. This concludes the construction of the \(3\)-category \(\mathcal{E}(\mathcal{T})\).

## 4 Orbifold completion of 3-categories

In this section we introduce a categorification of the \(2\)-dimensional orbifold completion of [CR], as briefly reviewed in Section 2.1. In Section 4.1 we start out with a Gray category with duals \(\mathcal{T}\) satisfying a mild finiteness condition, and construct a \(3\)-category \(\mathcal{T}_{\mathrm{orb}}\) as a subcategory of the \(3\)-category \(\mathcal{E}(\mathcal{T})\) of Section 3. The defining conditions on the data of \(\mathcal{T}_{\mathrm{orb}}\) are such that they encode invariance under oriented Pachner moves when used as labels for stratified \(3\)-bordisms. Composition of \(1\)-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) is given by the relative product, and we show how to compute it by splitting higher idempotents. Then in Section 4.2.1 we prove that \(1\)- and \(2\)-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) have compatible adjoints, while in Section 4.2.2 we identify a universal property for \(2\)-dimensional orbifold completion, and outline its generalisation to arbitrary dimension.

### 3-category of orbifold data

Let \(\mathcal{T}\) be a Gray category with duals such that all Hom \(2\)-categories admit finite sifted \(2\)-limits and \(2\)-colimits that commute with composition. The subcategory \(\mathcal{T}_{\mathrm{orb}}\) of \(\mathcal{E}(\mathcal{T})\) is defined by imposing further conditions on its objects and morphisms which, as we will show below, ensure the existence of adjoints and allow us to compute the relative box product by splitting a "higher idempotent". The conditions we want to impose on \(\mathcal{A}\in\mathcal{E}(\mathcal{T})\) are those which make it a "special orbifold datum" in the sense of [CRS1], where the notion was introduced and studied as an algebraic structure that encodes invariance under Pachner moves in arbitrary dimension \(n\). Here we are interested in \(n=3\), see [CRS1, Def. 4.2].

**Definition 4.1**.: An object \(\mathcal{A}=(a,A,\mu,\alpha,u,u^{\mathrm{l}},u^{\mathrm{r}})\in\mathcal{E}(\mathcal{T})\) is a (\(3\)-dimensional, special) orbifold datum in \(\mathcal{T}\) if the conditions (O1)-(O8) in Figure 4.1 are satisfied, where the 2-morphisms displayed in (4.1) are constructed from the associator \(\alpha\) and the adjunction data for \(\mu\).
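For orientation, we recall the situation one dimension down, cf. [CR, CRS1]: there a special orbifold datum amounts to a \(\Delta\)-separable symmetric Frobenius algebra, and the identities
\[
\mu\,\Delta=1_{A}\,,\qquad(1_{A}\circ\mu)\,(\Delta\circ 1_{A})=\Delta\,\mu
\]
recalled in Section 2 encode invariance under the \(2\)-dimensional Pachner moves. The conditions (O1)-(O8) play the same role for \(n=3\); for instance (O1) is the algebraic incarnation of the 2-3 move, and it is used as such in the proof of Lemma 4.6 below.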
Figure 4.1: Defining conditions on orbifold data \(\mathcal{A}=(a,A,\mu,\alpha,u,u^{\mathrm{l}},u^{\mathrm{r}})\), with \(\overline{\alpha}:=\alpha^{-1}\).

We note that every \(\mathcal{A}\in\mathcal{E}(\mathcal{T})\) satisfies the conditions (O1)-(O3), which simply express the pentagon axiom and the invertibility of \(\alpha\). The other conditions (O4)-(O8) are proper constraints on \(\mathcal{A}\) (but none of them involve the unit \(u\) or the unitors \(u^{\mathrm{l}},u^{\mathrm{r}}\)).

**Remark 4.2**.:
* The original motivation to introduce \(n\)-dimensional orbifold data for arbitrary \(n\) (generalising the case \(n=2\) of [CR]) was as follows (see [CRS1] for details). Given an \(n\)-dimensional defect TQFT \(\mathcal{Z}\), i. e. a symmetric monoidal functor on a category of stratified bordisms whose \(j\)-strata are labelled with elements in prescribed sets \(D_{j}\), one can construct a closed TQFT \(\mathcal{Z}_{\mathcal{A}}\) from defect labels \(\mathcal{A}_{j}\in D_{j}\) if they satisfy certain conditions. To wit, \(\mathcal{Z}_{\mathcal{A}}\) is evaluated on any given (non-stratified) bordism by first choosing a "good" stratification, decorating its \(j\)-strata with \(\mathcal{A}_{j}\), evaluating with \(\mathcal{Z}\) and taking a colimit over all good stratifications. The latter can be taken to be Poincaré duals of oriented triangulations (or the more convenient "admissible skeleta" of [CMRSS1] for \(n=3\)). For this procedure to be well-defined for \(n=3\), we view the labels \(\mathcal{A}_{j}\) as \((3-j)\)-morphisms in the \(3\)-category \(\mathcal{T}_{\mathcal{Z}}\) associated to \(\mathcal{Z}\) in [CMS] and impose the conditions of Definition 4.1 with the identification \((a,A,\mu,\alpha)=(\mathcal{A}_{3},\mathcal{A}_{2},\mathcal{A}_{1},\mathcal{A}_{0})\).
* The definition of orbifold datum can be broadened to include additional data \(\phi\in\operatorname{Aut}(1_{1_{a}})\) and \(\psi\in\operatorname{Aut}(1_{A})\). This is often useful for applications, e. g. to Reshetikhin-Turaev theory in [CRS3, CMRSS1, CMRSS2, MR1, MR2, Mu]. As explained in [CRS3, Sect. 4.2], the technical complications involving \(\phi,\psi\) are not relevant for the development of the general theory, hence for now we disregard them. (In particular, the additional data \(\phi,\psi\) are precisely included by passing to the "Euler completion" reviewed in Section 5.)

The orbifold procedure summarised in Remark 4.2(i) produces ordinary closed TQFTs \(\mathcal{Z}_{\mathcal{A}}\) for any given defect TQFT \(\mathcal{Z}\). It is natural to allow for all defects between them and combine all \(\mathcal{Z}_{\mathcal{A}}\) into a single defect TQFT \(\mathcal{Z}_{\mathrm{orb}}\). This was done in the \(2\)-dimensional case in [CR]. Using the analysis of decomposition invariance of [CRS1, CMRSS1] we are led to the following enhancement of the \(3\)-category \(\mathcal{E}(\mathcal{T})\) of Section 3:

**Definition 4.3**.: The orbifold completion \(\mathcal{T}_{\mathrm{orb}}\) of \(\mathcal{T}\) is the sub-\(3\)-category of \(\mathcal{E}(\mathcal{T})\)
* whose objects are orbifold data, i. e.
satisfy the conditions (O1)-(O8) in Figure 4.1,
* whose \(1\)-morphisms \(\mathcal{M}=(M,\triangleright_{M},\triangleleft_{M},u^{\mathrm{l}}_{M},u^{\mathrm{r}}_{M},\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M})\) satisfy constraints analogous to those in (O1)-(O7), where some \(\alpha\) are replaced by \(\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M}\) or \(\alpha^{\mathrm{r}}_{M}\) and some \(\mu\) are replaced by \(\triangleright_{M}\) or \(\triangleleft_{M}\), as well as (4.2),
* whose \(2\)-morphisms satisfy the conditions (T1)-(T7) in Figures 4.2 and 4.3,
* and whose \(3\)-morphisms \(\xi\colon(F,\triangleleft_{F},\triangleright_{F})\longrightarrow(G,\triangleleft_{G},\triangleright_{G})\) need not satisfy any conditions beyond those in (3.17).

We note that the conditions on \(1\)-morphisms \(\mathcal{M}\) in \(\mathcal{T}_{\mathrm{orb}}\) analogous to (O1)-(O3) are precisely those in (3.5)-(3.8) together with the invertibility of \(\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M}\). The conditions analogous to (O4)-(O7) are indeed obtained in complete analogy; for example, one of the eight relations corresponding to (O6) is (4.3) (string diagram not reproduced here). In Proposition 4.5 we give a simpler condition which implies that these additional conditions are satisfied. We also observe that the conditions (T1)-(T5) on \(2\)-morphisms \(\mathcal{F}\) already hold in \(\mathcal{E}(\mathcal{T})\), see Definition 3.3, while the conditions (T6) and (T7) are proper constraints on \(\mathcal{F}\) to be in \(\mathcal{T}_{\mathrm{orb}}\). \(1\)-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) are closed under composition, given by relative products \(\mathcal{M}\,\square_{\mathcal{B}}\,\mathcal{N}\) as discussed around (3.19), thanks to the definition in terms of a universal property: the structure maps of \(\mathcal{M}\,\square_{\mathcal{B}}\,\mathcal{N}\) with respect to the actions of \(\mathcal{A}\) and \(\mathcal{C}\) are induced from those of \(\mathcal{N}\) and \(\mathcal{M}\), respectively, and are only spectators in the construction of \(\mathcal{M}\,\square_{\mathcal{B}}\,\mathcal{N}\).

**Remark 4.4**.: The definition of \(\mathcal{T}_{\mathrm{orb}}\) is such that if \(\mathcal{T}=\mathcal{T}_{\mathcal{Z}}\) is the \(3\)-category associated to a defect TQFT \(\mathcal{Z}\) as in Remark 4.2(i), then the \(k\)-morphisms in \(\left(\mathcal{T}_{\mathcal{Z}}\right)_{\mathrm{orb}}\) can be used as the \((3-k)\)-dimensional defects of the orbifold defect TQFT \(\mathcal{Z}_{\mathrm{orb}}\). Moreover, one then finds \(\left(\mathcal{T}_{\mathcal{Z}}\right)_{\mathrm{orb}}\cong\mathcal{T}_{\mathcal{Z}_{\mathrm{orb}}}\). This will be discussed in more detail in Section 6.

The conditions (T1)-(T7) on \(2\)-morphisms, together with the analogous constraints on \(1\)-morphisms, are needed to ensure that \(\mathcal{T}_{\mathrm{orb}}\) has adjoints, as we will see in Section 4.2. This gives a second, purely algebraic motivation for Definition 4.3. It is straightforward to check that an orbifold datum \(\mathcal{A}\) in \(\mathcal{T}\) induces a Frobenius algebra in the homotopy \(2\)-category of \(\mathcal{T}\). Indeed, there are "Frobeniusator" \(3\)-isomorphisms (4.4) (string diagrams not reproduced here). With this one finds that every \(\mathcal{B}\)-\(\mathcal{A}\)-bimodule \(\mathcal{M}\), i. e.
\(1\)-morphism \(\mathcal{M}\colon\mathcal{A}\longrightarrow\mathcal{B}\) in \(\mathcal{E}(\mathcal{T})\), also has the structure of a \(\mathcal{B}\)-\(\mathcal{A}\)-bicomodule with coactions (4.5) (string diagrams not reproduced here). Using that \(\mu_{A}^{\vee}\), \(\mu_{B}^{\vee}\) are adjoint to \(\mu_{A}\) and \(\mu_{B}\), respectively, we can equip the \(2\)-morphisms \(\triangleright_{M}{}^{\dagger}\) and \(\triangleleft_{M}{}^{\dagger}\) with the structure of both a left and a right adjoint to \(\triangleright_{M}\) and \(\triangleleft_{M}\), respectively. The adjunction data for \(\triangleright_{M}\) is given by (4.6) and (4.7) (string diagrams not reproduced here).
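In the 1-categorical shadow this is the familiar Frobenius yoga; as an illustration only (element notation, not part of the construction above): for a \(\Delta\)-separable Frobenius algebra \(B\) in \(\mathsf{Vect}\) with \(\Delta(1)=\sum_{i}b_{i}\otimes b^{i}\), a left module action \(\triangleright\colon B\otimes M\longrightarrow M\) has the coaction
\[
\delta\colon M\longrightarrow B\otimes M\,,\qquad\delta(m)=\sum_{i}b_{i}\otimes\bigl(b^{i}\triangleright m\bigr)
\]
as its two-sided adjoint partner: the units and counits of both adjunctions are built from the counit and comultiplication, just as (4.6) and (4.7) are built from the adjunction data of \(\mu_{B}\).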
The data for \(\triangleleft_{M}\) is completely analogous. Note that this does not imply that they agree with \(\triangleright_{M}{}^{\vee}\) and \(\triangleleft_{M}{}^{\vee}\) (which are part of the data of the Gray category with duals \(\mathcal{T}\)). Uniqueness of left and right adjoints gives two potentially different isomorphisms between \({}^{\dagger}\)- and \({}^{\vee}\)-adjoints. Using this we can give a simple condition for a \(1\)-morphism between orbifold data to be in \(\mathcal{T}_{\mathrm{orb}}\):

**Proposition 4.5**.: Let \(\mathcal{M}\colon\mathcal{A}\longrightarrow\mathcal{B}\) be a \(1\)-morphism in \(\mathcal{E}(\mathcal{T})\) between orbifold data such that the above \(3\)-isomorphisms \(\triangleright_{M}{}^{\vee}\longrightarrow\triangleright_{M}{}^{\dagger}\) and \(\triangleleft_{M}{}^{\vee}\longrightarrow\triangleleft_{M}{}^{\dagger}\) agree pairwise. Then \(\mathcal{M}\) is also a \(1\)-morphism in \(\mathcal{T}_{\mathrm{orb}}\).

Proof.: The assumption of the proposition allows us to replace \(\triangleright_{M}{}^{\vee}\) and \(\triangleleft_{M}{}^{\vee}\) with \(\triangleright_{M}{}^{\dagger}\) and \(\triangleleft_{M}{}^{\dagger}\) when verifying the analogues of (O4)-(O8) for \(\mathcal{M}\). This allows us to use the defining properties of orbifold data (O1)-(O8) to prove the relations for \(\mathcal{M}\). The verification is now a straightforward computation,
especially if one uses that the \(3\)-morphisms associated to two different "admissible" stratifications of two stratified \(3\)-balls with identical boundaries agree by the defining properties of orbifold data (see [CRS1, CMRSS1] for a detailed discussion).

In the remainder of Section 4.1, we will compute the relative product \(\square_{\mathcal{A}}\) in \(\mathcal{T}_{\mathrm{orb}}\). We stress that for this it is not necessary to impose the conditions of Figure 4.2 on \(1\)-morphisms. Hence we will work in \(\mathcal{E}(\mathcal{T})\), but we exclusively consider objects \(\mathcal{A}\in\mathcal{E}(\mathcal{T})\) that satisfy the conditions in Figure 4.1, i. e. \(\mathcal{A}\in\mathcal{T}_{\mathrm{orb}}\). For \(1\)-morphisms \(\mathcal{M}\) and \(\mathcal{N}\) in \(\mathcal{E}(\mathcal{T})\) with domain and codomain \(\mathcal{A}\), respectively, we can set
\[P\equiv P_{\mathcal{M},\mathcal{A},\mathcal{N}}:=\bigl(\triangleleft_{M}\,\square\,\triangleright_{N}\bigr)\circ\bigl(1_{M}\,\square\,(\mu^{\vee}\circ u)\,\square\,1_{N}\bigr)\colon M\,\square\,N\longrightarrow M\,\square\,N\,,\tag{4.8}\]
where here and in similar situations, we usually drop indices like "\(\mathcal{M},\mathcal{A},\mathcal{N}\)" as indicated. Importantly, \(P\) is a 2-cocone over \(M\,\square\,A\,\square\,A\,\square\,N\Rrightarrow M\,\square\,A\,\square\,N\rightrightarrows M\,\square\,N\), i. e. it is balanced with respect to the \(\mathcal{A}\)-actions (recall the discussion around (3.20)):

**Lemma 4.6**.: \(P\) together with the \(3\)-isomorphism (4.9) (string diagram not reproduced here) is a balanced 2-morphism \(M\,\square\,N\longrightarrow M\,\square\,N\).

Proof.: We have to verify the commutativity of the pentagon diagram (4.10). The left and right subdiagrams commute thanks to the 2-3 move (O1). The two paths around the middle diagram can be represented, in the graphical calculus, as two different "admissible" stratifications of two stratified 3-balls with identical boundaries. Hence by the defining properties of orbifold data, both paths are equal (see again [CRS1, CMRSS1] for a detailed discussion).

By our assumption on \(\mathcal{T}\), the 2-colimit \(M\,\square_{\mathcal{A}}\,N\) exists. Part of its structure is a (balanced) 2-morphism
\[\varPi\equiv\varPi_{\mathcal{M},\mathcal{A},\mathcal{N}}\colon M\,\square\,N\longrightarrow M\,\square_{\mathcal{A}}\,N\tag{4.11}\]
such that the pre-composition functor
\[\varPi^{*}\colon\operatorname{Hom}_{\mathcal{T}}\bigl(M\,\square_{\mathcal{A}}\,N,T\bigr)\longrightarrow\operatorname{Bal}\bigl(M\,\square\,N,T\bigr)\,,\qquad\varphi\longmapsto\varphi\circ\varPi\tag{4.12}\]
is an equivalence for every \(1\)-morphism \(T\) parallel to \(M\,\square_{\mathcal{A}}\,N\). In particular, for \(T=M\,\square\,N\) we obtain from this universal property an essentially unique \(2\)-morphism
\[\varSigma:=(\varPi^{*})^{-1}(P)\colon M\,\square_{\mathcal{A}}\,N\longrightarrow M\,\square\,N\,,\tag{4.13}\]
from which it immediately follows that
\[\varSigma\circ\varPi\cong P\,.\tag{4.14}\]
Using the graphical notation introduced in (3.19) we summarise (4.11), (4.13) and (4.14) in (4.15), where the relative product \(M\,\square_{\mathcal{A}}\,N\) is depicted by a shaded region (string diagrams not recoverable from the source). Below in Proposition 4.8 we will see that \(\varPi\) and \(\varSigma\) are part of the splitting data of the higher idempotent \(P\).
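Recall, for comparison, the uncategorified notion: in an ordinary category, an idempotent \(p\colon V\longrightarrow V\) with \(p\circ p=p\) splits if there are morphisms \(\pi\colon V\longrightarrow U\) and \(\sigma\colon U\longrightarrow V\) with
\[
\sigma\circ\pi=p\,,\qquad\pi\circ\sigma=1_{U}\,.
\]
In the definition below the equation \(p\circ p=p\) is replaced by coherent data \((f,g)\), and the splitting equations hold only up to the \(3\)-morphisms \(\varepsilon,\eta\).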
Following [DR, GJF], we introduce the latter as follows:

**Definition 4.7**.: Let \(p\colon V\longrightarrow V\) be a \(2\)-morphism in a \(3\)-category.
1. The structure of a \(2\)-idempotent on \(p\) is given by \(3\)-morphisms \(f\colon p\circ p\longrightarrow p\) and \(g\colon p\longrightarrow p\circ p\) such that \(fg=1_{p}\), and such that \(f\) and \(g\) give \(p\) the structure of a non-unital \(\Delta\)-separable Frobenius algebra.
2. A splitting of a \(2\)-idempotent \((p,f,g)\) is given by two \(2\)-morphisms \(\pi\colon V\longrightarrow U\), \(\sigma\colon U\longrightarrow V\) together with two \(3\)-morphisms \(\varepsilon\colon\pi\circ\sigma\longrightarrow 1_{U}\), \(\eta\colon 1_{U}\longrightarrow\pi\circ\sigma\) such that \(\sigma\circ\pi\cong p\) and \(\varepsilon\eta=1_{1_{U}}\), and \(f,g\) are induced by \(\varepsilon,\eta\). Then \(U\) is the image of the split idempotent \((p,\pi,\sigma,\varepsilon,\eta)\).

**Proposition 4.8**.: Let \(\mathcal{A}\) be an orbifold datum in \(\mathcal{T}\), and let \(\mathcal{M},\mathcal{N}\) be \(1\)-morphisms in \(\mathcal{E}(\mathcal{T})\) such that \(M\,\square_{\mathcal{A}}\,N\) exists. Then \(M\,\square_{\mathcal{A}}\,N\) is the image of the split \(2\)-idempotent \((P,\varPi,\varSigma,\varepsilon,\eta)\), where \(\varepsilon,\eta\) are induced by \(\widetilde{\operatorname{ev}}_{\mu},\operatorname{coev}_{\mu}\) (as explained around (4.16) below), respectively.

Proof.: We first observe that \(\varPi\circ\varSigma\circ\varPi\cong\varPi\circ P\cong\varPi\), using the balancing of \(\varPi\) and the defining properties of orbifold data; the graphical computations establishing this, as well as the construction of \(\varepsilon\) and \(\eta\) from \(\widetilde{\operatorname{ev}}_{\mu}\) and \(\operatorname{coev}_{\mu}\) in (4.16)-(4.18), are not recoverable from the source. In the graphical calculus we again use the shading of (4.15) for the weak inverse of \(\varPi^{*}\). Hence applying \(\varPi^{*}=(-)\circ\varPi\) to the image of \((\varPi^{*})^{-1}\) amounts to deleting the shading associated to the relative product.

Given \(\mathcal{A},\mathcal{M},\mathcal{N}\) as above, the categories \(\operatorname{Cobal}(S,M\,\square\,N)\) of cobalanced \(2\)-morphisms \(S\longrightarrow M\,\square\,N\) are defined dually to \(\operatorname{Bal}(M\,\square\,N,T)\). In particular, objects \(\xi\) of \(\operatorname{Cobal}(S,M\,\square\,N)\) come together with \(3\)-isomorphisms (4.19) that are subject to analogous coherence constraints. Furthermore, the corelative product \(M\,\square^{\mathcal{A}}\,N\) may be dually defined as a \(2\)-limit involving a structure map \(\widetilde{\Sigma}\colon M\,\square^{\mathcal{A}}\,N\longrightarrow M\,\square\,N\), or equivalently by the universal property that there are equivalences
\[\widetilde{\Sigma}_{*}\colon\operatorname{Hom}_{\mathcal{T}}\bigl(S,M\,\square^{\mathcal{A}}\,N\bigr)\stackrel{{\cong}}{{\longrightarrow}}\operatorname{Cobal}\bigl(S,M\,\square\,N\bigr)\,,\qquad\varphi\longmapsto\widetilde{\Sigma}\circ\varphi\,.\tag{4.20}\]
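Proposition 4.8 categorifies a standard fact about \(\Delta\)-separable Frobenius algebras (an illustration only, in element notation): for bimodules \(M,N\) over such an algebra \(B\) in \(\mathsf{Vect}\), with \(\Delta(1)=\sum_{i}b_{i}\otimes b^{i}\), the analogue of \(P\) is the idempotent
\[
P(m\otimes n)=\sum_{i}(m\triangleleft b_{i})\otimes(b^{i}\triangleright n)\,,\qquad M\otimes_{B}N\cong\operatorname{im}(P)\,,
\]
and \(\varPi,\varSigma\) correspond to the corestriction of \(P\) to its image and the inclusion of the image, respectively.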
Analogously to the case of \(\square_{\mathcal{A}}\) one finds that there is a \(2\)-morphism \(\widetilde{\varPi}:=(\widetilde{\Sigma}_{*})^{-1}(P)\colon M\,\square\,N\longrightarrow M\,\square^{\mathcal{A}}\,N\) such that \(\widetilde{\varPi}\circ\widetilde{\Sigma}\cong 1_{M\,\square^{\mathcal{A}}\,N}\), which is also part of a splitting of \(P\). Since such splittings are essentially unique, we must have \(\varPi\cong\widetilde{\varPi}\) and \(\varSigma\cong\widetilde{\Sigma}\), and hence
\[M\,\square_{\mathcal{A}}\,N\cong M\,\square^{\mathcal{A}}\,N\,.\tag{4.21}\]
In fact we can, and do, choose \(\varPi=\widetilde{\varPi}\), \(\varSigma=\widetilde{\Sigma}\) and write (4.22), so that
\[\varPi\cong(\varSigma_{*})^{-1}(P)\colon M\,\square\,N\longrightarrow M\,\square_{\mathcal{A}}\,N\,,\tag{4.23}\]
which corresponds to (4.18) and (4.12), respectively. As an application that will be useful below in Section 4.2, we note that while \(\mu\circ\mu^{\vee}\not\cong 1_{A}\) in general, we have:

**Lemma 4.9**.: For any orbifold datum \(\mathcal{A}\) in \(\mathcal{T}\), it holds that (4.24) (string diagram not reproduced here).

Proof.: Since (4.25) holds by definition, the isomorphism (4.24) exists if the relation (4.26) for \((\varSigma_{*})^{-1}(\mu^{\vee})\) does. But since \(\varSigma_{*}\) is an equivalence, this is equivalent to (4.27). (The string diagrams in (4.25)-(4.27) are not reproduced here.)

### Properties

#### 4.2.1 Adjoints

In this section we show that \(1\)- and \(2\)-morphisms in \(\mathcal{T}_{\mathrm{orb}}\) have adjoints. Let \(\mathcal{T}\) be a Gray category with duals as before, and let \(\mathcal{F}=(F,\triangleright_{F},\triangleleft_{F})\colon\mathcal{M}\longrightarrow\mathcal{M}^{\prime}\) be a \(2\)-morphism in \(\mathcal{T}_{\mathrm{orb}}\) as in Definitions 3.3 and 4.3. We set (4.28) (string diagrams not reproduced here) and observe that
\[\mathcal{F}^{\vee}=\bigl(F,\triangleright_{F},\triangleleft_{F}\bigr)^{\vee}\stackrel{{\mathrm{def}}}{{=}}\bigl(F^{\vee},\widetilde{\triangleright}_{F},\widetilde{\triangleleft}_{F}\bigr)\tag{4.29}\]
is a \(2\)-morphism in \(\mathcal{T}_{\mathrm{orb}}\). Indeed, the defining properties hold thanks to the properties of \(\triangleright_{F},\triangleleft_{F}\) and the Zorro moves for \(F\) in \(\mathcal{T}\).

**Lemma 4.10**.: Let \((F,\triangleright_{F},\triangleleft_{F})\) be a \(2\)-morphism in \(\mathcal{T}_{\mathrm{orb}}\) such that \(\widetilde{\triangleright}_{F},\widetilde{\triangleleft}_{F}\) are invertible and (4.30) holds. Then \((F,\triangleright_{F},\triangleleft_{F})^{\vee}\) is left and right adjoint to \((F,\triangleright_{F},\triangleleft_{F})\) in \(\mathcal{T}_{\mathrm{orb}}\), as witnessed by the adjunction \(3\)-morphisms of \(F\) in \(\mathcal{T}\).

Proof.: To show that \(\mathcal{F}^{\vee}\) is a left adjoint we note that the identity (4.31) holds (string diagram not reproduced here), and a similar identity holds for \(\widetilde{\triangleright}_{F}\). Hence \(\operatorname{ev}_{F}\) is indeed a \(3\)-morphism in \(\mathcal{E}(\mathcal{T})\). Compatibility of \(\operatorname{coev}_{F}\) and \(\widetilde{\triangleleft}_{F},\widetilde{\triangleright}_{F}\) is checked analogously, and the Zorro moves in \(\mathcal{T}_{\mathrm{orb}}\) are identical to those in \(\mathcal{T}\). Note that we do not need the assumption (4.30) for \(\mathcal{F}^{\vee}\) to be left adjoint to \(\mathcal{F}\). To show that \(\mathcal{F}^{\vee}\) is also right adjoint to \(\mathcal{F}\), we compute (4.32) (string diagrams not reproduced here), where we used a Zorro move and (4.30) in the first step, and the fact that \(\widetilde{\triangleleft}_{F}^{-1}\) is left inverse to \(\widetilde{\triangleleft}_{F}\) in the second step.
This proves compatibility of \(\widetilde{\operatorname{coev}}_{F}\) and \(\widetilde{\triangleleft}_{F}\), and the other three compatibility conditions are checked analogously.

We note that the conditions in (4.30) are precisely the conditions (T6) and (T7). These relations generalise [10, (T6) & (T7)], which in turn have those in [18, Fig. 3.1] as a motivating special case. We now turn to adjoints of \(1\)-morphisms \(\mathcal{M}=(M,\triangleright_{M},\triangleleft_{M},u^{\mathrm{l}}_{M},u^{\mathrm{r}}_{M},\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M})\colon\mathcal{A}\longrightarrow\mathcal{B}\) in \(\mathcal{T}_{\mathrm{orb}}\), cf. Definitions 3.2 and 4.3. To lift the adjoint \(M^{\#}\) of \(M\) in \(\mathcal{T}\) to an adjoint \(\mathcal{M}^{\#}\) of \(\mathcal{M}\) in \(\mathcal{T}_{\mathrm{orb}}\), we first set (4.33) (string diagrams not reproduced here). Here the \(3\)-isomorphisms \(\mathcal{P}_{\mathcal{A}},\mathcal{P}_{\mathcal{B}}\) are obtained from the \(3\)-isomorphisms \(\Theta_{\triangleleft_{M}},\Theta_{\triangleright_{M}}\) (recall (2.18)) and the cusp isomorphisms for \(A,B,M\). It then follows from the relations (2.19) that the diagram (4.34) commutes, and similarly for the \(\mathcal{A}\)-action. Next we define \(u^{\mathrm{l}}_{M^{\#}}\) and \(u^{\mathrm{r}}_{M^{\#}}\) in (4.35) and (4.36), and \(\alpha^{\mathrm{l}}_{M^{\#}}\) in (4.37); the defining string diagrams are not recoverable from the source.
We want to identify \(\mathcal{M}^{\#}\) as a left adjoint of \(\mathcal{M}\) in \(\mathcal{T}_{\rm orb}\). To do so, we first define the 3-isomorphism (4.42) Figure 4.4: The 2-3 move involving \(\alpha^{\rm m}_{M^{\#}}\) and \(\alpha^{1}_{M^{\#}}\) states that the outer paths between the five shaded 2-morphisms are equal; non-labelled arrows are given by cusp and tensorator structure 3-morphisms. where we use \[\xi:=\raisebox{-14.226378pt}{\includegraphics[]{figures/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig
/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/figfig/fig/figfig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/figfig/fig/fig/fig/fig/fig/figfig/fig/figfig/fig/fig/figfig/fig/fig/figfig/fig/fig/figfig/fig/fig/fig/figfig/fig/figfig/fig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/figfig/fig/fig/figfig/figfig/fig/figfig/figfig/fig/fig/figfig/fig/fig/figfig/fig/figfig/fig/fig/figfig/figfig/fig/figfig/figfig/figfig/figfig/fig/figfig/fig/figfig/figfig/figfig/figfig/figfig/fig/figfig/figfig/fig/figfig/figfig/fig/figfig/figfig/fig/figfig/figfig/figfig/fig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfigfig/figfig/figfig/figfig/figfigfig/figfig/figfig/figfig/fi
gfigfig/fig/figfigfig/figfig/figfigfig/figfig/figfig/figfig/figfig/figfigfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfigfig/figfigfig/figfig/figfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfigfig/figfig/figfig/figfig/figfigfig/figfigfig/figfigfig/figfig/figfigfig/figfigfig/figfig/figfig/figfig/figfig/figfig/figfig/figfig/figfigfig/figfigfig/figfigfig/figfigfig/figfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfig/figfigfig/figfig/figfigfig/figfig/figfigfig/figfig/figfig/figfig/figfig/figfig/figfigfig/fig only for the most involved case of \(\spherical_{\text{ev}_{\mathcal{M}}}\). Here the constraint to be checked is \[\includegraphics[width=142.26378pt]{figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/f
igs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figs/figs/figsfigs/figfigs/figs/figs/figs/figsfigs/figs/figs/figsfigs/figs/figs/figs/figs/figsfigs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figfigs/figs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figs/figsfigs/figs/figs/figfigs/figs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figs/figs/figs/figs/figs/figsfigs/figs/figs/figs/figsfigs/figs/figs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figsfigs/figs/figsfigs/figs/figsfigs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figfigs/figs/figs/figsfigs/figsfigs/figs/figsfigs/figs/figs/figsfigs/figs/figsfigs/figs/figsfigs/figsfigs/figsfigs/figs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figfigs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figsfigs/figs/figsfigs/figsfigs/figs/figsfig where we used (4.12), (4.23) as well as \(\varPi^{*}=(-)\circ\varPi\) and \(\varSigma_{*}=\varSigma\circ(-)\). The other Zorro move follows more directly with the help of (4.9): (4.53) **Remark 4.13**.: In the case that \(\mathcal{T}\) is symmetric monoidal we expect \(\mathcal{T}_{\mathrm{orb}}\) to be symmetric monoidal as well. 
If \(\mathcal{T}\) admits duals for objects we expect in addition that the objects in \(\mathcal{T}_{\mathrm{orb}}\) also admit duals. In the case that \(\mathcal{T}\) has only one object this follows from the fact that every algebra is \(1\)-dualisable in the usual Morita category, see e. g. [GS]. Together with Proposition 4.12 and Lemma 4.10 this would imply that every object in \(\mathcal{T}_{\mathrm{orb}}\) is fully dualisable. Based on the close connection to oriented theories we expect all objects in \(\mathcal{T}_{\mathrm{orb}}\) to have a canonical \(\mathrm{SO}(3)\)-homotopy fixed-point structure, in the sense that they trivialise the \(\mathrm{SO}(3)\)-action on \(\mathcal{T}_{\mathrm{orb}}^{\times}\). Furthermore, the \(1\)-morphisms should come with a relative \(\mathrm{SO}(2)\)-homotopy fixed-point structure as discussed in [Lu, Sect. 4.3].

#### 4.2.2 Universal property

In this section we discuss a universal property for orbifold completion. Our discussion is rigorous in dimensions \(1\) and \(2\), but it is short on technical details in the \(3\)- and higher-dimensional case. This can be understood as an extension of higher idempotent completions as developed in [GJF] to pivotal higher categories.

Recall that the idempotent completion of a \(1\)-category \(\mathcal{C}\) is a fully faithful functor \(\mathcal{C}\hookrightarrow\overline{\mathcal{C}}\) to a category \(\overline{\mathcal{C}}\) in which every idempotent splits, such that the following universal property holds: for every functor \(\mathcal{C}\longrightarrow\overline{\mathcal{D}}\) with idempotent complete codomain, there exists an essentially unique functor \(\overline{\mathcal{C}}\longrightarrow\overline{\mathcal{D}}\) such that (4.54) commutes up to isomorphism. It is straightforward to check that the category whose objects are idempotents in \(\mathcal{C}\) and whose morphisms \(e\longrightarrow e^{\prime}\) are morphisms in \(\mathcal{C}\) that are invariant under pre- and post-composition with \(e\) and \(e^{\prime}\), respectively, satisfies this universal property.

The \(2\)-dimensional case is completely analogous. The idempotent completion of a \(2\)-category \(\mathcal{B}\) is a fully faithful \(2\)-functor \(\mathcal{B}\hookrightarrow\overline{\mathcal{B}}\) to a \(2\)-category \(\overline{\mathcal{B}}\) in which every \(2\)-idempotent splits (in the sense of Definition 4.7 with \(2\)-morphisms replaced by \(1\)-morphisms, and \(3\)-morphisms replaced by \(2\)-morphisms) and every Hom category is idempotent complete, such that the following universal property holds: for every \(2\)-functor \(\mathcal{B}\longrightarrow\overline{\mathcal{D}}\) with idempotent complete codomain, there exists an essentially unique \(2\)-functor \(\overline{\mathcal{B}}\longrightarrow\overline{\mathcal{D}}\) such that (4.55) commutes up to equivalence. As explained in [GJF] (see [Fr] for a detailed discussion), \(\overline{\mathcal{B}}\) can be taken to be the \(2\)-category of not necessarily (co)unital \(\Delta\)-separable algebras, their bimodules and bimodule maps in \(\mathcal{B}\) (after idempotent completing the Hom categories of \(\mathcal{B}\)). Apart from the structure of units and counits, this is precisely the "equivariant completion" \(\mathcal{B}_{\mathrm{eq}}\) introduced in [CR]. As argued in [GJF], this further generalises to a notion of idempotent completion of \(n\)-categories for arbitrary dimension \(n\). This notion of "Karoubi envelope" or "condensation monads", i. e.
idempotents for \(n=1\) and \(\Delta\)-separable Frobenius algebras for \(n=2\), is motivated by the cobordism hypothesis for fully extended framed TQFTs.

Our notion of orbifold completion arises in the context of oriented TQFTs. For \(n=1\) this coincides with idempotent completion, in line with the fact that framings and orientations are equivalent for \(1\)-dimensional manifolds. For \(n=2\), we recalled the orbifold completion \(\mathcal{B}_{\mathrm{orb}}\) of a pivotal \(2\)-category \(\mathcal{B}\) in Definition 2.4. To define a natural variant of split idempotents in such a context, recall that for every \(1\)-morphism \(X\colon\alpha\longrightarrow\beta\) in a pivotal \(2\)-category \(\mathcal{B}\) the morphism \(X^{\vee}\circ X\) has a canonical Frobenius algebra structure [CR, Prop. 4.3], where for example the multiplication is induced by the evaluation:

\[X^{\vee}\circ X\circ X^{\vee}\circ X\xrightarrow{\;\operatorname{id}_{X^{\vee}}\circ\,\widetilde{\operatorname{ev}}_{X}\circ\,\operatorname{id}_{X}\;}X^{\vee}\circ X\,. \tag{4.56}\]

Based on this observation we make the following:

**Definition 4.14**.: Let \(\mathcal{B}\) be a pivotal \(2\)-category, and let \(\alpha,\beta\in\mathcal{B}\).

1. An orbifold condensation of \(\alpha\) onto \(\beta\) is a \(1\)-morphism \(X\colon\alpha\longrightarrow\beta\) such that \(1_{\beta}\xrightarrow{\operatorname{coev}_{X}}X\circ X^{\vee}\xrightarrow{\widetilde{\operatorname{ev}}_{X}}1_{\beta}\) is the identity \(2\)-morphism on \(1_{\beta}\).
2. An orbifold datum \(A\colon\alpha\longrightarrow\alpha\) in \(\mathcal{B}\) splits if there exists an orbifold condensation \(X\) of \(\alpha\) such that \(X^{\vee}\circ X\cong A\) as Frobenius algebras, where \(X^{\vee}\circ X\) is equipped with the canonical Frobenius structure recalled above.

It is immediate from the definition that if \(X\) is an orbifold condensation, then \(X^{\vee}\circ X\) carries the structure of an orbifold datum. The pair of maps \((\widetilde{\operatorname{ev}}_{X},\operatorname{coev}_{X})\) is a \(1\)-condensation of \(X\circ X^{\vee}\) onto \(1_{\beta}\) as defined in [GJF]. Orbifold completion in dimension two satisfies the property that every orbifold datum splits:

**Proposition 4.15**.: Let \(\mathcal{B}\) be a pivotal \(2\)-category. Then every orbifold datum in \(\mathcal{B}_{\mathrm{orb}}\) splits.

Proof.: Let \(A\colon\alpha\longrightarrow\alpha\) be an object in \(\mathcal{B}_{\mathrm{orb}}\), and let \(\mathcal{A}\colon A\longrightarrow A\) be an orbifold datum in \(\mathcal{B}_{\mathrm{orb}}\). It is straightforward to check that the Frobenius algebra structure of \(\mathcal{A}\) in \(\mathcal{B}_{\mathrm{orb}}\) induces the structure of an orbifold datum on the underlying \(1\)-morphism of \(\mathcal{A}\) in \(\mathcal{B}\), see the proof of [CR, Prop. 4.2] for details. We denote this object of \(\mathcal{B}_{\mathrm{orb}}\) by \((\alpha,\mathcal{A})\), and write \((\alpha,A)\in\mathcal{B}_{\mathrm{orb}}\) for our chosen orbifold datum \(A\colon\alpha\longrightarrow\alpha\). The underlying \(1\)-morphism \(\mathcal{A}\) in \(\mathcal{B}\) has a canonical \(\mathcal{A}\)-\(A\)-bimodule structure, which makes it a \(1\)-morphism \(X:={}_{\mathcal{A}}\mathcal{A}_{A}\colon(\alpha,A)\longrightarrow(\alpha,\mathcal{A})\) in \(\mathcal{B}_{\mathrm{orb}}\). We claim that \(X\) is an orbifold condensation of \((\alpha,A)\) onto \((\alpha,\mathcal{A})\) in \(\mathcal{B}_{\mathrm{orb}}\).
To see this, we note that by composing the two middle arrows in (4.57) to form an endomorphism of \((\alpha,A)\), we directly obtain the orbifold datum \(\mathcal{A}\colon(\alpha,A)\longrightarrow(\alpha,A)\) in \(\mathcal{B}_{\mathrm{orb}}\), while the opposite composition \(X\circ X^{\vee}\) condenses onto \(1_{(\alpha,\mathcal{A})}\). Indeed, this splitting is witnessed by the adjunction \(2\)-morphisms \(\widetilde{\operatorname{ev}}_{X},\operatorname{coev}_{X}\) which are induced from the (co)multiplication of \(\mathcal{A}\).

Better still, \(\mathcal{B}_{\mathrm{orb}}\) is universal among all pivotal \(2\)-categories in which all orbifold data split:

**Proposition 4.16**.: Let \(\mathcal{B}\) be a pivotal 2-category whose Hom categories are idempotent complete. Then the inclusion \(\mathcal{B}\hookrightarrow\mathcal{B}_{\mathrm{orb}}\), \(\alpha\longmapsto(\alpha,1_{\alpha})\) satisfies the following universal property: for every pivotal 2-functor \(\mathcal{B}\longrightarrow\overline{\mathcal{D}}\) with pivotal codomain in which every orbifold datum splits, there exists an essentially unique pivotal 2-functor \(\mathcal{B}_{\mathrm{orb}}\longrightarrow\overline{\mathcal{D}}\) such that (4.58) commutes up to equivalence.

Proof.: Let \(F\colon\mathcal{B}\longrightarrow\overline{\mathcal{D}}\) be a pivotal 2-functor. Then its lift \(\mathcal{B}_{\mathrm{orb}}\longrightarrow\overline{\mathcal{D}}\) sends \((\alpha,A)\in\mathcal{B}_{\mathrm{orb}}\) to the image of the splitting of the orbifold datum \(F(A)\colon F(\alpha)\longrightarrow F(\alpha)\), which exists by assumption, and it is essentially unique by construction.

As a direct consequence, we recover the fact that \(\left(\mathcal{B}_{\mathrm{orb}}\right)_{\mathrm{orb}}\cong\mathcal{B}_{\mathrm{orb}}\). Moreover, the universal property of \(\mathcal{B}_{\mathrm{orb}}\) should imply that various potential additional structures on \(\mathcal{B}\) are inherited by \(\mathcal{B}_{\mathrm{orb}}\), in particular braided monoidal structures.

The above suggests the following universal property for an algebraic notion of orbifold completion in arbitrary dimension \(n\). Let \(\mathcal{T}\) be an \(n\)-category with compatible adjoints for all morphisms and orbifold complete Hom \((n-1)\)-categories. For a 1-morphism \(X\colon\alpha\longrightarrow\beta\) consider the endomorphism \(A\coloneqq X^{\vee}\circ X\colon\alpha\longrightarrow\alpha\). The adjunction data induce on \(A\) the potential structure of an orbifold datum. In case this satisfies the orbifold conditions we call \(X\) an orbifold condensation of \(\alpha\) onto \(\beta\). An orbifold datum in \(\mathcal{T}\) splits if there exists an orbifold condensation giving rise to it. Then the orbifold completion \(\mathcal{T}_{\mathrm{orb}}\) satisfies the universal property that for every pivotal \(n\)-functor \(F\colon\mathcal{T}\longrightarrow\overline{\mathcal{U}}\) whose codomain has compatible adjoints and in which every orbifold datum splits, there exists an essentially unique lift of \(F\) to an \(n\)-functor \(\mathcal{T}_{\mathrm{orb}}\longrightarrow\overline{\mathcal{U}}\). As in the 2-categorical case the splitting of an orbifold datum \(\mathcal{A}\) in the orbifold completion should be given by \(\mathcal{A}\) seen as a 1-morphism in \(\mathcal{T}_{\mathrm{orb}}\).

## 5 Examples of completions

In this section we study some natural examples of 3-dimensional orbifold completion and show how they relate to the existing literature.
We start, in Section 5.1, by studying the delooping of 2-dimensional state sum models described by the pivotal 2-category of separable symmetric Frobenius algebras ssFrob. Then in Section 5.2 we turn to an example involving modular fusion categories.

The 3-categories \(\mathcal{T}\) appearing in this section are not strict enough to be Gray categories with duals. We expect that our construction of the 3-category \(E(\mathcal{T})\) can be adapted to work for less strict 3-categories. However, an appropriate weakening of the notion of a generic 3-category with duals has not appeared in the literature, to the best of our knowledge. Giving a general description of the subcategory \(\mathcal{T}_{\mathrm{orb}}\) is not straightforward. In the examples we will consider there are however natural guesses on how to adapt the conditions. Another approach to the problem is to construct strictifications of the 3-categories which appear. One way of doing this is to first construct a 3-dimensional defect TQFT and then extract the corresponding Gray category with duals constructed in [CMS].

### State sum models

Here we discuss how state sum models fit within the theory of orbifold completion. In Section 5.1.1 we briefly recall the 2-dimensional case and formalise the notion of the "Euler completion" of a pivotal 2-category. In Section 5.1.2 we build on [He] to construct a pivotal equivalence between the 2-categories of separable symmetric Frobenius \(\Bbbk\)-algebras and of semisimple \(\Bbbk\)-linear Calabi-Yau categories. One of our main applications is then proved in Section 5.1.3, where we show that the 3-category of spherical fusion categories and bimodule categories with trace, constructed by Schaumann in [12], is a subcategory of the orbifold completion of the Euler completion of the delooping of the 2-categories considered in Section 5.1.2.

#### 5.1.1 Recollection on 2-dimensional state sum models

We start by briefly recalling the construction of 2-dimensional state sum models from separable symmetric Frobenius algebras, from the perspective of orbifold completion. The pivotal 2-category of defects in the trivial 2-dimensional field theory is the delooping \(\mathrm{BVect}\), where \(\mathrm{Vect}\) is the category of finite-dimensional vector spaces. Its orbifold completion is the pivotal 2-category \(\Delta\mathrm{ssFrob}\) of \(\Delta\)-separable symmetric Frobenius algebras, bimodules and bimodule maps, see Section 2.1. The category \(\Delta\mathrm{ssFrob}\) can be used to construct 2-dimensional defect TQFTs of state sum type; however, not all of them. For example, among the invertible state sum models with underlying vector space \(A=\mathds{C}\), only those whose counit \(\varepsilon\) is multiplication with \(\pm 1\) are \(\Delta\)-separable, while for every invertible \(\lambda\in\mathds{C}^{\times}\) there is a TQFT \(\mathcal{Z}_{\lambda}^{\mathrm{eu}}\) with counit \(\varepsilon(c)=\lambda c\) that assigns the invariant \(\lambda^{2g-2}\) to a closed surface of genus \(g\). Note that here \(2g-2\) is minus the Euler characteristic of the surface. The TQFT \(\mathcal{Z}_{\lambda}^{\mathrm{eu}}\) can be obtained from \(\mathcal{Z}_{\pm 1}^{\mathrm{eu}}\) by the procedure of "Euler completion" described in [11, Sect. 2.5]. This example suggests that we should consider the Euler completion of the defect theory corresponding to \(\Delta\mathrm{ssFrob}\) in general. We now describe the Euler completion \(E(\mathcal{B})\) of a pivotal 2-category \(\mathcal{B}\).
This follows closely [11, Sect. 4.2] and [12, App. B]. An object of \(E(\mathcal{B})\) is a pair of an object \(b\in\mathcal{B}\) and an element \(\psi_{b}\in\mathrm{Aut}(1_{b})\). 1- and 2-morphisms are the same as those of \(\mathcal{B}\) and their composition is unchanged. The pivotal structure is changed by setting the adjunction 2-morphisms of 1-morphisms \(X\colon(b,\psi_{b})\longrightarrow(b^{\prime},\psi^{\prime}_{b^{\prime}})\) to be (5.1)

**Remark 5.1**.: Let \(\mathcal{Z}\) be a 2-dimensional defect TQFT, let \(\mathcal{Z}^{\odot}\) be its Euler completion as in [11, Def. 2.24], and let \(\mathcal{B}_{\mathcal{Z}}\) and \(\mathcal{B}_{\mathcal{Z}^{\odot}}\) be the associated pivotal 2-categories as constructed in [10]. Then it follows directly from the definitions that \(E(\mathcal{B}_{\mathcal{Z}})\) is not exactly equal to \(\mathcal{B}_{\mathcal{Z}^{\odot}}\), but they are pivotally equivalent by a rescaling of 2-morphisms as in [12, App. B].

Note that in the case \(\mathcal{B}=\Delta\mathrm{ssFrob}\) the elements of \(\mathrm{Aut}(1_{A})\) are the invertible elements in the centre of \(A\). There is a canonical equivalence of 2-categories \(\varGamma\colon\mathrm{ssFrob}\longrightarrow E(\Delta\mathrm{ssFrob})\), where ssFrob is the 2-category of all separable symmetric Frobenius algebras. We use this equivalence to define a pivotal structure on ssFrob. To write down the equivalence explicitly recall [13] that for every separable symmetric Frobenius algebra \((A,1,\varepsilon,\mu,\Delta)\) in \(\mathrm{Vect}\), the characteristic element \(\omega\coloneqq\mu\circ\Delta(1)\) is an invertible element of the centre \(Z(A)\). The equivalence \(\varGamma\) sends \(A\) to \((A,1,\mu,\Delta^{\prime}=\Delta\circ\mu(-\otimes\omega^{-1}),\varepsilon^{\prime}=\varepsilon\circ\mu(-\otimes\omega))\) together with \(\psi_{\varGamma(A)}=\omega\). This shows that the 2-category of 2-dimensional defect state sum models can be identified with ssFrob, in agreement with the answer expected from the stratified cobordism hypothesis. In the next section we show that ssFrob is equivalent to the 2-category of semisimple Calabi-Yau categories, which will be helpful when analysing the orbifold completion of \(\mathrm{BssFrob}\).

#### 5.1.2 Oriented Eilenberg-Watts theorem

Let \(\Bbbk\) be an algebraically closed field. The (semisimple) classical Eilenberg-Watts theorem is the statement that the 2-category \(\mathrm{Alg}^{\mathrm{s}}\) of semisimple \(\Bbbk\)-algebras, bimodules and bimodule maps is equivalent to the 2-category 2Vect of Kapranov-Voevodsky 2-vector spaces (i. e. semisimple \(\Bbbk\)-linear abelian categories with finitely many isomorphism classes of simple objects), linear functors and natural transformations. The symmetric monoidal equivalence \(\mathrm{EW}\colon\mathrm{Alg}^{\mathrm{s}}\longrightarrow 2\mathrm{Vect}\) sends a semisimple algebra \(A\) to its category of finite-dimensional left modules \(A\)-Mod, a bimodule \({}_{B}Y_{A}\) to the linear functor

\[\mathrm{EW}(Y)\colon A\text{-Mod}\longrightarrow B\text{-Mod} \tag{5.2}\]
\[M\longmapsto Y\otimes_{A}M \tag{5.3}\]

and a bimodule map \(f\colon Y\longrightarrow X\) to the corresponding natural transformation constructed from the morphisms \(\operatorname{EW}(f)_{M}\colon Y\otimes_{A}M\xrightarrow{f\otimes_{A}\operatorname{id}_{M}}X\otimes_{A}M\).
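As a concrete illustration of both ingredients (our own example, not spelled out at this point in the text), take the matrix algebra \(A=\operatorname{Mat}_{n}(\Bbbk)\) with Frobenius form \(\varepsilon=\operatorname{tr}\), assuming \(\operatorname{char}\Bbbk=0\). The copairing dual to the trace pairing, and the resulting characteristic element, are

\[\Delta(1)=\sum_{i,j=1}^{n}e_{ij}\otimes e_{ji}\,,\qquad\omega=\mu\circ\Delta(1)=\sum_{i,j=1}^{n}e_{ij}\,e_{ji}=n\cdot 1_{A}\,,\]

where \(e_{ij}\) denote the matrix units. Thus \(\omega\) is indeed central and invertible, \(\varGamma\) rescales the Frobenius structure of \(A\) by \(\omega=n\cdot 1_{A}\), and \(\operatorname{EW}\) sends \(A\) to \(A\text{-Mod}\simeq\operatorname{Vect}\), generated by the unique simple module \(\Bbbk^{n}\).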
The maximal subgroupoids \((\operatorname{Alg}^{\operatorname{s}})^{\times}\) and \(2\text{Vect}^{\times}\) on both sides of the equivalence classify the space of fully extended framed TQFTs with the corresponding target. Passing to the 2-groupoid of homotopy fixed-points for the \(\operatorname{SO}(2)\)-action coming from the cobordism hypothesis induces an equivalence \(\operatorname{ssFrob}^{\times}\longrightarrow(\text{CYcat}^{\operatorname{s}})^{\times}\) between the cores of the 2-category of separable symmetric Frobenius algebras with compatible bimodules and of the 2-category of Calabi-Yau categories and Calabi-Yau functors [He]. Both sides classify fully extended oriented TQFTs with values in Alg and 2Vect, respectively. We extend this to an equivalence of pivotal 2-categories,

\[\operatorname{EW}^{\operatorname{or}}\colon\operatorname{ssFrob}\xrightarrow{\cong}\text{CYcat}^{\operatorname{s}}\,, \tag{5.4}\]

which we call the oriented Eilenberg-Watts equivalence due to its relation to oriented field theories. While we do not make this precise here, this should be thought of as the equivalence on categories of fully extended TQFTs with defects induced by the ordinary Eilenberg-Watts equivalence. In the next section we use this equivalence to systematically study the relation between spherical fusion categories and orbifold data for the trivial 3-dimensional defect TQFT.

We begin by defining the objects of interest in the semisimple case relevant for us. (Many interesting examples of Calabi-Yau categories are not semisimple, such as derived categories of coherent sheaves on Calabi-Yau manifolds, or homotopy categories of matrix factorisations.)

**Definition 5.2**.: A semisimple Calabi-Yau category is a 2-vector space \(\mathcal{C}\) together with a linear trace map \(\operatorname{tr}_{c}\colon\operatorname{End}(c)\longrightarrow\Bbbk\) for each \(c\in\mathcal{C}\) such that

* for all morphisms \(f\colon c\longrightarrow c^{\prime}\) and \(g\colon c^{\prime}\longrightarrow c\) in \(\mathcal{C}\) the relation \(\operatorname{tr}_{c^{\prime}}(f\circ g)=\operatorname{tr}_{c}(g\circ f)\) holds, and
* for all \(c,c^{\prime}\in\mathcal{C}\) the pairing \(\mathcal{C}(c,c^{\prime})\otimes\mathcal{C}(c^{\prime},c)\xrightarrow{\circ}\mathcal{C}(c,c)\xrightarrow{\operatorname{tr}_{c}}\Bbbk\) is non-degenerate.

A Calabi-Yau functor \(\mathcal{F}\colon(\mathcal{C},\operatorname{tr})\longrightarrow(\mathcal{C}^{\prime},\operatorname{tr}^{\prime})\) is a linear functor \(\mathcal{F}\) such that \(\operatorname{tr}_{c}(f)=\operatorname{tr}^{\prime}_{\mathcal{F}(c)}(\mathcal{F}(f))\) for all \(c\in\mathcal{C}\) and \(f\in\operatorname{End}(c)\). We denote by \(\text{CYcat}^{\operatorname{s}}\) the symmetric monoidal 2-category of semisimple Calabi-Yau categories, linear functors and natural transformations.

**Remark 5.3**.: Note that the 1-morphisms in \(\text{CYcat}^{\operatorname{s}}\) are all linear functors and not just the Calabi-Yau functors. This implies that \(\text{CYcat}^{\operatorname{s}}\) is equivalent to 2Vect as a 2-category. The identity functor gives an isomorphism between different Calabi-Yau structures on the same category, and every semisimple category admits a Calabi-Yau structure. However, we will see that \(\text{CYcat}^{\operatorname{s}}\) is equipped with an additional structure, namely that of a pivotal 2-category.

**Example 5.4**.: Let \((A,\lambda)\) be a separable symmetric Frobenius algebra, where \(\lambda\) denotes the Frobenius trace.
The category \(A\)-Mod of finite-dimensional \(A\)-modules has a natural Calabi-Yau structure, which we describe by interpreting the trace in terms of the pivotal 2-category \(\operatorname{ssFrob}\): a left \(A\)-module \(M\) is a 1-morphism from the monoidal unit \(\Bbbk\) to \(A\), and the trace of an endomorphism \(f\colon M\longrightarrow M\) is given by computing the trace in the pivotal 2-category \(\operatorname{ssFrob}\), which gives a 2-endomorphism of \(1_{\Bbbk}\), which can be canonically identified with a number. In the language of defect TQFTs, \(M\) describes a line defect between the trivial theory and the TQFT associated to \(A\), and \(f\) describes a point defect. The trace agrees with the partition function of \(S^{2}\) with one defect line labelled by \(M\) and one point insertion labelled by \(f\).

In [He] a slightly different Calabi-Yau structure on \(A\)-Mod was constructed: for \(M\in A\)-Mod consider the dual right module \(M^{*}\coloneqq\operatorname{Hom}_{A}(M,A)\) which comes with a natural isomorphism \(\operatorname{End}_{A}(M)\cong M^{*}\otimes_{A}M\) and an evaluation map \(\operatorname{ev}_{M}\colon M^{*}\otimes_{A}M\longrightarrow A\). The linear maps \(\operatorname{tr}_{M}\colon\operatorname{End}_{A}(M)\cong M^{*}\otimes_{A}M\xrightarrow{\operatorname{ev}_{M}}A\xrightarrow{\lambda}\Bbbk\) define the Calabi-Yau structure on \(A\)-Mod as shown in [He, Sect. 3.1]. To highlight the difference between this Calabi-Yau structure and ours, we look at the Frobenius algebra \(\Bbbk\) with counit given by an invertible scalar \(\lambda\). A \(\Bbbk\)-module is just a vector space. The trace constructed in [He] is the usual trace times \(\lambda\). Ours instead is the usual trace times \(\lambda^{2}\). The justification for our choice is that it gives rise to a pivotal equivalence between \(\operatorname{ssFrob}\) and \(\text{CYcat}^{\operatorname{s}}\) as we show in Proposition 5.9 below.

**Example 5.5**.: Every spherical fusion category \(\mathcal{C}\) is Calabi-Yau with trace induced by the pivotal structure. The tensor product is a Calabi-Yau functor \(\otimes\colon\mathcal{C}\boxtimes\mathcal{C}\longrightarrow\mathcal{C}\). A module category \(M\) with trace over \(\mathcal{C}\) as defined in [12] is a module category \(M\) over \(\mathcal{C}\) together with a compatible Calabi-Yau structure (see Definition 5.16), which implies that the action functor is Calabi-Yau.

One reason we do not restrict to Calabi-Yau functors is that the adjoint of a Calabi-Yau functor is not Calabi-Yau in general. There is another interplay between Calabi-Yau structures and adjoints: a Calabi-Yau structure allows us to coherently identify left and right adjoints, leading to a pivotal 2-category.

**Proposition 5.6**.: Let \(F\colon(\mathcal{C},\mathrm{tr})\longrightarrow(\mathcal{C}^{\prime},\mathrm{tr}^{\prime})\) be a functor between Calabi-Yau categories, and let \(F^{\vee}\colon\mathcal{C}^{\prime}\longrightarrow\mathcal{C}\) be its right adjoint. Then \(F^{\vee}\) is also left adjoint to \(F\). Furthermore, this gives a pivotal structure on the 2-category \(\mathrm{CYcat}^{\mathrm{s}}\).

Proof.: We first show that \(F^{\vee}\) is also left adjoint.
This follows from the sequence of natural isomorphisms \(\mathcal{C}(F^{\vee}(c^{\prime}),c)\cong\mathcal{C}(c,F^{\vee}(c^{\prime}))^{*}\cong\mathcal{C}^{\prime}(F(c),c^{\prime})^{*}\cong\mathcal{C}^{\prime}(c^{\prime},F(c))\) for \(c\in\mathcal{C}\) and \(c^{\prime}\in\mathcal{C}^{\prime}\), where the identification with the dual spaces is through the trace and the middle map is the dual of the isomorphism which is part of the original adjunction. It should be straightforward to check the conditions required for this to define a pivotal structure directly. However, there is also a different way: in Remark 5.10 below we construct an equivalence \(\mathrm{CYcat}^{\mathrm{s}}\longrightarrow\mathrm{ssFrob}\) which is compatible with the tentative pivotal structure considered here. But then the relations we want to check directly follow from the fact that \(\mathrm{ssFrob}\) is a pivotal 2-category, and that any equivalence induces a bijection on 2-morphisms.

For a pair of objects \(c,c^{\prime}\) in a Calabi-Yau category \((\mathcal{C},\mathrm{tr})\) the non-degenerate trace pairing \(\mathcal{C}(c,c^{\prime})\otimes\mathcal{C}(c^{\prime},c)\xrightarrow{\circ}\mathcal{C}(c,c)\xrightarrow{\mathrm{tr}_{c}}\Bbbk\) induces an isomorphism

\[\tau_{c}\colon\mathcal{C}(c^{\prime},c)\xrightarrow{\cong}\mathcal{C}(c,c^{\prime})^{*} \tag{5.5}\]

which we will use in Lemma 5.7 below. To make \(F^{\vee}\colon\mathcal{C}^{\prime}\longrightarrow\mathcal{C}\) a right adjoint to \(F\colon\mathcal{C}\longrightarrow\mathcal{C}^{\prime}\) we can equivalently equip it with a unit \(\eta^{R}\colon\operatorname{id}_{\mathcal{C}}\Longrightarrow F^{R}\circ F\) and a counit \(\epsilon^{R}\colon F\circ F^{R}\Longrightarrow\operatorname{id}_{\mathcal{C}^{\prime}}\) satisfying the Zorro identities. Similarly the structure of a left adjoint can be described by a unit \(\eta^{L}\colon\operatorname{id}_{\mathcal{C}^{\prime}}\Longrightarrow F\circ F^{L}\) and a counit \(\epsilon^{L}\colon F^{L}\circ F\Longrightarrow\operatorname{id}_{\mathcal{C}}\).

**Lemma 5.7**.: Let \(F\colon\mathcal{C}\longrightarrow\mathcal{C}^{\prime}\) be a functor between Calabi-Yau categories, and let \((F^{\vee},\eta^{R},\epsilon^{R})\) be a right adjoint to \(F\). The structure of a left adjoint on \(F^{\vee}\) introduced in Proposition 5.6 has unit and counit given by

\[\epsilon^{L}_{c}=\tau_{c}^{-1}[\mathrm{tr}_{F(c)}(\epsilon^{R}_{F(c)}\circ F(-))]\in\mathcal{C}(F^{R}F(c),c)\,, \tag{5.6}\]
\[\eta^{L}_{c^{\prime}}=\tau_{c^{\prime}}^{-1}[\mathrm{tr}_{F^{R}(c^{\prime})}(F^{R}(-)\circ\eta^{R}_{F^{R}(c^{\prime})})]\in\mathcal{C}^{\prime}(c^{\prime},FF^{R}(c^{\prime}))\,. \tag{5.7}\]

Proof.: We will only prove the formula for \(\epsilon^{L}_{c}\); the proof for \(\eta^{L}\) is analogous. The map \(\epsilon^{L}_{c}\colon F^{R}\circ F(c)\longrightarrow c\) is the image of the identity on \(F(c)\) under the isomorphism \(\mathcal{C}^{\prime}(F(c),F(c))\cong\mathcal{C}(F^{R}F(c),c)\). We just have to chase it through the isomorphism constructed in the proof of Proposition 5.6:

\[\begin{aligned}\mathcal{C}^{\prime}(F(c),F(c))\ni\mathrm{id}_{F(c)}&\longmapsto\mathrm{tr}_{F(c)}(-)\in\mathcal{C}^{\prime}(F(c),F(c))^{*}\\ &\longmapsto\mathrm{tr}_{F(c)}(\epsilon^{R}\circ-)\in\mathcal{C}^{\prime}(F(c),FF^{R}F(c))^{*}\\ &\longmapsto\mathrm{tr}_{F(c)}(\epsilon^{R}\circ F(-))\in\mathcal{C}(c,F^{R}F(c))^{*}\\ &\longmapsto\tau_{c}^{-1}[\mathrm{tr}_{F(c)}(\epsilon^{R}_{F(c)}\circ F(-))]\,.\end{aligned}\tag{5.8}\]
Here we used that the isomorphism \(\mathcal{C}(c,F^{R}(c^{\prime}))\cong\mathcal{C}^{\prime}(F(c),c^{\prime})\) can be expressed in terms of the counit as

\[\mathcal{C}(c,F^{R}(c^{\prime}))\xrightarrow{\;F\;}\mathcal{C}^{\prime}(F(c),FF^{R}(c^{\prime}))\xrightarrow{\;\epsilon^{R}\circ\,-\;}\mathcal{C}^{\prime}(F(c),c^{\prime})\,. \tag{5.9}\]

Now we can relate the notion of pivotal equivalence introduced in Definition 2.2 to Calabi-Yau functors in \(\mathrm{CYcat}^{\mathrm{s}}\).

**Lemma 5.8**.: An equivalence \(F\colon\mathcal{C}\longrightarrow\mathcal{C}^{\prime}\) between semisimple Calabi-Yau categories is a pivotal equivalence if and only if \(F\) and \(F^{-1}\) are Calabi-Yau functors.

Proof.: To prove the statement it is enough to check it on simple objects. For this we fix a set \(I\) of representatives of simple objects \(i\in\mathcal{C}\), and similarly a set \(I^{\prime}\) of simple objects \(i^{\prime}\in\mathcal{C}^{\prime}\). The Calabi-Yau structures can be described by collections of non-zero numbers \(\{\lambda_{i}\}_{i\in I}\) and \(\{\lambda_{i^{\prime}}\}_{i^{\prime}\in I^{\prime}}\) defining the trace on the endomorphisms of the simple objects. Since \(F\) is an equivalence it sends a simple object \(i\in\mathcal{C}\) to a simple \(i^{\prime}\in\mathcal{C}^{\prime}\) (up to isomorphism). We can fix a right adjoint \(G\) by setting \(G(i^{\prime})=i\). The counit \(\epsilon^{R}_{i^{\prime}}\colon FG(i^{\prime})=i^{\prime}\longrightarrow i^{\prime}\) and unit \(\eta^{R}_{i}\colon i\longrightarrow GF(i)=i\) can both be chosen to be the identity. From Lemma 5.7 we can directly read off that \(\epsilon^{L}_{i}=\frac{\lambda_{i^{\prime}}}{\lambda_{i}}\operatorname{id}_{i}\) and \(\eta^{L}_{i^{\prime}}=\frac{\lambda_{i}}{\lambda_{i^{\prime}}}\operatorname{id}_{i^{\prime}}\). We conclude that the condition that \(F\) is a pivotal equivalence is equivalent to \(\lambda_{i}=\lambda_{i^{\prime}}\) for all \(i\in I\).

Replacing the 2-category \(\operatorname{Alg}^{\mathrm{s}}\) by ssFrob, Example 5.4 shows that the Eilenberg-Watts equivalence lifts to an equivalence \(\operatorname{EW}^{\operatorname{or}}\colon\operatorname{ssFrob}\longrightarrow\operatorname{CYcat}^{\mathrm{s}}\). This is also compatible with the pivotal structures introduced above:

**Proposition 5.9**.: The 2-functor \(\operatorname{EW}^{\operatorname{or}}\colon\operatorname{ssFrob}\longrightarrow\operatorname{CYcat}^{\mathrm{s}}\) is pivotal.

Proof.: Let \({}_{B}M_{A}\) be a 1-morphism from \(A\) to \(B\) in ssFrob, with adjoint \({}_{A}M^{*}_{B}\). We can assume that the choice of right adjoint to \(\operatorname{EW}^{\operatorname{or}}(M)\) is \(\operatorname{EW}^{\operatorname{or}}(M^{*})\) with adjunction data

\[\operatorname{Hom}_{B}(M\otimes_{A}Y,Z)\longrightarrow\operatorname{Hom}_{A}(Y,M^{*}\otimes_{B}Z) \tag{5.10}\]

where here and below we graphically denote the relative tensor product by a stronger colouring. This means that the right vertical arrow in (2.6) is the identity, so we need to show that the left vertical map is also the identity. For this first note that the adjunction data coming from ssFrob is

\[\operatorname{Hom}_{B}(Z,M\otimes_{A}Y)\longrightarrow\operatorname{Hom}_{A}(M^{*}\otimes_{B}Z,Y) \tag{5.11}\]

The statement now follows from the commutativity of the diagram (5.12), where the anti-clockwise composition starting in the top left corner and ending in the top right corner is the left adjoint structure coming from the Calabi-Yau structure.
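As a minimal sanity check of Lemma 5.7 and Lemma 5.8 (our own illustration, not taken from the text), consider the identity functor \(F=\operatorname{id}\colon(\operatorname{Vect},\lambda\operatorname{tr})\longrightarrow(\operatorname{Vect},\lambda^{\prime}\operatorname{tr})\) between the category of vector spaces equipped with two rescaled Calabi-Yau structures, with \(F^{R}=\operatorname{id}\) and \(\epsilon^{R}=\eta^{R}=\operatorname{id}\). Unwinding (5.6) and (5.7) against the trace pairing \(\tau\) gives

\[\epsilon^{L}_{V}=\frac{\lambda^{\prime}}{\lambda}\operatorname{id}_{V}\,,\qquad\eta^{L}_{V}=\frac{\lambda}{\lambda^{\prime}}\operatorname{id}_{V}\,,\]

so the left and right adjunction data agree, i. e. \(F\) is a pivotal equivalence, precisely when \(\lambda=\lambda^{\prime}\), in accordance with the proof of Lemma 5.8.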
**Remark 5.10**.: Note that Proposition 5.9 directly implies that there exists an inverse to \(\operatorname{EW}^{\operatorname{or}}\) which is also a pivotal functor. A pivotal inverse \((\operatorname{EW}^{\operatorname{or}})^{-1}\colon\operatorname{CYcat}^{\mathrm{s}}\longrightarrow\operatorname{ssFrob}\) can be described explicitly by picking for every Calabi-Yau category \(\mathcal{C}\in\operatorname{CYcat}^{\mathrm{s}}\) a set of representatives \(I\) for all isomorphism classes of simple objects in \(\mathcal{C}\). The 2-functor \((\operatorname{EW}^{\operatorname{or}})^{-1}\) sends a Calabi-Yau category \((\mathcal{C},\operatorname{tr})\in\operatorname{CYcat}^{\mathrm{s}}\) to the algebra \(\operatorname{End}(\bigoplus_{i\in I}i)\), which has a Frobenius structure coming from \(\operatorname{tr}\). However, this is not the Frobenius structure on \((\operatorname{EW}^{\operatorname{or}})^{-1}(\mathcal{C},\operatorname{tr})\), which can be constructed by picking a square root for the window element \(\omega\) and changing the Frobenius structure by its inverse as we did in Section 5.1.1. That such square roots exist was shown in [Mu]. A functor \(F\colon\mathcal{C}\longrightarrow\mathcal{C}^{\prime}\) is sent to the bimodule \(\operatorname{Hom}_{\mathcal{C}^{\prime}}(\bigoplus_{j\in I}j,F(\bigoplus_{i\in I}i))\). A natural transformation gets mapped to the post-composition with its component at \(\bigoplus_{i\in I}i\). That this is pivotal follows because \(\operatorname{EW}^{\operatorname{or}}(\operatorname{End}(\bigoplus_{i\in I}i))\) is pivotally equivalent to \(\mathcal{C}\), and because of Proposition 2.3.

**Remark 5.11**.: We expect that the above extends to higher dimensions. In [CMM] we will consider the case \(n=3\), i. e. an equivalence \(2\operatorname{EW}\colon\operatorname{mFus}\longrightarrow 3\text{Vect}\) between multifusion categories and 3-vector spaces from [De], and then provide an oriented version, i. e. an equivalence \(2\operatorname{EW}^{\operatorname{or}}\colon\operatorname{msFus}\longrightarrow\operatorname{CY2cat}\) of 3-categories between the 3-category of spherical multifusion categories and Calabi-Yau 2-categories, to study the relation between spherical fusion 2-categories and orbifold data for the trivial 4-dimensional defect TQFT.

#### 5.1.3 3-dimensional state sum models

As we have seen in Section 5.1.1, the 2-category describing 2-dimensional defect state sum models is ssFrob. Following the ideas outlined in Section 1, its delooping describes defects of the trivial 3-dimensional TQFT. The 3-category describing 3-dimensional defect state sum models is (the Euler completion of) its orbifold completion. To analyse the orbifold completion we can employ the oriented Eilenberg-Watts equivalence to work within CYcat\({}^{\text{s}}\) as explained in the previous section.

To cover more interesting orbifold data, we have to work in an appropriate Euler completion. This means we consider the 3-category \(E(\text{BssFrob})\) whose objects are given by pairs consisting of an object of \(\text{BssFrob}\) (there is only one) and an invertible 3-morphism \(\phi\colon 1_{1_{*}}\longrightarrow 1_{1_{*}}\). Explicitly this 3-category can be constructed by considering the pivotal 3-category [CMS] associated to the Euler completion of the corresponding defect field theory constructed in [CRS1]. This construction also involves adding Euler defects for surface defects. Here we can ignore them because they are already taken care of by the Euler completion of \(\Delta\text{ssFrob}\) in Section 5.1.1.
In practice this leads to including \(\phi\)-insertions into the conditions on orbifold data. Our first goal is to relate \(E(\text{BssFrob})_{\text{orb}}\) to spherical fusion categories and their bimodule categories with trace. In [CRS3] it was already shown that spherical fusion categories give rise to orbifold data. From the perspective of Section 5.1.2 this construction can be understood as follows. Recall from Example 5.5 that a spherical fusion category \(S\) gives rise to an \(E_{1}\)-algebra in CYcat\({}^{\text{s}}\). Using the square root of the global dimension \(\phi=(\dim S)^{-1/2}\) as the Euler defect upgrades this to an orbifold datum in \(E(\text{BCYcat}^{\text{s}})\). The paper [CRS3] works with the image under \((\text{EW}^{\text{or}})^{-1}\) in ssFrob when doing computations.

Before restricting to the subcategory of orbifold data corresponding to spherical fusion categories we make a few comments on the general structure of orbifold data in \(E(\text{BCYcat}^{\text{s}})\). For this we fix an object \((\mathcal{C},\otimes,1,\alpha)\in E(\text{BCYcat}^{\text{s}})\). This is a monoidal category \(\mathcal{C}\) which in addition is equipped with a Calabi-Yau structure. For this to be an orbifold datum, \((\mathcal{C},\otimes,1,\alpha)\) has to satisfy the conditions in Figure 4.1. These conditions give rise to additional properties of the monoidal structure. One of these properties is that \(\mathcal{C}\) is rigid. To show this we use an equivalent reformulation of rigidity:

**Proposition 5.12**.: Let \(A\in 2\text{Vect}\) be endowed with the structure of a monoidal category. The following are equivalent:

(i) \(A\) is rigid,
(ii) the canonical lax bimodule structure on the right adjoint of \(\otimes\colon A\boxtimes A\longrightarrow A\) is strong,
(iii) the canonical op-lax bimodule structure on the left adjoint of \(\otimes\colon A\boxtimes A\longrightarrow A\) is strong.

Proof.: This is essentially [BJS, Def.-Prop. 4.1] applied to the semisimple case, where adjoints always exist.

**Remark 5.13**.: The equivalence breaks down as soon as one leaves the realm of 2-vector spaces. For example for compactly generated presentable categories condition (ii) is equivalent to the condition that every compact projective object has a left and right dual, see [BJS, Def.-Prop. 4.1].

**Remark 5.14**.: The natural lax and op-lax bimodule structures are exactly given by \(\alpha^{\prime},\alpha^{\prime\prime}\), \(\bar{\alpha}^{\prime}\) and \(\bar{\alpha}^{\prime\prime}\) as in (4.1). The orbifold conditions enforce more than just the fact that these are invertible; they give specific inverses. As in every pivotal 2-category, the left and right adjoints of 1-morphisms between Calabi-Yau categories agree. Thanks to Proposition 5.12, the relations in Figure 4.1 except the bubble cancellation can now be reformulated as the statement that the two bimodule structures on the left and right adjoint agree under this identification (up to a scalar since we work in the Euler completion).

**Proposition 5.15**.: Let \(A\) be an object of \(E(\text{BCYcat}^{\text{s}})_{\text{orb}}\). Then \(A\) is a rigid linear monoidal category.

Proof.: This is a direct consequence of Proposition 5.12 and Remark 5.14.

We now restrict to orbifold data coming from spherical fusion categories and turn to 1-morphisms in \(E(\text{BCYcat}^{\text{s}})_{\text{orb}}\).
According to Definition 4.3, a 1-morphism \((\mathcal{C},\otimes,1,\alpha)\longrightarrow(\mathcal{C}^{\prime},\otimes^{\prime},1^{\prime},\alpha^{\prime})\) is given by a \(\mathcal{C}\)-\(\mathcal{C}^{\prime}\)-bimodule category \(\mathcal{M}\) which is equipped with a Calabi-Yau structure, subject to the conditions from Definition 4.3. For orbifold data coming from spherical fusion categories this is related to the notion of a trace on a bimodule category, introduced in [Sc2, Def. 3.7].

**Definition 5.16**.: Let \(\mathcal{C},\mathcal{C}^{\prime}\) be spherical fusion categories and \(\mathcal{M}\) a \(\mathcal{C}\)-\(\mathcal{C}^{\prime}\)-bimodule category. A trace on \(\mathcal{M}\) is a choice of Calabi-Yau structure \(\operatorname{tr}_{\mathcal{M}}\) on \(\mathcal{M}\) such that

\[\operatorname{tr}_{c\triangleright m}\big(\,\text{[diagram]}\,\big)=\operatorname{tr}_{m}\big(\,\text{[diagram]}\,\big)\,,\qquad\operatorname{tr}_{m\triangleleft c^{\prime}}\big(\,\text{[diagram]}\,\big)=\operatorname{tr}_{m}\big(\,\text{[diagram]}\,\big) \tag{5.13}\]

for all \(c\in\mathcal{C}\), \(c^{\prime}\in\mathcal{C}^{\prime}\) and \(m\in\mathcal{M}\); the string diagrams inside the traces are not recoverable from the source. Here we use the graphical notation of putting lines next to each other for the left and right action, see [2] for more details.

**Proposition 5.17**.: If \(\mathcal{M}\) is a bimodule category with trace as above, then it gives rise to a \(1\)-morphism in \(E(\operatorname{BCYcat}^{\mathrm{s}})_{\operatorname{orb}}\).

Proof.: We use Proposition 4.5 and show the statement only for the left action \(\triangleright\) of \(\mathcal{C}\) on \(\mathcal{M}\). The proof for \(\triangleleft\) is completely analogous. Applied to the situation at hand, Proposition 4.5 implies that we have to show that two isomorphisms between the left and right adjoint of \(\triangleright\) agree. One of these isomorphisms involves the Calabi-Yau structure on \(\mathcal{C}\boxtimes\mathcal{M}\), the other one only that on \(\mathcal{C}\). We proceed by explicit computation. For this we denote by \(I\) a set of representatives for simple objects in \(\mathcal{C}\). The right adjoint is given by

\[\triangleright^{R}\colon\mathcal{M}\longrightarrow\mathcal{C}\boxtimes\mathcal{M} \tag{5.14}\]
\[m\longmapsto\bigoplus_{i\in I}i\boxtimes i^{*}\triangleright m \tag{5.15}\]

with unit and counit induced from those for the tensor product \(\mu:=\otimes\) of \(\mathcal{C}\), as explained just above Proposition 4.5. Concretely we find for the unit

\[\eta^{R}\colon c\boxtimes m\longmapsto\bigoplus_{i\in I}i\boxtimes i^{*}\triangleright(c\triangleright m)\coloneqq(\operatorname{id}_{\mathcal{C}}\boxtimes\triangleright)[\eta_{c,1}\boxtimes 1_{m}]\,, \tag{5.16}\]

suppressing coherence isomorphisms related to the unit of \(\mathcal{C}\). We compute the unit and counit for the structure of a left adjoint on \(\triangleright^{R}\) using the formulas from Lemma 5.7. It is enough to check the desired relation for the unit, since this fixes the counit automatically.
Spelling out the formula from Lemma 5.7, we find for the component of \(\eta^{L}\colon\operatorname{id}_{\mathcal{M}}\Longrightarrow\triangleright\circ\triangleright^{R}\) at \(m\in\mathcal{M}\) that

\[\eta^{L}_{m}=\tau^{-1}_{m}\left[\operatorname{tr}_{\bigoplus_{i\in I}i\boxtimes i^{*}\triangleright m}\left((i\boxtimes i^{*}\triangleright(-))\circ\eta^{R}_{\triangleright^{R}(m)}\right)\right]=\tau^{-1}_{m}\left[\operatorname{tr}_{\bigoplus_{i\in I}i\otimes i^{*}\triangleright m}\left((i\otimes i^{*}\triangleright(-))\circ\triangleright^{R}(\eta^{R}_{\triangleright^{R}(m)})\right)\right] \tag{5.17}\]

where \(\tau_{m}\) is the isomorphism introduced in (5.5), and the equality uses the assumption on the compatibility of the action with the trace on \(\mathcal{M}\). Using the graphical calculus, the last expression is given by

\[\tau^{-1}_{m}\operatorname{tr}_{\bigoplus_{i\in I}i\otimes i^{*}\triangleright m}\big(\,\text{[diagram]}\,\big)=\tau^{-1}_{m}\operatorname{tr}_{m}\big(\,\text{[diagram]}\,\big) \tag{5.18}\]

from which we can read off

\[\eta^{L}_{m}=\big(\,\text{[diagram]}\,\big)\,. \tag{5.19}\]

(The string diagrams in (5.18) and (5.19) are not recoverable from the source.) We can do exactly the same computation with \(\mathcal{M}\) replaced by \(\mathcal{C}\) to find a formula for \(\eta^{L}_{\mu}\). This shows that the structure of a left adjoint on \(\triangleright^{R}\) coming from the Calabi-Yau structure on \(\mathcal{C}\boxtimes\mathcal{M}\) agrees with the one induced by \(\mu^{R}\). This finishes the proof.

From the definition it is clear that 2- and 3-morphisms in \(E(\mathrm{BCYcat}^{\mathrm{s}})_{\mathrm{orb}}\) are given by bimodule functors and natural transformations. The discussion in this section can hence be summarised as follows.

**Theorem 5.18**.: The 3-category sFus of spherical fusion categories, bimodule categories with trace, bimodule functors, and bimodule natural transformations defined in [31] is a subcategory of \(E(\mathrm{BCYcat}^{\mathrm{s}})_{\mathrm{orb}}\), or equivalently of \(E(\mathrm{BssFrob})_{\mathrm{orb}}\).

**Remark 5.19**.: A consequence of Theorem 5.18 is that every spherical fusion category is 3-dualisable as an object of the symmetric monoidal 3-category \(\mathrm{Alg}_{1}(2\mathrm{Vect})\). Indeed, as already mentioned it is always 1-dualisable. The higher dualisability follows from Lemma 4.10 and Proposition 4.12. This recovers a special case of the main result of [40]. Spherical fusion categories are expected to also have a canonical \(\mathrm{SO}(3)\)-homotopy fixed-point structure. We hope that the techniques developed in the present paper are helpful in proving this.

### Domain walls between Reshetikhin-Turaev theories

Any semisimple modular fusion category \(\mathcal{M}\) can be used to construct a 3-dimensional TQFT known as the Reshetikhin-Turaev theory based on \(\mathcal{M}\), see [13] for a textbook account. In [14, 15] it was shown that surface defects within a Reshetikhin-Turaev theory can be constructed from \(\Delta\)-separable symmetric Frobenius algebras in \(\mathcal{M}\). Furthermore, line and point defects can be constructed from bimodules and bimodule maps, respectively.
This leads us to consider the 3-category with duals \(\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M})\), and in this section we explain how the study of \(E(\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M}))_{\mathrm{orb}}\) can be used to reproduce and contextualise the results [KMRS] of Koppen-Mulevicius-Runkel-Schweigert on defect TQFTs with different Reshetikhin-Turaev phases.

Orbifold data in \(\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M})\) are extensively studied in [15]. One source for orbifold data are commutative \(\Delta\)-separable Frobenius algebras \((A,\mu,\eta,\Delta,\varepsilon)\) in \(\mathcal{M}\). The \(A\)-\((A\otimes A)\)-bimodule which is part of the orbifold datum is \(A\), where the action of \(A\otimes A\) uses the multiplication \(\mu\). This can be understood as follows. Consider the 2-category \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\) with \(\Delta\)-separable symmetric Frobenius algebras in \(\mathcal{M}\) as objects, algebra homomorphisms as 1-morphisms, and only identities as 2-morphisms. There is a 2-functor \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\longrightarrow\Delta\mathrm{ssFrob}(\mathcal{M})\) sending an algebra homomorphism to the bimodule it induces. Dunn additivity implies that algebra objects \(\mathcal{A}\) in \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\) are commutative \(\Delta\)-separable symmetric Frobenius algebras. By the results of [15], such algebras \(\mathcal{A}\) always have the structure of orbifold data in \(\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M})\).6 We can now use a similar approach to construct 1-morphisms in \((\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M}))_{\mathrm{orb}}\) which lie in the image of \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\). Concretely, a 1-morphism from \(B\) to \(A\) in \((\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M}))_{\mathrm{orb}}\) is given by a \(\Delta\)-separable Frobenius algebra \(F\) in \(\mathcal{M}\) which at the same time is an \(A\)-\(B\)-bimodule where the algebra actions are morphisms of Frobenius algebras. The following result explains the relation to [KMRS].

Footnote 6: Note that it does not make sense to look for orbifold data in \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\) because its morphisms do not admit adjoints.

**Proposition 5.20**.: Let \(A\) and \(B\) be objects of \(\Delta\mathrm{ssFrob}(\mathcal{M})\) and \(F\) an \(A\)-\(B\)-bimodule additionally equipped with the structure of a \(\Delta\)-separable symmetric Frobenius algebra. Then the following are equivalent:

* The structure maps encoding the right and left action are morphisms of Frobenius algebras.
* The conditions (5.20) and (5.21) of [KMRS, Def. 2.10] hold.7

Footnote 7: We thank Vincentas Mulevicius for sharing these diagrams from [KMRS], and for producing the diagrams in (5.22)–(5.23).

Proof.: It is enough to show the relation for the multiplication, because these imply the relations for the co-multiplication. We show the equivalence for the left \(A\)-action. The proof for the right \(B\)-action is completely analogous. The condition that the left action is a bimodule map is (5.22). By inserting the unit of \(A\) into one of the green strands we obtain the desired relations. The other direction is also straightforward: (5.23), where we only used the conditions from [KMRS, Def. 2.10] as well as the bimodule condition.

A Frobenius-algebra-cum-bimodule \(F\) satisfying either of the equivalent definitions above is called a \(\Delta\)-separable symmetric Frobenius algebra over \((A,B)\) in [KMRS].
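The simplest instance (our own observation, not spelled out in the text) should be the invisible domain wall from a theory to itself: for a commutative \(\Delta\)-separable Frobenius algebra \(A\), the algebra \(F=A\) with its regular bimodule structure

\[A\otimes F=A\otimes A\xrightarrow{\;\mu\;}A=F\,,\qquad F\otimes A=A\otimes A\xrightarrow{\;\mu\;}A=F\]

is a \(\Delta\)-separable symmetric Frobenius algebra over \((A,A)\), and it should represent the identity 1-morphism on \(A\) in \((\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M}))_{\mathrm{orb}}\), in analogy with the 2-dimensional case where the identity 1-morphism on an orbifold datum is the algebra itself, regarded as a bimodule over itself.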
To check that these define 1-morphisms in the orbifold completion, note that the 3-morphisms we want to check to be equal can be computed by inserting networks of \(F\), \(A\) and \(B\). The relations follow exactly as in [CRS3] from using \(\Delta\)-separability and the bimodule property.

To construct 2-morphisms in \((\operatorname{B}\Delta\mathrm{ssFrob})_{\mathrm{orb}}\) we cannot work within \(\Delta\mathrm{ssFrob}^{\mathrm{pt}}(\mathcal{M})\); instead we work in \(\operatorname{B}\Delta\mathrm{ssFrob}\) directly. For this let us fix two \(\Delta\)-separable symmetric Frobenius algebras \(F,G\) over a pair of commutative \(\Delta\)-separable Frobenius algebras \((A,B)\), and a \(G\)-\(F\)-bimodule \(M\). We want to equip \(M\) with the structure of a 2-morphism in \((\operatorname{B}\Delta\mathrm{ssFrob})_{\mathrm{orb}}\). According to Definition 3.3, for this we have to specify a \(G\)-\((F\otimes B)\)-bimodule isomorphism8

Footnote 8: Our notation in this section is chosen to agree with the paper [KMRS]. This leads to some potential confusion when comparing to Definition 3.3: \(A\) and \(B\) are swapped, what is called \(F\) here is called \(M\) in Definition 3.3, and what is called \(M\) here is called \(F\) in Definition 3.3.

\[M\otimes_{F}F\longrightarrow G\otimes_{G\otimes B}(M\otimes B) \tag{5.24}\]

(corresponding to the third diagram in (3.12)), and a \(G\)-\((A\otimes F)\)-bimodule isomorphism

\[M\otimes_{F}F\longrightarrow G\otimes_{A\otimes G}(A\otimes M) \tag{5.25}\]

(corresponding to the second diagram in (3.12)), where the commutativity of \(A\) is used to turn the right action on \(G\) into a left action. All bimodules appearing here are canonically isomorphic to \(M\) as vector spaces; the isomorphism from \(M\) to the corresponding bimodule inserts units. However, there is no reason for the \(G\)-\((A\otimes F)\)- or \(G\)-\((F\otimes B)\)-bimodule structures to agree. In case they agree, the identity automatically gives the bimodule isomorphisms we are looking for. The left actions in (5.24) are the same under the identification with \(M\). For the right actions to agree we find a condition on the composite \(M\otimes B\xrightarrow{\;u_{F}\;}M\otimes F\otimes B\longrightarrow M\) obtained by inserting the unit of \(F\) and acting.

## 6 Defect TQFTs

Recall that a closed oriented \(n\)-dimensional TQFT is a symmetric monoidal functor \(\mathcal{Z}\colon\operatorname{Bord}_{n,n-1}\longrightarrow\operatorname{Vect}\), where the objects of \(\operatorname{Bord}_{n,n-1}\) are closed oriented \((n-1)\)-manifolds and the morphisms are equivalence classes of oriented bordisms between them; two bordisms are identified if they are related by a diffeomorphism which is compatible with the boundary parametrisation. Composition is defined by gluing manifolds along boundaries, illustrated for \(n=2\) by (6.1). The monoidal structure of \(\operatorname{Bord}_{n,n-1}\) is given by taking disjoint unions.
Since their introduction various generalisations of the above closed oriented TQFTs have been considered, among them variants with different tangential structures, different target categories, and extended TQFTs, see e. g. [Lu]. To describe "defects" in such theories an extension of the framework involving stratified manifolds is used. In comparison with (6.1), the main idea of this refinement is captured in the example (6.2), where all the coloured submanifolds come with prescribed orientations (which we suppress in the picture; orientations of top-dimensional strata by definition are induced by the orientation of the underlying manifold) and labels. We only display the labels of 2-strata, but also the 1- and 0-strata come with prescribed labels, which we suppress to avoid clutter. To make this picture precise we recall from [CMS, CRS1] that by a 3-dimensional oriented stratified manifold (without boundary) we mean an oriented 3-manifold \(Y\) together with a filtration \(Y=F_{3}\supset F_{2}\supset F_{1}\supset F_{0}\supset F_{-1}=\varnothing\) such that

* \(F_{i}\setminus F_{i-1}\) is an \(i\)-dimensional submanifold of \(Y\) which is furthermore equipped with an orientation. The connected components of \(F_{i}\setminus F_{i-1}\) are called \(i\)-strata, and the orientations of 3-strata are induced by the given orientation of \(Y\).
* There are only finitely many \(i\)-strata for every \(i\in\{0,1,2,3\}\).
* If a stratum \(\sigma\) intersects non-trivially with the closure of another stratum \(\sigma^{\prime}\) (i. e. \(\sigma\cap\overline{\sigma^{\prime}}\neq\varnothing\)), then \(\sigma\subset\overline{\sigma^{\prime}}\).

The definition can be extended to manifolds with boundary, which in particular entails a stratification of the boundary. For many purposes (including our applications to defect TQFTs) the topology of stratified manifolds defined in this way is too wild. Conically stratified manifolds solve this by requiring that locally the stratified manifold is diffeomorphic to a (cylinder over a stratified \((n-1)\)-ball, or a) cone over an \((n-1)\)-dimensional stratified sphere. The type of allowed spheres is defined inductively; illustrative examples of such cylinders and cones for \(n=2\) and \(n=3\) are shown in (6.3). We refer to [11, Sect. 2] for a detailed exposition.

In the setting of defect TQFTs we additionally assign labels to all strata. The 3-strata are labelled by elements of a prescribed set \(D_{3}\) (that is part of the data of a defect TQFT, see below) which we interpret as "bulk theories". Similarly 2-, 1- and 0-strata are labelled by elements of prescribed sets \(D_{2},D_{1}\) and \(D_{0}\) of "surface defects", "line defects" and "point defects", respectively. A \(j\)-stratum labelled by an element in \(D_{j}\) is a \(j\)-dimensional defect. The label sets \(D_{j}\) are completed to a set of defect data D by supplying adjacency data, i. e. prescribed rules for which \(D_{2}\)-labels are allowed on a 2-stratum adjacent to two 3-strata labelled by given elements of \(D_{3}\), and similarly how defects are allowed to meet in lines and points. We refer to [11, Sect. 2.3] and [11, Sect. 2.2.2] for a formalisation of the notion of D, as well as the precise definition of the symmetric monoidal category of defect bordisms Bord\({}^{\text{def}}_{3,2}(\text{D})\).
Its morphisms are basically equivalence classes of D-labelled stratified 3-bordisms, objects are stratified closed 2-manifolds whose \(j\)-strata are compatibly labelled with elements of \(D_{j+1}\), and glueing is a 3-dimensional version of the case illustrated in (6.2). **Definition 6.1**.: Let D be a chosen set of 3-dimensional defect data, and let \(\mathcal{C}\) be a symmetric monoidal category. A 3-dimensional defect TQFT for D with values in \(\mathcal{C}\) is a symmetric monoidal functor \[\mathcal{Z}\colon\text{Bord}^{\text{def}}_{3,2}(\text{D})\longrightarrow \mathcal{C}\,. \tag{6.4}\] **Remark 6.2**.: 1. Usually one takes the target \(\mathcal{C}\) to be the category of (super) vector spaces. To any such defect TQFT \(\mathcal{Z}\) one naturally associates a Gray category with duals \(\mathcal{T}_{\mathcal{Z}}\), which we think of as an important invariant of \(\mathcal{Z}\). This is the "categorification" of the analogous construction of pivotal 2-categories in one dimension lower, first explained in [12]. Objects of \(\mathcal{T}_{\mathcal{Z}}\) are elements in \(D_{3}\), 1- and 2-morphisms are (lists of) elements in \(D_{2}\) and \(D_{1}\), respectively, and the vector spaces of 3-morphisms are obtained by applying \(\mathcal{Z}\) to defect spheres (which are objects in Bord\({}^{\text{def}}_{3,2}(\text{D})\)) as illustrated by the boundary of the defect ball in (6.3). Hence \(i\)-cells in \(\mathcal{T}_{\mathcal{Z}}\) correspond to \((3-i)\)-dimensional defects. For more details we refer to [11, Sect. 3]. 2. Conversely, given a Gray category with duals \(\mathcal{T}\), one immediately obtains a set of defect data \(\text{D}^{\mathcal{T}}\). Its sets \(D^{\mathcal{T}}_{j}\) are made up of the \((3-j)\)-cells of \(\mathcal{T}\), and its adjacency data are supplied by the source and target maps of \(\mathcal{T}\) as well as its composition rules. ### Defect TQFTs associated to orbifold completion Let \(\mathcal{Z}\colon\text{Bord}^{\text{def}}_{3,2}(\text{D})\longrightarrow\text {Vect}\) be a defect TQFT and \(\mathcal{T}_{\mathcal{Z}}\) the associated 3-category. The goal of this section is to construct a new defect TQFT \[\mathcal{Z}_{\text{orb}}\colon\text{Bord}^{\text{def}}_{3,2}(\text{D}_{\text {orb}})\longrightarrow\text{Vect} \tag{6.5}\] where \(\text{D}_{\text{orb}}:=\text{D}^{(\mathcal{T}_{\mathcal{Z}})_{\text{orb}}}\) are the defect data obtained from the orbifold completion of \(\mathcal{T}_{\mathcal{Z}}\). The construction uses triangulations of stratified manifolds adapted to the stratification. They are a natural extension of the transversal triangulations considered in [12] to the slightly more general type of stratified manifolds considered in the present paper (where we allow more than two 2-strata to meet along a 1-stratum). In the following we use "triangulations with total order" \(t\) (on the 0-simplices) as discussed e. g. in [11, Sect. 3.1], to merge the original stratification of a bordism with the Poincare dual of \(t\): **Definition 6.3**.: Let \(Y\) be a defect bordism in Bord\({}^{\text{def}}_{3,2}(\text{D})\). A stratified triangulation of \(Y\) is a triangulation with total order \(t\) of the underlying manifold of \(Y\) such that: 1. All intersections of \(i\)-simplices of \(t\) with \(j\)-strata of \(Y\) are transversal9 in \(Y\). Footnote 9: Transversality means that the fibres of the tangent spaces of the \(i\)-simplex and the \(j\)-stratum together span the fibres of \(TY\). 2. 
We construct a proper substratification \(S^{t}_{Y}\) of \(Y\) from a stratification which is Poincare dual to \(t\) as follows, where we demand that every \(j\)-stratum \(\sigma\) of \(Y\) is properly substratified in \(S^{t}_{Y}\), and that in every \((j+1)\)-stratum in \(Y\) adjacent to \(\sigma\) there are (new) \(j\)-strata in \(S^{t}_{Y}\setminus Y\) which meet \(\sigma\) (where here by "\(\tau\in S^{t}_{Y}\setminus Y\)" we mean "\(\tau\in S^{t}_{Y}\) is a \(j\)-stratum that is not contained in an (original) \(j\)-stratum in \(Y\)"):

* For every 3-simplex \(\delta_{3}\in t\), there is a 0-stratum \(\sigma_{0}\in S^{t}_{Y}\setminus Y\) in the interior of \(\delta_{3}\), unless every 2-face of \(\delta_{3}\) intersects with an original 2-stratum of \(Y\). If there are original 2-strata \(\sigma_{2}\) of \(Y\) which intersect \(\delta_{3}\), then \(\sigma_{0}\) is placed in one such \(\sigma_{2}\) (becoming part of a substratification of \(\sigma_{2}\)); otherwise \(\sigma_{0}\) is likewise placed in an original 3-stratum of \(Y\).
* For every 2-simplex \(\delta_{2}\in t\), there is a 1-stratum \(\sigma_{1}\in S^{t}_{Y}\). If there are original 1-strata in \(Y\) which intersect \(\delta_{2}\), \(\sigma_{1}\) is taken to be (part of a substratification of) one of them; otherwise \(\sigma_{1}\in S^{t}_{Y}\setminus Y\) is a new 1-stratum which is Poincare dual to \(\delta_{2}\). If in the former case \(\sigma_{1}\) intersects with a 1-stratum \(\ell\) of \(Y\), there is another new 0-stratum at \(\ell\cap\sigma_{1}\), substratifying both 1-strata.
* For every 1-simplex \(\delta_{1}\in t\), there is a 2-stratum \(\sigma_{2}\in S^{t}_{Y}\). If there are original 2-strata in \(Y\) which intersect \(\delta_{1}\), \(\sigma_{2}\) is taken to be (a substratification of) one of them; otherwise \(\sigma_{2}\in S^{t}_{Y}\setminus Y\) is a new 2-stratum which is Poincare dual to \(\delta_{1}\).
* The new strata in \(S^{t}_{Y}\) which meet the boundary of \(Y\) do so transversally.
* The new \(j\)-strata \(\widetilde{\sigma}\in S^{t}_{Y}\setminus Y\) carry orientations such that the orientation of \(\widetilde{\sigma}\) together with the order-induced orientation of the Poincare dual \((3-j)\)-simplex in \(t\) (in that order) give the orientation of \(Y\). The new sub-\(j\)-strata of \(j\)-strata of \(Y\) carry the induced orientation.

In the special case that \(Y\) is trivially stratified, the stratification \(S^{t}_{Y}\) is precisely Poincare dual to \(t\). For non-trivial original stratifications, Definition 6.3 (2) involves choices about how new Poincare dual strata meet with old strata. In the construction of the TQFT \(\mathcal{Z}_{\mathrm{orb}}\) below these choices will be immaterial by design of the orbifold completion \((\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}}\). Any sufficiently fine triangulation of \(Y\) can be deformed into a stratified triangulation by slightly moving its cells. Furthermore, any two stratified triangulations of a closed stratified manifold can be related by a sequence of ordinary Pachner moves between stratified triangulations, which can be seen by choosing a stratified triangulation of \(Y\times[0,1]\) that restricts to the given stratified triangulations on the boundary.
Using the notion of stratified triangulation we can now describe the new defect TQFT \(\mathcal{Z}_{\mathrm{orb}}\) in (6.5) in terms of the set of defect data

\[\mathrm{D}_{\mathrm{orb}}:=\mathrm{D}^{(\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}}} \tag{6.6}\]

associated to \((\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}}\) by the procedure outlined in Remark 6.2 (2). On a closed defect manifold \(Y\colon\varnothing\longrightarrow\varnothing\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D}_{\mathrm{orb}})\), we extend the construction of [11, Sect. 3.2] as follows to define the evaluation of the TQFT \(\mathcal{Z}_{\mathrm{orb}}\):

1. Pick a stratified triangulation \(t\) of \(Y\).
2. Pass to a substratification \(S^{t}_{Y}\) of \(Y\) as in Definition 6.3. In the next steps we describe how to label the new strata in \(S^{t}_{Y}\setminus Y\).
3. Let \(\widetilde{\sigma}_{3}\) be a new 3-stratum in \(S^{t}_{Y}\setminus Y\); because of the transversality condition in Definition 6.3 (1), \(\widetilde{\sigma}_{3}\) is part of a substratification of a 3-stratum \(\sigma_{3}\) in \(Y\). Since \(Y\) is a \(\mathrm{D}_{\mathrm{orb}}\)-labelled defect bordism, \(\sigma_{3}\) is labelled by an element \(\mathcal{A}=(a,A,\mu_{A},\alpha_{A},u_{A},u^{\mathrm{l}}_{A},u^{\mathrm{r}}_{A})\in(D_{\mathrm{orb}})_{3}=\mathrm{Ob}((\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}})\). We assign the label \(a\) to \(\widetilde{\sigma}_{3}\).
4. Let \(\widetilde{\sigma}_{2}\) be a new 2-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of an \(\mathcal{A}\)-labelled 3-stratum \(\sigma_{3}\) in \(Y\). We assign the label \(A\) to \(\widetilde{\sigma}_{2}\).
5. Let \(\widetilde{\sigma}_{2}\) be a new 2-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of a 2-stratum \(\sigma_{2}\) in \(Y\) which is labelled by \(\mathcal{M}=(M,\triangleright_{M},\triangleleft_{M},u^{\mathrm{l}}_{M},u^{\mathrm{r}}_{M},\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M})\in(D_{\mathrm{orb}})_{2}\). We assign the label \(M\) to \(\widetilde{\sigma}_{2}\).
6. Let \(\widetilde{\sigma}_{1}\) be a new 1-stratum in \(S^{t}_{Y}\setminus Y\) which is Poincare dual to a 2-simplex in \(t\), and hence part of a substratification of an \(\mathcal{A}\)-labelled 3-stratum \(\sigma_{3}\) in \(Y\). We assign the label \(\mu_{A}\) to \(\widetilde{\sigma}_{1}\). We also observe that due to the transversality condition in Definition 6.3 (1), \(\widetilde{\sigma}_{1}\) cannot be adjacent to an old 2-stratum in \(Y\).
7. Let \(\widetilde{\sigma}_{1}\) be a new 1-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of a 2-stratum \(\sigma_{2}\) in \(Y\) which is labelled by \(\mathcal{M}=(M,\triangleright_{M},\triangleleft_{M},u^{\mathrm{l}}_{M},u^{\mathrm{r}}_{M},\alpha^{\mathrm{l}}_{M},\alpha^{\mathrm{m}}_{M},\alpha^{\mathrm{r}}_{M})\in(D_{\mathrm{orb}})_{2}\). We assign the label \(\triangleright_{M}\) or \(\triangleleft_{M}\) to \(\widetilde{\sigma}_{1}\). Which action of an algebra (whose underlying 1-morphism in \(\mathcal{T}_{\mathcal{Z}}\) is assigned to the new 2-stratum which meets \(\widetilde{\sigma}_{1}\) and which is part of a substratification of an old 3-stratum) is to be used is uniquely determined by the orientations of the strata involved as well as the graphical calculus in Definition 3.2.
8. Let \(\widetilde{\sigma}_{0}\) be a new 0-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of an \(\mathcal{A}\)-labelled 3-stratum \(\sigma_{3}\) in \(Y\).
We assign the label \(\alpha_{A}\) or its inverse to \(\widetilde{\sigma}_{0}\), uniquely determined by the orientations of the strata involved as well as the graphical calculus in Definition 3.1.
9. Let \(\widetilde{\sigma}_{0}\) be a new 0-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of an \(\mathcal{M}\)-labelled 2-stratum \(\sigma_{2}\) in \(Y\). We assign the label \(\alpha_{M}^{\mathrm{l}},\alpha_{M}^{\mathrm{m}}\) or \(\alpha_{M}^{\mathrm{r}}\) (or their inverses) to \(\widetilde{\sigma}_{0}\), uniquely determined by the orientations of the strata involved as well as the graphical calculus in Definition 3.2.
10. Let \(\widetilde{\sigma}_{0}\) be a new 0-stratum in \(S^{t}_{Y}\setminus Y\) which is part of a substratification of a 1-stratum \(\sigma_{1}\) in \(Y\) which is labelled by \(\mathcal{F}=(F,\triangleleft_{F},\triangleright_{F})\). We assign the label \(\triangleleft_{F}\) or \(\triangleright_{F}\) (or their inverses) to \(\widetilde{\sigma}_{0}\), uniquely determined by the orientations of the strata involved as well as the graphical calculus in Definition 3.3.
11. We have thus lifted \(Y\) to a morphism \(Y^{t}\colon\varnothing\longrightarrow\varnothing\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D})\), and we define
\[\mathcal{Z}_{\mathrm{orb}}(Y):=\mathcal{Z}(Y^{t})\,. \tag{6.7}\]

The relations imposed in the construction of orbifold completion imply that \(\mathcal{Z}_{\mathrm{orb}}(Y)\) does not depend on the specific choice of the stratified triangulation \(t\) and the induced substratification \(S^{t}_{Y}\), as follows directly from the discussion in [11, Sect. 3.4] and Definition 4.3.
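The labelling in steps 3–10 is a simple dispatch on the dimension of the new stratum and the dimension of the original stratum it substratifies. Purely as illustrative bookkeeping (our addition, with hypothetical dictionary keys, and with all orientation-dependent choices collapsed into single placeholders), it can be organised as follows:

```python
def orbifold_label(new_dim, ambient_dim, datum):
    """Return which piece of the orbifold datum labels a new stratum.

    new_dim: dimension of the new stratum in S_Y^t minus Y.
    ambient_dim: dimension of the original stratum of Y it substratifies.
    datum: dict holding the label data of that original stratum."""
    rules = {
        (3, 3): "a",         # step 3: underlying object of the orbifold datum A
        (2, 3): "A",         # step 4: underlying 1-morphism of A
        (2, 2): "M",         # step 5: underlying 1-morphism of the 1-morphism M
        (1, 3): "mu_A",      # step 6: multiplication of A
        (1, 2): "action_M",  # step 7: left or right action of M (orientation-dependent)
        (0, 3): "alpha_A",   # step 8: alpha_A or its inverse
        (0, 2): "alpha_M",   # step 9: one of alpha_M^{l,m,r} or their inverses
        (0, 1): "action_F",  # step 10: action 2-morphism of F or its inverse
    }
    return datum[rules[(new_dim, ambient_dim)]]

# Example: a new 1-stratum dual to a 2-simplex, sitting inside an A-labelled
# 3-stratum, receives the multiplication 2-morphism of A.
A_datum = {"a": "a", "A": "A", "mu_A": "mu_A", "alpha_A": "alpha_A"}
assert orbifold_label(1, 3, A_datum) == "mu_A"
```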
Extending the invariant \(\mathcal{Z}_{\mathrm{orb}}(\varnothing\xrightarrow{Y}\varnothing)\) to a full defect TQFT can be done in a standard way following [11, Constr. 3.8 & 3.9].

Figure 6.1: A sketch of a 2-dimensional slice for the construction of a local patch of a defect bordism \(Y^{t}\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D})\) from a bordism \(Y\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D}_{\mathrm{orb}})\). The original stratification consists of the fat lines (corresponding to 2-strata in \(Y\)) labelled by the ingredients \(M,M^{\prime},M^{\prime\prime}\) of \(\mathcal{M},\mathcal{M}^{\prime},\mathcal{M}^{\prime\prime}\in(\mathrm{D}_{\mathrm{orb}})_{2}\), as well as the \(F\)-labelled point (corresponding to a 1-stratum in \(Y\)). The stratified triangulation is drawn in black. The strata of the original stratification of \(Y\) are substratified in \(Y^{t}\); for example, the \(M\)-labelled line is substratified into four lines and three points. The light and dark blue lines are labelled with the 1-morphisms \(A,A^{\prime},A^{\prime\prime}\) corresponding to the orbifold data \(\mathcal{A},\mathcal{A}^{\prime},\mathcal{A}^{\prime\prime}\) assigned in \(Y\) to the top-dimensional strata. The blue vertices are labelled by the corresponding multiplication 2-morphisms \(\mu,\mu^{\prime},\dots\) or their adjoints, as dictated by the chosen total order of 0-simplices. In addition, the new vertices (which are part of substratifications of lines) are labelled with the 2-morphisms \(\triangleright\) and \(\triangleleft\) which are part of the 1-morphisms \(\mathcal{M},\mathcal{M}^{\prime},\mathcal{M}^{\prime\prime}\) in \((\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}}\). Note that we suppress orientations to avoid clutter.

We first describe the evaluation of \(\mathcal{Z}_{\mathrm{orb}}\) on objects. The state space \(\mathcal{Z}_{\mathrm{orb}}(\varSigma)\) associated to a stratified \(\mathrm{D}_{\mathrm{orb}}\)-labelled 2-manifold \(\varSigma\in\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D}_{\mathrm{orb}})\) is constructed in two steps. For any stratified triangulation \(\tau\) of \(\varSigma\), its Poincare dual stratification \(S^{\tau}\) refines the original stratification of \(\varSigma\) to a stratification \(S^{\tau}_{\varSigma}\), whose new strata we label with elements of \(\mathrm{D}\) in analogy to the above discussion. This produces a new defect surface \(\varSigma^{\tau}\in\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D})\), and we can consider the vector space \(\mathcal{Z}(\varSigma^{\tau})\). For two distinct triangulations \(\tau\) and \(\tau^{\prime}\), the vector spaces \(\mathcal{Z}(\varSigma^{\tau})\) and \(\mathcal{Z}(\varSigma^{\tau^{\prime}})\) are in general not equal, nor even canonically isomorphic. Any stratified triangulation \(t\) of \(\varSigma\times[0,1]\) restricting to \(\tau\) and \(\tau^{\prime}\) on the boundary gives rise to a linear map \(C^{t}_{\varSigma}:=\mathcal{Z}((\varSigma\times[0,1])^{t})\colon\mathcal{Z}(\varSigma^{\tau})\longrightarrow\mathcal{Z}(\varSigma^{\tau^{\prime}})\) which is independent of \(t\). The value of \(\mathcal{Z}_{\mathrm{orb}}\) on \(\varSigma\) is defined to be the colimit of the vector spaces \(\mathcal{Z}(\varSigma^{\tau})\) over all stratified triangulations of \(\varSigma\), related by the maps \(C^{t}_{\varSigma}\). This vector space \(\mathcal{Z}_{\mathrm{orb}}(\varSigma)\) is isomorphic to the image of the idempotent corresponding to the cylinder for any fixed stratified triangulation \(\tau^{\prime}=\tau\) of \(\varSigma\).

Finally, to construct the value on a bordism \(Y\colon\varSigma\longrightarrow\varSigma^{\prime}\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D}_{\mathrm{orb}})\) we fix a stratified triangulation \(t\) of \(Y\) extending fixed triangulations \(\tau\) and \(\tau^{\prime}\) of its source and target, respectively. As above we can construct a labelling of the dual triangulation by \(\mathrm{D}\) and hence get a morphism \(Y^{t}\colon\varSigma^{\tau}\longrightarrow(\varSigma^{\prime})^{\tau^{\prime}}\) in \(\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D})\). By evaluating the original defect TQFT, this gives a linear map

\[\mathcal{Z}(Y^{t})\colon\mathcal{Z}(\varSigma^{\tau})\longrightarrow\mathcal{Z}((\varSigma^{\prime})^{\tau^{\prime}}) \tag{6.8}\]

which only depends on \(\tau\) and \(\tau^{\prime}\), and which induces a well-defined map

\[\mathcal{Z}_{\mathrm{orb}}(Y)\colon\mathcal{Z}_{\mathrm{orb}}(\varSigma)\longrightarrow\mathcal{Z}_{\mathrm{orb}}(\varSigma^{\prime}) \tag{6.9}\]

between the colimits. It follows directly from the construction that \(\mathcal{Z}_{\mathrm{orb}}\) is a defect TQFT:

**Theorem 6.4**.: Let \(\mathcal{Z}\colon\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D})\longrightarrow\mathrm{Vect}\) be a defect TQFT. The orbifold construction above gives a defect TQFT \(\mathcal{Z}_{\mathrm{orb}}\colon\mathrm{Bord}^{\mathrm{def}}_{3,2}(\mathrm{D}_{\mathrm{orb}})\longrightarrow\mathrm{Vect}\). We call \(\mathcal{Z}_{\mathrm{orb}}\) the orbifold (TQFT) of \(\mathcal{Z}\).
Note that the definite article is appropriate; all the orbifold closed TQFTs \(\mathcal{Z}_{\mathcal{A}}\) constructed for some \(\mathcal{A}\in(\mathcal{T}_{\mathcal{Z}})_{\mathrm{orb}}\) in [10] are obtained from \(\mathcal{Z}_{\mathrm{orb}}\) by restricting to trivially stratified \(\mathcal{A}\)-labelled bordisms.

**Example 6.5**.: The 3-category \(E(\mathrm{BssFrob})\) describes defects between 3-dimensional Euler theories of [10], cf. Section 5.1.1. The 3-category sFus of spherical fusion categories is a subcategory of the orbifold completion of \(E(\mathrm{BssFrob})\) by Theorem 5.18. Hence from Theorem 6.4 we get an associated defect TQFT. As shown in [10], the closed TQFT corresponding to an object in this orbifold construction is the one constructed by Turaev-Viro-Barrett-Westbury. Hence Theorem 6.4 constructs defects between 3-dimensional state sum models from bimodule categories with trace and their compatible functors. We expect them to agree with the defects constructed in [11] when considering only line defects with precisely two adjacent surface defects. We leave a detailed comparison for future work.

**Example 6.6**.: Let \(\mathcal{M}\) be a modular fusion category. Defects in the corresponding Reshetikhin-Turaev theory are described by \(\mathrm{B}\Delta\mathrm{ssFrob}(\mathcal{M})\). In Theorem 5.21 we identified a subcategory of its orbifold completion. The corresponding defect TQFT agrees with the one constructed in [11]. Our construction extends theirs in the sense that we also describe the state spaces and linear maps associated to bordisms with non-empty boundary. A special case of the constraints on 2-morphisms of \(\mathcal{T}_{\mathrm{orb}}\) (recall Definition 4.3) was first considered in the context of (a 1-categorical description of) Reshetikhin-Turaev theory in [12, Fig. 1.1 & 3.1], and used further in [11]. It is worth noting that from the 3-categorical Morita theoretic perspective adopted here, most of these constraints are identified as instances of universal coherence relations.

## Appendix A 2-(co)limits

In this appendix we recall some definitions related to 2-colimits relevant for the main part of the paper, see for example [13] for a textbook treatment. As a simple consequence of the definitions we derive a coherence result for the computation of colimits over products.

Before defining 2-colimits we recall that a 2-functor \(\mathcal{F}\colon\mathcal{C}\longrightarrow\operatorname{Cat}\) is representable if there exists an object \(c_{\mathcal{F}}\in\mathcal{C}\) and a natural isomorphism \(\mathcal{C}(c_{\mathcal{F}},-)\longrightarrow\mathcal{F}(-)\) of functors \(\mathcal{C}\longrightarrow\operatorname{Cat}\). The representing object \(c_{\mathcal{F}}\), if it exists, is unique up to a contractible choice. In the 2-categorical setting this means that the 2-category \(\operatorname{Rep}(\mathcal{F})\) with

* objects given by pairs of an object \(c_{\mathcal{F}}\) together with a natural isomorphism \(\varphi_{c_{\mathcal{F}}}\colon\mathcal{C}(c_{\mathcal{F}},-)\longrightarrow\mathcal{F}(-)\),
* 1-morphisms given by a 1-morphism \(c_{\mathcal{F}}\stackrel{{f}}{{\longrightarrow}}c_{\mathcal{F}}^{\prime}\) in \(\mathcal{C}\) together with an invertible 2-morphism \(\vartheta_{f}\) identifying the composite of precomposition with \(f\) followed by \(\varphi_{c_{\mathcal{F}}}\) with \(\varphi_{c_{\mathcal{F}}^{\prime}}\),
* and 2-morphisms given by 2-morphisms \(\omega\colon f_{1}\longrightarrow f_{2}\) in \(\mathcal{C}\) which are compatible with the 2-isomorphisms \(\vartheta_{f_{1}}\) and \(\vartheta_{f_{2}}\),
is either empty or equivalent to the 2-category with one object and only identity morphisms. This implies that there is an isomorphism between any two objects \(c_{\mathcal{F}}\) and \(c_{\mathcal{F}}^{\prime}\) and a unique 2-isomorphism between any pair of 1-morphisms. In particular, if two compositions of 2-morphisms have the same source and target they must be equal.

**Definition A.1**.: Let \(\mathcal{F}\colon D\longrightarrow\mathcal{B}\) be a diagram in a 2-category \(\mathcal{B}\). The 2-colimit of \(\mathcal{F}\), if it exists, is an object \(\operatorname{colim}_{D}\mathcal{F}\in\mathcal{B}\) which represents the functor

\[\mathcal{B}\longrightarrow\operatorname{Cat}\,,\qquad b\longmapsto\operatorname{Nat}(\mathcal{F},\Delta_{b})\,,\]

where \(\Delta_{b}\) denotes the constant diagram at \(b\).

From the definition it is clear that if the 2-colimit of \(\mathcal{F}\) exists then it is unique up to a contractible choice. Objects of the category \(\operatorname{Nat}(\mathcal{F},\Delta_{b})\) are called cocones with apex \(b\). Evaluating the natural isomorphism \(\mathcal{B}(\operatorname{colim}_{D}\mathcal{F},-)\longrightarrow\operatorname{Nat}(\mathcal{F},\Delta_{-})\) at the identity \(\operatorname{colim}_{D}\mathcal{F}\longrightarrow\operatorname{colim}_{D}\mathcal{F}\) gives rise to a cocone \(\mathcal{F}(-)\longrightarrow\Delta_{\operatorname{colim}_{D}\mathcal{F}}\in\operatorname{Nat}(\mathcal{F},\Delta_{\operatorname{colim}_{D}\mathcal{F}})\). The definition of a 2-colimit given above is equivalent to the condition that this cocone is universal, in the sense that the map

\[(-)_{*}\colon\mathcal{B}(\operatorname{colim}_{D}\mathcal{F},b)\longrightarrow\operatorname{Nat}(\mathcal{F},\Delta_{b})\]

induced by composition with the cocone is an equivalence. The contractibility of representing objects translates to the contractibility of the 2-category of universal cocones.

A 2-category \(\mathcal{B}\) is called (finitely) cocomplete if all (finite) 2-colimits exist in \(\mathcal{B}\). In this case the weak uniqueness of colimits implies that any choice of 2-colimits leads to a functor \(\operatorname{colim}_{D}\colon\mathcal{B}^{D}\longrightarrow\mathcal{B}\). Let \(\mathcal{G}\colon\mathcal{B}\longrightarrow\mathcal{B}^{\prime}\) be a functor and \(\mathcal{F}\colon D\longrightarrow\mathcal{B}\) a diagram. Note that applying \(\mathcal{G}\) to the universal cocone \(\mathcal{F}(-)\longrightarrow\operatorname{colim}_{D}\mathcal{F}\) gives rise to a cocone over \(\mathcal{G}\circ\mathcal{F}\), and hence up to contractible choice there is a 1-morphism \(\operatorname{colim}_{D}(\mathcal{G}\circ\mathcal{F})\longrightarrow\mathcal{G}(\operatorname{colim}_{D}\mathcal{F})\). We say that \(\mathcal{G}\) preserves the colimit of \(\mathcal{F}\) if this map is an equivalence, and that \(\mathcal{G}\) preserves (sifted) 2-colimits if this map is an equivalence for all (sifted) diagram categories.

**Proposition A.2** (Fubini theorem for 2-colimits).: Let \(\mathcal{F}\colon D_{1}\times D_{2}\longrightarrow\mathcal{B}\) be a diagram. There are equivalences

\[\operatorname{colim}_{d_{1}\in D_{1}}(\operatorname{colim}_{D_{2}}\mathcal{F}(d_{1},-))\cong\operatorname{colim}_{D_{1}\times D_{2}}\mathcal{F}\cong\operatorname{colim}_{d_{2}\in D_{2}}(\operatorname{colim}_{D_{1}}\mathcal{F}(-,d_{2}))\,, \tag{A.1}\]

unique up to contractible choice.

Proof.: We will only construct the equivalence for \(\operatorname{colim}_{d_{1}\in D_{1}}(\operatorname{colim}_{D_{2}}\mathcal{F}(d_{1},-))\).
The second equivalence can be constructed analogously. For this it is enough to note that the map \(\mathcal{F}(d_{1},d_{2})\longrightarrow\operatorname{colim}_{D_{2}}\mathcal{F}(d_{1},-)\longrightarrow\operatorname{colim}_{d_{1}\in D_{1}}(\operatorname{colim}_{D_{2}}\mathcal{F}(d_{1},-))\), composing the maps which are part of the universal cocones, has the structure of a universal cocone for \(\mathcal{F}(-,-)\). The statement now follows from the uniqueness of universal cocones.

Let us pick the equivalences described in the previous proposition. Then for a functor \(\mathcal{F}\colon D_{1}\times D_{2}\times D_{3}\longrightarrow\mathcal{B}\) we can iterate the result of the proposition to get two equivalences between \(\operatorname{colim}_{D_{1}\times D_{2}\times D_{3}}\mathcal{F}\) and the iterated colimit \(\operatorname{colim}_{D_{1}}(\operatorname{colim}_{D_{2}}(\operatorname{colim}_{D_{3}}\mathcal{F}(d_{1},d_{2},-)))\), depending on whether we first collapse \(D_{1}\times D_{2}\) or \(D_{2}\times D_{3}\). In general there is no reason for these maps to agree. Furthermore, there is even a third isomorphism between \(\operatorname{colim}_{D_{1}\times D_{2}\times D_{3}}\mathcal{F}\) and \(\operatorname{colim}_{D_{1}}(\operatorname{colim}_{D_{2}}(\operatorname{colim}_{D_{3}}\mathcal{F}(d_{1},d_{2},-)))\), which we can construct by noting, similarly to the proof of the Fubini theorem, that the iterated colimit defines a universal cocone for \(\mathcal{F}\). However, all these maps are related by unique 2-isomorphisms, since they all are part of maps of universal cocones. If we now consider diagrams indexed by the product of four categories there are even more ways to relate the different orders in which we can compute the 2-colimits. Again all these will be related by unique 2-isomorphisms. Furthermore, since the 2-category of universal cocones is contractible, two 2-isomorphisms which we can build by composition of those will agree as soon as they have the same source and target. This pattern continues to finite products of diagram categories and we record it in the following proposition.

**Proposition A.3** (Coherence of the Fubini theorem for 2-colimits).: Let \(\mathcal{F}\colon D_{1}\times\dots\times D_{n}\longrightarrow\mathcal{B}\) be a functor such that all the colimits appearing in the statement exist. The 2-category with

* objects given by chosen representatives for the different iterative ways and orders of computing the 2-colimit of \(\mathcal{F}\) (note that all of them are canonically universal cocones for \(\mathcal{F}\)),
* 1-morphisms given by morphisms of universal cocones,
* 2-morphisms given by 2-morphisms compatible with the universal cocone structure,

is contractible.

Proof.: The statement follows directly from the fact that the 2-category described in the proposition is a full subcategory of the 2-category of all universal cocones, which is contractible.
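As a simple illustration of (A.1) and of Proposition A.3 (our addition, not from the source): if the \(D_{i}\) are discrete and \(\mathcal{B}=\operatorname{Cat}\), then 2-colimits are disjoint unions, and (A.1) reduces to

\[\coprod_{(d_{1},d_{2})\in D_{1}\times D_{2}}\mathcal{F}(d_{1},d_{2})\;\cong\;\coprod_{d_{1}\in D_{1}}\coprod_{d_{2}\in D_{2}}\mathcal{F}(d_{1},d_{2})\,,\]

so that the coherence statement amounts to the associativity and commutativity, up to canonical isomorphism, of finite disjoint unions.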
2309.01646
ReLoc-PDR: Visual Relocalization Enhanced Pedestrian Dead Reckoning via Graph Optimization
Accurately and reliably positioning pedestrians in satellite-denied conditions remains a significant challenge. Pedestrian dead reckoning (PDR) is commonly employed to estimate pedestrian location using low-cost inertial sensors. However, PDR is susceptible to drift due to sensor noise, incorrect step detection, and inaccurate stride length estimation. This work proposes ReLoc-PDR, a fusion framework combining PDR and visual relocalization using graph optimization. ReLoc-PDR leverages time-correlated visual observations and learned descriptors to achieve robust positioning in visually-degraded environments. A graph optimization-based fusion mechanism with the Tukey kernel effectively corrects cumulative errors and mitigates the impact of abnormal visual observations. Real-world experiments demonstrate that our ReLoc-PDR surpasses representative methods in accuracy and robustness, achieving accurate and robust pedestrian positioning results using only a smartphone in challenging environments such as less-textured corridors and dark nighttime scenarios.
Zongyang Chen, Xianfei Pan, Changhao Chen
2023-09-04T14:54:47Z
http://arxiv.org/abs/2309.01646v1
# ReLoc-PDR: Visual Relocalization Enhanced Pedestrian Dead Reckoning via Graph Optimization

###### Abstract

Accurately and reliably positioning pedestrians in satellite-denied conditions remains a significant challenge. Pedestrian dead reckoning (PDR) is commonly employed to estimate pedestrian location using low-cost inertial sensors. However, PDR is susceptible to drift due to sensor noise, incorrect step detection, and inaccurate stride length estimation. This work proposes ReLoc-PDR, a fusion framework combining PDR and visual relocalization using graph optimization. ReLoc-PDR leverages time-correlated visual observations and learned descriptors to achieve robust positioning in visually-degraded environments. A graph optimization-based fusion mechanism with the Tukey kernel effectively corrects cumulative errors and mitigates the impact of abnormal visual observations. Real-world experiments demonstrate that our ReLoc-PDR surpasses representative methods in accuracy and robustness, achieving accurate and robust pedestrian positioning results using only a smartphone in challenging environments such as less-textured corridors and dark nighttime scenarios.

Visual relocalization, pedestrian inertial navigation, factor graph, incremental smoothing, sensor fusion, smartphone

## I Introduction

Robust and accurate indoor pedestrian navigation plays a crucial role in enabling various location-based services (LBS). The determination of pedestrian location in satellite-denied environments is a fundamental requirement for numerous applications, including emergency rescue operations, path guidance systems, and augmented reality experiences [1, 2]. Existing indoor positioning solutions relying on the deployment of dedicated infrastructure are susceptible to signal interference and non-line-of-sight (NLOS) conditions, and their widespread deployment can be prohibitively expensive. In contrast to infrastructure-based positioning methods, pedestrian dead reckoning (PDR) relies on only inertial data to estimate pedestrian location, providing a relatively robust approach in various environments. However, due to the inherent noise in inertial sensors, PDR is susceptible to trajectory drift over long-term positioning. To mitigate this issue, researchers have explored the combination of PDR with additional positioning methods, such as Ultra-Wideband (UWB), WiFi, and Bluetooth [3, 4, 5]. These methods have demonstrated impressive results in indoor localization. However, they often necessitate the presence of extra infrastructure and require rigorous calibration procedures in advance. The pursuit of a low-cost, robust, and self-contained positioning system is of great importance to flexible and resilient pedestrian navigation.

Visual relocalization, which estimates the 6 degree-of-freedom (DoF) pose of a query image against an existing 3D map model, holds potential for achieving drift-free global localization using only a camera. Since smartphones commonly come with built-in cameras, utilizing image-based localization methods on mobile devices becomes a viable approach. However, the challenge lies in the susceptibility of image-based relocalization to environmental factors such as changes in lighting conditions and scene dynamics. Existing 3D structure-based visual localization methods [6, 7, 8, 9] are computationally demanding and fail to provide real-time pose output.
Furthermore, the positioning accuracy of visual relocalization significantly deteriorates in challenging environments due to the scarcity of recognizable features. Considering the continuity and autonomy advantages of PDR, combining visual relocalization with PDR serves as a complementary approach. This integration allows for the correction of accumulated errors in PDR using visual relocalization results, while PDR improves the continuity and real-time performance of trajectory estimation. Several recent studies have begun exploring this direction [10, 11, 12, 13]. Existing methods primarily employ dynamic weighting strategies to loosely integrate PDR with visual relocalization. However, these approaches may lack robustness in challenging environments and can result in significant trajectory inconsistencies due to the interference caused by abnormal visual observations. To tackle these challenges, this work presents ReLoc-PDR, a robust framework for pedestrian inertial positioning aided by visual relocalization. ReLoc-PDR leverages recent advancements in deep learning-based feature extraction [14] and graph optimization [15] to ensure reliable visual feature matching and robust localization. By integrating these techniques, our method effectively mitigates the risk of visual relocalization failure, enhancing the system's robustness in visually degraded environments. To fuse the pose results from PDR and visual relocalization effectively, we design a graph optimization-based fusion mechanism using the Tukey kernel. This mechanism facilitates cumulative error correction and eliminates the impact of abnormal visual observations on positioning accuracy. As a result, the ReLoc-PDR system exhibits stability and reliability. Real-world experiments were conducted to evaluate the performance of the proposed method. The results demonstrate the efficacy of our approach in various challenging environments, including texture-less areas and nighttime outdoor scenarios. Our contributions can be summarized as follows:

* We propose ReLoc-PDR, a robust pedestrian positioning system that effectively integrates Pedestrian Dead Reckoning (PDR) and visual relocalization to mitigate positioning drift.
* We design a robust visual relocalization pipeline that leverages learned global descriptors for image retrieval and learned local feature matching. It enhances the robustness of the positioning system, particularly in visually degraded scenes where traditional methods may struggle.
* We introduce a pose fusion mechanism that incorporates the Tukey kernel into graph optimization to facilitate cumulative error correction. It effectively eliminates the impact of abnormal visual observations on positioning accuracy, ensuring the stability and reliability of the ReLoc-PDR system.

## 2 Related Work

### Pedestrian Dead Reckoning

Pedestrian dead reckoning (PDR) relies on measurements obtained from the built-in sensors of smartphones to detect human gait events and estimate step length and heading angle for pedestrian positioning [1, 16]. However, PDR alone is prone to cumulative error over time, resulting in inaccurate positioning, particularly over long distances. To address this limitation, previous studies have attempted to integrate PDR with other absolute localization technologies such as Wi-Fi, Bluetooth, and Ultra-Wideband (UWB). These integration approaches aim to periodically correct the accumulated error of PDR by incorporating absolute position information [3, 4, 5].
Despite their potential benefits, these methods are heavily reliant on infrastructure availability and can be susceptible to changes in the physical environment. Factors such as variations in signal strength, infrastructure coverage, and environmental conditions may impact the accuracy and reliability of these infrastructure-dependent approaches.

The integration of pedestrian dead reckoning (PDR) with visual relocalization has emerged as a promising research direction for achieving self-contained and highly accurate positioning using smartphones, without the need for additional infrastructure. This research area has gained increasing attention in recent years. Elloumi et al. [17] conducted a comparative study between inertial and vision-based methods for indoor pedestrian localization using smartphones. However, they did not explore the integration of these approaches to further enhance the performance of the localization system. In another study, a refined indoor localization framework was proposed in [10], which combines image-based localization with PDR to improve pedestrian trajectory estimation. Wang et al. [18] introduced a vision-aided PDR localization system that integrates visual and PDR measurements into a unified graph model. Furthermore, a novel indoor localization method called V-PDR was proposed in [11], which integrates image retrieval and PDR using a weighted average strategy. This approach successfully reduces the accumulated error of PDR. Shu et al. [12] employed a dynamic fusion strategy that integrates PDR and image-based localization (IBL) based on the number of inliers in the IBL process. Their approach enables continuous and accurate 3D location estimation for long-term tracking using smartphones. Additionally, in [13], a multimodal fusion algorithm was proposed that loosely couples PDR with visual localization to correct cumulative errors in PDR results. However, despite these advancements, the robustness of these approaches remains limited, and the positioning accuracy may significantly degrade in visually challenging environments. Further improvements are necessary to ensure reliable and accurate positioning in such scenarios.

### Visual Relocalization

Visual relocalization, also known as image-based localization, refers to the task of estimating the precise 6-DoF camera pose of a query image within a known map. This task can be approached using retrieval-based and structure-based methods. Retrieval-based approaches [19, 20, 21, 22] estimate the pose of the query image by leveraging the geolocation of the most similar image retrieved from an image database. However, these methods often fall short in terms of localization accuracy. Structure-based methods [6, 7, 8, 9, 10] rely on establishing correspondences between 2D features in the query image and 3D points in a Structure from Motion (SFM) model using local descriptors. To handle large-scale scenes, these methods typically employ image retrieval as an initial step, restricting the 2D-3D matching to the visible portion of the query image [8, 9]. However, the robustness of traditional localization methods is limited due to the insufficient invariance of handcrafted local features. In recent years, CNN-based local features [14, 23, 24] have exhibited impressive robustness against illumination variations and viewpoint changes. These features have been employed to enhance localization accuracy.
For example, [8] presents a comprehensive pipeline for structure-based visual relocalization, incorporating global image retrieval, local feature matching, and pose estimation. However, this method may encounter challenges during the image retrieval stage in scenes with significant appearance variations, as the representation power of global features is limited. Additionally, these methods often yield a low-frequency global position estimation and require high-performance computing platforms. ## 3 Visual Relocalization Enhanced Pedestrian Dead Reckoning ### System Design #### 3.1.1 System Overview The framework of our ReLoc-PDR is depicted in Figure 1, comprising three primary modules: inertial sensor-based Pedestrian Dead Reckoning (PDR), visual relocalization, and graph optimization-based pose fusion. In this architecture, the PDR algorithm is employed to compute the per-step pose using the built-in inertial sensors of a commercially available smartphone. The visual relocalization pipeline aims to accurately and robustly estimate the pose of the triggered image captured by the smartphone camera. This estimation is performed in relation to a pre-built 3D feature map, enabling global visual observations that periodically correct the accumulated error in the PDR. Finally, the pose fusion module integrates the pose results from PDR and visual relocalization using graph optimization with a robust Tukey kernel. This integration enables the continuous and smooth estimation of the pedestrian's position and trajectory during long-term walking. #### 3.1.2 Pedestrian Dead Reckoning The PDR algorithm utilizes inertial data to estimate the pedestrian's position based on human motion characteristics. It consists of four main steps: step detection, step length estimation, heading angle estimation, and position update. Step detection relies on the repetitive pattern observed in the accelerometer measurements during human walking. In our work, we employ a multi-threshold peak detection algorithm to identify pedestrian gait: \[\left\{\begin{array}{l}\delta_{min}<|a_{m}-g|<\delta_{max}\\ \Delta T>\delta_{t}\end{array}\right. \tag{1}\] Here, \(a_{m}\) represents the magnitude of acceleration, \(\delta_{min}\) and \(\delta_{max}\) denote the minimum and maximum acceleration values within one step, \(g\) represents the gravity value, \(\Delta T\) denotes the time interval between adjacent peaks, and \(\delta_{t}\) represents the minimum duration threshold. Considering the complexity of pedestrian movement and the presence of inevitable noises in low-cost MEMS inertial sensors, we employ a fourth-order low-pass filter to preprocess the acceleration signal. This pre-processing step enhances the quality of the gait characteristics obtained. Furthermore, to eliminate false peaks caused by external interference, we evaluate whether each peak is a local maximum value within a specific sliding window size. This additional criterion helps ensure the accuracy of the detected gait peaks. The step length estimation component aims to calculate the distance covered by a pedestrian in a single step, which is influenced by the pedestrian's motion states. In our approach, we employ the Weinberg model [25] to estimate the pedestrian step length via: \[S_{k}=K\cdot\sqrt[4]{a_{z,max}-a_{z,min}} \tag{2}\] Here, \(K\) represents the calibrated step-length coefficient, while \(a_{z,max}\) and \(a_{z,min}\) denote the maximum and minimum values of vertical acceleration during step \(k\), respectively. 
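As a concrete sketch of this gait pipeline (our illustration; the thresholds, window size, cutoff frequency, and coefficient \(K\) below are placeholder values, not the calibrated ones used in the experiments), the multi-threshold peak detector of Eq. (1) with the fourth-order low-pass pre-filter and the Weinberg model of Eq. (2) can be implemented as:

```python
import numpy as np
from scipy.signal import butter, filtfilt

G = 9.81  # gravity magnitude g (m/s^2)

def detect_steps(acc_mag, fs, d_min=0.8, d_max=5.0, d_t=0.3, win=0.4):
    """Multi-threshold peak detection on the acceleration magnitude |a_m|.

    Implements Eq. (1): d_min < |a_m - g| < d_max, with adjacent peaks at
    least d_t seconds apart, after a 4th-order low-pass filter; each peak
    must also be the maximum within a sliding window of `win` seconds."""
    b, a = butter(4, 3.0 / (fs / 2.0))         # 4th-order low-pass at 3 Hz
    sig = np.abs(filtfilt(b, a, acc_mag) - G)  # |a_m - g|
    half = max(1, int(win * fs / 2))
    peaks, last_t = [], -np.inf
    for k in range(half, len(sig) - half):
        window = sig[k - half:k + half + 1]
        if (d_min < sig[k] < d_max and sig[k] == window.max()
                and k / fs - last_t > d_t):
            peaks.append(k)
            last_t = k / fs
    return peaks

def weinberg_step_length(acc_z, K=0.48):
    """Weinberg model of Eq. (2): S_k = K * (a_z_max - a_z_min)**(1/4),
    evaluated on the vertical acceleration samples of one step."""
    return K * (acc_z.max() - acc_z.min()) ** 0.25
```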
Heading estimation is utilized to determine the walking direction of the pedestrian. We leverage the gyroscope data from the smartphone's built-in Inertial Measurement Unit (IMU) to estimate the pedestrian's heading angle. This is achieved through an attitude update equation based on the median integration method. Finally, the pedestrian's position is updated based on the previous position, incorporating the estimated step length and heading angle. The updated position is as follows:

\[\left[x_{k}\ y_{k}\right]=\left[x_{k-1}\ y_{k-1}\right]+\left[S_{k}\cos(\psi_{k})\ S_{k}\sin(\psi_{k})\right] \tag{3}\]

Here, \(\psi_{k}\) represents the heading angle, while \(x_{k}\) and \(y_{k}\) indicate the pedestrian's horizontal position at step \(k\).

#### 3.1.3 Data Flow

As depicted in Figure 1, the ReLoc-PDR system takes in acceleration and gyroscope data from the built-in inertial sensor of the smartphone to execute the PDR algorithm on the mobile device, incrementally calculating the pedestrian's position \(\mathbf{P}_{k}^{\text{PDR}}\) based on human motion characteristics.

Figure 1: The system architecture of the proposed visual relocalization-aided PDR method.

Once a step is successfully detected, a query image is triggered by capturing a photo using the smartphone's camera. Simultaneously, the captured query image, along with the current pedestrian pose, is transmitted to the visual relocalization module. Relocalization is achieved by matching the current image against a pre-built 3D sparse feature map and estimating the 6-DoF pose of the query image. Upon completing the visual relocalization pipeline, the relocalization module returns the pose of the query image \(\mathbf{P}_{k}^{\text{ReLoc}}\) and the number of matched inliers \(M\) to the pose fusion module. Finally, the fusion module processes the results from the relocalization and PDR modules, and performs pose graph optimization in a separate thread.
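The per-step dead-reckoning update can then be sketched as follows (our illustration; the gyroscope handling is reduced to planar midpoint integration of the yaw rate instead of the full attitude update described above):

```python
import numpy as np

class PDR:
    """Per-step dead reckoning implementing the position update of Eq. (3)."""

    def __init__(self, x0=0.0, y0=0.0, psi0=0.0):
        self.x, self.y, self.psi = x0, y0, psi0

    def update_heading(self, yaw_rates, dt):
        # Midpoint integration of the z-axis gyroscope samples (rad/s),
        # with a constant sample period dt (s).
        w = np.asarray(yaw_rates)
        self.psi += float(np.sum(0.5 * (w[:-1] + w[1:]) * dt))

    def step(self, S_k):
        # Eq. (3): advance by step length S_k along the current heading.
        self.x += S_k * np.cos(self.psi)
        self.y += S_k * np.sin(self.psi)
        return self.x, self.y
```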
### Visual Relocalization with Learned Features

Given that pedestrian navigation spans diverse indoor and outdoor environments, the visual relocalization method needs to be robust to varying viewpoints and imaging conditions, including illumination, weather, and seasonal changes, and to perform consistently both indoors and outdoors. The robustness of traditional retrieval-based [19, 20] or structure-based relocalization [6, 7, 10, 13] methods is limited due to the insufficient invariance of handcrafted features [26, 27], resulting in reduced stability under low-texture or poorly lit conditions. Recent deep neural network-based methods, such as NetVLAD [21] and SuperPoint [14], have demonstrated superior capabilities in image feature extraction, keypoint detection, and matching. These deep learning-based approaches surpass traditional baselines like bag-of-words [19], VLAD [20], and SIFT [26] in terms of robustness. Motivated by these developments, we incorporate the advancements in learned global descriptors and learned local features into the visual relocalization pipeline. This integration enhances the robustness of the pedestrian positioning system in visually degraded scenarios. The flowchart illustrating the visual relocalization process with learned features is presented in Figure 1.

Given a pre-built 3D sparse feature map database comprising images with known poses \(\{\mathbf{I}_{i}^{r},\mathbf{T}_{i}^{r}\}\), 3D point clouds \(\{\mathbf{P}_{j}\}\), and global feature descriptors \(\{f(\mathbf{I}_{i}^{r})\}\), the objective of visual relocalization is to estimate the 6-DoF pose \(\mathbf{T}^{q}\) of the query image \(\mathbf{I}^{q}\) requested by the mobile device. To accomplish this, we propose a hierarchical three-stage visual localization pipeline. Firstly, to identify similar reference images in the database, we employ a learned global descriptor [21] and exploit co-visible information encoded in the sparse feature model to retrieve the top \(N\) scene clusters \(\mathcal{S}_{covis}\). Next, we establish 2D-3D correspondences between the query image \(\mathbf{I}^{q}\) and the 3D features \(\mathbf{P}_{j}\) by leveraging 2D-2D matching with the learned SuperPoint feature [14]. Finally, we solve for the camera pose \(\mathbf{T}^{q}\) using the PnP [28] algorithm with a geometric consistency check within a RANSAC [29] loop.

#### 3.2.1 Learned Global Descriptor based Image Retrieval

Image retrieval aims to identify a subset of database images that share a covisibility relationship with the query image. For the query image \(\mathbf{I}^{q}\), we initially extract learned global descriptors \(f(\mathbf{I}^{q})\) using the NetVLAD model [21], a CNN-based global descriptor which has demonstrated superior performance compared to non-learned image representations [19, 20] in visual place recognition benchmarks. Subsequently, we retrieve the top \(k\) reference images \(\tilde{\mathcal{S}}=\{\mathbf{\tilde{I}}_{1}^{r},\mathbf{\tilde{I}}_{2}^{r},\cdots,\mathbf{\tilde{I}}_{k}^{r}\}\) from the sparse feature model based on the distance metric \(d(f(\mathbf{I}^{q}),f(\mathbf{I}^{r}))\). Rather than directly clustering these retrieved reference images based on co-observed 3D points or performing feature matching between the query image and retrieved reference images (as done in HF-Net [8]), we expand the retrieval results by leveraging the co-visible information embedded in the sparse feature model. This approach helps to search for additional potential 3D points and enlarge the view region. Specifically, for each retrieved image \(\mathbf{\tilde{I}}_{i}^{r}\), we search for the \(n\) database images in the Structure from Motion (SFM) model that share the same observed 3D map points. We then merge these images to form a scene cluster \(\mathcal{S}_{i}\), with the size of each scene cluster limited to 20 images to reduce computational costs. If a retrieved image \(\mathbf{\tilde{I}}_{i}^{r}\) already exists in a previous cluster, it is disregarded to avoid redundant expansion of the retrieval results. Finally, we obtain \(N\) scene clusters \(\mathcal{S}_{covis}=\{\mathcal{S}_{1},\cdots,\mathcal{S}_{N}\}\) and sort them according to cluster size.
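A minimal sketch of this retrieval-and-clustering stage (our illustration; it assumes L2-normalised NetVLAD descriptors and a precomputed covisibility lookup from the SFM model, and all names are hypothetical):

```python
import numpy as np

def retrieve_scene_clusters(q_desc, db_descs, covisible, top_k=10, max_size=20):
    """Return the scene clusters S_covis for one query descriptor.

    q_desc: (D,) unit-norm global descriptor of the query image.
    db_descs: (N, D) unit-norm descriptors of the database images.
    covisible: dict image_id -> list of image_ids sharing observed 3D points."""
    sims = db_descs @ q_desc                 # cosine similarity for unit vectors
    ranked = np.argsort(-sims)[:top_k]       # top-k retrieved reference images
    clusters, absorbed = [], set()
    for i in map(int, ranked):
        if i in absorbed:                    # skip images already in a cluster
            continue
        cluster = [i] + list(covisible.get(i, []))[:max_size - 1]
        absorbed.update(cluster)
        clusters.append(cluster)
    clusters.sort(key=len, reverse=True)     # sort clusters by size
    return clusters
```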
#### 3.2.2 Learned Local Feature Matching

After obtaining the co-visible candidates \(\mathcal{S}_{covis}\) for the query image \(\mathbf{I}^{q}\), our focus shifts to establishing 2D-3D correspondences between the query image and 3D points based on 2D-2D matching utilizing the learned SuperPoint feature [14]. SuperPoint is a self-supervised framework that extracts both interest points and descriptors. It has been demonstrated to generate denser and more accurate matching results compared to traditional handcrafted detectors like SIFT [26] and ORB [27]. Moreover, SuperPoint exhibits excellent robustness against illumination changes.

Figure 2: Local feature matching with learned features via the SuperPoint network. (a) Keypoint and descriptor extraction for the query image and each reference image in the scene cluster through the SuperPoint network. (b) Establishing 2D-3D correspondences between 2D features detected in the query image and 3D points contained in the scene cluster via learned local feature matching.

For each scene cluster \(\mathcal{S}_{i}\in\mathcal{S}_{covis}\), we begin by extracting the set of keypoint positions \(\mathbf{p}^{q}\) and associated local descriptors \(\mathbf{d}^{q}\) of the query image using the SuperPoint network, as illustrated in Figure 2(a), combining them into local features \(\mathbf{F}^{q}=(\mathbf{p}^{q},\mathbf{d}^{q})\). Likewise, the 2D features of the reference images within the cluster \(\mathcal{S}_{i}\) are represented by \(\mathbf{F}_{i}^{r}=(\mathbf{p}_{i}^{r},\mathbf{d}_{i}^{r})\). During local feature matching, we employ a nearest neighbor (NN) matcher to establish 2D-2D matches between the query features \(\mathbf{F}^{q}\) and each reference feature \(\mathbf{F}_{i}^{r}\) through visual descriptor similarity, followed by Lowe's ratio test [30] to filter out ambiguous matched pairs. Finally, the 2D-3D correspondences are established by retrieving the 3D points corresponding to the keypoints detected in the database images based on the previous 2D-2D match results (Figure 2(b)). These correspondences are crucial for camera pose estimation.

#### 3.2.3 Robust Pose Estimation

For the query image \(\mathbf{I}^{q}\), we have thus established the 2D-3D correspondence between the 2D keypoints detected in the query image and the 3D points within the scene cluster \(\mathcal{S}_{i}\). Upon building this correspondence, we proceed to solve the pose of the query image using the PnP algorithm [28] within a RANSAC loop [29]. During the iterative solving process, if the number of inliers in the estimated pose surpasses a predefined threshold, the procedure terminates early. This early termination ensures efficiency and expedites the pose estimation procedure.
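The matching and pose stages can be sketched as follows (our illustration, using OpenCV's RANSAC-PnP; descriptors are assumed unit-normalised, and the ratio and RANSAC parameters are placeholders):

```python
import numpy as np
import cv2

def nn_ratio_match(d_q, d_r, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test.

    d_q: (N, D) query descriptors, d_r: (M, D) reference descriptors (M >= 2),
    both unit-norm, so squared distance = 2 - 2 * cosine similarity."""
    sim = d_q @ d_r.T
    order = np.argsort(-sim, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(d_q))
    d1 = 2.0 - 2.0 * sim[rows, best]
    d2 = 2.0 - 2.0 * sim[rows, second]
    keep = d1 < (ratio ** 2) * d2              # ratio test on squared distances
    return np.stack([rows[keep], best[keep]], axis=1)

def solve_pose_pnp(pts2d, pts3d, K_cam, min_inliers=25):
    """Estimate the camera pose from 2D-3D matches with PnP inside RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K_cam, None,
        reprojectionError=8.0, confidence=0.999)
    if not ok or inliers is None or len(inliers) < min_inliers:
        return None                            # relocalization deemed unreliable
    R, _ = cv2.Rodrigues(rvec)                 # rotation matrix from axis-angle
    return R, tvec, len(inliers)
```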
### Integrating PDR and Visual Relocalization via Factor Graph Optimization In the process of visual and inertial pose fusion, ensuring the reliability of observation information is crucial for maintaining the stability of the multi-sensor positioning system. However, we observed that evaluating the quality of visual relocalization solely based on the number of inliers [12, 13] is not robust enough in visually degraded scenes. Examples of such scenes include texture-less walls, areas with similar structures, and dark roadways. In these scenarios, abnormal visual observations can still be introduced into the positioning system, leading to a significant degradation in accuracy. To address this challenge, we propose a robust pose fusion algorithm that integrates Pedestrian Dead Reckoning (PDR) with visual relocalization using graph optimization [15] and the Tukey robust kernel [31]. By incorporating the Tukey kernel function into the pose graph, we can adaptively assess the impact of the current visual relocalization result on the system's states. This adaptive assessment dynamically determines the weight of the visual observation, effectively mitigating the risk of visual relocalization failures. Furthermore, unlike existing visual-inertial fusion methods [32], we employ an inertial-centric data processing scheme. This scheme enables the dynamic integration of visual relocalization observations into the graph. In the pose graph, as illustrated in Figure 3, each step node serves as a vertex, connecting to other vertices through two types of edges. #### 3.3.1 PDR Factor As discussed in Section 3.1, the Pedestrian Dead Reckoning (PDR) algorithm offers the advantages of autonomy and continuity, enabling high-accuracy positioning within a short period. Upon successful detection of a step during pedestrian walking, a PDR factor is established to connect it with the previous step. This PDR factor represents the relative change in the pedestrian's position and is obtained directly from the PDR algorithm. Given our inertial-sensor-centric pose graph, it continuously expands as pedestrian steps are taken, rendering it relatively robust to environmental variations. For step \(k\) and its previous step \(k-1\), the residual of the PDR factor is formulated as follows: \[\mathbf{r}_{k,k-1}(\mathbf{z}_{k}^{\text{PDR}},\mathbf{x}_{k-1},\mathbf{x}_{k})=\mathbf{z}_{k}^{\text{PDR}}-f\left(\mathbf{x}_{k-1},\mathbf{x}_{k}\right)=\left[\begin{array}{c}S_{k}\cos(\psi_{k})-\left(x_{k}-x_{k-1}\right)\\ S_{k}\sin(\psi_{k})-\left(y_{k}-y_{k-1}\right)\end{array}\right] \tag{4}\] Here, \(\mathbf{r}_{k,k-1}(\cdot)\) represents the residual of the PDR factor. \(\mathbf{z}_{k}^{\text{PDR}}\) denotes the PDR observation, which corresponds to the position increment from step \(k-1\) to step \(k\). The state estimations at steps \(k-1\) and \(k\) are denoted as \(\mathbf{x}_{k-1}\) and \(\mathbf{x}_{k}\), respectively, after optimization.

Figure 3: An illustration of the inertial-centric pose graph for fusing PDR and visual relocalization results. The pose graph includes two types of edges: the PDR edge connects every two adjacent step nodes and is continuously added to the graph as pedestrian steps are taken. However, not every state node has a ReLoc edge, since visual relocalization may fail or be invalid in challenging scenarios.

#### 3.3.2 Relocalization Factor Before incorporating relocalization edges into the pose graph, a reliability assessment is performed to mitigate the impact of visual relocalization failures on positioning accuracy. In our approach, we utilize the number of inliers as the criterion to determine the success of visual relocalization results. If the number of inliers exceeds 25, we consider the relocalization result reliable. In such cases, we add a ReLoc edge to the current state node and subsequently perform incremental smoothing optimization. However, if the number of inliers is below the threshold, we skip the pose graph optimization step and rely solely on the previously optimized state and the PDR estimation to determine the current pedestrian position. Assuming that the visual relocalization result is reliable at step \(k\), the residual of the relocalization factor is calculated using equation (5): \[\mathbf{r}_{k}(\mathbf{z}_{k}^{\text{ReLoc}},\mathbf{x}_{k})=\mathbf{z}_{k}^{\text{ReLoc}}-\mathbf{x}_{k}=[\mathbf{T}_{k}^{q}]_{\mathbf{p}_{xy}}-\mathbf{x}_{k} \tag{5}\] Here, \(\mathbf{r}_{k}(\cdot)\) represents the residual of the relocalization factor. \(\mathbf{z}_{k}^{\text{ReLoc}}\) denotes the visual relocalization observation, specifically the horizontal position \([\cdot]_{\mathbf{p}_{xy}}\) of the camera pose.
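The two residuals above translate directly into code. The sketch below evaluates equations (4) and (5) for a single step; it is an illustrative rendering of the formulas, not the authors' implementation.

```python
import numpy as np

def pdr_residual(step_len, heading, x_prev, x_curr):
    """Equation (4): PDR factor residual between steps k-1 and k.
    step_len: S_k; heading: psi_k (rad); x_prev/x_curr: (2,) positions."""
    pred = x_curr - x_prev                          # f(x_{k-1}, x_k)
    meas = np.array([step_len * np.cos(heading),
                     step_len * np.sin(heading)])   # z_k^PDR
    return meas - pred

def reloc_residual(T_q_xy, x_curr):
    """Equation (5): relocalization factor residual at step k.
    T_q_xy: (2,) horizontal position extracted from the estimated camera pose."""
    return T_q_xy - x_curr
```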
Additionally, the prior factor information \(\mathbf{r}_{0}\) is also provided by the visual relocalization result, which aids in determining the initial position of the pedestrian before walking. #### 3.3.3 Incremental Smoothing Optimization for Pose Fusion The set of states \(\mathcal{X}\) comprises the state variables from the first step to the \(k\)th step, represented as \(\mathcal{X}=\{\mathbf{x}_{0},\mathbf{x}_{1},\cdots,\mathbf{x}_{k}\}\). The entire pose graph, consisting of PDR factors and ReLoc factors, is optimized by minimizing the following cost function: \[\mathcal{X}^{*}=\operatorname*{arg\,min}_{\mathcal{X}}\left\{\sum_{k\in\mathcal{B}}\|\mathbf{r}_{k-1,k}\|_{\mathbf{\Omega}_{k}^{\text{PDR}}}^{2}+\sum_{i\in\mathcal{L}}\rho\left(\|\mathbf{r}_{i}\|_{\mathbf{\Omega}_{i}^{\text{ReLoc}}}^{2}\right)\right\} \tag{6}\] \[\rho(x)=\left\{\begin{array}{ll}\frac{1}{6}\delta^{2}\left(1-\left[1-\left(x/\delta\right)^{2}\right]^{3}\right),&\text{if}\,|x|\leq\delta\\ \frac{1}{6}\delta^{2},&\text{otherwise}\end{array}\right. \tag{7}\] Here, \(\mathcal{B}\) is the set of all PDR factors, \(\mathcal{L}\) is the set of all ReLoc factors, and \(\mathbf{\Omega}_{k}^{\text{PDR}}\) and \(\mathbf{\Omega}_{i}^{\text{ReLoc}}\) denote the corresponding covariance matrices. To account for variations in the quality of relocalization results under different visual conditions, we dynamically adjust the weight of the ReLoc edges based on the number of inliers. Although the reliability evaluation strategy based on the number of inliers helps eliminate incorrect visual relocalization results, we further incorporate the Tukey robust kernel \(\rho(\cdot)\) [31] to mitigate the potential impact of any remaining erroneous visual observations. The loss function of the Tukey robust kernel is defined in equation (7). In contrast, we do not employ any kernel function for the PDR edges, as PDR achieves high-accuracy positioning within a short period. To achieve real-time pose optimization, we utilize an adaptive-lag smoother called Incremental Smoothing and Mapping (iSAM2) [15]. Unlike batch optimizers that repeatedly compute and update all historical states, iSAM2 dynamically determines which historical states are affected by the current observations and selectively optimizes and updates only those affected states. This adaptive approach significantly reduces unnecessary computations, yielding near-optimal results comparable to batch graph optimization but at a lower computational cost. For implementation, we employ the open-source GTSAM library [33] to construct the factor graph and perform incremental smoothing optimization. The use of GTSAM enables efficient construction and manipulation of the factor graph, facilitating the real-time pose optimization process.
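As an illustration of how such a graph can be assembled with GTSAM's Python bindings, the sketch below models each step as a 2D pose, attaches PDR edges as between-factors, and wraps the ReLoc factor in a Tukey robust noise model. It is a simplified sketch under these assumptions (exact binding names can vary across GTSAM versions; the sigma values are illustrative), not the authors' code.

```python
import numpy as np
import gtsam

isam = gtsam.ISAM2()  # incremental smoother (Sec. 3.3.3)

def add_step(k, step_len, heading, reloc_xy=None, n_inliers=0):
    """Add one pedestrian step to the incremental pose graph (sketch only).
    Steps are modeled as Pose2 states for simplicity; the paper uses 2D positions."""
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    pdr_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))

    if k == 0:
        # Prior from the first successful visual relocalization (r_0 in the text).
        graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(*reloc_xy, heading), pdr_noise))
        values.insert(0, gtsam.Pose2(*reloc_xy, heading))
    else:
        # PDR edge: relative displacement between consecutive steps (Eq. 4).
        delta = gtsam.Pose2(step_len * np.cos(heading), step_len * np.sin(heading), 0.0)
        graph.add(gtsam.BetweenFactorPose2(k - 1, k, delta, pdr_noise))
        prev = isam.calculateEstimate().atPose2(k - 1)
        values.insert(k, prev.compose(delta))

        if reloc_xy is not None and n_inliers > 25:
            # ReLoc edge with a Tukey robust kernel (Eq. 7), inlier-weighted sigma.
            base = gtsam.noiseModel.Diagonal.Sigmas(
                np.array([1.0, 1.0, 1e3]) / max(n_inliers, 1))
            robust = gtsam.noiseModel.Robust.Create(
                gtsam.noiseModel.mEstimator.Tukey.Create(4.685), base)
            graph.add(gtsam.PriorFactorPose2(
                k, gtsam.Pose2(reloc_xy[0], reloc_xy[1], heading), robust))

    isam.update(graph, values)          # iSAM2: only affected states are re-solved
    return isam.calculateEstimate().atPose2(k)
```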
## 4 Experiments In this section, we first present the experimental setup, encompassing the necessary equipment and the details of the experiments. Subsequently, we conduct comprehensive experiments to assess the robustness and accuracy of the proposed fusion positioning system. We evaluate our method across three distinct environmental conditions, i.e., a textureless corridor, overcast weather, and a dark roadway, each presenting unique visual challenges.

Figure 4: The reconstruction results of the 3D feature map with SuperPoint features. (a) Indoor experimental environment map. (b) Outdoor experimental environment map.

### Experimental Setup In the experiments, a Xiaomi 10 smartphone was employed for both offline map construction and online testing. To reconstruct the 3D map model, video sequences of the scene were captured using the smartphone at a frequency of 30 Hz and a resolution of 1920x1080. These sequences were then downsampled to obtain discrete database images. In this study, COLMAP [34], a structure-from-motion (SFM) tool, was utilized for generating sparse SFM models, with certain modifications to adapt it to pedestrian navigation. Specifically, we employed the NetVLAD [21] feature to retrieve the top 50 image matching pairs for each database image, which were subsequently input into the COLMAP pipeline to aid the image matching process. Additionally, we introduced a scale estimation module into COLMAP to convert the 3D point cloud to real-world scale, allowing for an understanding of the environment's dimensions. This approach involved using pre-placed artificial markers [35] with known lengths to restore the 3D point cloud to real-world scale (a sketch of this step is given below). Finally, a new 3D SFM model was constructed using keypoints detected by SuperPoint [14], based on the hloc toolbox [8]. The resulting 3D SFM models, depicting both indoor and outdoor environments, can be observed in Fig. 4. During the testing phase, our system records IMU data at a rate of 100 Hz and captures image frames at 30 Hz. However, the query image is triggered specifically on the node where a gait event is detected during pedestrian walking. The resolution of all query images is standardized to 600x800 pixels. To assess the localization performance and robustness of the proposed method, we conducted experiments in three distinct environments characterized by varying visual challenges, with a volunteer holding the smartphone in each of these environments.
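A minimal sketch of the marker-based scale recovery is shown below: the SFM reconstruction is rescaled so that the reconstructed distance between two marker corners matches their known physical length. The variable names are illustrative; the module integrated into COLMAP is more involved.

```python
import numpy as np

def recover_metric_scale(points, cam_centers, marker_a, marker_b, true_len_m):
    """Rescale an up-to-scale SFM model using one marker of known length.
    points: (N, 3) 3D points; cam_centers: (M, 3) camera centers;
    marker_a/marker_b: (3,) reconstructed marker corner positions;
    true_len_m: known physical distance between the corners, in meters."""
    s = true_len_m / np.linalg.norm(marker_b - marker_a)  # metric scale factor
    # Apply the same similarity scaling to structure and camera translations.
    return points * s, cam_centers * s
```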
### Indoor Experiment in Textureless Corridor As illustrated in Fig. 5, the initial experiment took place in an indoor corridor and office environment, known for visually challenging characteristics such as walls with limited texture and the presence of moving pedestrians. During this experiment, the participant navigated the corridor while holding the smartphone and encountered multiple sharp turns along a pre-determined path. These sharp turns have the potential to induce significant heading drift. The experiment had a duration of approximately 355 seconds, covering a total distance of approximately 240 meters.

Figure 5: The process of the indoor experiment. The volunteer walked along a preset route (red line) while holding a smartphone (Xiaomi 10).

To highlight the advantages of our method in indoor positioning environments, we conducted a comparative analysis with three other approaches. The first method examined was a pure inertial-based pedestrian navigation (PDR) approach. The second approach involved VINS-Mono [32], a state-of-the-art visual-inertial SLAM method known for its exceptional tracking performance and competitive positioning accuracy. The third approach combined PDR with visual localization using a dynamic weighting strategy [12, 13], referred to as DW-PDR/vision. Figure 6 illustrates the trajectory comparison among the different methods in the indoor environment. The PDR algorithm demonstrates a relatively smooth overall trajectory; however, it suffers from trajectory drift due to accumulated errors in heading estimation. The positioning accuracy of VINS-Mono is severely compromised in indoor environments due to the presence of less-textured features and the utilization of low-quality sensors in mobile devices, leading to inferior performance compared to the pure inertial-based PDR. Leveraging stronger geometric constraints from a prior 3D map and the robustness of learned features, the visual relocalization method achieves accurate positioning in most cases. By combining PDR with the visual relocalization results, the cumulative errors in PDR can be effectively corrected using the visual measurements. However, it is evident that the trajectory of DW-PDR/vision lacks robustness and smoothness, often experiencing significant discontinuities due to interference from abnormal visual relocalization observations in visually similar scenarios. In contrast, our proposed method exhibits robustness against abnormal visual relocalization observations. It dynamically assesses the reliability of visual relocalization results using the Tukey robust kernel, enabling adaptive decision-making regarding the reliance on either PDR or global visual observations. Additionally, our method leverages the incremental smoothing iSAM2 algorithm to provide a smoother and more continuous trajectory compared to other approaches, as depicted in the locally enlarged region in Figure 6.

Figure 6: Comparison of the trajectories estimated by different methods in the textureless corridor.

Table 1 provides the statistical analysis of horizontal positioning errors for the different methods. As obtaining the ground truth of the pedestrian trajectory is not feasible, we adopt artificial marker points with known positions as a reference benchmark. The results presented in Table 1 indicate that our proposed method achieves superior positioning accuracy in complex indoor environments, effectively reducing the root mean square error (RMSE) of the pure inertial-based PDR by 96.3%. VINS-Mono exhibits the lowest accuracy due to its degraded tracking performance in indoor environments characterized by less-textured conditions. The DW-PDR/vision method experiences a maximum error of 12.4825 m, attributed to the influence of abnormal visual observations. In comparison, our method surpasses the performance of DW-PDR/vision by improving the positioning accuracy by 91.9% in terms of RMSE and reducing the maximum error to 0.4435 m. These results effectively demonstrate the robustness of our method in challenging indoor environments. ### Outdoor Experiment in Overcast Weather Condition To evaluate the positioning performance of our proposed method in challenging outdoor environments, a second experiment was conducted along a route encircling a hill. This test encompassed dynamic vehicle movements, variations in cloudy weather, and changes in scene structure, all of which have the potential to impede visual tracking. The experiment had a walking duration of approximately 230 seconds, covering a total path length of approximately 225 meters. As the satellite signal was obstructed by tall trees and buildings, it was not feasible to obtain a reference positioning trajectory from an RTK recorder. Instead, we employed the trajectory estimated by FAST-LIO2 [36], one of the state-of-the-art LiDAR-inertial odometry systems, as a reference. To synchronize the timestamps between the smartphone-based results and the LiDAR-based output, the volunteer held the experimental device (Fig. 7) in an upward position to excite the accelerometer and produce a spike before walking. By aligning the first peak of the smartphone acceleration data with the first peak of the LiDAR's built-in IMU acceleration data, we obtained the time difference between them, achieving synchronization.
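The peak-alignment synchronization can be sketched in a few lines: locate the first prominent acceleration spike in each stream and subtract the corresponding timestamps. The threshold below is an illustrative assumption.

```python
import numpy as np

def first_spike_time(t, acc_norm, thresh=15.0):
    """Return the timestamp of the first acceleration spike above `thresh` (m/s^2).
    t: (N,) timestamps; acc_norm: (N,) accelerometer magnitudes."""
    idx = np.argmax(acc_norm > thresh)   # index of the first sample over threshold
    return t[idx]

# Time offset between smartphone and LiDAR IMU clocks (sketch):
# dt = first_spike_time(t_phone, acc_phone) - first_spike_time(t_lidar, acc_lidar)
# lidar_time_in_phone_clock = t_lidar + dt
```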
Figure 8 displays the trajectory results of the various algorithms in the outdoor overcast environment. It is evident that our proposed method closely aligns with the reference trajectory and exhibits a smooth trajectory without any sudden jumps. The PDR algorithm, due to the inherent noise of inertial sensors, deviates significantly from the ground truth. While VINS-Mono performs reasonably well in outdoor environments, its positioning accuracy is limited by the lower quality of the mobile phone's built-in sensors. Compared to visual-inertial SLAM methods, the visual relocalization aided PDR methods achieve superior trajectory estimation. However, the DW-PDR/vision approach based on the dynamic weighting strategy, despite achieving remarkable positioning results, demonstrates noticeable trajectory jumps under abnormal visual relocalization observations, a phenomenon not observed in our method. The proposed optimization-based fusion positioning method effectively mitigates the impact of erroneous visual observations on positioning accuracy through the use of a robust kernel function, resulting in smoother and more robust positioning outcomes.

Figure 7: The handheld experimental equipment, integrating the smartphone (Xiaomi 10) and a solid-state LiDAR (Livox AVIA) on one platform.

Figure 8: The trajectories estimated by different methods in the outdoor overcast environment.

Figure 9: Distribution of horizontal position error comparison with the change of detected steps.

Furthermore, the distribution of horizontal positioning errors over pedestrian steps is depicted in Figure 9. These results highlight the superior performance of our method compared to the other algorithms, consistently providing accurate positioning results with errors of less than 1 m. By incorporating the visual relocalization results, our method significantly reduces the cumulative errors of PDR. In contrast, the positioning accuracy of the DW-PDR/vision method degrades significantly due to the interference of abnormal visual relocalization observations, underscoring the robustness of our method in challenging environments. Table 2 presents the positioning error statistics for the different algorithms. Our method exhibits the highest positioning accuracy and significantly reduces the cumulative errors of PDR by 86.9%. While VINS-Mono achieves impressive loop error reduction through loop closing, its positioning accuracy does not match that of our method. In comparison to DW-PDR/vision, our method outperforms it with a 14.9% improvement in terms of the RMSE metric. Moreover, our method exhibits a maximum error of only 1.0604 m, whereas DW-PDR/vision shows a maximum error of 4.7975 m. These results effectively illustrate the robustness of our proposed method against abnormal disturbances. ### Outdoor Experiment in Low-Light Condition To further assess the robustness of our method in visually challenging environments, we conducted a third experiment under outdoor nighttime conditions. Figure 10 showcases examples of the nighttime images captured using the mobile phone. The low lighting conditions during nighttime result in poor image quality, presenting a significant challenge for visual tracking methods. Traditional handcrafted feature descriptors exhibit limited robustness in such challenging environments due to their poor invariance.
In this study, we introduced learned features [14] into our image-based pipeline. These learned features generate denser and more accurate matches compared to traditional methods like SIFT, as depicted in Figure 11. This integration effectively enhances the reliability and continuity of conventional visual methods.

Figure 10: Example query images for the outdoor nighttime experiment.

Figure 11: Comparison of the number of inliers between SuperPoint features and SIFT features in the nighttime environment.

Figure 12: The trajectory comparison of different methods in the outdoor nighttime environment.

Figure 12 displays the trajectory results of the different approaches in the outdoor nighttime environment. It is evident that our method consistently provides continuous and accurate trajectory estimations, showcasing robustness against severely degraded environments. The pure inertial-based PDR approach yields acceptable results, as inertial sensors are independent of visual cues; however, its positioning error accumulates over time. In the nighttime environment, VINS-Mono's positioning results deviate significantly from the reference trajectory due to the lack of sufficient features for accurate match tracking. Although the incorporation of learned features improves the robustness of positioning for nighttime query images to some extent, visual relocalization still encounters numerous failures due to the limited representation of local features. Figure 13 illustrates the significant fluctuations in horizontal positioning errors observed for the DW-PDR/vision method. This is attributed to relying solely on the number of inliers as a criterion for evaluating reliability, which can introduce abnormal visual results. The dynamic weighting fusion strategy fails to eliminate the impact of incorrect observations, resulting in estimated trajectories that deviate significantly from the reference path. In contrast, our method effectively addresses the risk of visual relocalization failures and mitigates the impact of abnormal observations on positioning accuracy by employing an optimization-based fusion strategy. Additionally, our method achieves globally smooth and drift-free trajectory estimations through incremental smoothing optimization. Figure 14 displays the cumulative distribution of horizontal positioning errors, demonstrating the superior performance of our method compared to the other algorithms. Table 3 provides the statistical analysis of horizontal positioning errors in outdoor nighttime conditions. Our method improves the positioning accuracy of PDR by 77.0% and reduces the maximum error from 4.3309 m to 1.3854 m. Remarkably, even in the dark lighting environment, our method outperforms VINS-Mono in terms of positioning accuracy, despite the latter's reliance on visual information. The positioning accuracy of DW-PDR/vision is significantly affected by abnormal visual observations. Conversely, our method consistently achieves accurate positioning results, with a loop error of only 0.5152 m, underscoring its robustness in visually challenging environments. ## 5 Conclusion To achieve self-reliant and robust pedestrian navigation using a smartphone in visually challenging environments, this work proposes ReLoc-PDR, a robust pedestrian positioning framework that integrates Pedestrian Dead Reckoning (PDR) and visual relocalization based on incremental smoothing optimization.
Considering the visual degradation problem in environments with weak textures and varying illumination, we introduce a visual relocalization pipeline using learned features from deep neural networks instead of traditional handcrafted features. This effectively establishes 2D-3D correspondences with higher inlier rates, enhancing the robustness of the pedestrian localization system. Furthermore, we propose an optimization-based fusion strategy that couples the PDR and visual relocalization poses in a graph model. This fusion strategy is accompanied by the use of the Tukey robust kernel, which helps eliminate the risk of abnormal visual observations. Experimental results demonstrate the effectiveness of ReLoc-PDR in various challenging environments, including corridors with limited texture, overcast weather conditions, and dark nighttime scenarios. The proposed ReLoc-PDR method achieves accurate and smooth trajectory estimation, continuously providing pedestrian positions at decimeter-level accuracy and a high frequency.
2310.18274
LipSim: A Provably Robust Perceptual Similarity Metric
Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human visual system. On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks. It is indeed logical to infer that perceptual metrics may inherit both the strengths and shortcomings of neural networks. In this work, we demonstrate the vulnerability of state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors to adversarial attacks. We then propose a framework to train a robust perceptual similarity metric called LipSim (Lipschitz Similarity Metric) with provable guarantees. By leveraging 1-Lipschitz neural networks as the backbone, LipSim provides guarded areas around each data point and certificates for all perturbations within an $\ell_2$ ball. Finally, a comprehensive set of experiments shows the performance of LipSim in terms of natural and certified scores and on the image retrieval application. The code is available at https://github.com/SaraGhazanfari/LipSim.
Sara Ghazanfari, Alexandre Araujo, Prashanth Krishnamurthy, Farshad Khorrami, Siddharth Garg
2023-10-27T16:59:51Z
http://arxiv.org/abs/2310.18274v2
# LipSim: A Provably Robust Perceptual Similarity Metric ###### Abstract Recent years have seen growing interest in developing and applying perceptual similarity metrics. Research has shown the superiority of perceptual metrics over pixel-wise metrics in aligning with human perception and serving as a proxy for the human visual system. On the other hand, as perceptual metrics rely on neural networks, there is a growing concern regarding their resilience, given the established vulnerability of neural networks to adversarial attacks. It is indeed logical to infer that perceptual metrics may inherit both the strengths and shortcomings of neural networks. In this work, we demonstrate the vulnerability of state-of-the-art perceptual similarity metrics based on an ensemble of ViT-based feature extractors to adversarial attacks. We then propose a framework to train a robust perceptual similarity metric called **LipSim (Lipschitz Similarity Metric)** with provable guarantees. By leveraging 1-Lipschitz neural networks as the backbone, LipSim provides guarded areas around each data point and certificates for all perturbations within an \(\ell_{2}\) ball. Finally, a comprehensive set of experiments shows the performance of LipSim in terms of natural and certified scores and on the image retrieval application. The code is available at [https://github.com/SaraGhazanfari/LipSim](https://github.com/SaraGhazanfari/LipSim). ## 1 Introduction Comparing data items and having a notion of similarity has long been a fundamental problem in computer science. For many years, \(\ell_{p}\) norms and other mathematically well-defined distance metrics have been used for comparing data items. However, these metrics fail to measure the semantic similarity between more complex data like images and are more focused on pixel-wise comparison. To address this problem, perceptual distance metrics (Zhang et al., 2011; Fu et al., 2023) have been proposed that employ deep neural networks as a backbone to first compute embeddings, and then apply traditional distance metrics to the embeddings of the data in the new space. It is well established that neural networks are susceptible to adversarial attacks (Goodfellow et al., 2014); that is, imperceptible variations of natural examples can be crafted to deliberately mislead models. Although perceptual metrics provide rich semantic interpretations compared to traditional metrics, they inherit the properties of neural networks and therefore their susceptibility to adversarial attacks (Kettunen et al., 2019; Sjogren et al., 2022; Ghildyal and Liu, 2022). Recent works have tried to address this problem by training robust perceptual metrics (Kettunen et al., 2019; Ghazanfari et al., 2023). However, these works rely on heuristic defenses and do not provide provable guarantees. Recent research has focused on designing and training neural networks with prescribed Lipschitz constants (Tsuzuku et al., 2018; Meunier et al., 2022; Wang and Manchester, 2023), aiming to improve and guarantee robustness against adversarial attacks. Promising techniques, like the SDP-based Lipschitz Layer (SLL) (Araujo et al., 2023), have emerged and allow the design of non-trivial yet efficient neural networks with a pre-defined Lipschitz constant. Constraining the Lipschitz constant of neural networks is known to induce properties such as stability in training (Miyato et al., 2018), robustness (Tsuzuku et al., 2018), and generalization (Bartlett et al., 2017).
Recently, the DreamSim metric (Fu et al., 2023) has been established as the state-of-the-art perceptual similarity metric. This metric consists of a concatenation of fine-tuned versions of ViT-based embeddings, namely, DINO (Caron et al., 2021), CLIP (Radford et al., 2021), and OpenCLIP (Cherti et al., 2023). To compute the distance between two images, DreamSim measures the cosine similarity distance between these ViT-based embeddings. In this work, we initially demonstrate with a series of experiments that the DreamSim metric is not robust to adversarial examples. Consequently, it could be easy for an attacker to bypass important filtering schemes based on perceptual hashing, copy detection, etc. Then, to tackle this problem, we propose LipSim, the first perceptual similarity metric _with provable guarantees_. Building on the DreamSim metric and recent advances in 1-Lipschitz neural networks, we propose a novel student-teacher approach with a Lipschitz-constrained student model. Specifically, we train a 1-Lipschitz feature extractor (student network) based on the state-of-the-art SLL architecture. The student network is trained to mimic the outputs of the embedding of the DreamSim metric, thus distilling the intricate knowledge captured by DreamSim into the 1-Lipschitz student model. After training the 1-Lipschitz feature extractor on the ImageNet-1k dataset, we fine-tune it on the NIGHT dataset. By combining the capabilities of DreamSim with the provable guarantees of a Lipschitz network, our approach paves the way for a certifiably robust perceptual similarity metric. Finally, we demonstrate good natural accuracy and state-of-the-art certified robustness on two-alternative forced choice (2AFC) datasets that seek to encode human perceptions of image similarity. Our contributions can be summarized as follows: * We investigate the vulnerabilities of state-of-the-art ViT-based perceptual distances, including DINO, CLIP, OpenCLIP, and the DreamSim Ensemble. The vulnerabilities are highlighted using AutoAttack (Croce and Hein, 2020) on the 2AFC score, which is an index of human alignment, and a PGD attack against the distance metric itself, measuring the distance between an original image and its perturbed version. * We propose a framework to train the first certifiably robust distance metric, **LipSim**, which leverages a pipeline composed of a 1-Lipschitz feature extractor, a projection onto the unit \(\ell_{2}\) ball, and the cosine distance to provide certified bounds for perturbations applied to the reference image. * We show through a comprehensive set of experiments that LipSim not only provides certified accuracy for a specified perturbation budget, but also demonstrates good performance in terms of the natural 2AFC score and accuracy on image retrieval, which is an important application of perceptual metrics. Figure 1: **Demonstrating the effect of an attack on the alignment of DreamSim distance values with the real perceptual distance between images.** Instances of original and adversarial reference images from the NIGHT dataset are shown in the first and second rows, and the DreamSim distance between each original image and its adversarial counterpart is reported below them. To get a sense of how large the distance values are, images that have the same distance from the original images are shown in the third row (i.e., the distance from each original image to its perturbed version equals its distance to the corresponding third-row image). Obviously, the first and third rows contain semantically different images, whereas the images in the first and second rows are perceptually identical.
## 2 Related Works **Similarity Metrics.** Low-level metrics, including \(\ell_{p}\) norms and PSNR as point-wise metrics, and SSIM (Wang et al., 2004) and FSIM (Zhang et al., 2011) as patch-wise metrics, fail to capture the high-level structure and semantic content of more complicated data points like images. To overcome this challenge, perceptual distance metrics were proposed. In the context of perceptual distance metrics, neural networks are used as feature extractors, and the low-level metrics are applied to the embeddings of images in the new space. The feature extractors used in recent work include a convolutional neural network, as proposed by Zhang et al. (2018) for the LPIPS metric, or an ensemble of ViT-based models (Radford et al., 2021; Caron et al., 2021), as proposed by Fu et al. (2023) for DreamSim. As shown by experiments, perceptual similarity metrics have better alignment with human perception and are considered a good proxy for human vision. **Adversarial Attacks & Defenses.** As initially demonstrated by Szegedy et al. (2013), neural networks are vulnerable to adversarial attacks, _i.e._, carefully crafted small perturbations that can fool the model into predicting wrong answers. Since then, a large body of research has focused on generating stronger attacks (Goodfellow et al., 2014; Kurakin et al., 2018; Carlini and Wagner, 2017; Croce and Hein, 2020, 2021) and providing more robust defenses (Goodfellow et al., 2014; Madry et al., 2017; Pinot et al., 2019; Araujo et al., 2020, 2021; Meunier et al., 2022). To break this pattern, certified adversarial robustness methods were proposed. By providing mathematical guarantees, the model is theoretically robust against the worst-case attack for perturbations smaller than a specific perturbation budget. Certified defense methods fall into two categories. Randomized smoothing (Cohen et al., 2019; Salman et al., 2019) turns an arbitrary classifier into a smoothed classifier; then, based on the Neyman-Pearson lemma, the smoothed classifier obtains some theoretical robustness against a specific \(\ell_{p}\) norm. Despite the impressive results achieved by randomized smoothing in terms of natural and certified accuracy (Carlini et al., 2023), the high computational cost of inference and the probabilistic nature of the certificate make it difficult to deploy in real-time applications. Another direction of research has been to leverage the Lipschitz property of neural networks (Hein and Andriushchenko, 2017; Tsuzuku et al., 2018) to better control the stability and robustness of the model. Tsuzuku et al. (2018) highlighted the connection between the certified radius of the network and its Lipschitz constant and margin.
As calculating the Lipschitz constant of a neural network is computationally expensive, a body of work has focused on designing 1-Lipschitz networks by constraining each layer by its spectral norm (Miyato et al., 2018; Farnia et al., 2018), replacing the normalized weight matrix by an orthogonal one (Li et al., 2019; Prach and Lampert, 2022), or designing 1-Lipschitz layers from dynamical systems (Meunier et al., 2022) or control theory arguments (Araujo et al., 2023; Wang and Manchester, 2023). **Vulnerabilities and Robustness of Perceptual Metrics.** The vulnerabilities of perceptual metrics were overlooked for years after the first perceptual metric was proposed. As shown in (Kettunen et al., 2019; Ghazanfari et al., 2023; Sjogren et al., 2022; Ghildyal and Liu, 2022), perceptual similarity metrics such as LPIPS (Zhang et al., 2018) are vulnerable to adversarial attacks. Sjogren et al. (2022) present a qualitative analysis of the resilience of deep perceptual similarity metrics to image distortions, including color inversion, translation, rotation, and color stain. Finally, Luo et al. (2022) propose a new way to generate attacks on similarity metrics by reducing the similarity between the adversarial example and its original while increasing the similarity between the adversarial example and its most dissimilar example in the minibatch. To introduce robust perceptual metrics, Kettunen et al. (2019) propose E-LPIPS, which uses an ensemble of random transformations of the input image and demonstrates empirical robustness through qualitative experiments. Ghildyal and Liu (2022) employ modules including anti-aliasing filters to provide robustness against the vulnerability of LPIPS to a one-pixel shift. More recently, Ghazanfari et al. (2023) propose R-LPIPS, a robust perceptual metric obtained by adversarial training (Madry et al., 2017) over LPIPS, and evaluate R-LPIPS using extensive qualitative and quantitative experiments on the BAPPS (Zhang et al., 2018) dataset. Besides the aforementioned methods that show empirical robustness, (Kumar and Goldstein, 2021; Shao et al., 2023) propose methods to achieve certified robustness for perceptual metrics based on randomized smoothing. For example, Kumar and Goldstein (2021) proposed center smoothing, an approach that provides certified robustness for structured outputs. More precisely, the center of the ball enclosing at least half of the perturbed points in output space is taken as the output of the smoothed function and is proved to be robust to input perturbations bounded by an \(\ell_{2}\) budget. The proof requires the distance metric to satisfy symmetry and the triangle inequality. As perceptual metrics generally do not satisfy the triangle inequality, a triangle inequality approximation is used, which makes the bound excessively loose. In Shao et al. (2023), the same enclosing ball is employed; however, the problem is mapped to a binary classification setting to leverage the certified bound as in the randomized smoothing paper (by assigning one to the points that are in the enclosing ball and zero otherwise). Besides their loose bounds, these methods are computationally very expensive due to the Monte Carlo sampling required for each data point. ## 3 Background **Lipschitz Networks.** After the discovery of the vulnerability of neural networks to adversarial attacks, one major direction of research has focused on improving the robustness of neural networks to small input perturbations by leveraging Lipschitz continuity.
This goal can be mathematically achieved by using a Lipschitz function. Let \(f\) be a Lipschitz function with Lipschitz constant \(L_{f}\) with respect to the \(\ell_{2}\) norm; then we can bound the output of the function by \(\left\|f(x)-f(x+\delta)\right\|_{2}\leq L_{f}\left\|\delta\right\|_{2}\). To achieve stability using the Lipschitz property, different approaches have been taken. One efficient way is to design a network with 1-Lipschitz layers, which leads to a 1-Lipschitz network (Meunier et al., 2022; Araujo et al., 2023; Wang and Manchester, 2023). **State of the Art Perceptual Similarity Metric.** DreamSim is a recently proposed perceptual distance metric (Fu et al., 2023) that employs the cosine distance on the concatenation of feature vectors generated by an ensemble of ViT-based representation learning methods. More precisely, DreamSim is a concatenation of embeddings generated by DINO (Caron et al., 2021), CLIP (Radford et al., 2021), and OpenCLIP (Cherti et al., 2023). Let \(f\) be the feature extractor function; the DreamSim distance metric \(d(x_{1},x_{2})\) is defined as: \[d(x_{1},x_{2})=1-S_{c}(f(x_{1}),f(x_{2})) \tag{1}\] where \(S_{c}(x_{1},x_{2})\) is the cosine similarity metric. To fine-tune the DreamSim distance metric, the NIGHT dataset is used, which provides two variations, \(x_{0}\) and \(x_{1}\), for a reference image \(x\), and a label \(y\) that is based on human judgments about which distortion is more similar to the reference image \(x\) (some instances of the NIGHT dataset are shown in Figure 5 of the Appendix). Supplemented with this dataset, the authors of DreamSim turn the setting into a binary classification problem. More concretely, given a triplet \((x,x_{0},x_{1})\) and a feature extractor \(f\), they define the following classifier: \[h(x)=\begin{cases}1,&d(x,x_{1})\leq d(x,x_{0})\\ 0,&d(x,x_{1})>d(x,x_{0})\end{cases} \tag{2}\] Finally, to better align DreamSim with human judgment, given the triplet \((x,x_{0},x_{1})\), they optimize a hinge loss based on the difference between the perceptual distances \(d(x,x_{0})\) and \(d(x,x_{1})\) with a margin parameter. Note that the classifier \(h\) has a dependency on \(f\) and \(d\), and each input \(x\) comes as a triplet \((x,x_{0},x_{1})\), but to simplify the notation we omit all these dependencies. ## 4 LipSim: Lipschitz Similarity Metric In this section, we present the theoretical guarantees of LipSim along with the technical details of the LipSim architecture and training. ### A Perceptual Metric with Theoretical Guarantees **General Robustness for Perceptual Metric.** A perceptual similarity metric can have many important use cases, _e.g._, image retrieval, copy detection, etc. In order to make a robust perceptual metric, we need to ensure that when a small perturbation is added to the input image, the output distance does not change much. In the following, we demonstrate a general robustness property when the feature extractor is 1-Lipschitz and the embeddings lie on the unit \(\ell_{2}\) ball, _i.e._, \(\left\|f(x)\right\|_{2}=1\). **Proposition 1**.: _Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{k}\) be a \(1\)-Lipschitz feature extractor with \(\left\|f(x)\right\|_{2}=1\), let \(d\) be the distance metric defined in Equation 1, and let \(\delta\in\mathcal{X}\) and \(\varepsilon\in\mathbb{R}^{+}\) be such that \(\left\|\delta\right\|_{2}\leq\varepsilon\). Then, we have_ \[\left|d(x_{1},x_{2})-d(x_{1}+\delta,x_{2})\right|\leq\left\|\delta\right\|_{2} \tag{3}\] The proof is deferred to Appendix A. This proposition implies that when the feature extractor is \(1\)-Lipschitz and its output is projected onto the unit \(\ell_{2}\) ball, the composition of the distance metric \(d\) and the feature extractor, _i.e._, \(d\circ f\), is also \(1\)-Lipschitz with respect to its first argument. This result provides a general stability guarantee: the distance metric cannot change by more than the norm of the perturbation.
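The following minimal sketch renders Equations 1 and 2 in code for embeddings that are already normalized to the unit \(\ell_{2}\) ball; it is illustrative, not the authors' implementation.

```python
import numpy as np

def distance(f_x1, f_x2):
    """Equation 1: d(x1, x2) = 1 - cosine similarity of the embeddings.
    With unit-norm embeddings, the cosine reduces to a dot product."""
    return 1.0 - float(f_x1 @ f_x2)

def classify_2afc(f_x, f_x0, f_x1):
    """Equation 2: h(x) = 1 iff distortion x1 is closer to the reference x."""
    return int(distance(f_x, f_x1) <= distance(f_x, f_x0))
```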
**Certified Robustness for 2AFC datasets.** We aim to go even further and provide certified robustness for perceptual similarity metrics on 2AFC datasets, _i.e._, in a classification setting. In the following, we show that, under the same assumptions as in Proposition 1, the classifier \(h\) can obtain _certified_ accuracy. First, let us define a soft classifier \(H:\mathcal{X}\rightarrow\mathbb{R}^{2}\) with respect to some feature extractor \(f\) as follows: \[H(x)=[d(x,x_{1}),d(x,x_{0})] \tag{4}\] It is clear that \(h(x)=\operatorname*{arg\,max}_{i\in\{0,1\}}H_{i}(x)\), where \(H_{i}\) represents the \(i\)-th value of the output of \(H\). The classifier \(h\) is said to be certifiably robust at radius \(\epsilon\geq 0\) at point \(x\) if for all \(\left\lVert\delta\right\rVert_{2}\leq\epsilon\) we have: \[h(x+\delta)=h(x) \tag{5}\] Equivalently, one can look at the margin of the soft classifier, \(M_{H,x}:=H_{y}(x)-H_{1-y}(x)\), and provide a provable guarantee that: \[M_{H,x+\delta}>0 \tag{6}\] **Theorem 1** (Certified Accuracy for Perceptual Distance Metric).: _Let \(H:\mathcal{X}\rightarrow\mathbb{R}^{2}\) be the soft classifier defined in Equation 4. Let \(\delta\in\mathcal{X}\) and \(\varepsilon\in\mathbb{R}^{+}\) be such that \(\left\lVert\delta\right\rVert_{2}\leq\varepsilon\). Assume that the feature extractor \(f:\mathcal{X}\rightarrow\mathbb{R}^{k}\) is 1-Lipschitz and that for all \(x\), \(\left\lVert f(x)\right\rVert_{2}=1\). Then we have the following result:_ \[M_{H,x}\geq\varepsilon\left\lVert f(x_{0})-f(x_{1})\right\rVert_{2}\quad\Longrightarrow\quad M_{H,x+\delta}\geq 0 \tag{7}\] The proof is deferred to Appendix A. Based on Theorem 1, and assuming \(x_{1}\neq x_{0}\), the certified radius for the classifier \(h\) at point \(x\) can be computed as follows: \[R(h,x)=\frac{M_{H,x}}{\left\lVert f(x_{0})-f(x_{1})\right\rVert_{2}} \tag{8}\] Theorem 1 provides the necessary condition for a provable perceptual distance metric without changing the underlying distance on the embeddings (_i.e._, cosine similarity). This result has two key advantages. First, as in (Tsuzuku et al., 2018), computing the certificate at each point only requires efficient computation of the margin of the classifier \(H\). Leveraging Lipschitz continuity enables efficient certificate computation, unlike the randomized smoothing approach of Kumar and Goldstein (2021), which requires Monte Carlo sampling for each point. Second, the bound obtained on the margin to guarantee robustness is in fact tighter than the one provided by Tsuzuku et al. (2018). Recall that the result of Tsuzuku et al. (2018) states that for an \(L\)-Lipschitz classifier \(H\), we have: \[M_{H,x}\geq\varepsilon\sqrt{2}L\quad\Longrightarrow\quad M_{H,x+\delta}\geq 0 \tag{9}\] Given that the Lipschitz constant of \(H\) is \(\sqrt{2}\), this leads to the following bound: \[M_{H,x}\geq 2\varepsilon\geq\varepsilon\left\lVert f(x_{0})-f(x_{1})\right\rVert_{2} \tag{10}\] simply based on the triangle inequality and the fact that \(\left\lVert f(x)\right\rVert_{2}=1\).
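Equation 8 makes the certificate a one-line computation given the three embeddings; below is a minimal illustrative sketch (the names are ours, not the paper's code).

```python
import numpy as np

def certified_radius(f_x, f_x0, f_x1, y):
    """Equation 8: certified l2 radius of the 2AFC classifier at x.
    f_x, f_x0, f_x1: unit-norm embeddings; y in {0, 1}: human label."""
    d1, d0 = 1.0 - f_x @ f_x1, 1.0 - f_x @ f_x0
    logits = np.array([d1, d0])                    # soft classifier H(x), Eq. 4
    margin = logits[y] - logits[1 - y]             # M_{H,x}; > 0 iff correct
    denom = np.linalg.norm(f_x0 - f_x1)
    return margin / denom if margin > 0 and denom > 0 else 0.0
```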
Figure 2: Two-step process for training the LipSim perceptual similarity metric. First (left), a distillation on the ImageNet dataset is performed, where DreamSim acts as the teacher model and a 1-Lipschitz neural network (_i.e._, the feature extractor) is trained to mimic the DreamSim embeddings. To reduce color bias in the Lipschitz network, we use two different dataset augmentation schemes: a simple random flip and a jittered data augmentation technique that varies the brightness, contrast, hue, and saturation of the image. Second (right), the 1-Lipschitz neural network with projection is then fine-tuned on the NIGHT dataset with a hinge loss.

### LipSim Architecture & Training To design a feature extractor that respects the assumptions of Proposition 1 and Theorem 1, we combine a 1-Lipschitz neural network architecture with a Euclidean projection. Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{k}\) be such that: \[f(x)=\pi_{B_{2}(0,1)}\circ\phi^{(l)}\circ\cdots\circ\phi^{(1)}(x) \tag{11}\] where \(l\) is the number of layers, \(\pi_{B_{2}(0,1)}\) is the projection onto the unit \(\ell_{2}\) ball, _i.e._, \(\pi_{B_{2}(0,1)}(x)=\arg\min_{z\in B_{2}(0,1)}\norm{x-z}_{2}\), and the layers \(\phi\) are the SDP-based Lipschitz Layers (SLL) proposed by Araujo et al. (2023): \[\phi(x)=x-2W\text{diag}\left(\sum_{j=1}^{n}|W^{\top}W|_{ij}q_{j}/q_{i}\right)^{-1}\sigma(W^{\top}x+b), \tag{12}\] where \(W\) is a parameter matrix, being either dense or a convolution, \(\{q_{i}\}\) forms a diagonal scaling matrix, and \(\sigma\) is the ReLU nonlinear activation. **Proposition 2**.: _The neural network \(f:\mathcal{X}\rightarrow\mathbb{R}^{k}\) described in Equation 11 is 1-Lipschitz, and for all \(x\) we have \(\norm{f(x)}_{2}=1\)._ Proof of Proposition 2.: The proof is a straightforward application of Theorem 3 of Araujo et al. (2023) and Corollary 2.2.3 of Nesterov et al. (2018). **Two-step Process for Training LipSim.** LipSim aims to provide good image embeddings that are less sensitive to adversarial perturbations. We train LipSim in two steps, similar to the DreamSim approach. Recall that DreamSim first concatenates the embeddings of three ViT-based models and then fine-tunes the result on the NIGHT dataset. However, to obtain theoretical guarantees, we cannot use the embeddings of the three ViT-based models because they are not generated by a 1-Lipschitz feature extractor. To address this issue and avoid self-supervised schemes for training the feature extractor, we leverage a distillation scheme on the ImageNet dataset, where DreamSim acts as the teacher model and a 1-Lipschitz neural network (without the \(\ell_{2}\) unit ball projection) serves as the student model. This first step is described on the left of Figure 2. In the second step, we fine-tune the 1-Lipschitz neural network with projection on the NIGHT dataset using a hinge loss to increase margins and therefore robustness, as in Araujo et al. (2023). This second step is described on the right of Figure 2.
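A minimal PyTorch sketch of a dense SLL layer (Equation 12) and the unit-ball projection of Equation 11 is shown below; it is a simplified illustration (a single dense layer, no convolutional variant), not the released LipSim code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSLL(nn.Module):
    """Dense SDP-based Lipschitz Layer (Eq. 12): 1-Lipschitz by construction."""
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))
        self.log_q = nn.Parameter(torch.zeros(dim))  # positive scalings q_i

    def forward(self, x):                            # x: (batch, dim)
        q = torch.exp(self.log_q)
        # t_i = sum_j |W^T W|_ij * q_j / q_i  (diagonal rescaling in Eq. 12)
        t = (torch.abs(self.W.t() @ self.W) * (q[None, :] / q[:, None])).sum(1)
        return x - 2.0 * (F.relu(x @ self.W + self.b) / t) @ self.W.t()

def project_unit_ball(z):
    """pi_{B_2(0,1)}: scale any embedding outside the unit l2 ball back onto it."""
    norm = z.norm(dim=-1, keepdim=True)
    return z / torch.clamp(norm, min=1.0)
```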
## 5 Experiments In this section, we present a comprehensive set of experiments to first highlight the vulnerabilities of DreamSim, the state-of-the-art perceptual distance metric, and second to demonstrate the certified and empirical robustness of LipSim to adversarial attacks. ### Vulnerabilities of Perceptual Similarity Metrics To investigate the vulnerabilities of DreamSim to adversarial attacks, we aim to answer two questions in this section. Can adversarial attacks against SOTA metrics cause: (1) misalignment with human perception? (2) large changes in distance between perturbed and original images? **Q1 - Alignment of SOTA Metric with Human Judgments after Attack.** In this part, we focus on the binary classification setting and the NIGHT dataset with triplet inputs. The goal is to generate adversarial attacks and evaluate the resilience of state-of-the-art distance metrics. For this purpose, we use AutoAttack (Croce and Hein, 2020), which is one of the most powerful attack algorithms. During the optimization, we maximize the cross-entropy loss; the perturbation \(\delta\) is crafted only on the reference image, and the two distortions stay untouched: \[\operatorname*{arg\,max}_{\delta:\|\delta\|_{2}\leq\epsilon}\mathcal{L}_{ce}(y,\hat{y})=\mathcal{L}_{ce}([d(x+\delta,x_{1}),d(x+\delta,x_{0})],y) \tag{13}\] where \(y\in\{0,1\}\) and \(\hat{y}=[d(x+\delta,x_{1}),d(x+\delta,x_{0})]\), which is treated as the logits generated by the model. The natural and adversarial 2AFC scores of DreamSim are reported in Table 1. The natural accuracy drops to half its value for a tiny perturbation of size \(\epsilon=0.5\) and decreases to zero for \(\epsilon=2.0\). In order to visualize the effect of the attack on the astuteness of the distances produced by DreamSim, original and adversarial images (generated by \(\ell_{2}\)-AA and causing misclassification) are shown in Figure 1, with the distances reported underneath the images. To get a sense of the DreamSim distances between the original and perturbed images, a third row is added such that the original images have (approximately) the same distance to the perturbed images as to the perceptually different images in the third row. The takeaway from this experiment is that tiny perturbations can fool the distance metric into producing large values for perceptually identical images. **Q2 - Specialized Attack for Semantic Metric.** In this part, we perform a direct attack against the feature extractor model, which is the source of the vulnerability of perceptual metrics, by employing the \(\ell_{2}\)-PGD (Madry et al., 2017) attack (\(\epsilon=1.0\)); the following MSE loss is used during the optimization: \[\max_{\delta:\|\delta\|_{2}\leq\epsilon}\mathcal{L}_{\text{MSE}}\left[f(x+\delta),f(x)\right] \tag{14}\] The attack is performed on 500 randomly selected samples from the ImageNet-1k test set and against the DreamSim Ensemble feature extractor. After optimizing \(\delta\), the DreamSim distance metric is computed between the original image and the perturbed image: \(d(x,x+\delta)\). The distribution of distances is shown in Figure 3(b). We can observe a shift in the mean of the distance from 0 to 0.6, which can be considered a large value for the DreamSim distance, as illustrated in Figure 1.

Figure 3: (a) (left) compares the percentage of alignment of several distance metrics with human vision based on the NIGHT dataset. As expected, the ViT-based methods outperform the pixel-wise and CNN-based metrics on the original images. However, LipSim, with its 1-Lipschitz-constrained backbone composed of CNN and linear layers, has a decent natural score and outperforms the (Base) CLIP. Moreover, the figure shows the performance under attack (\(\ell_{2}\)-AutoAttack with \(\epsilon=2.0\)) for the SOTA metrics: while the reference image is perturbed, the other methods experience a large decay in their performance, but LipSim shows much stronger robustness. (b) (right) shows the distribution of \(d(x,x+\delta)\) for LipSim and DreamSim. The \(\delta\) perturbation is optimized for each method separately.
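A minimal \(\ell_{2}\)-PGD sketch for the feature-space attack of Equation 14 is given below; the step size, iteration count, and the `extractor` interface are illustrative assumptions.

```python
import torch

def l2_pgd_on_features(extractor, x, eps=1.0, steps=50, alpha=0.1):
    """Maximize MSE(f(x+delta), f(x)) under ||delta||_2 <= eps (Eq. 14).
    x: (B, C, H, W) image batch; extractor: frozen feature extractor f."""
    with torch.no_grad():
        target = extractor(x)                      # clean embedding f(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(extractor(x + delta), target)
        loss.backward()
        g = delta.grad
        # Gradient ascent step, normalized to unit l2 norm per image.
        delta.data += alpha * g / (g.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
        # Project back onto the l2 ball of radius eps.
        norms = delta.data.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
        delta.data *= (eps / norms).clamp(max=1.0)
        delta.grad.zero_()
    return (x + delta).detach()
```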
### LipSim Results In this section, we leverage the framework introduced in the paper and evaluate the LipSim perceptual metric. In the first step (_i.e._, left of Figure 2), we train a 1-Lipschitz network as the backbone of the LipSim metric, using the SLL architecture with 20 layers of Conv-SLL and 7 layers of Linear-SLL. For training the 1-Lipschitz feature extractor, the ImageNet-1k dataset is used (without labels), and the knowledge distillation approach is applied to utilize the state-of-the-art feature extractors, including DINO, OpenCLIP, and DreamSim, which is an ensemble of ViT-based models. To enhance the effectiveness of LipSim, we incorporate two parallel augmentation pipelines: standard and jittered. The standard version passes through both the feature extractor and the teacher model, while the jittered version passes only through the feature extractor. Then, an RMSE loss is applied to enforce similarity between the embeddings of the jittered and standard images. This enables LipSim to focus more on the semantics of the image rather than its colors. After training the 1-Lipschitz backbone of LipSim, we further fine-tune our model on the NIGHT dataset (_i.e._, step 2, see right of Figure 2). During the fine-tuning process, the embeddings are produced and projected onto the unit \(\ell_{2}\) ball. In order to maintain the margin between logits, a hinge loss is employed, as in DreamSim. However, while DreamSim used a margin parameter of 0.05, we use a margin parameter of 0.5 for fine-tuning LipSim in order to boost the robustness of the metric. Remarkably, LipSim achieves strong robustness using a 1-Lipschitz pipeline composed of a 1-Lipschitz feature extractor and a projection onto the unit \(\ell_{2}\) ball, which guarantees the 1-Lipschitzness of the cosine distance.
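The two training steps can be summarized in a short sketch: a distillation loss against frozen DreamSim embeddings with an extra RMSE term tying the jittered and standard views together, followed by hinge-loss fine-tuning on NIGHT triplets. The function and variable names are illustrative assumptions, not the released training code.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x_std, x_jit, opt):
    """Step 1 (left of Fig. 2): distill DreamSim into the 1-Lipschitz student."""
    with torch.no_grad():
        t = teacher(x_std)                          # frozen DreamSim embedding
    s_std, s_jit = student(x_std), student(x_jit)
    # Mimic the teacher + enforce color invariance between the two views (RMSE).
    loss = F.mse_loss(s_std, t).sqrt() + F.mse_loss(s_jit, s_std).sqrt()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(student, x, x0, x1, y, opt, margin=0.5):
    """Step 2 (right of Fig. 2): hinge loss on NIGHT triplets with projection."""
    def embed(im):
        z = student(im)
        return z / z.norm(dim=-1, keepdim=True).clamp(min=1.0)  # unit-ball proj.
    d1 = 1.0 - (embed(x) * embed(x1)).sum(-1)       # d(x, x1)
    d0 = 1.0 - (embed(x) * embed(x0)).sum(-1)       # d(x, x0)
    sign = 2.0 * y - 1.0                            # +1 if x1 is closer (y=1)
    loss = F.relu(margin - sign * (d0 - d1)).mean() # enforce a margin of 0.5
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```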
To evaluate the performance of LipSim and compare it against other perceptual metrics, we report empirical and certified results of LipSim for different settings. **Empirical Robustness Evaluation.** We provide the empirical results of LipSim against \(\ell_{2}\)-AutoAttack in Table 1. Although the natural score of LipSim is lower than the natural score of DreamSim, there is a large gap between the adversarial scores, and LipSim outperforms all state-of-the-art metrics. The results of a more comprehensive comparison between LipSim, state-of-the-art perceptual metrics, previously proposed perceptual metrics, and pixel-wise metrics are presented in Figure 3(a). The pre-trained and fine-tuned natural accuracies are comparable with the state-of-the-art metrics and even higher in comparison to CLIP. In terms of empirical robustness, LipSim demonstrates great resilience. More comparisons have been performed in this sense; the empirical results for \(\ell_{\infty}\)-AutoAttack and \(\ell_{\infty}\)-PGD are also reported in Table 3 in Appendix B, which align with the \(\ell_{2}\) results and show the strong empirical robustness of LipSim.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline **Metric/Embedding** & **Natural Score** & \multicolumn{4}{c}{\(\ell_{2}\)**-AA**} \\ \cline{3-6} & & 0.5 & 1.0 & 2.0 & 3.0 \\ \hline **CLIP** & 93.91 & 29.93 & 8.44 & 1.20 & 0.27 \\ **OpenCLIP** & 95.45 & 72.31 & 42.32 & 11.84 & 3.28 \\ **DINO** & 94.52 & **81.91** & 59.04 & 19.29 & 6.35 \\ **DreamSim** & 96.16 & 46.27 & 16.66 & 0.93 & 0.93 \\ \hline **LipSim (ours)** & 85.09 & 81.58 & **76.92** & **65.62** & **53.07** \\ \hline \hline \end{tabular} \end{table} Table 1: Alignment on the NIGHT dataset for original and perturbed images using AutoAttack. In this experiment, the perturbation is only applied to the reference images. While DreamSim employs an ensemble of three ViT-based models as the feature extractor, the LipSim backbone consists of a 1-Lipschitz network (composed of CNN and linear layers) and is trained from scratch using the knowledge distillation approach.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **LipSim with Teacher Model** & **Training** & **Margin in Hinge Loss** & **Natural Score** & \multicolumn{3}{c}{**Certified Score**} \\ \cline{5-7} & & & & \(\frac{36}{255}\) & \(\frac{72}{255}\) & \(\frac{108}{255}\) \\ \hline \multirow{3}{*}{**LipSim – DINO**} & Pre-trained & – & 86.90 & 53.67 & 17.32 & 2.25 \\ & \multirow{2}{*}{Fine-tuned} & 0.2 & **88.87** & 63.54 & 29.88 & 7.29 \\ & & 0.5 & 86.18 & 66.17 & 41.01 & 17.32 \\ \hline \multirow{3}{*}{**LipSim – OpenCLIP**} & Pre-trained & – & 79.06 & 41.17 & 10.58 & 0.88 \\ & \multirow{2}{*}{Fine-tuned} & 0.2 & 82.51 & 61.35 & 34.32 & 12.50 \\ & & 0.5 & 81.90 & 62.72 & 39.75 & **19.57** \\ \hline \multirow{3}{*}{**LipSim – DreamSim**} & Pre-trained & – & 86.62 & 52.14 & 16.56 & 1.81 \\ & \multirow{2}{*}{Fine-tuned} & 0.2 & 88.60 & 64.47 & 31.85 & 9.43 \\ & & 0.5 & 85.09 & **67.32** & **43.26** & 19.02 \\ \hline \hline \end{tabular} \end{table} Table 2: Certified scores of LipSim under different settings. The natural and certified 2AFC scores of all variants of LipSim are shown in this table. The LipSim – DreamSim version outperforms the other variants in terms of certified scores. The results for different margins in the hinge loss illustrate the tradeoff between robustness and accuracy: a higher margin parameter leads to a higher certified score and a lower natural score.

In order to evaluate the robustness of LipSim outside the classification setting, we performed the \(\ell_{2}\)-PGD attack (\(\epsilon=1.0\)) using the MSE loss defined in Equation 14, and the distribution of \(d(x,x+\delta)\) is shown in Figure 3(b). The values of \(d(x,x+\delta)\) are quite close to zero, which illustrates the general robustness of LipSim as discussed in Proposition 1. To evaluate the general robustness of DreamSim, we leverage \(\ell_{2}\)-AA with perturbation budget \(\epsilon=0.5\) and provide, in Figure 7, original and adversarial instances that have distance \(0.5\) in the input space but a larger distance in the embedding space, a complete violation of the general robustness property for DreamSim, showing that LipSim, by contrast, satisfies this property. **Certified Robustness Evaluation.** In order to find the certified radius for data points, the margin between logits is computed and divided by the \(\ell_{2}\) distance between the embeddings of the distorted images (\(\left\|f(x_{0})-f(x_{1})\right\|_{2}\)). The results for certified 2AFC scores for different settings of LipSim are reported in Table 2, which demonstrates the robustness of LipSim along with a high natural score. The value of the margin parameter in the hinge loss used during fine-tuning is listed in the table, clearly showing the trade-off between robustness and accuracy: a larger margin parameter leads to more robustness and therefore higher certified scores, but lower natural scores. ### Image Retrieval After demonstrating the robustness of LipSim in terms of certified and empirical scores, the focus of this section is on one of the real-world applications of a distance metric: image retrieval. We employed the image retrieval dataset proposed by Fu et al. (2023), which contains 500 images randomly selected from the COCO dataset. The top-1 closest neighbors to the query images with respect to the LipSim and DreamSim distance metrics are shown in the rows of Figure 4. In order to investigate the impact of adversarial attacks on the performance of LipSim and DreamSim in the image retrieval application, we performed the \(\ell_{2}\)-PGD attack (\(\epsilon=2.0\)) with the same MSE loss defined in Equation 14, separately for the two metrics, and the results are also depicted in the rows of Figure 4. In the adversarial rows, LipSim sometimes retrieves a different closest image, but one that is semantically similar to the closest image retrieved for the original query.

Figure 4: Adversarial attack impact on the performance of the DreamSim and LipSim distance metrics in the image retrieval application. The rows show the original images and their top-1 nearest neighbors, as well as adversarial images generated separately for LipSim and DreamSim along with their top-1 nearest neighbors. More precisely, in each block of the figure, the upper and lower right images are the original and adversarial queries, and the upper and lower left images are the top-1 nearest images to them, respectively.
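Retrieval with a perceptual metric reduces to a nearest-neighbor search in embedding space; the sketch below is an illustrative rendering with unit-norm embeddings, not the evaluation code of the paper.

```python
import numpy as np

def top1_neighbor(query_emb, gallery_embs):
    """Return the index of the gallery image closest to the query
    under d = 1 - cosine similarity (embeddings assumed unit-norm)."""
    dists = 1.0 - gallery_embs @ query_emb      # (500,) distances, Eq. 1
    return int(np.argmin(dists))
```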
**Certified Robustness Evaluation.** In order to find the certified radius for data points, the margin between logits is computed and divided by the \(\ell_{2}\) norm distance between the embeddings of the distorted images (\(\left\|f(x_{0})-f(x_{1})\right\|_{2}\)). The certified 2AFC scores for different settings of LipSim are reported in Table 2, which demonstrates the robustness of LipSim along with a high natural score. The value of the margin parameter of the hinge loss used during fine-tuning is given in the table, which clearly shows the trade-off between robustness and accuracy: a larger margin parameter leads to more robustness, and therefore higher certified scores, but lower natural scores.

### Image Retrieval

After demonstrating the robustness of LipSim in terms of certified and empirical scores, the focus of this section is on one of the real-world applications of a distance metric: image retrieval. We employ the image retrieval dataset proposed by Fu et al. (2023), which consists of 500 images randomly selected from the COCO dataset. The top-1 closest neighbors of a query image with respect to the LipSim and DreamSim distance metrics are shown in the rows of Figure 4. In order to investigate the impact of adversarial attacks on the performance of LipSim and DreamSim for image retrieval, we have performed an \(\ell_{2}\)-PGD attack (\(\epsilon=2.0\)) with the same MSE loss defined in Equation 14, separately for the two metrics, and the results are also depicted in the rows of Figure 4. In the adversarial rows, LipSim sometimes retrieves a different closest image, but one which is semantically similar to the closest image retrieved for the original query.

Figure 4: Impact of adversarial attacks on the performance of the DreamSim and LipSim distance metrics in the image retrieval application. The rows show the original images and their top-1 nearest neighbors, together with the adversarial images generated separately for LipSim and DreamSim and their top-1 nearest neighbors. More precisely, the red block shows a complete sample in the figure, where the upper and lower right images are the original and adversarial queries, and the upper and lower left images are their top-1 nearest images, respectively.

## 6 Conclusion

In this paper, we first showed the vulnerabilities of SOTA perceptual metrics, including DreamSim, to adversarial attacks; more importantly, we presented a framework for training a certifiably robust distance metric called LipSim, which leverages a 1-Lipschitz network as its backbone together with a 1-Lipschitz cosine similarity, and demonstrates non-trivial certified and empirical robustness. Moreover, LipSim was employed for an image retrieval task and exhibited good performance in gathering semantically close images for both original and adversarial image queries. For future work, it will be interesting to investigate the certified robustness of LipSim on other 2AFC datasets and to extend LipSim to other applications, including copy detection and feature inversion.

#### Acknowledgments

This paper is supported in part by the Army Research Office under grant number W911NF-21-1-0155 and by the New York University Abu Dhabi (NYUAD) Center for Artificial Intelligence and Robotics, funded by Tamkeen under the NYUAD Research Institute Award CG010.
2303.00503
All-genus WDVV recursion, quivers, and BPS invariants
Let $X$ be a smooth projective surface and $D$ a smooth rational ample divisor in $X$. We prove an all-genus generalization of the genus $0$ WDVV equation for primary Gromov--Witten invariants of the local 3-fold $\mathcal{O}_X(-D)$. The proof relies on a correspondence between all-genus Gromov--Witten invariants and refined Donaldson--Thomas invariants of acyclic quivers. In particular, the corresponding BPS invariants are expressed in terms of Betti numbers of moduli spaces of quiver representations.
Pierrick Bousseau, Longting Wu
2023-03-01T13:37:42Z
http://arxiv.org/abs/2303.00503v1
# All-genus WDVV recursion, quivers, and BPS invariants ###### Abstract. Let \(X\) be a smooth projective surface and \(D\) a smooth rational ample divisor in \(X\). We prove an all-genus generalization of the genus \(0\) WDVV equation for primary Gromov-Witten invariants of the local \(3\)-fold \(\mathcal{O}_{X}(-D)\). The proof relies on a correspondence between all-genus Gromov-Witten invariants and refined Donaldson-Thomas invariants of acyclic quivers. In particular, the corresponding BPS invariants are expressed in terms of Betti numbers of moduli spaces of quiver representations. ## 1. Introduction ### All-genus recursion Let \(X\) be a smooth projective surface over \(\mathbb{C}\) and \(D\) be an ample divisor in \(X\). Let \(\beta\in H_{2}(X,\mathbb{Z})\) be a nonzero curve class of \(X\). We use \(\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)\) to denote the moduli space of \(m\)-pointed genus \(g\) stable maps of class \(\beta\) to the total space of the line bundle \(\mathcal{O}_{X}(-D)\). Since \(D\) is ample, we have \(D\cdot\beta>0\), and so \(\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)\) coincides with the moduli space \(\overline{\mathcal{M}}_{g,m}(X,\beta)\) of \(m\)-pointed genus \(g\) stable maps to \(X\) with class \(\beta\). However, the Gromov-Witten virtual class \([\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)]^{\mathrm{vir}}\) is in general different from \([\overline{\mathcal{M}}_{g,m}(X,\beta)]^{\mathrm{vir}}\). We consider the following primary Gromov-Witten invariant: \[N^{\mathcal{O}_{X}(-D)}_{g,\beta}\coloneqq\int_{[\overline{\mathcal{M}}_{g,m} (\mathcal{O}_{X}(-D),\beta)]^{\mathrm{vir}}}\prod_{i=1}^{m}ev_{i}^{*}([pt])\,,\] where \([pt]\in H^{4}(X)\) is the point class of \(X\), and \(ev_{i}:\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)\to X\) is the evaluation at the \(i\)-th marked point. The virtual dimension of \(\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)\) is \(T_{\log}\cdot\beta+m\), where \(T_{\log}=-K_{X}-D\), and so we need \(m=T_{\log}\cdot\beta\) in order to have a possibly nonzero invariant. In particular, we need \(T_{\log}\cdot\beta\geq 0\). By fixing \(\beta\) and summing over \(g\), we get the following generating series \[F_{\beta}^{\mathcal{O}_{X}(-D)}\coloneqq\sum_{g\geq 0}N^{\mathcal{O}_{X}(-D)}_{g,\beta}h^{2g-2+T_{\log}\cdot\beta}.\] By the Gopakumar-Vafa conjecture proven in [22, 1, 19, 20], we know that \(F_{\beta}^{\mathcal{O}_{X}(-D)}\in\mathbb{Q}((-q)^{-\frac{1}{2}})\), i.e. a rational function of \((-q)^{-\frac{1}{2}}\). Here \(q\) is related to \(h\) by \(q=e^{\sqrt{-1}h}\). But these \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) are quite hard to compute in general. In this paper, we give a simple uniform recursive formula for \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) when \(D\) is further assumed to be of virtual genus \(0\). Here the _virtual genus_\(g(D)\) of a divisor \(D\) is defined by \(g(D)\coloneqq 1-\frac{1}{2}T_{\log}\cdot D\). **Theorem 1.1**.: _Let \(X\) be a smooth projective surface over \(\mathbb{C}\) and \(D\) be an ample divisor in \(X\) with virtual genus 0. Then, we have the following recursive formula:_ \[F_{\beta}^{\mathcal{O}_{X}(-D)}=\sum_{\begin{subarray}{c}\beta_{1}+\beta_{2}= \beta\\ \beta_{1},\beta_{2}>0\end{subarray}}F_{\beta_{1}}^{\mathcal{O}_{X}(-D)}F_{ \beta_{2}}^{\mathcal{O}_{X}(-D)}\left(q^{D\cdot\beta_{1}}+q^{-D\cdot\beta_{1} }-2\right)\begin{pmatrix}T_{\log}\cdot\beta-3\\ T_{\log}\cdot\beta_{1}-1\end{pmatrix} \tag{1}\] _if \(T_{\log}\cdot\beta\geq 3\). 
Here \(\binom{*}{*}\) stands for the binomial coefficients._

**Remark 1.2**.: We use the convention that \[\begin{pmatrix}m\\ n\end{pmatrix}=0,\quad\text{if $n<0$ or $n>m$}\,.\] So only the classes \(\beta_{i}\) (\(i=1,2\)) with \(T_{\log}\cdot\beta_{i}>0\) are involved in the summation on the right-hand side of (1).

**Remark 1.3**.: The recursion formula might not hold if we relax the ampleness condition and only require \(D\) to be nef. We give a counterexample in Appendix A.

According to [13, Corollary 2.3], the condition that \(X\) has an ample divisor \(D\) with virtual genus \(0\) actually forces \((X,D)\) to be one of the following two types: 1. \((X,D)=(\mathbb{P}^{2},\text{line})\,\text{or}\,(\mathbb{P}^{2},\text{conic})\); 2. \(X\) is a Hirzebruch surface and \(D\cdot f=1\) for every fiber \(f\) of the Hirzebruch surface \(X\). In particular, we can always choose \(D\) to be effective with \(D\simeq\mathbb{P}^{1}\). By taking the lowest coefficient of \(h\) in the above recursion, we obtain the following recursion for genus \(0\) invariants:

**Corollary 1.4**.: _Under the same assumptions as in Theorem 1.1, we have_ \[N^{\mathcal{O}_{X}(-D)}_{0,\beta}=-\sum_{\begin{subarray}{c}\beta_{1}+\beta_{2}=\beta\\ \beta_{1},\beta_{2}>0\end{subarray}}N^{\mathcal{O}_{X}(-D)}_{0,\beta_{1}}N^{\mathcal{O}_{X}(-D)}_{0,\beta_{2}}\,(D\cdot\beta_{1})^{2}\begin{pmatrix}T_{\log}\cdot\beta-3\\ T_{\log}\cdot\beta_{1}-1\end{pmatrix} \tag{2}\] _if \(T_{\log}\cdot\beta\geq 3\)._

At the moment, a direct proof of the all-genus recursion (1) in Gromov-Witten theory is not known. Instead, we will first translate the recursion into a recursion for quiver Donaldson-Thomas (DT) invariants via the local/relative correspondence [1, 10] together with a GW/quiver correspondence derived in [11] from the GW/Kronecker correspondence for log Calabi-Yau surfaces [1, 1, 1, 1, 10, 11]. The recursion on the quiver DT-side can then be deduced using the geometric properties of the quiver moduli spaces, in a way parallel to the work of Reineke-Weist in [10]. The genus-zero recursion, however, can be deduced directly on the Gromov-Witten side: it follows from the corresponding WDVV recursion for genus-zero relative Gromov-Witten invariants of the pair \((X,D)\) in [10] together with the local/relative correspondence in [11]. The details of such a proof can be found in Appendix B. It will be clear from the proof that the requirement that \(D\) be rational is necessary. It follows that one can view (1) as an example of an all-genus WDVV equation.

**Remark 1.5**.: Another structure underlying higher genus Gromov-Witten invariants is given by the conjectural Virasoro constraints [1, 2]. In genus \(0\), the Virasoro constraints are known to essentially follow from the WDVV equation [13], and so one can also view the Virasoro constraints as a higher genus generalization of the WDVV equation. There is however a major difference between the recursion (1) and the Virasoro constraints: the Virasoro constraints are naturally formulated genus by genus in terms of generating series summing over curve classes, whereas the recursion (1) is formulated curve class by curve class in terms of generating series summing over the genus.
Actually, as \(D\) is ample, the Gromov-Witten invariants of \(\mathcal{O}_{X}(-D)\) in Theorem 1.1 can be viewed as Gromov-Witten invariants of the projective compactification \(\mathbb{P}(\mathcal{O}_{X}\oplus\mathcal{O}_{X}(-D))\), which is a projective toric variety and so for which the Virasoro constraints are known [11, 12, 12]. In Appendix C, we compare the recursions for genus one invariants derived from (1) and from the Virasoro constraint: the two recursions are different and the fact that they produce the same invariants seems highly non-trivial.

**Remark 1.6**.: Recently, there has been an extensive study [1, 1 implies that the recursion (1) is compatible with the above deformation. So after a sequence of deformations, we have \[\operatorname{GW}(\mathcal{O}_{F_{n}}(-C_{n}-sf)) \simeq\operatorname{GW}(\mathcal{O}_{F_{n+2}}(-C_{n+2}-(s-1)f))\simeq\] \[\cdots \simeq\operatorname{GW}(\mathcal{O}_{F_{n+2s-2}}(-C_{n+2s-2}-f))\] by the deformation invariance of Gromov-Witten theory. Thus, it is enough to consider only the following three types of pairs \((X,D)\): \[(\mathbb{P}^{2},\operatorname{line}),\quad(\mathbb{P}^{2},\operatorname{conic}),\quad(F_{n},C_{n}+f),\,n\geq 0\,. \tag{3}\]

### Quiver

For each pair as above, there is a certain type of quiver associated to it by [1]. We give a brief review of the construction in Section 3.2. The results can be summarized as follows. For \((\mathbb{P}^{2},\operatorname{line})\), the corresponding quiver is [diagram: vertices \(i_{1},\dots,i_{m}\), each with a single arrow to a vertex \(j\)]. For \((\mathbb{P}^{2},\operatorname{conic})\), the corresponding quiver is [diagram: vertices \(i_{1},\dots,i_{m}\), each with two arrows to a vertex \(j\)]. As for \((F_{n},C_{n}+f)\), the corresponding quiver is [diagram: vertices \(i_{1},\dots,i_{m}\), each with one arrow to a vertex \(j_{1}\) and one arrow to a vertex \(j_{2}\), together with \(n\) arrows from \(j_{1}\) to \(j_{2}\)]. Note that for all three types of quivers, the number \(m\) of vertices on the left-hand side is not fixed. In the third type of quiver, the number \(n\) of arrows between the vertices \(j_{1}\) and \(j_{2}\) is the same as the subscript of \(F_{n}\). To get a quiver moduli space, we need to specify the dimension vector. For the vertices on the left-hand side, we always put dimension one. For the vertices indexed by \(j\), \(j_{1}\), \(j_{2}\), the dimensions can be arbitrary non-negative integers; we denote them by \(d\), \(d_{1}\), \(d_{2}\) respectively. We use \(M^{L}_{m,d}\), \(M^{C}_{m,d}\) and \(M^{F_{n}}_{m,d_{1},d_{2}}\) to denote the moduli spaces of \(\theta\)-semistable quiver representations associated to the first, second and third types of quivers, with stability condition \(\theta\) always given by \[\theta(\cdot)=\{\underline{d},\cdot\}\] where \(\{\cdot,\cdot\}\) is the antisymmetrized Euler form and \(\underline{d}\) is the dimension vector associated to the corresponding quiver (see Section 3.1 for more details). Given a projective moduli space \(Y\) of semistable quiver representations, the corresponding refined Donaldson-Thomas invariant \(\Omega_{Y}(q)\) is defined as follows. If the stable locus of \(Y\) is not empty, then \[\Omega_{Y}(q)=(-q^{1/2})^{-\dim_{\mathbb{C}}Y}\sum_{i=0}^{2\dim_{\mathbb{C}}Y}\dim\mathrm{IH}^{i}(Y,\mathbb{Q})(-q^{1/2})^{i} \tag{4}\] i.e., \(\Omega_{Y}(q)\) is the shifted Poincaré polynomial of the intersection cohomology of \(Y\). Otherwise, if the stable locus of \(Y\) is empty, \(\Omega_{Y}(q)=0\). The equivalence with the definition of quiver Donaldson-Thomas invariants based on the motivic Hall algebra [11, 12, 13] is shown in [14].
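As a quick illustration of the definition (4), the following small computational sketch (in Python with SymPy) computes \(\Omega_{Y}(q)\) from a list of intersection Betti numbers and checks it on \(Y=\mathbb{P}^{n}\), which is smooth, so that \(\mathrm{IH}=\mathrm{H}\). The check recovers, including the sign \((-1)^{n}\), the shifted Poincaré polynomial \(P_{\mathbb{P}^{D\cdot\beta-1}}\) that reappears in Section 4 (with \(n=D\cdot\beta-1\)).

```python
import sympy as sp

q = sp.symbols('q', positive=True)

def omega(ih_betti):
    # Refined DT invariant of eq. (4): the shifted Poincare polynomial
    # Omega_Y(q) = (-q^{1/2})^{-dim_C Y} * sum_i b_i (-q^{1/2})^i, where
    # b_i = dim IH^i(Y, Q) and dim_C Y = (top cohomological degree) / 2.
    dim_c = (len(ih_betti) - 1) // 2
    s = -sp.sqrt(q)
    return sp.expand(s ** (-dim_c) * sum(b * s ** i for i, b in enumerate(ih_betti)))

# Sanity check on Y = P^n (smooth, so IH = H and b_0 = b_2 = ... = b_2n = 1):
n = 3
betti = [1 if i % 2 == 0 else 0 for i in range(2 * n + 1)]
expected = (-1) ** n * (q ** sp.Rational(n + 1, 2) - q ** (-sp.Rational(n + 1, 2))) \
           / (sp.sqrt(q) - 1 / sp.sqrt(q))
assert sp.simplify(omega(betti) - expected) == 0
```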
**Theorem 1.8** ([1]).: _For the following three types of pairs \((X,D)\):_ \[(\mathbb{P}^{2},\mathrm{line}),\quad(\mathbb{P}^{2},\mathrm{conic}),\quad(F_{n},C_{n}+f),\,n\geq 0\,,\] _we have_ \[\Omega_{M^{X/D}_{\beta}}(q)=F_{\beta}^{X/D}\frac{(-1)^{D\cdot\beta+1}}{(2\sin(h/2))^{T_{\log}\cdot\beta-1}}\] _if \(T_{\log}\cdot\beta>0\), where \(q=e^{\sqrt{-1}h}\), and \(M^{X/D}_{\beta}\) is determined by \((X,D)\) and \(\beta\) as follows:_ 1. _if_ \((X,D)=(\mathbb{P}^{2},\mathrm{line})\) _and_ \(\beta=d[l]\)_, then_ \(M^{X/D}_{\beta}=M^{L}_{2d,d}\)_, where_ \([l]\) _is the line class;_ 2. _if_ \((X,D)=(\mathbb{P}^{2},\operatorname{conic})\) _and_ \(\beta=d[l]\)_, then_ \(M_{\beta}^{X/D}=M_{d,d}^{C}\)_;_ 3. _if_ \((X,D)=(F_{n},C_{n}+f)\) _and_ \(\beta=d_{1}C_{-n}+d_{2}f\)_, then_ \(M_{\beta}^{X/D}=M_{m,d_{1},d_{2}}^{F_{n}}\)_, where_ \(m=T_{\log}\cdot\beta=(1-n)d_{1}+d_{2}\)_._

Combining Theorems 1.7 and 1.8, we can translate the recursion from the local side to the quiver side. We explain in Section 5 how to derive the recursion on the quiver side from the following geometric properties of the quiver moduli spaces.

#### 1.4.1. Duality

Using reflection functors, we obtain the following isomorphisms:

**Theorem 1.9**.: 1. \(M_{m,d}^{L}\simeq M_{m,m-d}^{L}\) _if_ \(m\)_,_ \(d\) _are coprime and_ \(m-d>0\)_;_ 2. \(M_{m,d}^{C}\simeq M_{m,2m-d}^{C}\) _if_ \(m\)_,_ \(d\) _are coprime and_ \(2m-d>0\)_;_ 3. \(M_{m,d_{1},d_{2}}^{F_{0}}\simeq M_{m,m-d_{1},m-d_{2}}^{F_{0}}\) _if_ \(m>\max\{d_{1},d_{2}\}\) _and_ \(m\)_,_ \(d_{1}+d_{2}\) _are coprime;_ 4. \(M_{(1-n)d_{1}+d_{2}+1,d_{1},d_{2}}^{F_{n}}\simeq M_{(1-n)d_{1}+d_{2}+1,d_{1}+1,d_{2}+n+1}^{F_{n}}\) _if_ \(n>0\) _and_ \((1-n)d_{1}+d_{2}\geq 0\)_._

#### 1.4.2. Other geometric properties

We also need other geometric properties which can be summarized as follows. Let \(M_{m,d}^{L,\text{fr}}\), \(M_{m,d}^{C,\text{fr}}\), \(M_{m,d_{1},d_{2}}^{F_{n},\text{fr}}\) be the framed quiver moduli associated to \(M_{m,d}^{L}\), \(M_{m,d}^{C}\), \(M_{m,d_{1},d_{2}}^{F_{n}}\) respectively (see Section 3.4 for more details).

**Theorem 1.10**.: _We have the following isomorphisms_ 1. \(M_{m,d}^{L,fr}\simeq M_{m+1,d}^{L}\) _if_ \(m\) _divides_ \(d\)_;_ 2. \(M_{m,d}^{C,fr}\simeq M_{m+1,d}^{C}\) _if_ \(m\) _divides_ \(d\)_;_ 3. \(M_{m,d_{1},d_{2}}^{F_{0},fr}\simeq M_{m+1,d_{1},d_{2}}^{F_{0}}\) _if_ \(m\) _divides_ \(d_{1}+d_{2}\)_;_ 4. \(M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n},fr}\simeq M_{(1-n)d_{1}+d_{2}+1,d_{1},d_{2}}^{F_{n}}\) _if_ \(n>0\) _and_ \((1-n)d_{1}+d_{2}\geq 0\)_._

**Theorem 1.11**.: _Assume that \(m>0\). There exist small resolutions:_ 1. \(M_{m-1,d}^{L,fr}\to M_{m,d}^{L}\) _if_ \(d\) _divides_ \(m\) _and_ \(m>d\)_;_ 2. \(M_{m-1,d}^{C,fr}\to M_{m,d}^{C}\) _if_ \(m\) _divides_ \(d\)_;_ 3. \(M_{m-1,d_{1},d_{2}}^{F_{0},fr}\to M_{m,d_{1},d_{2}}^{F_{0}}\) _if_ \(m\) _divides_ \(d_{1}+d_{2}\) _and_ \(d_{1},d_{2}>0\)_;_ 4. \(M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n},fr}\to M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n}}\) _if_ \(n>0\)_,_ \(d_{1}>0\) _and_ \(d_{2}>\max\{d_{1}-1,(n-1)d_{1}\}\)_._

**Remark 1.12**.: All the geometric properties of case (1) in Theorems 1.9, 1.10 and 1.11 were first proven by Reineke and Weist in [10]. We show in Section 3 that cases (2), (3), (4) in Theorems 1.9, 1.10 and 1.11 can be deduced via a slight generalization of their argument in [10].
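Before moving on, it may help to see how the genus-\(0\) recursion of Corollary 1.4 is run in practice. The sketch below specializes (2) to \((X,D)=(\mathbb{P}^{2},\mathrm{line})\), where \(\beta=d[l]\), \(D\cdot\beta_{1}=d_{1}\) and \(T_{\log}\cdot\beta=2d\). The initial value \(N^{\mathcal{O}_{X}(-D)}_{0,[l]}\) is left as an input, since the initial data are only determined from the quiver side in Section 6.

```python
from fractions import Fraction
from math import comb

def local_p2_line_genus0(d_max, seed):
    # Corollary 1.4 specialized to (X, D) = (P^2, line): for beta = d[l]
    # one has D.beta_1 = d_1 and T_log.beta = 2d, so for d >= 2:
    #   N[d] = - sum_{d1+d2=d, d1,d2>0} N[d1] N[d2] d1^2 C(2d-3, 2*d1-1).
    # The seed N[1] is an assumed input, not computed here.
    N = {1: Fraction(seed)}
    for d in range(2, d_max + 1):
        N[d] = -sum(N[d1] * N[d - d1] * d1 ** 2 * comb(2 * d - 3, 2 * d1 - 1)
                    for d1 in range(1, d))
    return N
```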
### BPS invariants

The BPS invariants \(n_{g,\beta}^{\mathcal{O}_{X}(-D)}\) for \(\mathcal{O}_{X}(-D)\) are defined via \[F_{\beta}^{\mathcal{O}_{X}(-D)}=\sum_{g}n_{g,\beta}^{\mathcal{O}_{X}(-D)}(2\sin(h/2))^{2g-2+T_{\log}\cdot\beta} \tag{5}\] if \(T_{\log}\cdot\beta>0\). Combining the deformation equivalence and Theorems 1.7, 1.8 and 1.11, we can deduce that

**Theorem 1.13**.: _Under the same assumptions as in Theorem 1.1 for the pair \((X,D)\), we have_ \[\sum_{g}n_{g,\beta}^{\mathcal{O}_{X}(-D)}(2\sin(h/2))^{2g}=(-1)^{D\cdot\beta-1}\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q) \tag{6}\] _if \(T_{\log}\cdot\beta>0\), where \(q=e^{\sqrt{-1}h}\), and \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is determined by \((X,D)\) and \(\beta\) as follows:_ 1. _if_ \((X,D)=(\mathbb{P}^{2},\mathrm{line})\) _and_ \(\beta=d[l]\)_, then_ \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{2d-1,d}^{L}\)_;_ 2. _if_ \((X,D)=(\mathbb{P}^{2},\mathrm{conic})\) _and_ \(\beta=d[l]\)_, then_ \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{d-1,d}^{C}\)_;_ 3. _if_ \((X,D)=(F_{n},C_{n}+(s+1)f)\) _and_ \(\beta=d_{1}C_{-n}+d_{2}f\)_, then_ \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{m-1,d_{1},d_{2}+sd_{1}}^{F_{n+2s}}\) _where_ \(m=T_{\log}\cdot\beta=(1-n-s)d_{1}+d_{2}\)_._

By computing the dimension of \(M_{\beta}^{\mathcal{O}_{X}(-D)}\), we are able to determine the BPS Castelnuovo number1, defined as \(g_{\beta}^{\mathcal{O}_{X}(-D)}\coloneqq\max\{g\,|\,n_{g,\beta}^{\mathcal{O}_{X}(-D)}\neq 0\}\). Footnote 1: Our definition of the BPS Castelnuovo number is different from that of Doan and Walpuski in [10] where they defined the BPS Castelnuovo number to be \(\inf\{g\,|\,n_{g,\beta}^{\mathcal{O}_{X}(-D)}=0\}\).

**Corollary 1.14**.: _Under the same assumptions as in Theorem 1.1 for the pair \((X,D)\) and assuming \(T_{\log}\cdot\beta>0\), \(M_{\beta}^{\mathcal{O}_{X}(-D)}\neq\emptyset\), we have_ 1. \(g_{\beta}^{\mathcal{O}_{X}(-D)}=\frac{(K_{X}+\beta)\cdot\beta}{2}+1\)_;_ 2. \(n_{g,\beta}^{\mathcal{O}_{X}(-D)}=(-1)^{g+D\cdot\beta-1}\)_, if_ \(g=\frac{(K_{X}+\beta)\cdot\beta}{2}+1\geq 0\)_._

Note that case (1) of Corollary 1.14 matches the genus-degree formula, and case (2) of Corollary 1.14 actually follows from the geometric fact that the moduli space \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is connected.

### Numerical results

Theorem 1.1 gives an effective way to compute \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) once we know those \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) such that \(T_{\log}\cdot\beta<3\). We show in Section 6 that for the following five pairs: \[(\mathbb{P}^{2},\mathrm{line}),\,(\mathbb{P}^{2},\mathrm{conic}),\,(F_{n},C_{n}+f),\,n=0,1,2\,,\] those initial \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) can be explicitly determined from the quiver side. We also give some numerical results for the above five pairs in Section 6. As for the pairs \[(F_{n},C_{n}+f),\,n>2,\] the initial \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) are related to the quiver DT-invariants of \(n\)-Kronecker quivers with \(n>2\), for which no explicit closed formula is currently known (see Section 6 for more details).

### Organization of the paper

This paper is organized as follows. In Section 2, we give a brief introduction to the local/relative correspondence and prove Theorem 1.7. In Section 3, after a brief review of the GW/quiver correspondence, we prove Theorems 1.9, 1.10 and 1.11. In Section 4, combining the local/relative correspondence with the GW/quiver correspondence, we prove Theorem 1.13 and Corollary 1.14 on the BPS invariants. In Section 5, we prove our main theorem, i.e., Theorem 1.1 on the all-genus WDVV recursion.
In Section 6, we give some numerical results based on the all-genus WDVV recursion. In Appendix A, we give a counterexample which shows that the ampleness condition in Theorem 1.1 cannot be relaxed to nefness. In Appendix B, we give another proof of Corollary 1.4 purely on the Gromov-Witten side. In Appendix C, we give a comparison of our recursion with a recursion derived from the Virasoro constraints.

### Acknowledgment

We would like to thank Rahul Pandharipande, Honglu Fan, Shuai Guo, Hyenho Lho, and Michel van Garrel for helpful discussions. This project was supported by the National Key R&D Program of China (No. 2022YFA1006200).

## 2. Local/relative correspondence

Let \(D\) be a rational smooth ample divisor in a smooth projective surface \(X\) over \(\mathbb{C}\). In this section, as a first step towards the recursion formula (1), we prove Theorem 1.7, which relates generating series of local Gromov-Witten invariants of \(\mathcal{O}_{X}(-D)\) to generating series of relative Gromov-Witten invariants of the pair \((X,D)\).

### Local/relative correspondence

We will prove Theorem 1.7 by applying the higher genus local/relative correspondence established in [1], which expresses the virtual cycle of \(\overline{\mathcal{M}}_{g}(\mathcal{O}_{X}(-D),\beta)\) as \[\frac{(-1)^{\beta\cdot D-1}}{\beta\cdot D}F_{*}\left((-1)^{g}\lambda_{g}\cap[\overline{\mathcal{M}}_{g}(X/D,\beta)]^{\mathrm{vir}}\right)+\sum_{\mathcal{G}\in G_{g,\beta}}\frac{1}{|\operatorname{Aut}(\mathcal{G})|}(\tau_{\mathcal{G}})_{*}\left(C_{\mathcal{G}}\cap[\overline{\mathcal{M}}_{\mathcal{G}}]^{\mathrm{vir}}\right)\,,\] where the various terms are described as follows. The map \(F:\overline{\mathcal{M}}_{g}(X/D,\beta)\to\overline{\mathcal{M}}_{g}(X,\beta)\) is obtained by forgetting the unique relative marking and stabilizing. Moreover, \(G_{g,\beta}\) is a set of star-type graphs \(\mathcal{G}=(V,E,g,b)\) of the following form: the set \(V\) of vertices of \(\mathcal{G}\) admits a decomposition \(\{v\}\coprod V_{1}\) such that for every \(v_{i}\in V_{1}\), there is a unique edge between \(v\) and \(v_{i}\), and all edges of \(\mathcal{G}\) are of this form. In addition, we have maps \(\mathrm{g}:V\to\mathbb{Z}_{\geq 0}\) and \(b:V\to H_{2}(D,\mathbb{Z})\cup H_{2}(X,\mathbb{Z})\) assigning a genus and a curve class to every vertex, such that \(b(v)\in H_{2}(D,\mathbb{Z})\), \(b(v_{i})\in H_{2}(X,\mathbb{Z})\) for every \(v_{i}\in V_{1}\), \(\sum_{v\in V}\mathrm{g}(v)=g\) and \(\sum_{v_{i}\in V_{1}}b(v_{i})+\iota_{*}(b(v))=\beta\), where \(\iota:D\hookrightarrow X\) is the inclusion of \(D\) in \(X\). The moduli space \(\overline{\mathcal{M}}_{\mathcal{G}}\) decorated by a graph \(\mathcal{G}\in G_{g,\beta}\) is \[\overline{\mathcal{M}}_{\mathcal{G}}=\left(\prod_{v_{i}}\overline{\mathcal{M}}_{\mathrm{g}(v_{i})}(X/D,b(v_{i}))\right)\times_{D^{|E|}}\overline{\mathcal{M}}_{\mathrm{g}(v),|E|}(D,b(v))\,,\] where \(|E|\) is the number of edges of \(\mathcal{G}\).
The virtual cycle of \(\overline{\mathcal{M}}_{\mathcal{G}}\) is defined using the Gysin map associated to the diagonal map \(\Delta:D^{|E|}\to D^{|E|}\times D^{|E|}\) as: \[[\overline{\mathcal{M}}_{\mathcal{G}}]^{\mathrm{vir}}:=\Delta^{!}\left(\prod_{v_{i}\in V_{1}}[\overline{\mathcal{M}}_{\mathrm{g}(v_{i})}(X/D,b(v_{i}))]^{\mathrm{vir}}\times[\overline{\mathcal{M}}_{\mathrm{g}(v),|E|}(D,b(v))]^{\mathrm{vir}}\right).\] Finally, \(C_{\mathcal{G}}\) is a certain class in the Chow ring of \(\overline{\mathcal{M}}_{\mathcal{G}}\) containing a factor of \(\prod_{v_{i}\in V_{1}}\lambda_{\mathrm{g}(v_{i})}\) (see [1, Section 2.2] for more details), and \(\tau_{\mathcal{G}}\) is the natural gluing map from \(\overline{\mathcal{M}}_{\mathcal{G}}\) to \(\overline{\mathcal{M}}_{g}(X,\beta)\). In our case, the definition of the local invariants \(N^{\mathcal{O}_{X}(-D)}_{g,\beta}\) also includes insertions of point classes. So we need to generalize the above expression for \([\overline{\mathcal{M}}_{g}(\mathcal{O}_{X}(-D),\beta)]^{\mathrm{vir}}\) to the case of \[\prod_{i=1}^{m}ev_{i}^{*}([pt])\cap[\overline{\mathcal{M}}_{g,m}(\mathcal{O}_{X}(-D),\beta)]^{\mathrm{vir}}. \tag{7}\] Following the same argument as in [1], one obtains that (7) can be expressed as \[\prod_{i=1}^{m}ev_{i}^{*}([pt])\cap\left(\frac{(-1)^{\beta\cdot D-1}}{\beta\cdot D}F_{*}\left((-1)^{g}\lambda_{g}\cap[\overline{\mathcal{M}}_{g,m}(X/D,\beta)]^{\mathrm{vir}}\right)+\sum_{\mathcal{G}\in G_{g,\beta,m}}\frac{1}{|\operatorname{Aut}(\mathcal{G})|}(\tau_{\mathcal{G}})_{*}\left(C_{\mathcal{G}}\cap[\overline{\mathcal{M}}_{\mathcal{G}}]^{\mathrm{vir}}\right)\right).\] Here each graph \(\mathcal{G}=(V,E,g,b,a)\in G_{g,\beta,m}\) contains an additional datum \(a\) which gives an assignment of the \(m\) markings to the vertices in \(V_{1}\). Let \(|a^{-1}(v_{i})|\) denote the number of additional markings assigned to the vertex \(v_{i}\in V_{1}\). Then \(\overline{\mathcal{M}}_{\mathcal{G}}\) becomes \[\left(\prod_{v_{i}}\overline{\mathcal{M}}_{\mathrm{g}(v_{i}),|a^{-1}(v_{i})|}(X/D,b(v_{i}))\right)\times_{D^{|E|}}\overline{\mathcal{M}}_{\mathrm{g}(v),|E|}(D,b(v))\] and the definition of \(C_{\mathcal{G}}\) is the same as in the \(m=0\) case.

### Proof of Theorem 1.7

Now let \(m=T_{\log}\cdot\beta\). Applying the degree map to (7), we get \(N^{\mathcal{O}_{X}(-D)}_{g,\beta}\). If we apply the degree map to \[\prod_{i=1}^{m}ev_{i}^{*}([pt])\cap\left(\frac{(-1)^{\beta\cdot D-1}}{\beta\cdot D}F_{*}\left((-1)^{g}\lambda_{g}\cap[\overline{\mathcal{M}}_{g,m}(X/D,\beta)]^{\mathrm{vir}}\right)\right)\] then we get \(\frac{(-1)^{D\cdot\beta-1}}{D\cdot\beta}N^{X/D}_{g,\beta}\). We are left to study \[\prod_{i=1}^{m}ev_{i}^{*}([pt])\cap\left(\sum_{\mathcal{G}\in G_{g,\beta,m}}\frac{1}{|\operatorname{Aut}(\mathcal{G})|}(\tau_{\mathcal{G}})_{*}\left(C_{\mathcal{G}}\cap[\overline{\mathcal{M}}_{\mathcal{G}}]^{\mathrm{vir}}\right)\right).
\tag{8}\] Since all the markings are assigned to the vertices in \(V_{1}\) and \(C_{\mathcal{G}}\) always contains a factor \(\prod_{v_{i}\in V_{1}}\lambda_{\mathrm{g}(v_{i})}\), the cohomology class assigned to the factor \(\prod_{v_{i}\in V_{1}}\overline{\mathcal{M}}_{\mathrm{g}(v_{i}),|a^{-1}(v_{i})|}(X/D,b(v_{i}))\) in \(\overline{\mathcal{M}}_{\mathcal{G}}\) has degree at least \[\deg\left(\prod_{i=1}^{m}ev_{i}^{*}([pt])\prod_{v_{i}\in V_{1}}\lambda_{\mathrm{g}(v_{i})}\right)=2m+\sum_{v_{i}\in V_{1}}\mathrm{g}(v_{i})\,.\] Given the virtual dimension \[\dim\left(\prod_{v_{i}\in V_{1}}\left[\overline{\mathcal{M}}_{\mathrm{g}(v_{i}),|a^{-1}(v_{i})|}(X/D,b(v_{i}))\right]^{\mathrm{vir}}\right)=m+\sum_{v_{i}\in V_{1}}\left(T_{\log}\cdot b(v_{i})+\mathrm{g}(v_{i})\right)\,,\] in order to get a nonzero contribution to (8), we need \[m+\sum_{v_{i}\in V_{1}}\left(T_{\log}\cdot b(v_{i})+\mathrm{g}(v_{i})\right)\geq 2m+\sum_{v_{i}\in V_{1}}\mathrm{g}(v_{i})\,,\] which is equivalent to \[\sum_{v_{i}\in V_{1}}T_{\log}\cdot b(v_{i})\geq m=T_{\log}\cdot\beta=T_{\log}\cdot\iota_{*}(b(v))+\sum_{v_{i}\in V_{1}}T_{\log}\cdot b(v_{i}),\] i.e., \(T_{\log}\cdot\iota_{*}(b(v))\leq 0\). As \(D\) is a curve, \(\iota_{*}(b(v))\) is a multiple of the fundamental class of \(D\), and so \(T_{\log}\cdot\iota_{*}(b(v))=nT_{\log}\cdot D\) where \(n\in\mathbb{Z}_{\geq 0}\) is the degree of \(b(v)\in H_{2}(D,\mathbb{Z})\). The condition that \(D\) is rational further implies that \(T_{\log}\cdot D=2>0\) by the adjunction formula. So we must have \(n=0\). In this case, it is not possible to add further insertions on the factor \(\prod_{v_{i}\in V_{1}}\overline{\mathcal{M}}_{\mathrm{g}(v_{i}),|a^{-1}(v_{i})|}(X/D,b(v_{i}))\). So if we apply the degree map to (8), the Künneth decomposition of the diagonal class \([\Delta]\in H^{2}(D^{|E|}\times D^{|E|},\mathbb{Z})\) contributes \(|E|\) insertions of the point class of \(D\) to the factor \(\overline{\mathcal{M}}_{\mathrm{g}(v),|E|}(D,b(v))\) in \(\overline{\mathcal{M}}_{\mathcal{G}}\). Since \(b(v)=0\), it is not possible to have more than one point class insertion on \(\overline{\mathcal{M}}_{\mathrm{g}(v),|E|}(D,0)\) for a nonzero invariant. This implies that \(|E|=1\). From the above discussion, we conclude that if we apply the degree map to (8), then only those \(\mathcal{G}\in G_{g,\beta,m}\) such that \[\overline{\mathcal{M}}_{\mathcal{G}}=\overline{\mathcal{M}}_{g_{1},m}(X/D,\beta)\times_{D}\overline{\mathcal{M}}_{g_{2},1}(D,0)\] give nonzero contributions, where \(g_{1}+g_{2}=g\), \(g_{2}>0\). By the description of the class \(C_{\mathcal{G}}\) in [1, Section 2.2], the corresponding contribution is \[N_{g_{1},\beta}^{X/D}(-1)^{g_{2}-1+D\cdot\beta}(D\cdot\beta)^{2g_{2}-1}\int_{[\overline{\mathcal{M}}_{g_{2},1}(D,0)]^{\mathrm{vir}}}\psi_{1}^{2g_{2}-2}ev_{1}^{*}(w)\,.\] Here \(w\) is the point class of \(D\) and \(\psi_{1}\) is the psi-class associated to the unique marking. The degree zero Gromov-Witten invariant is determined by a Hodge integral on the moduli space of curves: \[\int_{[\overline{\mathcal{M}}_{g_{2},1}(D,0)]^{\mathrm{vir}}}\psi_{1}^{2g_{2}-2}ev_{1}^{*}(w)=(-1)^{g_{2}}\int_{[\overline{\mathcal{M}}_{g_{2},1}]}\psi_{1}^{2g_{2}-2}\lambda_{g_{2}}\,,\] and so we conclude that \[N_{g,\beta}^{\mathcal{O}_{X}(-D)}=\sum_{g_{1}+g_{2}=g}N_{g_{1},\beta}^{X/D}(-1)^{D\cdot\beta-1}(D\cdot\beta)^{2g_{2}-1}\int_{[\overline{\mathcal{M}}_{g_{2},1}]}\psi_{1}^{2g_{2}-2}\lambda_{g_{2}}.
\tag{9}\] Here we simply set \(\int_{[\overline{\mathcal{M}}_{g_{2},1}]}\psi_{1}^{2g_{2}-2}\lambda_{g_{2}}=1\) when \(g_{2}=0\) so as to include the term \(\frac{(-1)^{D\cdot\beta-1}}{D\cdot\beta}N_{g,\beta}^{X/D}\) in the right-hand side of (9). The Hodge integrals appearing in (9) have been computed by Faber and Pandharipande in [10]. Theorem 1.7, i.e., the identity \[F_{\beta}^{\mathcal{O}_{X}(-D)}=F_{\beta}^{X/D}\frac{(-1)^{D\cdot\beta-1}}{2\sin(\frac{(D\cdot\beta)h}{2})}\,,\] then follows from (9) and [10, Theorem 2].

## 3. GW/quiver correspondence

In this section, after a review of quiver representations and of the GW/quiver correspondence of [1], we prove Theorems 1.9, 1.10, 1.11 on geometric properties of moduli spaces of quiver representations.

### Quiver moduli

We give a brief introduction to the moduli of representations of quivers in this section. The main reference is [10]. We always work over \(\mathbb{C}\). A quiver \(Q\) consists of a finite set of vertices \(Q_{0}\) and a finite set of arrows \(Q_{1}=\{\alpha:i\to j\,|\,i,j\in Q_{0}\}\). Let \[\mathbb{Z}Q_{0}\coloneqq\bigoplus_{i\in Q_{0}}\mathbb{Z}e_{i}\] where \(\{e_{i}\}_{i\in Q_{0}}\) forms a natural basis. The Euler form on \(\mathbb{Z}Q_{0}\) is defined by \[\langle\underline{a},\underline{b}\rangle\coloneqq\sum_{i\in Q_{0}}a_{i}b_{i}-\sum_{\alpha:i\to j\in Q_{1}}a_{i}b_{j}\] for \(\underline{a}=(a_{i})_{i\in Q_{0}}\), \(\underline{b}=(b_{i})_{i\in Q_{0}}\in\mathbb{Z}Q_{0}\). We further use \((\underline{a},\underline{b})\coloneqq\langle\underline{a},\underline{b}\rangle+\langle\underline{b},\underline{a}\rangle\) to denote the symmetrized Euler form and use \(\{\underline{a},\underline{b}\}\coloneqq\langle\underline{a},\underline{b}\rangle-\langle\underline{b},\underline{a}\rangle\) to denote the antisymmetrized Euler form. A representation of a quiver \(Q\) consists of a tuple of vector spaces \((V_{i})_{i\in Q_{0}}\) indexed by the vertices, plus a tuple of linear morphisms \((V_{\alpha}:V_{i}\to V_{j})_{\alpha:i\to j}\) indexed by the arrows. A morphism between two representations \(V,W\) consists of a tuple of linear morphisms \(\{f_{i}:V_{i}\to W_{i}\}_{i\in Q_{0}}\) such that \(f_{j}\circ V_{\alpha}=W_{\alpha}\circ f_{i}\) holds for every arrow \(\alpha:i\to j\). The complex representations of a fixed quiver \(Q\) form a category via componentwise composition. We denote it by \(\operatorname{Rep}_{\mathbb{C}}Q\). Actually, \(\operatorname{Rep}_{\mathbb{C}}Q\) is a \(\mathbb{C}\)-linear abelian category with finite length. Let \(\underline{d}=(\dim V_{i})_{i\in Q_{0}}\) be a dimension vector. The vector space \[R_{\underline{d}}(Q)\coloneqq\bigoplus_{\alpha:i\to j}\operatorname{Hom}(V_{i},V_{j})\] parametrizes all the representations of \(Q\) with dimension vector \(\underline{d}\). The group \[G_{\underline{d}}(Q)\coloneqq\prod_{i\in Q_{0}}GL(V_{i})\] naturally acts on \(R_{\underline{d}}(Q)\) as follows: \[G_{\underline{d}}(Q)\times R_{\underline{d}}(Q) \longrightarrow R_{\underline{d}}(Q)\] \[\left((g_{i})_{i\in Q_{0}},(V_{\alpha})_{\alpha:i\to j}\right) \longmapsto (g_{j}\circ V_{\alpha}\circ g_{i}^{-1})_{\alpha:i\to j}\] We want to construct a moduli space which parametrizes the isomorphism classes of quiver representations. The quotient of \(R_{\underline{d}}\) by \(G_{\underline{d}}\) might not yield an interesting moduli space. For example, if \(Q\) is acyclic, that is, if there are no directed cycles in \(Q\), then \[R_{\underline{d}}//G_{\underline{d}}:=\operatorname{Spec}\left(\mathbb{C}[R_{\underline{d}}]^{G_{\underline{d}}}\right)\] is just a point.
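The preceding definitions translate directly into elementary code. The following minimal sketch implements the Euler form and its antisymmetrization for a quiver given by a list of arrows, and checks, for the first quiver of Section 1.4, that the quantity \(1-\langle\underline{d},\underline{d}\rangle\) (the dimension of the quiver moduli space, recalled below) is symmetric under \(d\mapsto m-d\), consistently with the duality of Theorem 1.9; the vertex names are hypothetical labels.

```python
from collections import Counter

def euler_form(vertices, arrows):
    # <a, b> = sum_i a_i b_i - sum_{alpha: i -> j} a_i b_j,
    # where the arrow list may contain repetitions (multiple arrows).
    mult = Counter(arrows)
    def form(a, b):
        return (sum(a[v] * b[v] for v in vertices)
                - sum(k * a[i] * b[j] for (i, j), k in mult.items()))
    return form

# Example: the quiver with vertices i_1, ..., i_m and one arrow from each
# i_k to a vertex j (the quiver of the pair (P^2, line)).
m, d = 5, 2
vertices = [f"i{k}" for k in range(1, m + 1)] + ["j"]
form = euler_form(vertices, [(f"i{k}", "j") for k in range(1, m + 1)])

def dim_vec(dj):
    return {**{f"i{k}": 1 for k in range(1, m + 1)}, "j": dj}

asym = lambda a, b: form(a, b) - form(b, a)   # antisymmetrized form {a, b}
theta = lambda x: asym(dim_vec(d), x)         # anti-attractor stability {d, .}
assert theta(dim_vec(d)) == 0                 # theta vanishes on d itself

# 1 - <d, d> is invariant under d -> m - d, as Theorem 1.9 (1) predicts:
assert 1 - form(dim_vec(d), dim_vec(d)) == 1 - form(dim_vec(m - d), dim_vec(m - d))
```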
To have a good moduli of quiver representations, we need to introduce a stability condition. A _stability condition_ \[\theta=\sum_{i\in Q_{0}}\theta_{i}e_{i}^{*}\] is an element in the dual space \((\mathbb{Z}Q_{0})^{*}\), where \(\{e_{i}^{*}\}_{i\in Q_{0}}\) is the dual basis, that is, \(\theta(\underline{d})=\sum_{i\in Q_{0}}\theta_{i}d_{i}\). A stability condition \(\theta\) induces a _slope function_ \(\mu:\mathbb{N}Q_{0}\setminus\{0\}\to\mathbb{Q}\) given by \[\mu(\underline{d})=\frac{\theta(\underline{d})}{|\underline{d}|}\] where \(|\underline{d}|=\sum_{i\in Q_{0}}d_{i}\). A representation \(V\) is called \(\theta\)_-semistable_ (resp. \(\theta\)_-stable_) if \(\mu(U)\leq\mu(V)\) (resp. \(\mu(U)<\mu(V)\)) for all the nonzero proper subrepresentations \(U\) of \(V\). We use \(R_{\underline{d}}^{\theta-sst}\) (resp. \(R_{\underline{d}}^{\theta-st}\)) to denote the set of \(\theta\)-semistable (resp. \(\theta\)-stable) representations in \(R_{\underline{d}}\). The quotient \(M_{\underline{d}}^{\theta-sst}=R_{\underline{d}}^{\theta-sst}//G_{\underline{d}}\) parametrizes the closed orbits.2 The moduli space \(M_{\underline{d}}^{\theta-sst}\) contains a Zariski open subset \(M_{\underline{d}}^{\theta-st}=R_{\underline{d}}^{\theta-st}/G_{\underline{d}}\) which parametrizes \(\theta\)-stable representations in \(R_{\underline{d}}\). Both \(M_{\underline{d}}^{\theta-sst}\) and \(M_{\underline{d}}^{\theta-st}\) are irreducible with dimension \(1-\langle\underline{d},\underline{d}\rangle\) if \(M_{\underline{d}}^{\theta-st}\) is not empty. Footnote 2: The closed orbits are in one-to-one correspondence with the isomorphism classes of \(\theta\)-polystable representations. Without further mention, the stability condition in this paper is always assumed to be \(\theta(\cdot)=\{\underline{d},\cdot\}\) for a given dimension vector \(\underline{d}\). Using the terminology of [1, 1, 10], \(\theta=\{\underline{d},\cdot\}\) is the _anti-attractor_ stability condition.

### GW/quiver correspondence

In this subsection, we explain how Theorem 1.8 follows from the GW/quiver correspondence of [1]. The set-up of [1] is a smooth projective surface \(Y\) over \(\mathbb{C}\), with two smooth non-empty divisors \(D_{1}\) and \(D_{2}\) intersecting transversally, and such that the union \(D_{1}\cup D_{2}\) is anticanonical. We consider the following maximal contact relative Gromov-Witten invariants of the pair \((Y,D_{1})\): \[N_{g,\beta}^{Y/D_{1}}\coloneqq\int_{[\overline{\mathcal{M}}_{g,m}(Y/D_{1},\beta)]^{\operatorname{vir}}}\prod_{i=1}^{m}ev_{i}^{*}([pt])(-1)^{g}\lambda_{g}\] where \(\beta\) is a curve class on \(Y\) such that \(\beta\cdot D_{1}>0\) and \(\beta\cdot D_{2}>0\), \(m=\beta\cdot D_{2}\), and \(\lambda_{g}\) is the top Chern class of the Hodge bundle. The main result of [1] is the construction of a quiver \(Q_{\beta}^{Y/D_{1}}\) and of a dimension vector \(d(\beta)\) such that, if \(Q_{\beta}^{Y/D_{1}}\) is acyclic, we have by [1, Theorem 1.2] \[\Omega_{M_{\beta}}(q)=(-1)^{\beta\cdot D_{1}+1}\frac{1}{(2\sin(h/2))^{\beta\cdot D_{2}-1}}\sum_{g\geq 0}N_{g,\beta}^{Y/D_{1}}h^{2g-1+\beta\cdot D_{2}}\,, \tag{10}\] where \(M_{\beta}\) is the moduli space of \(\theta\)-semistable representations of \(Q_{\beta}^{Y/D_{1}}\) of dimension \(d(\beta)\), where \(\theta=\{d(\beta),-\}\), and \(q=e^{\sqrt{-1}h}\). One can take \((Y,D_{1})=(X,D)\) for the first two pairs in Theorem 1.8: see [1, 6.1] for \((\mathbb{P}^{2},\operatorname{line})\), and [1, 6.2] for \((\mathbb{P}^{2},\operatorname{conic})\).
For each case, the quiver \(Q_{\beta}^{Y/D_{1}}\) and the dimension vector \(d(\beta)\) are given as in the statement of Theorem 1.8. Using that \(\beta\cdot D_{2}=\beta\cdot(-K_{X}-D)=T_{\log}\cdot\beta\), one deduces that Theorem 1.8 for these pairs follows from (10). For \((F_{n},C_{n}+f)\), we use that this pair is deformation equivalent to the pair \((F_{n-2},C_{n-2}+2f)\), the class \(\beta=d_{1}C_{-n}+d_{2}f\) on \(F_{n}\) deforming to the class \(\beta^{\prime}=d_{1}C_{-(n-2)}+d_{2}f\) on \(F_{n-2}\). The pair \((Y,D_{1})=(F_{n-2},C_{n-2}+2f)\) is considered in [1, 6.10]. The quiver \(Q_{\beta^{\prime}}^{Y/D_{1}}\) and the dimension vector \(d(\beta^{\prime})\) given in [1, 6.10] are related to the quiver and dimension vector in Theorem 1.8 by mutation of the bottom right vertex. In general, one defines in [1] a quiver \(Q_{\beta^{\prime}}^{Y/D_{1}}\) for each choice of toric model of \((Y,D_{1}\cup D_{2})\), and mutations of quivers correspond to changes of toric models. Therefore, the quiver and dimension vector in Theorem 1.8 can be obtained directly from the construction of [1] for a different choice of toric model. One concludes that Theorem 1.8 for \((F_{n},C_{n}+f)\) also follows from (10).

**Remark 3.1**.: There are unfortunately two small errors in the statement of [1, Theorem 1.2] as written in the published version of [1]. First, the pre-factor containing \(2\sin(h/2)\) is not the right one. Second, the quiver DT invariants are taken with respect to a "maximally non-trivial stability condition" rather than with respect to the anti-attractor stability condition. Both errors are corrected in the most recent arXiv version of [1].

### Duality

In this subsection, we prove Theorem 1.9. First, let us give a brief introduction to reflection functors following [1, 2]. For a quiver \(Q=(Q_{0},Q_{1})\), we call a vertex \(i\in Q_{0}\) a _sink_ (resp. a _source_) if there are no arrows starting from \(i\) (resp. ending at \(i\)). Let \(i\) be a vertex. Then we use \(\sigma_{i}Q\) to denote the quiver obtained by reversing all arrows starting or ending at \(i\). We will define a pair of reflection functors \[R_{i}^{+},\,R_{i}^{-}:\operatorname{Rep}_{\mathbb{C}}Q\longrightarrow\operatorname{Rep}_{\mathbb{C}}\sigma_{i}Q\] corresponding to the sink and source cases respectively. Let \(V\), \(W\) be two representations of \(\operatorname{Rep}_{\mathbb{C}}Q\), and \(f:V\to W\) be a morphism between them. (I) Let us start with the case when \(i\) is a sink. The reflection functor \(R_{i}^{+}\) can be constructed as follows. We set \(R_{i}^{+}V=X\) such that \(X_{j}=V_{j}\) for \(j\neq i\) and \(X_{i}\) is given by the kernel of \[(V_{\alpha}):\bigoplus_{\alpha\in Q_{1}\atop t(\alpha)=i}V_{s(\alpha)}\longrightarrow V_{i}\] where \(s,t:\,Q_{1}\to Q_{0}\) are the two natural maps sending each arrow \(\alpha\) to its source \(s(\alpha)\) and its target \(t(\alpha)\). For an arrow \(\alpha\) such that \(t(\alpha)\neq i\), we set the linear morphism \(X_{\alpha}=V_{\alpha}\). Note that if \(t(\alpha)=i\), then we need to reverse the direction of the arrow \(\alpha\) to get the arrow \(\sigma_{i}\alpha\) in \(\sigma_{i}Q\). The corresponding linear morphism \(X_{\sigma_{i}\alpha}:X_{i}\to V_{s(\alpha)}=X_{t(\sigma_{i}\alpha)}\) is given by the composition of the natural inclusion and the projection: \[X_{i}\hookrightarrow\bigoplus_{\alpha\in Q_{1}\atop t(\alpha)=i}V_{s(\alpha)}\longrightarrow V_{s(\alpha)}.\] Similarly, we can construct \(R_{i}^{+}W=Y\).
For the morphism \(R_{i}^{+}f=g:X\to Y\), let \(g_{j}=f_{j}\) if \(j\neq i\) and \(g_{i}:X_{i}\to Y_{i}\) is defined to be the restriction of the linear morphism \[(f_{s(\alpha)}):\bigoplus_{\alpha\in Q_{1}\atop t(\alpha)=i}V_{s(\alpha)}\longrightarrow\bigoplus_{\alpha\in Q_{1}\atop t(\alpha)=i}W_{s(\alpha)}.\] (II) Next we consider the case when \(i\) is a source. The reflection functor \(R_{i}^{-}\) can be constructed as follows. Let \(R_{i}^{-}V=X^{\prime}\). Still we have \(X^{\prime}_{j}=V_{j}\) if \(j\neq i\). But \(X^{\prime}_{i}\) is now given by the cokernel of \[(V_{\alpha}):\,V_{i}\longrightarrow\bigoplus_{\alpha\in Q_{1}\atop s(\alpha)=i}V_{t(\alpha)}.\] For an arrow \(\alpha\) such that \(s(\alpha)\neq i\), the linear morphism \(X^{\prime}_{\alpha}=V_{\alpha}\). If \(s(\alpha)=i\), then \(\sigma_{i}\alpha\) has the opposite direction to \(\alpha\). The corresponding linear morphism \(X^{\prime}_{\sigma_{i}\alpha}:X^{\prime}_{s(\sigma_{i}\alpha)}=V_{t(\alpha)}\to X^{\prime}_{i}\) is given by the composition of the natural inclusion and the quotient map: \[V_{t(\alpha)}\hookrightarrow\bigoplus_{\alpha\in Q_{1}\atop s(\alpha)=i}V_{t(\alpha)}\longrightarrow X^{\prime}_{i}.\] Let \(R_{i}^{-}W=Y^{\prime}\). For the morphism \(R_{i}^{-}f=g^{\prime}:X^{\prime}\to Y^{\prime}\), let \(g^{\prime}_{j}=f_{j}\) if \(j\neq i\) and \(g^{\prime}_{i}:X^{\prime}_{i}\to Y^{\prime}_{i}\) is induced from \[(f_{t(\alpha)}):\,\bigoplus_{\alpha\in Q_{1}\atop s(\alpha)=i}V_{t(\alpha)}\longrightarrow\bigoplus_{\alpha\in Q_{1}\atop s(\alpha)=i}W_{t(\alpha)}.\] The reflection functors satisfy the following properties. Let \(E_{i}\) be the simple representation such that \((E_{i})_{i}=\mathbb{C}\) and \((E_{i})_{j}=0\) if \(j\neq i\).

**Theorem 3.2** ([1]).: _Let \(i\) be a sink (resp. a source) and \(V\) be an irreducible representation3. Then the reflection functor \(R_{i}^{+}\) has the following properties (if \(i\) is a source, then replace \(+\) by \(-\)):_ Footnote 3: A representation \(V\) is called irreducible if \(V\neq 0\) and \(V=V_{1}\oplus V_{2}\) implies that \(V_{1}=0\) or \(V_{2}=0\). 1. _If_ \(V=E_{i}\)_, then_ \(R_{i}^{+}V=0\)_;_ 2. _If_ \(V\neq E_{i}\)_, then_ \(R_{i}^{+}V\) _is irreducible and_ \(R_{i}^{-}R_{i}^{+}V\simeq V\)_. Moreover, the dimension vector_ \(\dim R_{i}^{+}V=r_{i}(\dim V)\)_, where_ \(r_{i}:\mathbb{Z}Q_{0}\to\mathbb{Z}Q_{0}\) _is given by_ \[r_{i}(x)=x-(x,e_{i})e_{i}.\] _Here_ \(e_{i}\) _is the_ \(i\)_th coordinate vector and_ \((\cdot,\cdot)\) _is the symmetrized Euler form given before._

We also need the following dual functor. Let \(Q^{op}\) be the opposite quiver obtained from \(Q\) by reversing the orientation of arrows. The vector space duality \(\operatorname{Hom}_{\mathbb{C}}(-,\mathbb{C})\) induces the natural dual functor: \[D:\,\operatorname{Rep}_{\mathbb{C}}Q\longrightarrow\operatorname{Rep}_{\mathbb{C}}Q^{op}\] which sends a quiver representation to its dual representation. For a \(\theta\)-semistable representation, the dual representation is \((-\theta)\)-semistable. So it naturally induces the following isomorphism \[D:\,M_{\underline{d}}^{\theta-sst}(Q)\simeq M_{\underline{d}}^{(-\theta)-sst}(Q^{op}).\] Now the isomorphisms appearing in Theorem 1.9 are induced from the reflection functors together with the dual functor above.
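On dimension vectors, the reflection \(r_{i}\) of Theorem 3.2 is a one-line computation. The following sketch applies it at the sink \(j\) of the quiver with one arrow \(i_{k}\to j\) for each \(k\), and recovers the substitution \(d\mapsto m-d\) underlying case (1) of Theorem 1.9; the vertex labels are hypothetical names.

```python
def reflect(vertices, arrows, i, x):
    # r_i(x) = x - (x, e_i) e_i, where (a, b) = <a, b> + <b, a> is the
    # symmetrized Euler form and <a, b> = sum_v a_v b_v - sum_{v->w} a_v b_w.
    euler = lambda a, b: (sum(a[v] * b[v] for v in vertices)
                          - sum(a[v] * b[w] for (v, w) in arrows))
    e = {v: int(v == i) for v in vertices}
    pairing = euler(x, e) + euler(e, x)
    return {v: x[v] - pairing * e[v] for v in vertices}

# The sink j of the quiver with one arrow i_k -> j for k = 1, ..., m:
m, d = 7, 3
vertices = [f"i{k}" for k in range(1, m + 1)] + ["j"]
arrows = [(f"i{k}", "j") for k in range(1, m + 1)]
x = {**{f"i{k}": 1 for k in range(1, m + 1)}, "j": d}

# Reflecting at j replaces d by m - d, matching Theorem 1.9 (1):
assert reflect(vertices, arrows, "j", x)["j"] == m - d
```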
Proof of Theorem 1.9.: Let us start with case (1): \(M_{m,d}^{L}\simeq M_{m,m-d}^{L}\) if \(m\), \(d\) are coprime and \(m-d>0\). The isomorphism was first established by Reineke and Weist in [11] by showing that both \(M_{m,d}^{L}\) and \(M_{m,m-d}^{L}\) are isomorphic to a third geometric quotient space. Let us first reinterpret their proof in terms of the reflection functors and the dual functor given above. We will then show that all the other cases of Theorem 1.9 can be deduced via a parallel argument. The quiver \(Q\) in this case is the first quiver of Section 1.4 (vertices \(i_{1},\dots,i_{m}\), each with a single arrow to \(j\)), with dimension vector \(\underline{d}=\sum_{k=1}^{m}e_{i_{k}}+de_{j}\). \(M_{m,d}^{L}\) is simply the moduli space \(M_{\underline{d}}^{\theta-sst}\) of \(\theta\)-semistable representations associated to the above quiver with fixed dimension vector \(\underline{d}\). Note that the stability condition \(\theta\) is always assumed to be \(\theta(\cdot)=\{\underline{d},\cdot\}\). It is then easy to check that \(M_{\underline{d}}^{\theta-sst}=M_{\underline{d}}^{\theta-st}\) since \(m,d\) are coprime. The vertex \(j\) is a sink, so we can apply the reflection functor \(R_{j}^{+}\). After further composition with the dual functor \(D\), \(D\circ R_{j}^{+}\) will map a \(\theta\)-stable representation (which is always irreducible) in \(M_{m,d}^{L}\) to a representation of the above quiver \(Q\) with dimension vector \(r_{j}(\underline{d})=\sum_{k=1}^{m}e_{i_{k}}+(m-d)e_{j}\). Actually, one can further check that \(D\circ R_{j}^{+}\) induces an isomorphism between \(M_{m,d}^{L}\) and \(M_{m,m-d}^{L}\) (with inverse \(R_{j}^{-}\circ D\)) which coincides with the one given by Reineke-Weist in their proof. The key point is to show that for a given representation \(V\) of \(R_{\underline{d}}(Q)\), \(V\) is \(\theta\)-stable if and only if \(D\circ R_{j}^{+}(V)\) is a \(\theta\)-stable representation of \(R_{r_{j}(\underline{d})}(Q)\), which follows from a simple linear algebra fact [11, Lemma 3.2]. Here, with a slight abuse of notation, we still use \(\theta\) to denote the corresponding stability condition for \(R_{r_{j}(\underline{d})}(Q)\), i.e., \(\theta(\cdot)=\{r_{j}(\underline{d}),\cdot\}\). This gives a reinterpretation of Reineke-Weist's proof for case (1) using the reflection functors and the dual functor. For case (2): \(M_{m,d}^{C}\simeq M_{m,2m-d}^{C}\) if \(m\), \(d\) are coprime and \(2m-d>0\), the corresponding quiver \(Q^{\prime}\) is the second quiver of Section 1.4 (vertices \(i_{1},\dots,i_{m}\), each with two arrows to \(j\)), with dimension vector \(\underline{d}=\sum_{k=1}^{m}e_{i_{k}}+de_{j}\). \(M_{m,d}^{C}\) is the moduli of \(\theta\)-stable representations with fixed dimension vector \(\underline{d}\). Then, similarly to case (1), one can show that \(D\circ R_{j}^{+}\) induces an isomorphism between \(M_{m,d}^{C}\) and \(M_{m,2m-d}^{C}\). The key point is still to show that for a given representation \(V\) of \(R_{\underline{d}}(Q^{\prime})\), \(V\) is \(\theta\)-stable if and only if \(D\circ R_{j}^{+}(V)\) is a \(\theta\)-stable representation of \(R_{r_{j}(\underline{d})}(Q^{\prime})\), which still follows from the linear algebra fact [11, Lemma 3.2]. Note that here \(r_{j}(\underline{d})=\sum_{k=1}^{m}e_{i_{k}}+(2m-d)e_{j}\). Cases (3) and (4) are similar: they correspond to quivers with dimension vectors always given by \(\underline{d}=\sum_{k=1}^{m}e_{i_{k}}+d_{1}e_{j_{1}}+d_{2}e_{j_{2}}\). Both of the isomorphisms are given by the functor \(D\circ R_{j_{1}}^{+}\circ R_{j_{2}}^{+}\). We omit the details because they are parallel to cases (1) and (2).
But we note that \[r_{j_{1}}\circ r_{j_{2}}(\underline{d})=\sum_{k=1}^{m}e_{i_{k}}+((n^{2}-1)d_{1}-nd_{2}+(n+1)m)e_{j_{1}}+(nd_{1}+m-d_{2})e_{j_{2}}\] and \(D\circ R_{j_{1}}^{+}\circ R_{j_{2}}^{+}\) maps representations associated to the above quiver to representations associated to a different quiver, which can be derived from the above quiver by simply changing all the arrow directions between the vertices \(j_{1}\) and \(j_{2}\).

### Other geometric properties

In this subsection, we prove Theorems 1.10 and 1.11. Before we give a proof of Theorem 1.10, we need some preparation. Let \(Q=(Q_{0},Q_{1})\) be a quiver and let \(\underline{n}\in\mathbb{N}Q_{0}\) be a vector of non-negative integers. According to [14, Section 2.3], the moduli space of \(\theta\)-semistable \(\underline{n}\)-framed representations of \(Q\) with dimension \(\underline{d}\) can be defined as follows: Let \(\widehat{Q}\) be the framed quiver of \(Q\), which is derived from \(Q\) by adding an additional vertex \(i_{0}\) and \(n_{i}\) arrows from \(i_{0}\) to \(i\in Q_{0}\). We extend the dimension vector \(\underline{d}\) to a dimension vector \(\underline{\hat{d}}\) of \(\widehat{Q}\) by adding the entry \(1\) at the vertex \(i_{0}\). Assume that \(\theta\) is normalized, i.e., \(\theta(\underline{d})=0\). We then also extend the stability \(\theta\) to a stability \(\hat{\theta}\) of \(\widehat{Q}\) by adding the entry \(1\) for the vertex \(i_{0}\). Then according to [13], the moduli space \(M_{\underline{d},\underline{n}}^{\theta,\mathrm{fr}}(Q)\) of \(\theta\)-semistable \(\underline{n}\)-framed representations of \(Q\) with dimension \(\underline{d}\) is isomorphic to the moduli space \(M_{\underline{\hat{d}}}^{\hat{\theta}-sst}(\widehat{Q})\) of \(\hat{\theta}\)-semistable representations of \(\widehat{Q}\) with dimension \(\underline{\hat{d}}\). We set \(\underline{n}=e_{j}\) for the quiver associated to \((\mathbb{P}^{2},\mathrm{line})\), set \(\underline{n}=2e_{j}\) for the quiver associated to \((\mathbb{P}^{2},\mathrm{conic})\), and set \(\underline{n}=e_{j_{1}}+e_{j_{2}}\) for the quivers associated to \((F_{n},C_{n}+f)\). We further use \(M_{m,d}^{L,\mathrm{fr}}\), \(M_{m,d}^{C,\mathrm{fr}}\), \(M_{m,d_{1},d_{2}}^{F_{n},\mathrm{fr}}\) to denote the corresponding \(\underline{n}\)-framed moduli spaces. Note that here the stability \(\theta\) is always assumed to be \(\theta(\cdot)=\{\underline{d},\cdot\}\) for a given dimension vector \(\underline{d}\). We are ready to prove Theorem 1.10.

Proof of Theorem 1.10.: Let us start with case (1): \(M_{m,d}^{L,fr}\simeq M_{m+1,d}^{L}\) if \(m\) divides \(d\). This case was first established by Reineke and Weist in [14] by comparing the stability conditions on both sides. Actually, one can easily check that both \(M^{L,fr}_{m,d}\) and \(M^{L}_{m+1,d}\) have the same underlying quiver and dimension vector \(\sum_{k=0}^{m}e_{i_{k}}+de_{j}\). Here we use \(i_{0},\cdots,i_{m}\) to denote the left \(m+1\) vertices of the corresponding quiver of \(M^{L}_{m+1,d}\). The stability for \(M^{L,fr}_{m,d}\) is given by \[\hat{\theta}=\sum_{k=0}^{m}e_{i_{k}}^{*}-\frac{m}{d}e_{j}^{*}\] while the stability for \(M^{L}_{m+1,d}\) is \[\theta^{\prime}=d\left(\sum_{k=0}^{m}e_{i_{k}}^{*}\right)-(m+1)e_{j}^{*}.\] Then we have \[\hat{\theta}=\frac{(m+d)}{d(m+1+d)}\theta^{\prime}+\frac{1}{m+1+d}\dim\] where \(\dim=\sum e_{i_{k}}^{*}+e_{j}^{*}\) is the linear function recording the total dimension. So \(M^{L,fr}_{m,d}\simeq M^{L}_{m+1,d}\) if \(m\) divides \(d\).
In a similar way, we can prove case (2): \(M^{C,fr}_{m,d}\simeq M^{C}_{m+1,d}\) if \(m\) divides \(d\); and case (3): \(M^{F_{0},fr}_{m,d_{1},d_{2}}\simeq M^{F_{0}}_{m+1,d_{1},d_{2}}\) if \(m\) divides \(d_{1}+d_{2}\). As for case (4): \(M^{F_{n},fr}_{(1-n)d_{1}+d_{2},d_{1},d_{2}}\simeq M^{F_{n}}_{(1-n)d_{1}+d_{2}+1,d_{1},d_{2}}\) with \(n>0\), we need a more careful analysis of the stability conditions on both sides. In this case, \(m=(1-n)d_{1}+d_{2}\geq 0\). First, it is easy to see that both sides have the same underlying quiver with dimension vector \[\sum_{k=0}^{m}e_{i_{k}}+d_{1}e_{j_{1}}+d_{2}e_{j_{2}}.\] The stability for \(M^{F_{n},fr}_{(1-n)d_{1}+d_{2},d_{1},d_{2}}\) is \[\hat{\theta}=\sum_{k=0}^{m}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\] while the stability for \(M^{F_{n}}_{(1-n)d_{1}+d_{2}+1,d_{1},d_{2}}\) is \[\theta^{\prime}=(d_{1}+d_{2})\left(\sum_{k=0}^{m}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\right)-(e_{j_{1}}^{*}+e_{j_{2}}^{*}).\] Let \(V\) be a \(\hat{\theta}\)-semistable representation. Then for all the nonzero proper subrepresentations \(U\subset V\), we have \[\frac{\hat{\theta}(\operatorname{\mathbf{dim}}U)}{\dim U}\leq\frac{\hat{\theta}(\operatorname{\mathbf{dim}}V)}{\dim V}=\frac{1}{\dim V}\] where we use \(\operatorname{\mathbf{dim}}U\), \(\operatorname{\mathbf{dim}}V\) to denote the corresponding dimension vectors and use \(\dim U\), \(\dim V\) to denote the corresponding total dimensions. Since \(\frac{\dim U}{\dim V}<1\) as \(U\) is a proper subrepresentation and \(\hat{\theta}(\operatorname{\mathbf{dim}}U)\) must be an integer, the above inequality is equivalent to \[\hat{\theta}(\operatorname{\mathbf{dim}}U)\leq 0.\] Next if \(V\) is a \(\theta^{\prime}\)-semistable representation, then we have \[\theta^{\prime}(\operatorname{\mathbf{dim}}U)=(d_{1}+d_{2})\hat{\theta}(\operatorname{\mathbf{dim}}U)-(e_{j_{1}}^{*}+e_{j_{2}}^{*})(\operatorname{\mathbf{dim}}U)\leq 0\] for all the nonzero proper subrepresentations \(U\subset V\). As \(U\) is proper and \(\hat{\theta}(\operatorname{\mathbf{dim}}U)\) is an integer, the above inequality is also equivalent to \[\hat{\theta}(\operatorname{\mathbf{dim}}U)\leq 0.\] We may conclude that \(V\) is \(\hat{\theta}\)-semistable if and only if it is \(\theta^{\prime}\)-semistable. Then case (4): \(M^{F_{n},fr}_{(1-n)d_{1}+d_{2},d_{1},d_{2}}\simeq M^{F_{n}}_{(1-n)d_{1}+d_{2}+1,d_{1},d_{2}}\) follows.

Next, we are going to prove Theorem 1.11. According to [17, Theorem 4.3], for a quiver \(Q\) with dimension vector \(\underline{d}\) and stabilities \(\theta\) and \(\theta^{\prime}\), if we assume that (I) the moduli \(M^{\theta-st}_{\underline{d}}(Q)\) of \(\theta\)-stable representations is not empty; (II) \(\theta(\underline{d})=0\) and \(\theta^{\prime}\) is a generic deformation of \(\theta\); (III) the restriction of the Euler form \(\langle\cdot,\cdot\rangle\) to \(\operatorname{Ker}\left(\theta\right)\) is symmetric; then there exists a natural small resolution \(p:M^{\theta^{\prime}-sst}_{\underline{d}}(Q)\to M^{\theta-sst}_{\underline{d}}(Q)\). We have the following criterion [17, Theorem 5.1] to check whether \(\theta^{\prime}\) is a generic deformation of \(\theta\). We further assume \(\underline{d}=(d_{i})_{i\in Q_{0}}\) to be indivisible, i.e., \(\gcd(d_{i}:i\in Q_{0})=1\). Then there exists a stability \(\eta\) such that \(\eta(\underline{d})=0\) and \(\eta(\underline{e})\neq 0\) for any \(\underline{e}\in\mathbb{N}Q_{0}\) with \(0\neq\underline{e}\lneq\underline{d}\) and \(\theta(\underline{e})=0\).
If there exists a constant \(C\in\mathbb{N}\) with \[C>\max(\max(\eta(\underline{e}):\underline{e}\leq\underline{d},\theta(\underline{e})<0),\max(-\eta(\underline{e}):\underline{e}\leq\underline{d},\theta(\underline{e})>0))\] such that \(\theta^{\prime}=C\theta+\eta\), then \(\theta^{\prime}\) is a generic deformation of \(\theta\). We are now ready to prove Theorem 1.11.

Proof of Theorem 1.11.: Recall that the stabilities \(\theta\) of \(M^{L}_{m,d}\), \(M^{C}_{m,d}\), \(M^{F_{0}}_{m,d_{1},d_{2}}\) and \(M^{F_{n}}_{(1-n)d_{1}+d_{2},d_{1},d_{2}}\) are always given by \(\{\underline{d},\cdot\}\), where \(\underline{d}\) is the corresponding dimension vector. It is then easy to check that the restriction of the Euler form \(\langle\cdot,\cdot\rangle\) to \(\operatorname{Ker}\left(\theta\right)\) is symmetric, i.e., condition (III) is always satisfied. It is also easy to check that condition (I) is always satisfied. This is due to the extra conditions we put on the different cases of Theorem 1.11, e.g., we require that \(d\) divides \(m\) and \(m>d\) for case (1). So we only need to check condition (II). For this, we will use the above criterion. Let us start with case (1). In this case, the stability condition \(\theta^{\prime}\) for \(M^{L,fr}_{m-1,d}\) is given by \[\theta^{\prime}=(r+1)d\hat{\theta}-\dim\] where we set \(r=\frac{m}{d}\) and \[\hat{\theta}=e^{*}_{i_{0}}+d\left(\sum_{k=1}^{m-1}e^{*}_{i_{k}}\right)-(rd-1)e^{*}_{j}.\] Note that, similarly to the proof of Theorem 1.10, we use \(i_{0},\cdots,i_{m-1}\) to denote the left \(m\) vertices of the corresponding quiver of \(M^{L,fr}_{m-1,d}\). The stability \(\theta\) for \(M^{L}_{m,d}\) can be explicitly written down: \[\theta=\sum_{k=0}^{m-1}e^{*}_{i_{k}}-re^{*}_{j}\] Then we have \[\theta^{\prime}=C\theta+\eta\] where \(C=(r+1)d^{2}-1\) and \[\eta=-(r+1)(d-1)(de^{*}_{i_{0}}-e^{*}_{j}).\] Note that at least one entry of \(\underline{d}=(d_{i})_{i\in Q_{0}}\) is \(1\). So \(\underline{d}\) is indivisible. It is also quite easy to verify that the conditions of the criterion are satisfied. So \(\theta^{\prime}\) is a generic deformation of \(\theta\). Note that the stability conditions \(\theta\) and \(\theta^{\prime}\) for the corresponding quiver in case (2) of Theorem 1.11 have the same expression as in case (1). So condition (II) is also satisfied for case (2). The expression of \(\theta\) and \(\theta^{\prime}\) for the corresponding quiver in case (3) is quite similar. Actually, we only need to replace \(d\) by \(d_{1}+d_{2}\) and \(e_{j}^{*}\) by \(e_{j_{1}}^{*}+e_{j_{2}}^{*}\) in the expressions of \(\theta\), \(\theta^{\prime}\) for case (1). So condition (II) is also satisfied for case (3) of Theorem 1.11. To verify condition (II) for case (4), we need a more careful analysis of the stability \(\theta^{\prime}\) for \(M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n},fr}\). First, by an easy computation, we know that the stability \(\tilde{\theta}\) for the unframed moduli \(M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n}}\) is \[\tilde{\theta}=(d_{1}+d_{2})\left(\sum_{k=1}^{m-1}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\right)+(e_{j_{1}}^{*}+e_{j_{2}}^{*})\] where we set \(m=(1-n)d_{1}+d_{2}\).
It is not hard to see that \(\tilde{\theta}\) is equivalent to the stability \[\sum_{k=1}^{m-1}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}.\] The latter can be further normalized as \[\bar{\theta}=(m+d_{1}+d_{2}-1)\left(\sum_{k=1}^{m-1}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\right)+\left(\sum_{k=1}^{m-1}e_{i_{k}}^{*}+e_{j_{1}}^{*}+e_{j_{2}}^{*}\right).\] So the stability for the framed moduli \(M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n},fr}\) is given by \[\hat{\theta}=e_{i_{0}}^{*}+\bar{\theta}\] which can be further normalized as \[\theta^{\prime}=(m+d_{1}+d_{2})\left(\sum_{k=1}^{m-1}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\right)+\dim\] where \(\dim=\sum_{k=0}^{m-1}e_{i_{k}}^{*}+e_{j_{1}}^{*}+e_{j_{2}}^{*}\). Recall that the stability \(\theta\) for \(M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n}}\) is given by \(\{\underline{d},\cdot\}\) which can be normalized as \[\theta=\sum_{k=0}^{m-1}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}.\] So we have \[\theta^{\prime}=C\theta+\eta\] where \(C=(m+d_{1}+d_{2})\) and \(\eta=\dim-(m+d_{1}+d_{2})e_{i_{0}}^{*}\). Using these explicit expressions for \(C\) and \(\eta\), it is easy to verify that \(\theta^{\prime}\) is a generic deformation of \(\theta\). ## 4. BPS invariants In this section, we prove Theorem 1.13 and Corollary 1.14. Proof of Theorem 1.13.: Combining Theorems 1.7 and 1.8, we know that for the following pairs \((X,D)\): \[(\mathbb{P}^{2},\text{line}),\quad(\mathbb{P}^{2},\text{conic}),\quad(F_{n},C_{n}+f),\,n\geq 0\] we have \[F_{\beta}^{\mathcal{O}_{X}(-D)}=\Omega_{M_{\beta}^{X/D}}(q)\frac{(2\sin(h/2))^{T_{\log}\cdot\beta-1}}{2\sin(\frac{(D\cdot\beta)h}{2})} \tag{11}\] if \(T_{\log}\cdot\beta>0\), where \(q=e^{ih}\). We recall that \(M_{\beta}^{X/D}\) can be determined from \((X,D)\) and \(\beta\) as follows: * if \((X,D)=(\mathbb{P}^{2},\text{line})\) and \(\beta=d[l]\), then \(M_{\beta}^{X/D}=M_{2d,d}^{L}\) where \([l]\) is the line class; * if \((X,D)=(\mathbb{P}^{2},\text{conic})\) and \(\beta=d[l]\), then \(M_{\beta}^{X/D}=M_{d,d}^{C}\); * if \((X,D)=(F_{n},C_{n}+f)\) and \(\beta=d_{1}C_{-n}+d_{2}f\), then \(M_{\beta}^{X/D}=M_{m,d_{1},d_{2}}^{F_{n}}\) where \(m=T_{\log}\cdot\beta=(1-n)d_{1}+d_{2}\). When the stable locus of \(M_{\beta}^{X/D}\) is nonempty, by Theorem 1.11 we know that there is a small resolution of \(M_{\beta}^{X/D}\) by a framed moduli. Such a framed moduli is a \(\mathbb{P}^{D\cdot\beta-1}\)-bundle over the smooth quiver moduli \(M_{\beta}^{\mathcal{O}_{X}(-D)}\). The latter can be determined from \((X,D)\) and \(\beta\) as follows: * if \((X,D)=(\mathbb{P}^{2},\text{line})\) and \(\beta=d[l]\), then \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{2d-1,d}^{L}\); * if \((X,D)=(\mathbb{P}^{2},\text{conic})\) and \(\beta=d[l]\), then \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{d-1,d}^{C}\); * if \((X,D)=(F_{n},C_{n}+f)\) and \(\beta=d_{1}C_{-n}+d_{2}f\), then \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{m-1,d_{1},d_{2}}^{F_{n}}\) where \(m=T_{\log}\cdot\beta=(1-n)d_{1}+d_{2}\). So by the Leray-Hirsch theorem, we have \[\Omega_{M_{\beta}^{X/D}}=P_{\mathbb{P}^{D\cdot\beta-1}}\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}} \tag{12}\] where \[P_{\mathbb{P}^{D\cdot\beta-1}}=(-1)^{D\cdot\beta-1}\frac{q^{\frac{D\cdot\beta}{2}}-q^{-\frac{D\cdot\beta}{2}}}{q^{1/2}-q^{-1/2}}=(-1)^{D\cdot\beta-1}\frac{2\sin(\frac{(D\cdot\beta)h}{2})}{2\sin(h/2)}\] is the shifted Poincaré polynomial of \(\mathbb{P}^{D\cdot\beta-1}\).
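As a quick sanity check (our own illustration, not part of the original argument), the closed formula for \(P_{\mathbb{P}^{D\cdot\beta-1}}\) can be verified symbolically: with the substitution \(t=q^{1/2}\) (to avoid fractional powers), the expression above equals \((-q^{1/2})^{-(D\cdot\beta-1)}(1+q+\cdots+q^{D\cdot\beta-1})\), i.e., the shift of the usual Poincaré polynomial. A minimal Python/sympy sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # t plays the role of q**(1/2)

for n in range(1, 8):  # n plays the role of D.beta
    lhs = (-1) ** (n - 1) * sp.cancel((t ** n - t ** (-n)) / (t - 1 / t))
    rhs = sp.expand((-t) ** (-(n - 1)) * sum(t ** (2 * i) for i in range(n)))
    assert sp.simplify(lhs - rhs) == 0  # the two expressions agree
```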
When the stable locus of \(M_{\beta}^{X/D}\) is empty, one can easily check that the stable locus of \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is empty as well. So equation (12) still holds. Combining (11) and (12), we have \[\frac{F_{\beta}^{\mathcal{O}_{X}(-D)}}{(2\sin(h/2))^{T_{\log}\cdot\beta-2}}=(-1)^{D\cdot\beta-1}\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}. \tag{13}\] By (6), the left-hand side equals \(\sum_{g}n_{g,\beta}^{\mathcal{O}_{X}(-D)}(2\sin(h/2))^{2g}\). So \[\sum_{g}n_{g,\beta}^{\mathcal{O}_{X}(-D)}(2\sin(h/2))^{2g}=(-1)^{D\cdot\beta-1}\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}. \tag{14}\] Finally, for those pairs \((X,D)=(F_{n},C_{n}+(s+1)f)\) with \(s\geq 1\) and \(\beta=d_{1}C_{-n}+d_{2}f\), we set \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{m-1,d_{1},d_{2}+sd_{1}}^{F_{n+2s}}\) with \(m=T_{\log}\cdot\beta=(1-n-s)d_{1}+d_{2}\). Then according to the deformation equivalence discussed in Section 1.3, we know that (14) still holds. Next, we will give a proof of Corollary 1.14 based on Theorem 1.13. Proof of Corollary 1.14.: As one can easily check that \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) only consists of stable representations, the condition that \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is not empty implies that \[\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q)=(-q^{1/2})^{-\dim_{\mathbb{C}}M_{\beta}^{\mathcal{O}_{X}(-D)}}\sum_{i}\dim H^{i}(M_{\beta}^{\mathcal{O}_{X}(-D)},\mathbb{Q})(-q^{1/2})^{i}.\] Note that in this case, \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is smooth. So the intersection cohomology of \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) coincides with the usual cohomology. Now Corollary 1.14 follows from the facts that \(\dim_{\mathbb{C}}M_{\beta}^{\mathcal{O}_{X}(-D)}=1-\langle\underline{d},\underline{d}\rangle\) where \(\underline{d}\) is the dimension vector of \(M_{\beta}^{\mathcal{O}_{X}(-D)}\), and that \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is connected, so \(\dim H^{0}(M_{\beta}^{\mathcal{O}_{X}(-D)},\mathbb{Q})=1\). ## 5. Proof of Theorem 1.1 We prove Theorem 1.1 in this section. First, by the deformation equivalence discussed in Section 1.3, we only need to consider the following pairs \((X,D)\): \[(\mathbb{P}^{2},\text{line}),\quad(\mathbb{P}^{2},\text{conic}),\quad(F_{n},C_{n}+f),\,n\geq 0.\] Next, let us translate the recursion formula (1) into a recursion formula of quiver DT-invariants by (13): \[\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}=\sum_{\begin{subarray}{c}\beta_{1}+\beta_{2}=\beta\\ \beta_{1},\beta_{2}>0\end{subarray}}\Omega_{M_{\beta_{1}}^{\mathcal{O}_{X}(-D)}}\Omega_{M_{\beta_{2}}^{\mathcal{O}_{X}(-D)}}\,(P_{\mathbb{P}^{D\cdot\beta_{1}-1}})^{2}\begin{pmatrix}T_{\log}\cdot\beta-3\\ T_{\log}\cdot\beta_{1}-1\end{pmatrix}. \tag{15}\] Here we recall that \[P_{\mathbb{P}^{D\cdot\beta_{1}-1}}=(-1)^{D\cdot\beta_{1}-1}\frac{q^{\frac{D\cdot\beta_{1}}{2}}-q^{-\frac{D\cdot\beta_{1}}{2}}}{q^{1/2}-q^{-1/2}}.\] When \((X,D)=(\mathbb{P}^{2},\text{line})\), the above recursion was first derived by Reineke-Weist [14, Theorem 1.2] using a formula relating DT-invariants of framed moduli to unframed ones, together with some geometric properties of the corresponding quiver moduli4. We will show in the proof below that their argument can be generalized to the cases \((\mathbb{P}^{2},\text{conic})\) and \((F_{n},C_{n}+f)\) as well. Footnote 4: These geometric properties are just case (1) appearing in Theorems 1.9, 1.10 and 1.11. Proof of Theorem 1.1.: Let \(Q\) be a quiver with stability \(\theta\).
We use \(\Lambda_{0}^{+}\) to denote the set of nonzero dimension vectors \(\underline{d}\) such that \(\theta(\underline{d})=0\). The key relation used in Reineke-Weist's proof can be described as follows: \[1+\sum_{\underline{d}\in\Lambda_{0}^{+}}\Omega_{M_{\underline{d},\underline{n}}^{\theta,\text{fr}}}(-1)^{\underline{n}\cdot\underline{d}}x^{\underline{d}}=\text{Exp}\left(\sum_{\underline{d}\in\Lambda_{0}^{+}}P_{\mathbb{P}^{\underline{n}\cdot\underline{d}-1}}\Omega_{M_{\underline{d}}^{\theta-sst}}(-1)^{\underline{n}\cdot\underline{d}}x^{\underline{d}}\right) \tag{16}\] where \(\text{Exp}(\cdot)\) is the plethystic exponential5. Note that we need a technical assumption to make the above equality hold: the restriction of the Euler form \(\langle\cdot,\cdot\rangle\) to \(\Lambda_{0}^{+}\) is symmetric. This assumption holds for all the cases we discuss below. As before, for a given dimension vector \(\underline{d}\), we always choose the stability \(\theta\) to be \(\{\underline{d},\cdot\}\). Case I: \((X,D)=(\mathbb{P}^{2},\text{line})\). We choose the quivers to be those corresponding to the pair \((\mathbb{P}^{2},\text{line})\), and set \(\underline{d}=\sum_{k=1}^{2d}e_{i_{k}}+de_{j}\), \(\underline{n}=e_{j}\). In this case, \(\theta=\sum_{k=1}^{2d}e_{i_{k}}^{*}-2e_{j}^{*}\). Then using the previous notation, we have \[M_{\underline{d},\underline{n}}^{\theta,\text{fr}}=M_{2d,d}^{L,\text{fr}},\quad M_{\underline{d}}^{\theta-sst}=M_{2d,d}^{L}.\] By Theorems 1.9, 1.10 and 1.11, we have \[M_{2d,d}^{L,\text{fr}}\simeq M_{2d+1,d}^{L}\simeq M_{2d+1,d+1}^{L},\quad\Omega_{M_{2d,d}^{L}}=P_{\mathbb{P}^{d-1}}\Omega_{M_{2d-1,d}^{L}}.\] We set \(z_{d}^{L}(q)=\Omega_{M_{2d-1,d}^{L}}(q)\). Then by (16), we may deduce that \[z_{d+1}^{L}=\sum_{a_{1}+2a_{2}+\cdots+da_{d}=d\atop a_{i}\geq 0}\frac{(2d)!}{\prod_{k=1}^{d}\big{(}(2k)!\big{)}^{a_{k}}(a_{k})!}\prod_{k=1}^{d}\left(\big{(}P_{\mathbb{P}^{k-1}}\big{)}^{2}z_{k}^{L}\right)^{a_{k}}\] which is equivalent to \[\frac{z_{d+1}^{L}}{(2d)!}=\sum_{a_{1}+2a_{2}+\cdots+da_{d}=d\atop a_{i}\geq 0}\prod_{k=1}^{d}\frac{1}{(a_{k})!}\left(\frac{\big{(}P_{\mathbb{P}^{k-1}}\big{)}^{2}z_{k}^{L}}{(2k)!}\right)^{a_{k}}.\] So by summing over \(d\), we have \[1+\sum_{d>0}\frac{z_{d+1}^{L}}{(2d)!}x^{d}=\exp\left(\sum_{k>0}\frac{\big{(}P_{\mathbb{P}^{k-1}}\big{)}^{2}z_{k}^{L}}{(2k)!}x^{k}\right).\] By further taking a derivative \(2x\frac{d}{dx}\) on both sides, we have \[\sum_{d>0}\frac{z_{d+1}^{L}}{(2d-1)!}x^{d}=\left(\sum_{k>0}\frac{\big{(}P_{\mathbb{P}^{k-1}}\big{)}^{2}z_{k}^{L}}{(2k-1)!}x^{k}\right)\left(\sum_{d\geq 0}\frac{z_{d+1}^{L}}{(2d)!}x^{d}\right).\] Here we have used the fact that \(z_{1}^{L}=1\) which can be deduced from the fact that \(M_{1,1}^{L}\) is a point. So by taking the coefficients of \(x^{d-1}\) on both sides, we get the recursion \[z_{d}^{L}=\sum_{d_{1}+d_{2}=d\atop d_{1},d_{2}>0}z_{d_{1}}^{L}z_{d_{2}}^{L}\big{(}P_{\mathbb{P}^{d_{1}-1}}\big{)}^{2}\binom{2d-3}{2d_{1}-1}.\] This is exactly equation (15) for the pair \((\mathbb{P}^{2},\text{line})\) by noting that \(z_{d}^{L}=\Omega_{M_{d[l]}^{\mathcal{O}_{\mathbb{P}^{2}}(-1)}}\). Case II: \((X,D)=(\mathbb{P}^{2},\text{conic})\). In this case, the quivers correspond to those of the pair \((\mathbb{P}^{2},\text{conic})\), and we set \(\underline{d}=\sum_{k=1}^{d}e_{i_{k}}+de_{j}\), \(\underline{n}=e_{j}\). In this case, \(\theta=\sum_{k=1}^{d}e_{i_{k}}^{*}-e_{j}^{*}\).
Then using case (2) of Theorems 1.9, 1.10 and 1.11, we have \[M_{d,d}^{C,fr}\simeq M_{d+1,d+2}^{C},\quad\Omega_{M_{d,d}^{C}}=P_{\mathbb{P}^{2d-1}}\Omega_{M_{d-1,d}^{C}}.\] In this case, we may deduce from (16) that \[1+\sum_{d>0}\frac{z_{d+2}^{C}}{d!}x^{d}=\exp\left(\sum_{k>0}\frac{\big{(}P_{\mathbb{P}^{2k-1}}\big{)}^{2}z_{k}^{C}}{k!}x^{k}\right)\] with \(z_{d}^{C}=\Omega_{M_{d-1,d}^{C}}\). By taking a derivative \(x\frac{d}{dx}\) on both sides, we get the recursion \[z_{d}^{C}=\sum_{\begin{subarray}{c}d_{1}+d_{2}=d\\ d_{1},d_{2}>0\end{subarray}}z_{d_{1}}^{C}z_{d_{2}}^{C}\big{(}P_{\mathbb{P}^{2d_{1}-1}}\big{)}^{2}\binom{d-3}{d_{1}-1}\] which is exactly equation (15) for the pair \((\mathbb{P}^{2},\text{conic})\). Here we have used the fact that \(z_{2}^{C}=1\), which can be deduced similarly as in Case I. Case III: \((X,D)=(F_{n},C_{n}+f)\), \(n\geq 0\). We then choose the quivers to be those corresponding to the pair \((F_{n},C_{n}+f)\). For the curve class \(\beta=d_{1}C_{-n}+d_{2}f\) with \(T_{\log}\cdot\beta=(1-n)d_{1}+d_{2}\geq 0\), we then set the dimension vector \(\underline{d}=\sum_{k=1}^{(1-n)d_{1}+d_{2}}e_{i_{k}}+d_{1}e_{j_{1}}+d_{2}e_{j_{2}}\) and \(\underline{n}=e_{j_{1}}+e_{j_{2}}\). The stability is \(\theta=\sum_{k=1}^{(1-n)d_{1}+d_{2}}e_{i_{k}}^{*}+(n-1)e_{j_{1}}^{*}-e_{j_{2}}^{*}\). In this case, we have \[M_{\underline{d},\underline{n}}^{\theta,\text{fr}}=M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n},\text{fr}},\quad M_{\underline{d}}^{\theta-sst}=M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n}}.\] Then by cases (3), (4) of Theorems 1.9, 1.10 and 1.11, we have \[M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n},\text{fr}}\simeq M_{(1-n)d_{1}+d_{2}+1,d_{1}+1,d_{2}+n+1}^{F_{n}},\,\Omega_{M_{(1-n)d_{1}+d_{2},d_{1},d_{2}}^{F_{n}}}=P_{\mathbb{P}^{d_{1}+d_{2}-1}}\Omega_{M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n}}}.\] After plugging them into equation (16) and setting \(z_{d_{1},d_{2}}^{F_{n}}=\Omega_{M_{(1-n)d_{1}+d_{2}-1,d_{1},d_{2}}^{F_{n}}}\), we may deduce that \[1+\sum_{\begin{subarray}{c}(1-n)d_{1}+d_{2}\geq 0\\ d_{1}+d_{2}>0\end{subarray}}\frac{z_{d_{1}+1,d_{2}+n+1}^{F_{n}}}{\big{(}(1-n)d_{1}+d_{2}\big{)}!}x_{1}^{d_{1}}x_{2}^{d_{2}}=G\exp\left(\sum_{(1-n)k_{1}+k_{2}>0}\frac{\big{(}P_{\mathbb{P}^{k_{1}+k_{2}-1}}\big{)}^{2}z_{k_{1},k_{2}}^{F_{n}}}{\big{(}(1-n)k_{1}+k_{2}\big{)}!}x_{1}^{k_{1}}x_{2}^{k_{2}}\right)\] where \[G=1+\sum_{\begin{subarray}{c}(1-n)d_{1}+d_{2}=0\\ d_{1}+d_{2}>0\end{subarray}}z_{d_{1}+1,d_{2}+n+1}^{F_{n}}x_{1}^{d_{1}}x_{2}^{d_{2}}.\] After taking a derivative \((1-n)x_{1}\frac{\partial}{\partial x_{1}}+x_{2}\frac{\partial}{\partial x_{2}}\) on both sides and noting that \[\left((1-n)x_{1}\frac{\partial}{\partial x_{1}}+x_{2}\frac{\partial}{\partial x_{2}}\right)G=0,\] we may deduce that \[z_{d_{1},d_{2}}^{F_{n}}=\sum_{\begin{subarray}{c}k_{1}+k_{1}^{\prime}=d_{1}\\ k_{2}+k_{2}^{\prime}=d_{2}\end{subarray}}z_{k_{1},k_{2}}^{F_{n}}z_{k_{1}^{\prime},k_{2}^{\prime}}^{F_{n}}\big{(}P_{\mathbb{P}^{k_{1}+k_{2}-1}}\big{)}^{2}\binom{(1-n)d_{1}+d_{2}-3}{(1-n)k_{1}+k_{2}-1}.\] Here we have used the fact that \(z_{1,n+1}^{F_{n}}=1\). This is exactly equation (15) for the pair \((F_{n},C_{n}+f)\). ## 6. Numerical results In this section, we list the first few \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) and \(n_{g,\beta}^{\mathcal{O}_{X}(-D)}\) for the following five pairs \((X,D)\): \[(\mathbb{P}^{2},\text{line}),\quad(\mathbb{P}^{2},\text{conic}),\quad(F_{0},C_{0}+f),\quad(F_{1},C_{1}+f),\quad(F_{2},C_{2}+f)\] based on the recursion (1).
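To illustrate how such tables can be generated in practice, the following minimal Python/sympy sketch (our own illustration; all variable names are ours) iterates the Case I recursion for \(z_{d}^{L}=\Omega_{M_{2d-1,d}^{L}}\) derived in Section 5, starting from the seed \(z_{1}^{L}=1\) and writing \(t=q^{1/2}\); the analogous loops for the other pairs only change the kernel \(P\) and the binomial coefficient, as in (15):

```python
import sympy as sp
from math import comb

t = sp.symbols('t', positive=True)  # t = q**(1/2)

def P(n):
    """Shifted Poincare polynomial of P^(n-1), as in Section 4."""
    return (-1) ** (n - 1) * sp.cancel((t ** n - t ** (-n)) / (t - 1 / t))

z = {1: sp.Integer(1)}  # z_1^L = 1 since M_{1,1}^L is a point
for d in range(2, 6):   # each z[d] is a Laurent polynomial in t = q**(1/2)
    z[d] = sp.expand(sum(z[d1] * z[d - d1] * P(d1) ** 2 * comb(2 * d - 3, 2 * d1 - 1)
                         for d1 in range(1, d)))
```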
The initial data can be determined from the quiver side (combining (5) and (6)). Case I: \((X,D)=(\mathbb{P}^{2},\text{line})\). In this case, curve classes \(\beta\) of \(\mathbb{P}^{2}\) are indexed by integers \(d\in\mathbb{N}\). We only need to compute those \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) such that \(0<T_{\log}\cdot\beta=2d<3\), i.e., \(d=1\). When \(d=1\), \(F_{d}^{\mathcal{O}_{\mathbb{P}^{2}}(-1)}=1\) can be deduced from (5), (6) and the fact that \(M_{\beta}^{\mathcal{O}_{X}(-D)}=M_{1,1}^{L}\) is a point. We have the following table for the generating series of Gromov-Witten invariants of \(\mathcal{O}_{\mathbb{P}^{2}}(-1)\) up to degree \(5\): The BPS invariants \(n_{g,d}^{\mathcal{O}_{\mathbb{P}^{2}}(-1)}\) are determined from the following formula \[F_{d}^{\mathcal{O}_{\mathbb{P}^{2}}(-1)}=\sum_{g\geq 0}n_{g,d}^{\mathcal{O}_{ \mathbb{P}^{2}}(-1)}(2\sin(h/2))^{2g-2+2d}\,,\] where \(q=e^{\sqrt{-1}h}\). Case II: \((X,D)=(\mathbb{P}^{2},\mathrm{conic})\). Curve classes \(\beta\) are again indexed by integers \(d\in\mathbb{N}\). In this case, we need to compute those \(F_{d}^{\mathcal{O}_{\mathbb{P}^{2}}(-2)}\) such that \(d=1,2\) which can be similarly determined from the quiver side. \begin{table} \begin{tabular}{|l|c|} \hline \(d\) & \(F_{d}^{\mathcal{O}_{\mathbb{P}^{2}}(-2)}\) \\ \hline 1 & \(-(-q)^{1/2}/(q-1)\) \\ \hline 2 & \(-1\) \\ \hline 3 & \((q-1)(q^{2}+2q+1)/(-q)^{3/2}\) \\ \hline 4 & \((q-1)^{2}(q^{6}+3q^{5}+7q^{4}+10q^{3}+7q^{2}+3q+1)/q^{4}\) \\ \hline 5 & \(-(q-1)^{3}(q^{12}+4q^{11}+11q^{10}+25q^{9}+46q^{8}+71q^{7}\) \\ & \(+84q^{6}+71q^{5}+46q^{4}+25q^{3}+11q^{2}+4q+1)/(-q)^{15/2}\) \\ \hline \end{tabular} \end{table} Table 3. GW-invariants of \(\mathcal{O}_{\mathbb{P}^{2}}(-2)\) The BPS invariants \(n^{\mathcal{O}_{\mathbb{P}^{2}}(-2)}_{g,d}\) are determined from the following formula \[F^{\mathcal{O}_{\mathbb{P}^{2}}(-2)}_{d}=\sum_{g\geq 0}n^{\mathcal{O}_{\mathbb{P}^{2 }}(-2)}_{g,d}(2\sin(h/2))^{2g-2+d}\,.\] Case III: \((X,D)=(F_{0},C_{0}+f)\). In this case, \(F_{0}\) is just \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and each curve class \(\beta\) can be uniquely written as \(d_{1}C_{0}+d_{2}f\) where \(d_{1},d_{2}\in\mathbb{N}\). We denote \(F^{\mathcal{O}_{X}(-D)}_{\beta}\) by \(F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}\). By symmetry, we have \(F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}=F^{ \mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{2},d_{1}}\), and so we only need to determine those pairs \((d_{1},d_{2})\) such that \(d_{2}\geq d_{1}\). For the initial data, we need to determine those \(F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}\) such that \(0<T_{\log}\cdot\beta=d_{1}+d_{2}<3\). All of these \(F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}\) can be determined from the quiver side as the corresponding quiver moduli space \(M^{\mathcal{O}_{X}(-D)}_{\beta}\) is either a point or empty. When \(d_{1}=0\) and \(d_{2}\geq 2\), we have \(F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}=0\). This can be deduced either from the quiver side, because the corresponding quiver moduli space is empty, or from the Gromov-Witten side, because in this case curves are mapped to fibers of \(\mathbb{P}^{1}\times\mathbb{P}^{1}\) and so cannot pass through \(d_{2}\geq 2\) points in general position. 
When \(d_{1}=1\), one deduces from the recursion that \[F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{1,d_{2}}=-\left(\frac{1-q}{(-q)^{1/2}}\right)^{d_{2}-1}\,.\] The BPS invariants \(n^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{g,(d_{1},d_{2})}\) are determined from the following formula \[F^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{d_{1},d_{2}}=\sum_{g\geq 0}n^{\mathcal{O}_{\mathbb{P}^{1}\times\mathbb{P}^{1}}(-1,-1)}_{g,(d_{1},d_{2})}(2\sin(h/2))^{2g-2+d_{1}+d_{2}}\,.\] Case IV: \((X,D)=(F_{1},C_{1}+f)\). Each effective curve class \(\beta\) can be written as \(d_{1}C_{-1}+d_{2}f\) where \(d_{1},d_{2}\in\mathbb{N}\). So the generating series \(F^{\mathcal{O}_{X}(-D)}_{\beta}\) can be written as \(F^{\mathcal{O}_{F_{1}}(-C_{1}-f)}_{d_{1},d_{2}}\). The quiver corresponding to \(M^{X/D}_{\beta}\) can be described as follows. If \(\beta=d_{1}C_{-1}+d_{2}f\), then the number \(m\) of the vertices \(i_{k}\) on the left-hand side equals \(T_{\log}\cdot\beta=d_{2}\). The dimensions associated to the vertices \(j_{1},j_{2}\) are \(d_{1},d_{2}\) respectively. And the stability condition is \[\theta=\sum_{k}e_{i_{k}}^{*}-e_{j_{2}}^{*}.\] So if \(d_{1}>d_{2}\), then for each quiver representation \((V_{i},f_{i})_{i\in Q_{0}}\), we can always find a proper subspace \(W_{j_{1}}\) of \(V_{j_{1}}\) containing the images of the \(d_{2}\) one-dimensional spaces \(V_{i_{k}}\). Then we naturally get a nonzero proper subrepresentation of \((V_{i},f_{i})_{i\in Q_{0}}\) on which \(\theta\) still evaluates to \(0\), because \(\theta(e_{j_{1}})=0\). This implies that the moduli of \(\theta\)-stable representations is empty. Then the DT-invariants associated to this quiver are just zero. Combining with Theorems 1.7 and 1.8, we may deduce that \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}=0\) if \(d_{1}>d_{2}\). So we only need to determine those pairs \((d_{1},d_{2})\) such that \(d_{1}\leq d_{2}\). The initial data for \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}\) are \[(d_{1},d_{2})=(0,1),\,(0,2),\,(1,1),\,(1,2),\,(2,2).\] Note that the case \((0,2)\) can be ruled out similarly to Case III. For the remaining pairs, all of those \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}\) can be determined from the quiver side since the corresponding quiver moduli \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is always a point. We have the following table for \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}\): The BPS invariants \(n_{g,(d_{1},d_{2})}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}\) can be determined from the following formula \[F_{d_{1},d_{2}}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}=\sum_{g\geq 0}n_{g,(d_{1},d_{2})}^{\mathcal{O}_{F_{1}}(-C_{1}-f)}(2\sin(h/2))^{2g-2+d_{2}}.\] Case V: \((X,D)=(F_{2},C_{2}+f)\). Again each curve class \(\beta\) can be written as \(d_{1}C_{-2}+d_{2}f\), and the corresponding \(F_{\beta}^{\mathcal{O}_{X}(-D)}\) can be written as \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{2}}(-C_{2}-f)}\). We only consider those \(\beta\) such that \(T_{\log}\cdot\beta=d_{2}-d_{1}>0\). The initial data in this case are those \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{2}}(-C_{2}-f)}\) such that \(0<d_{2}-d_{1}<3\). If \(d_{2}-d_{1}=1\), then \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is the moduli associated to the 2-Kronecker quiver whose vertices \(j_{1}\) and \(j_{2}\) are decorated with dimensions \(d_{1}\) and \(d_{1}+1\) respectively. This is known to be a point.
Combining (5) and (6), we always have \[F_{d,d+1}^{\mathcal{O}_{F_{2}}(-C_{2}-f)}=(-q)^{1/2}/(q-1).\] If \(d_{2}-d_{1}=2\), then \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is the moduli associated to the following quiver, whose vertices \(i_{1}\), \(j_{1}\), \(j_{2}\) are decorated with dimensions \(1\), \(d_{1}\), \(d_{1}+2\) respectively. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline \((d_{1},d_{2})\backslash g\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) \\ \hline \((0,1)\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((1,1)\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((1,2)\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,2)\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,3)\) & \(5\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,4)\) & \(-18\) & \(8\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,5)\) & \(56\) & \(-41\) & \(11\) & \(-1\) & \(0\) & \(0\) & \(0\) \\ \hline \((3,3)\) & \(-4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((3,4)\) & \(49\) & \(-36\) & \(10\) & \(-1\) & \(0\) & \(0\) & \(0\) \\ \hline \((3,5)\) & \(-384\) & \(499\) & \(-293\) & \(92\) & \(-15\) & \(1\) & \(0\) \\ \hline \((4,4)\) & \(-32\) & \(28\) & \(-9\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \hline \((4,5)\) & \(729\) & \(-1250\) & \(1003\) & \(-456\) & \(120\) & \(-17\) & \(1\) \\ \hline \((5,5)\) & \(-400\) & \(792\) & \(-721\) & \(365\) & \(-105\) & \(16\) & \(-1\) \\ \hline \end{tabular} \end{table} Table 8. BPS invariants for \(\mathcal{O}_{F_{1}}(-C_{1}-f)\) We can assume that \(d_{1}>0\). Otherwise, \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is empty. Then by Theorem 1.9, \(M_{\beta}^{\mathcal{O}_{X}(-D)}\) is isomorphic to the moduli associated to another quiver; the latter is a framed quiver moduli by Theorem 1.10. So the quiver DT invariants \(\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q)\) can be computed via (16) above. More precisely, we take the quiver \(Q\) to be the \(2\)-Kronecker quiver with vertices \(j_{1}\), \(j_{2}\) and set \(\underline{n}=e_{j_{1}}+e_{j_{2}}\) and \(\theta=e_{j_{1}}^{*}-e_{j_{2}}^{*}\) in (16). Then \(\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q)\) in this case is simply the coefficient of \(x^{\underline{d}}\) with \(\underline{d}=(d_{1}-1)\underline{n}\). Combining with the facts that \(M_{e_{j_{1}}+e_{j_{2}}}^{\theta-sst}=\mathbb{P}^{1}\) and \(M_{d(e_{j_{1}}+e_{j_{2}})}^{\theta-st}=\emptyset\) for \(d>1\), we may deduce that \[\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q)=\left[\frac{1}{(1-qx)(1-q^{-1}x)(1-2x)}\right]_{x^{d_{1}-1}}\] where \([\cdot]_{x^{d_{1}-1}}\) extracts the coefficient of \(x^{d_{1}-1}\). We have now computed \(\Omega_{M_{\beta}^{\mathcal{O}_{X}(-D)}}(q)\) for those \(\beta\) such that \(0<T_{\log}\cdot\beta<3\). Combining (5) and (6), we can determine all the initial \(F_{\beta}^{\mathcal{O}_{X}(-D)}\).
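This coefficient extraction is immediate to carry out; a minimal Python/sympy sketch (our own illustration) reads:

```python
import sympy as sp

q, x = sp.symbols('q x')

def omega(d1):
    """Coefficient of x**(d1 - 1) in 1/((1 - q*x)(1 - x/q)(1 - 2*x))."""
    expr = 1 / ((1 - q * x) * (1 - x / q) * (1 - 2 * x))
    return sp.simplify(sp.series(expr, x, 0, d1).removeO().coeff(x, d1 - 1))

print(omega(1))  # 1
print(omega(2))  # 2 + q + 1/q
```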
Here is the table for \(F_{d_{1},d_{2}}^{\mathcal{O}_{F_{2}}(-C_{2}-f)}\): The BPS invariants \(n^{\mathcal{O}_{F_{2}}(-C_{2}-f)}_{g,(d_{1},d_{2})}\) can be determined from the following formula \[F^{\mathcal{O}_{F_{2}}(-C_{2}-f)}_{d_{1},d_{2}}=\sum_{g\geq 0}n^{\mathcal{O}_{F_{2}}(-C_{2}-f)}_{g,(d_{1},d_{2})}(2\sin(h/2))^{2g-2+d_{2}-d_{1}}.\] In general, in order to determine the Gromov-Witten invariants of \(\mathcal{O}_{F_{n}}(-C_{n}-f)\) with \(n>2\), we need to determine all the quiver DT invariants for the following two types of quivers, where there are \(n\) arrows between the vertices \(j_{1}\) and \(j_{2}\) in both quivers and the dimensions associated to the different vertices are as indicated in the picture. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \((d_{1},d_{2})\backslash g\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) \\ \hline \((d,d+1)\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((1,3)\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,4)\) & \(-4\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,5)\) & \(13\) & \(-7\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,6)\) & \(-38\) & \(33\) & \(-10\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \hline \((2,7)\) & \(104\) & \(-129\) & \(62\) & \(-13\) & \(1\) & \(0\) & \(0\) \\ \hline \((3,5)\) & \(-11\) & \(6\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \((3,6)\) & \(72\) & \(-89\) & \(46\) & \(-11\) & \(1\) & \(0\) & \(0\) \\ \hline \((3,7)\) & \(-422\) & \(832\) & \(-750\) & \(374\) & \(-106\) & \(16\) & \(-1\) \\ \hline \((4,6)\) & \(-26\) & \(22\) & \(-8\) & \(1\) & \(0\) & \(0\) & \(0\) \\ \hline \((4,7)\) & \(274\) & \(-563\) & \(548\) & \(-298\) & \(92\) & \(-15\) & \(1\) \\ \hline \((5,7)\) & \(-57\) & \(64\) & \(-37\) & \(10\) & \(-1\) & \(0\) & \(0\) \\ \hline \end{tabular} \end{table} Table 10. BPS invariants for \(\mathcal{O}_{F_{2}}(-C_{2}-f)\) As far as we know, the quiver DT invariants for the \(n\)-Kronecker quiver with \(n>2\) are quite hard to compute. We do not know how to completely solve the quiver DT invariants for the above two types of quivers. ## Appendix A A Counterexample In this appendix, we give a counterexample to the genus-zero recursion (20) when \(D\) is only assumed to be nef. This shows in particular that Theorem 1.1 may fail once we relax the ampleness condition on \(D\) to nefness. Let \(X\) be the Hirzebruch surface \(F_{1}=\mathbb{P}(\mathcal{O}(1)\oplus\mathcal{O})\) and \(D\) be a fiber of \(F_{1}\). We have the following recursion if \(T_{\log}\cdot\beta\geq 3\): \[N_{0,\beta}^{\mathcal{O}_{X}(-D)}=-\sum_{\begin{subarray}{c}\beta_{1}+\beta_{2}=\beta\\ \beta_{1},\beta_{2}>0\end{subarray}}N_{0,\beta_{1}}^{\mathcal{O}_{X}(-D)}N_{0,\beta_{2}}^{\mathcal{O}_{X}(-D)}\left(D\cdot\beta_{1}\right)^{2}\begin{pmatrix}T_{\log}\cdot\beta-3\\ T_{\log}\cdot\beta_{1}-1\end{pmatrix}+(D\cdot\beta)^{2}N_{0,\beta-f}^{\mathcal{O}_{X}(-D)}\] where \(f\) is the fiber class and \(\beta\) is not a multiple of the fiber class \(f\). The above recursion can be deduced similarly as in Appendix B.
First, using the WDVV equation, we can deduce the following recursion for the genus-zero relative invariants of the above pair \((X,D)\) as in [11]: \[\frac{N_{0,\beta}^{X/D}}{D\cdot\beta}=\sum_{\begin{subarray}{c}\beta_{1}+\beta_{2}=\beta\\ \beta_{1},\beta_{2}>0\end{subarray}}\frac{N_{0,\beta_{1}}^{X/D}}{D\cdot\beta_{1}}\frac{N_{0,\beta_{2}}^{X/D}}{D\cdot\beta_{2}}\left(D\cdot\beta_{1}\right)^{2}\begin{pmatrix}T_{\log}\cdot\beta-3\\ T_{\log}\cdot\beta_{1}-1\end{pmatrix}+(D\cdot\beta)N_{0,\beta-f}^{X/D} \tag{17}\] where \(T_{\log}\cdot\beta\geq 3\). The appearance of the additional term \((D\cdot\beta)N_{0,\beta-f}^{X/D}\) can be illustrated via the following example. Let \(C_{1}\) be the section of \(F_{1}\) with self-intersection number \(1\). Then the above recursion implies that \[N_{0,C_{1}+nf}^{X/D}=N_{0,C_{1}}^{X/D},\,\forall n>0. \tag{18}\] Actually, in this case the contact order equals \(D\cdot(C_{1}+nf)=1\), and so the relative invariant \(N_{0,C_{1}+nf}^{X/D}\) equals the corresponding absolute invariant \(N_{0,C_{1}+nf}^{X}\) which counts rational curves in \(X\) passing through \(2n+2\) generic points. By [1, Example 8.1], we know that \(N_{0,C_{1}+nf}^{X}=1\) for all \(n\geq 0\). This confirms the identity (18). In order to deduce the recursion for \(N_{0,\beta}^{\mathcal{O}_{X}(-D)}\), we again need the following local/relative correspondence \[N_{0,\beta}^{X/D}=(-1)^{D\cdot\beta-1}(D\cdot\beta)N_{0,\beta}^{\mathcal{O}_{X}(-D)}. \tag{19}\] According to [1], the above equality holds with the extra condition that \(D\cdot\beta>0\). But it is easy to compute that \(N_{0,\beta}^{X/D}=0\) if \(\beta\) is some multiple of the fiber class \(f\). So in the recursion (17), we can always assume that \(D\cdot\beta_{i}>0\), \(i=1,2\). The recursion for \(N_{0,\beta}^{\mathcal{O}_{X}(-D)}\) then follows from (17) and (19). ## Appendix B Genus \(0\) recursion In this section, we give a direct proof in Gromov-Witten theory of the recursion formula for genus-zero local invariants \(N_{0,\beta}^{\mathcal{O}_{X}(-D)}\): \[N_{0,\beta}^{\mathcal{O}_{X}(-D)}=-\sum_{\beta_{1}+\beta_{2}=\beta\atop\beta_{1},\beta_{2}>0}N_{0,\beta_{1}}^{\mathcal{O}_{X}(-D)}N_{0,\beta_{2}}^{\mathcal{O}_{X}(-D)}\,(D\cdot\beta_{1})^{2}\,\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}. \tag{20}\] Let \(N_{0,\beta}^{X/D}\) be the (virtual) number of rational curves in \(X\) with curve class \(\beta\), passing through \(T_{\log}\cdot\beta\) general points, and meeting \(D\) at an unspecified point with contact order \(D\cdot\beta\). According to [15, Theorem 1.1], we have the following recursion formula when \(T_{\log}\cdot\beta\geq 3\): \[(H\cdot D)\frac{N_{0,\beta}^{X/D}}{d}= \sum_{\beta_{1}+\beta_{2}=\beta\atop\beta_{1},\beta_{2}>0}\bigg{(}d_{1}^{2}(H\cdot\beta_{2})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}+d_{1}d_{2}(H\cdot\beta_{2})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-2}\] \[-d_{1}^{2}(H\cdot\beta_{1})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}}-d_{1}d_{2}(H\cdot\beta_{1})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}\bigg{)}\,\frac{N_{0,\beta_{1}}^{X/D}}{d_{1}}\frac{N_{0,\beta_{2}}^{X/D}}{d_{2}}\] where \(H\) is any divisor, \(d=D\cdot\beta\), \(d_{1}=D\cdot\beta_{1}\), \(d_{2}=D\cdot\beta_{2}\).
After replacing \(H\) by \(T_{\log}\), the above recursion formula can be simplified as \[(T_{\log}\cdot D)\frac{N_{0,\beta}^{X/D}}{d}=2\sum_{\beta_{1}+\beta_{2}=\beta\atop\beta_{1},\beta_{2}>0}d_{1}^{2}\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}\frac{N_{0,\beta_{1}}^{X/D}}{d_{1}}\frac{N_{0,\beta_{2}}^{X/D}}{d_{2}}\,. \tag{21}\] This follows from the combinatorial identities \[(T_{\log}\cdot\beta_{2})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}-(T_{\log}\cdot\beta_{1})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}}=2\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}\] and \[\sum_{\beta_{1}+\beta_{2}=\beta\atop\beta_{1},\beta_{2}>0}\bigg{(}(T_{\log}\cdot\beta_{2})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-2}-(T_{\log}\cdot\beta_{1})\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}\bigg{)}\,N_{0,\beta_{1}}^{X/D}N_{0,\beta_{2}}^{X/D}=0.\] Since \(D\) is smooth and rational, we have \(T_{\log}\cdot D=-(K_{X}+D)\cdot D=2\) by the adjunction formula. So (21) can be further simplified as \[\frac{N_{0,\beta}^{X/D}}{D\cdot\beta}=\sum_{\beta_{1}+\beta_{2}=\beta\atop\beta_{1},\beta_{2}>0}(D\cdot\beta_{1})^{2}\binom{T_{\log}\cdot\beta-3}{T_{\log}\cdot\beta_{1}-1}\frac{N_{0,\beta_{1}}^{X/D}}{D\cdot\beta_{1}}\frac{N_{0,\beta_{2}}^{X/D}}{D\cdot\beta_{2}}\,.\] Now (20) follows from the local/relative correspondence [11]: \[N_{0,\beta}^{X/D}=(-1)^{D\cdot\beta-1}(D\cdot\beta)N_{0,\beta}^{\mathcal{O}_{X}(-D)}.\] From the proof, it is clear that the requirement that \(D\) be rational, i.e., \(T_{\log}\cdot D=2\), is necessary. ## Appendix C Comparison with genus-one recursion from the Virasoro constraints From the recursion (1), we obtain the following recursion for genus-one Gromov-Witten invariants of \(\mathcal{O}_{\mathbb{P}^{2}}(-1)\): \[N_{1,d}=\sum_{\begin{subarray}{c}d_{1}+d_{2}=d\\ d_{1},d_{2}>0\end{subarray}}\left(N_{0,d_{1}}N_{0,d_{2}}\frac{d_{1}^{4}}{12}-(N_{0,d_{1}}N_{1,d_{2}}+N_{0,d_{2}}N_{1,d_{1}})d_{1}^{2}\right)\binom{2d-3}{2d_{1}-1}\,.\] We may also embed \(\mathcal{O}_{\mathbb{P}^{2}}(-1)\) into \(\mathbb{P}(\mathcal{O}_{\mathbb{P}^{2}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{2}})\). Then the Gromov-Witten invariants of \(\mathcal{O}_{\mathbb{P}^{2}}(-1)\) equal the corresponding invariants of \(\mathbb{P}(\mathcal{O}_{\mathbb{P}^{2}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{2}})\). Since \(\mathbb{P}(\mathcal{O}_{\mathbb{P}^{2}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{2}})\) is a projective toric variety, we can apply the Virasoro constraints [1, 1, 19, 20], and get another recursion: \[N_{1,d}=-\frac{d(d-1)}{24}N_{0,d}-\sum_{\begin{subarray}{c}d_{1}+d_{2}=d\\ d_{1},d_{2}>0\end{subarray}}\frac{(d-1)(2d-1)}{2}\binom{2d-3}{2d_{1}-2}N_{0,d_{1}}N_{1,d_{2}}\,.\] By Theorem 1.1 and the Virasoro constraints, these two recursions are equivalent. Using a computer, we have checked directly that this is indeed the case up to degree \(19\). However, we do not know a direct elementary proof of the equivalence of these two recursions.
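For readers who wish to repeat the degree-by-degree comparison, the following Python sketch (our own; the seeds \(N_{0,1}\) and \(N_{1,1}\) below are placeholders whose true values must be supplied, e.g. from the quiver side as in Section 6) transcribes the genus-zero recursion (20) and the two genus-one recursions displayed above, and reports whether they agree:

```python
from math import comb
from fractions import Fraction as F

def n1_local(d, N0, N1):
    """Genus-one recursion obtained from recursion (1), as displayed above."""
    return sum((N0[d1] * N0[d - d1] * F(d1 ** 4, 12)
                - (N0[d1] * N1[d - d1] + N0[d - d1] * N1[d1]) * d1 ** 2)
               * comb(2 * d - 3, 2 * d1 - 1) for d1 in range(1, d))

def n1_virasoro(d, N0, N1):
    """Genus-one recursion from the Virasoro constraints, as displayed above."""
    return (-F(d * (d - 1), 24) * N0[d]
            - sum(F((d - 1) * (2 * d - 1), 2) * comb(2 * d - 3, 2 * d1 - 2)
                  * N0[d1] * N1[d - d1] for d1 in range(1, d)))

# Placeholder seeds; fill N0 via the genus-zero recursion (20) first.
N0, N1 = {1: F(1)}, {1: F(0)}
for d in range(2, 20):
    N0[d] = -sum(N0[d1] * N0[d - d1] * d1 ** 2 * comb(2 * d - 3, 2 * d1 - 1)
                 for d1 in range(1, d))
for d in range(2, 20):
    N1[d] = n1_local(d, N0, N1)
    print(d, N1[d] == n1_virasoro(d, N0, N1))  # agreement check per degree
```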
2305.13197
Challenging Decoder helps in Masked Auto-Encoder Pre-training for Dense Passage Retrieval
Recently, various studies have been directed towards exploring dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising. The conventional MAE framework relies on leveraging the passage reconstruction of decoder to bolster the text representation ability of encoder, thereby enhancing the performance of resulting dense retrieval systems. Within the context of building the representation ability of the encoder through passage reconstruction of decoder, it is reasonable to postulate that a ``more demanding'' decoder will necessitate a corresponding increase in the encoder's ability. To this end, we propose a novel token importance aware masking strategy based on pointwise mutual information to intensify the challenge of the decoder. Importantly, our approach can be implemented in an unsupervised manner, without adding additional expenses to the pre-training phase. Our experiments verify that the proposed method is both effective and robust on large-scale supervised passage retrieval datasets and out-of-domain zero-shot retrieval benchmarks.
Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie
2023-05-22T16:27:10Z
http://arxiv.org/abs/2305.13197v1
# Challenging Decoder helps in Masked Auto-Encoder Pre-training for Dense Passage Retrieval ###### Abstract Recently, various studies have been directed towards exploring dense passage retrieval techniques employing pre-trained language models, among which the masked auto-encoder (MAE) pre-training architecture has emerged as the most promising. The conventional MAE framework relies on leveraging the passage reconstruction of decoder to bolster the text representation ability of encoder, thereby enhancing the performance of resulting dense retrieval systems. Within the context of building the representation ability of the encoder through passage reconstruction of decoder, it is reasonable to postulate that a "more demanding" decoder will necessitate a corresponding increase in the encoder's ability. To this end, we propose a novel token importance aware masking strategy based on pointwise mutual information to intensify the challenge of the decoder. Importantly, our approach can be implemented in an unsupervised manner, without adding additional expenses to the pre-training phase. Our experiments verify that the proposed method is both effective and robust on large-scale supervised passage retrieval datasets and out-of-domain zero-shot retrieval benchmarks. ## 1 Introduction Passage retrieval is a core sub-task in various downstream applications, such as open-domain question answering Karpukhin et al. (2020); Qu et al. (2021); Zhu et al. (2021), conversational systems Yu et al. (2021) and web search Lin et al. (2021); Fan et al. (2021); Long et al. (2022). Recently, a number of studies have demonstrated that dense passage retrieval systems based on pre-trained language models (PLMs) are significantly more effective compared to traditional sparse retrieval methods such as BM25 Karpukhin et al. (2020). To balance efficiency and effectiveness, existing dense passage retrieval methods usually leverage a dual-encoder architecture, where query and passage are encoded into continuous vector representations by PLMs respectively, and then a lightweight score function such as dot product or cosine similarity between two vectors is used to estimate the semantic similarity between the query-passage pair.1 Footnote 1: We use the term "passage" and "document" interchangeably throughout this paper. In the dual-encoder architecture, the text representation capability of the PLMs plays a crucial role as it shall encode all essential information into the low-dimensional dense vector. However, it has been observed that the progress of PLMs in general language understanding benchmarks does not necessarily lead to an improvement in text representation ability Li et al. (2020); Lu et al. (2021); Wang et al. (2022) as the widely used masked language modeling (MLM) pre-training objective focuses more on representing individual tokens rather than the entire sentence. As a result, numerous recent studies have explored enhancing the base model's sentence representation ability via incorporating supplementary pre-training tasks or designing new pre-training architectures Lee et al. (2019); Gao and Callan (2021); Xiao et al. (2022). Currently, the Masked Auto-Encoder (MAE) is arguably the most effective pre-training framework in retrieval tasks. Figure 1: Illustration of the Masked Auto-Encoder (MAE) pre-training framework. As illustrated in Figure 1, MAE utilizes the encoder-decoder architecture in which
the sentence is randomly masked twice as the input to the encoder and decoder, respectively, and the sentence embedding pooled from the encoder is concatenated with the masked input of the decoder to reconstruct the original input. In this framework, the encoding quality is critical to the success of the reconstruction task of decoder, which in turn, is the key to improving the representation ability of the encoder. To increase the difficulty of the decoder, previous works usually utilized a shallower decoder and a higher decoder mask ratio (Xiao et al., 2022). We argue that optimizing the training task of MLM can further enhance the challenge of the decoder, leading to an even greater improvement in the representation ability of the encoder. During the MAE pre-training process, the MLM training objective is used at both the encoder and decoder ends, with masked tokens randomly sampled from a uniform distribution over the input sentence. However, such a random sampling strategy is sub-optimal since it is biased towards sampling uninformative high-frequency tokens (_e.g._, stopwords and punctuations as shown in Figure 2).2 Previously, many research works have attempted to address this issue by detecting and masking spans (Joshi et al., 2020; Levine et al., 2021) or determining which tokens should be selected through the PLMs (Ma et al., 2021; Long et al., 2022). In the information retrieval (IR) community, early lexical retrieval systems (Robertson and Zaragoza, 2009) also used inverse document frequency for term re-weighting in their score functions. Some recent works have also suggested that incorporating the term importance information and token-level matching signal with deep contextualized representations (Dai and Callan, 2020; Khattab and Zaharia, 2020; Gao et al., 2021; Zhou et al., 2022) is beneficial. Footnote 2: Uniform sampling from each sentence is statistically equivalent to sampling from the term frequency distribution over the corpus’ vocabulary, which is a power-law distribution. In this paper, we follow similar principles to optimize the MAE pre-training framework for dense retrieval. Specifically, we aim to increase the proportion of masked tokens that are informative (or meaningful for passage retrieval tasks) during MLM training of the decoder. To achieve this goal, we introduce pointwise mutual information (PMI) between \(n\)-grams to estimate the importance of each token, which is subsequently used to regularize the sampling distribution of MLM, increasing the probability of important tokens being masked. More importantly, PMI can be calculated in an unsupervised manner and can model the corpus-wise keywords and co-occurrence features which are valuable for IR tasks. Compared to previous research which relies on another PLM to determine the importance of tokens (Long et al., 2022) or sample replacements (Wang et al., 2022), our method is more computationally efficient because it only requires computing statistical information from the corpus. We apply this masking strategy asymmetrically to the encoder and decoder sides, enforcing the dense bottleneck between them to encode more information about salient co-occurring tokens captured by decoder masking. We evaluate the effectiveness of our approach on several supervised passage retrieval datasets and zero-shot retrieval benchmarks. 
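The frequency bias of uniform masking discussed above is easy to reproduce. The following small self-contained Python experiment (a toy illustration of ours, not an artifact of the method) counts which tokens random masking selects on a handful of sentences; on any realistic corpus, the counter is dominated by stopwords such as "the", mirroring Figure 2:

```python
import random
from collections import Counter

sentences = [
    "the cat sat on the mat".split(),
    "the dog chased the ball in the park".split(),
    "a man is riding a horse on the beach".split(),
]

random.seed(0)
masked = Counter()
for _ in range(10_000):
    for tokens in sentences:
        k = max(1, round(0.15 * len(tokens)))    # 15% masking ratio
        masked.update(random.sample(tokens, k))  # uniform over positions

print(masked.most_common(5))  # high-frequency tokens dominate the masked set
```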
To summarize, our contributions are as follows: * We reveal the problem of uniform sampling in random masking, which biases the learning process of language models toward high-frequency tokens that are uninformative in passage retrieval scenarios. * To mitigate the above issue, we introduce an efficient approach for term importance estimation based on pointwise mutual information. We further propose a novel importance-aware masking strategy and integrate this masking strategy with the bottleneck masked autoencoder framework for better sentence representation learning. * Experimental results demonstrate the effectiveness of our approach. Our model outperforms state-of-the-art dense retrievers in both supervised settings and zero-shot settings. Figure 2: Top 10 masked term frequency distribution using random masking with uniform sampling. Statistics are measured on the MS MARCO passage corpus. ## 2 Related Work Pre-training for Dense Retrieval Dense retrieval usually involves learning a dual-encoder that encodes query and document into dense vectors and takes their inner product (or cosine similarity) as their relevance score. It's usually trained with the contrastive objective to distinguish positive pairs from negative ones (Karpukhin et al., 2020). To reduce the gap between MLM and downstream retrieval tasks, plenty of works have designed various self-supervised contrastive pre-training tasks by automatically constructing positive and negative pairs (Lee et al., 2019; Chang et al., 2020; Izacard et al., 2022; Neelakantan et al., 2022). Different from the idea of designing pretext contrastive tasks, another line of research has focused on improving the global text representation learning ability in the masked auto-encoding framework. For example, Lu et al. (2021) proposed adding a shallow auto-regressive decoder on top of the encoder for causal language modeling conditioned on the sentence representation. Gao and Callan (2021) changed the decoder to a bi-directional Transformer head trained with the MLM task. Xiao et al. (2022) proposed an enhanced-decoding mechanism with token-specific masking design and an aggressive masking ratio. Wang et al. (2022) introduced ELECTRA-style replaced language modeling to reduce the gap between pre-training and fine-tuning. In addition to predicting the sentence itself based on its contextualized representation, Wu et al. (2022) proposed to additionally predict its surrounding context. Zhou et al. (2022) further used docT5query (Nogueira and Lin, 2019) and GPT-2 (Radford et al., 2019) for data augmentation and combined all these tasks with multiple decoders. It should be noticed that our contributions are orthogonal to theirs since we only use the self-prediction task and the vanilla MAE architecture. Pointwise Mutual Information Using PMI to model word association in NLP was first introduced by Church and Hanks (1990). Levy and Goldberg (2014) found that the skip-gram algorithm is implicitly factorizing the (shifted) word-context PMI matrix for word embedding learning. More recently, Levine et al. (2021) extended PMI to detect more correlated spans for masking. Sadeq et al. (2022) identified more informative masking patterns by finding a maximum cut in the PMI matrix. Different from previous work, we use the PMI to estimate how much a token contributes to the sentence and use the term importance information to regularize the sampling distribution. ## 3 Methodology In this section, we describe our proposed pre-training method for the dense passage retrieval task.
We first give a brief overview of the conventional MAE pre-training architecture. Then we introduce how to extend it to our model by transforming the decoder to be more challenging. Finally, we detail the multi-stage fine-tuning and inference designs. ### Bottlenecked Masked Autoencoders In the bottlenecked masked autoencoder (BMAE) architecture, a deep encoder is used to get a compressed vector as the sentence embedding, and a shallow decoder is used to recover masked tokens based on the sentence embedding offered by the encoder. An information bottleneck (_e.g._, the representation at the [CLS] position) is imposed as the communication budget between the encoder and decoder. To make good predictions, the encoder must compress as much information as possible into the bottleneck, whereas the decoder has to attend to the bottleneck representation due to its weakened capacity. Formally, given a piece of text \(\mathbf{x}\), two masking operations are independently applied to the input sequence \(\mathbf{x}\) with different masking ratios \(p_{\text{enc}}\) and \(p_{\text{dec}}\). Among the masked tokens, \(80\%\) are replaced with a special [MASK] token, \(10\%\) are replaced by a random token in the vocabulary, and the remaining tokens are kept unchanged (Devlin et al., 2019), \[\hat{\mathbf{x}}_{\text{enc}} =\text{Mask}(\mathbf{x},p_{\text{enc}}), \tag{1}\] \[\hat{\mathbf{x}}_{\text{dec}} =\text{Mask}(\mathbf{x},p_{\text{dec}}).\] Deep Encoder We use a multi-layer Transformer network (Vaswani et al., 2017) as the deep encoder, which can be initialized from PLMs such as BERT (Devlin et al., 2019). We feed \(\hat{\mathbf{x}}_{\text{enc}}\) through the deep language model encoder to get its contextualized representations \(\mathbf{h}_{\hat{\mathbf{x}}_{\text{enc}}}\) for masked token reconstruction, \[\mathbf{h}_{\hat{\mathbf{x}}_{\text{enc}}}=\text{Encoder}(\hat{\mathbf{x}}_{\text{enc}}) \tag{2}\] The encoder is trained via the MLM objective where the language modeling head predicts masked tokens based on contextualized representations, \[L_{\text{enc}}=-\sum_{x\in M_{\text{enc}}}\text{log}P_{\text{lm}}(x|\mathbf{h}_{\hat{\mathbf{x}}_{\text{enc}}}) \tag{3}\] where \(M_{\text{enc}}\) denotes the set of masked tokens at the encoder side. Bottleneck Representation To ensure consistency with downstream fine-tuning, we use the [CLS] token representation \(\mathbf{h}_{c}\in\mathbb{R}^{d_{\text{enc}}}\) at the last layer of the deep encoder as the text representation. We use a linear projection head to construct the bottleneck for decoder recovery, \[\mathbf{h}_{b}=\mathbf{W}\mathbf{h}_{c}+\mathbf{b} \tag{4}\] where \(\mathbf{W}\in\mathbb{R}^{d_{\text{dec}}\times d_{\text{enc}}}\) and \(\mathbf{b}\in\mathbb{R}^{d_{\text{dec}}}\) are trainable parameters. Weak Decoder The decoder is a shallow neural network consisting of two randomly initialized Transformer layers. Its objective is to recover the aggressively masked inputs conditioned on the bottleneck representation \(\mathbf{h}_{b}\in\mathbb{R}^{d_{\text{dec}}}\) provided by the deep encoder, \[\mathbf{h}_{\hat{\mathbf{x}}_{\text{dec}}}=\text{Decoder}(\mathbf{h}_{b},\hat{\mathbf{x}}_{\text{dec}}). \tag{5}\] We use the same language model head for masked language modeling, \[L_{\text{dec}}=-\sum_{x\in M_{\text{dec}}}\text{log}P_{\text{lm}}(x|\mathbf{h}_{\hat{\mathbf{x}}_{\text{dec}}}) \tag{6}\] where \(M_{\text{dec}}\) denotes the set of masked tokens at the decoder side. ### Token Importance Aware Masking The random masking strategy treats all tokens equally.
Statistical analysis reveals that \(40\%\) of the masked tokens produced by the \(15\%\) ratio random masking method in BERT pre-training are stop-words or punctuation. However, prior research has shown that the contribution of these tokens to the dense passage retrieval task is limited. Furthermore, these frequently occurring tokens serve to ease the burden on the decoder in the MAE pre-training process. To address the limitation above, we propose a new masking strategy aiming to mask tokens that are more important, thus posing a greater challenge for the decoder in the reconstruction task. Token Importance Estimation Mutual information is widely used to estimate the correlation of two random variables. Inspired by previous work, we consider linking the importance of tokens to statistical pointwise mutual information. The pointwise mutual information between two words \(w_{1}\) and \(w_{2}\) represents the correlation between them. Mathematically, the pointwise mutual information of the bi-gram combination \(w_{1}w_{2}\) can be formulated as: \[\text{PMI}(w_{1},w_{2})=\log\frac{p(w_{1},w_{2})}{p(w_{1})p(w_{2})} \tag{7}\] where the probability of any \(n\)-gram is defined as the number of its occurrences in the corpus divided by the number of all the \(n\)-grams in the corpus. For each word in a sentence, if the word is more informative (namely more important), then statistically speaking, the PMI between this word and the remaining parts of the sentence should be larger. In practice, calculating the PMI between each word and the remaining parts of the sentence can be computationally expensive and may result in bias. As such, a more suitable approach is to determine the importance of a word by analyzing its PMI with adjacent fragments. To simplify the calculation without sacrificing performance, we only need to compute the average mutual information (AMI) within a fixed window size. Within the determined window size, we use the average PMI value of all possible \(n\)-gram combinations delimited by the token (starting and ending with the token, and with a minimum \(n\) value of \(2\)) as the ultimate reference for token importance estimation. For each word \(w_{i}\) in sentence \(\mathcal{S}\), if the window size is set to \(L\), the final average mutual information of word \(w_{i}\) can be represented as: \[\text{AMI}(w_{i}) =\frac{1}{L-1}\sum_{j=1}^{L-1}\text{PMI}(w_{i-j},...,w_{i}) \tag{8}\] \[+\frac{1}{L-1}\sum_{j=1}^{L-1}\text{PMI}(w_{i},...,w_{i+j}).\] To clarify, we calculate the pointwise mutual information of bigrams when \(L=2\). When \(L\geq 2\), we expand to the computation of all \(n\)-grams where \(2\leq n\leq L\). In order to balance computational efficiency and accuracy, we set \(L=4\) in this work. Importance Aware Sampling Consider a sentence \(\mathbf{x}=(x_{1},x_{2},...,x_{n})\) that consists of \(n\) tokens equipped with the estimated token importance scores \(\mathbf{t}=(t_{1},t_{2},...,t_{n})\), where \(t_{i}=\text{AMI}(x_{i})\). Instead of random sampling from a uniform distribution over the sequence of tokens, we propose a novel sampling strategy that takes the term importance into account. Specifically, we sort tokens based on their perturbed importance and mask tokens with higher importance. Each token importance is perturbed with a Gaussian noise before being sorted to preserve some randomness during masking.3 Details can be found in Algorithm 1. The output \(\mathbf{M}=(m_{1},\dots,m_{n})\) is a binary-valued array in which 0 denotes unmasked and 1 denotes masked.
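Since Algorithm 1 itself is not reproduced here, the following minimal Python sketch (our own reconstruction of the procedure as described, with hypothetical helper names; the scores use the \(L=2\) specialization of Equations (7)-(8)) illustrates both the PMI-based importance estimation and the Gaussian-perturbed selection:

```python
import math
import numpy as np
from collections import Counter

def build_pmi(corpus):
    """Unigram/bigram counts from a tokenized corpus; bigram PMI as in Eq. (7)."""
    uni, bi = Counter(), Counter()
    for toks in corpus:
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    n_uni, n_bi = sum(uni.values()), max(sum(bi.values()), 1)

    def pmi(w1, w2):
        if bi[(w1, w2)] == 0:
            return 0.0  # unseen bigram carries no association signal
        return math.log((bi[(w1, w2)] / n_bi) / ((uni[w1] / n_uni) * (uni[w2] / n_uni)))

    return pmi

def ami_scores(tokens, pmi):
    """Eq. (8) with window size L = 2: left plus right bigram PMI."""
    scores = []
    for i, w in enumerate(tokens):
        s = 0.0
        if i > 0:
            s += pmi(tokens[i - 1], w)
        if i + 1 < len(tokens):
            s += pmi(w, tokens[i + 1])
        scores.append(s)
    return scores

def importance_aware_mask(scores, mask_ratio, sigma=1.0, rng=None):
    """0/1 mask: perturb importances with Gaussian noise, mask the top ones."""
    rng = rng or np.random.default_rng(0)
    noised = np.asarray(scores, float) + rng.normal(0.0, sigma, len(scores))
    k = max(1, int(round(mask_ratio * len(scores))))
    mask = np.zeros(len(scores), dtype=int)
    mask[np.argsort(-noised)[:k]] = 1  # highest perturbed importance first
    return mask

# Toy usage with the Figure 3 example text, "spotted" four times in the corpus:
corpus = ["hokey stuff is a crossword puzzle clue".split()] * 4
pmi = build_pmi(corpus)
print(importance_aware_mask(ami_scores(corpus[0], pmi), mask_ratio=0.5))
```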
We apply the proposed masking strategy asymmetrically to the encoder and decoder sides. The encoder side masking is unchanged, while for the decoder, we use the importance-aware masking to mask more salient tokens. Figure 2(a) compares the sampling distribution induced by the proposed importance-aware masking algorithm and the uniform sampling distribution of random masking. More informative words have a higher probability of being masked. Footnote 3: We simply set the variance \(\sigma=1\). ### Pre-training The pre-training is conducted on the unlabeled corpus \(\mathcal{C}=\{d_{1},d_{2},...,d_{N}\}\). The masked auto-encoder is pre-trained with the masked language modeling (MLM) objective from both the encoder and decoder, \[L=L_{enc}+L_{dec}, \tag{9}\] where each loss is normalized to stabilize training. Figure 3: Overview of our pre-training approach. We first estimate term importance in an unsupervised manner for a piece of text: hokey stuff is a crossword puzzle clue that we have spotted 4 times. Then we incorporate the term importance awareness into decoder masking to mask more salient co-occurring tokens. ### Fine-tuning Given a labeled dataset \(\mathcal{D}=\{(q_{1},d_{1}^{+}),(q_{2},d_{2}^{+}),...,(q_{n},d_{n}^{+})\}\) consisting of query and supporting document pairs, we adopt the fine-tuning pipeline from prior work (Wang et al., 2022). The fine-tuning consists of three stages: the first retriever is trained with the official BM25 hard negatives; the second retriever is trained with the hard negatives mined using the first retriever; the final stage retriever is trained with the hard negatives mined via the second retriever and knowledge distillation from a cross-encoder reranker. At each of the three training stages, retrievers are initialized with the same pre-trained checkpoint. Stage 1: BM25 Negatives. For each query \(q\), we sample hard negatives \(\mathbb{D}^{-}_{\text{BM25}}\) from the top \(K_{1}\) ranked documents excluding the positive ones returned by BM25. In-batch negatives are also included to improve training efficiency. The retriever is trained as a dual encoder with a contrastive loss, \[L_{\text{cont}}=-\log\frac{\text{exp}(s(q,d^{+}))}{\sum_{d\in\{d^{+}\}\cup\mathbb{D}^{-}_{\text{BM25}}}\text{exp}(s(q,d))} \tag{10}\] where \(s(q,d)\) measures the similarity score between query \(q\) and document \(d\). In this paper, we use the dot product between query and document representations as their similarity score \(s_{\text{de}}(q,d)=\mathbf{q}\cdot\mathbf{d}\), where \(\mathbf{q}=\text{DE}(q)\) and \(\mathbf{d}=\text{DE}(d)\). Stage 2: Hard Negatives. In the second stage, we substitute the BM25 lexical retriever with the dense retriever trained in the first stage for mining more challenging hard negatives Xiong et al. (2021). The top \(K_{2}\) documents excluding positives are collected to construct \(\mathbb{D}^{-}_{\text{hard}}\). The retriever is trained with the same objective as Equation 10 by substituting \(\mathbb{D}^{-}_{\text{BM25}}\) with \(\mathbb{D}^{-}_{\text{hard}}\). Stage 3: Reranker-Distilled. Hard negatives mined by retrievers suffer from the problem of false negatives due to the incomplete data annotation, which can be relieved by pseudo-labeling from a more powerful yet costly cross-encoder reranker Qu et al. (2021); Ren et al. (2021). The cross-encoder takes the concatenation of query and document as input and models their relevance through multi-layer token-level interactions, making it a better architecture for fine-grained relevance estimation.
It is trained via the list-wise contrastive loss similar to the dual-encoder retriever, except that the score function becomes a parametric form \(s_{\text{ce}}(q,d)=\text{CE}(\text{concat}(q,d))\) and in-batch negatives are not involved in loss computation. The hard negatives \(\mathbb{D}^{-}_{\text{hard}^{2}}\) are mined from the top \(K_{3}\) documents returned by the previous stage retriever. We use the KL (Kullback-Leibler) divergence between the relevance distributions estimated by the dual-encoder retriever and the cross-encoder reranker for knowledge distillation: \[\begin{split} L_{\text{kl}}&=\text{KL}(P_{\text{ce}}(D|q)||P_{\text{de}}(D|q))\\ &=\sum_{d\in D}p_{\text{ce}}(d|q)\log\frac{p_{\text{ce}}(d|q)}{p_{\text{de}}(d|q)}\end{split} \tag{11}\] where \(p(d|q)\propto\text{exp}(s(q,d))\), and \(D\) consists of the positive document \(d^{+}\) and a set of negative documents sampled from \(\mathbb{D}^{-}_{\text{hard}^{2}}\). The original contrastive loss is also added following Wang et al. (2022). The final objective during the knowledge distillation stage is \(L=L_{\text{kl}}+\alpha L_{\text{cont}}\). ### Inference The inference for a dense retriever is conducted in two stages: index and retrieval. The document encoder first encodes each document in the corpus into a fixed-dimensional dense vector over which a dense index is built. Given a query \(q\), the query encoder is employed to get the query representation, and retrieval is done by performing a maximum inner product search over the dense index to find relevant documents from the corpus. ## 4 Experiments ### Setup The pre-training consists of two stages: generic pre-training followed by unsupervised corpus-aware pre-training.4 We set the encoder mask ratio \(p_{\text{enc}}=0.3\) and the decoder mask ratio \(p_{\text{dec}}=0.5\). After pre-training, the decoder is discarded and only the encoder is used for retriever initialization. During fine-tuning, we use a siamese architecture that shares the parameters of the query encoder and document encoder. We set hard negative mining depths \(K_{1}=1000\), \(K_{2}=K_{3}=200\), and contrastive loss ratio \(\alpha=0.2\) following Wang et al. (2022). Other hyperparameters are adopted from prior work and detailed in Appendix A. During inference, we use the faiss library Johnson et al. (2021) to build the index and perform exact search. Footnote 4: Due to resource limitations, we omit the pre-training on a general corpus by initializing from the RetroMAE checkpoint pre-trained on the combination of Wikipedia and BookCorpus, and continually pre-train it on the MS MARCO passage corpus consisting of 8.8M passages. We evaluate our approach in two settings: supervised in-domain passage retrieval and zero-shot out-of-domain retrieval. For supervised in-domain evaluation, we choose the MS MARCO passage retrieval task Nguyen et al. (2016) and the TREC 2019 and 2020 Deep Learning Tracks (Craswell et al., 2020, 2020). For zero-shot evaluation, we report results on 14 openly available datasets of the BEIR benchmark (Thakur et al., 2021). We mainly compare our approach to other pre-training approaches tailored for dense retrieval, including Condenser (Gao and Callan, 2021), coCondenser (Gao and Callan, 2022), RetroMAE (Xiao et al., 2022), SimLM (Wang et al., 2022), CoT-MAE (Wu et al., 2022), and MASTER (Zhou et al., 2022). We also list the performance of other retrieval systems for reference. ### Main Results In-domain Evaluation. Results on the MS MARCO passage retrieval dataset and TREC DL benchmarks are shown in Table 1.
### Main Results

In-domain Evaluation.Results on the MS MARCO passage retrieval dataset and the TREC DL benchmarks are shown in Table 1. In line with previous studies, we find that corpus-aware pre-training brings substantial improvements over sophisticated fine-tuning techniques, such as multi-vector encoding and data augmentation. Among recently proposed dense retrieval pre-training approaches, our model achieves the best results on MS MARCO and TREC DL 2019, even outperforming MASTER, a semi-supervised pre-training approach that uses DocT5Query (Nogueira and Lin, 2019; Nogueira et al., 2019) generated queries for additional data augmentation.

Table 2 illustrates the retrieval performance at different fine-tuning stages. When fine-tuning with only BM25 negatives, our model achieves the best MRR@10 over all pre-training baselines, outperforming RetroMAE by 1.1 points.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & **Reranker** & **Multi** & \multicolumn{3}{c}{**MS MARCO dev**} & \multicolumn{2}{c}{**TREC DL 19**} & \multicolumn{1}{c}{**TREC DL 20**} \\ \cline{3-8} & **distilled** & **Vec** & MRR@10 & Recall@50 & Recall@1k & nDCG@10 & nDCG@10 \\ \hline \hline BM25 (Robertson and Zaragoza, 2009) & & 18.5 & 58.5 & 85.7 & 51.2 & 47.7 \\ DeepCT (Dai and Callan, 2020) & & 24.3 & 69.0 & 91.0 & 57.2 & - \\ docT5query (Nogueira and Lin, 2019) & & 27.7 & 75.6 & 94.7 & 64.2 & - \\ \hline ANCE (Xiong et al., 2021) & & 33.0 & - & 95.9 & 64.5 & 64.6 \\ SEED (Lu et al., 2021) & & 33.9 & - & 96.1 & - & - \\ TAS-B (Hofstätter et al., 2021) & ✓ & 34.0 & - & 97.5 & 71.2 & 69.3 \\ ColBERT (Khattab and Zaharia, 2020) & & ✓ & 36.0 & 82.9 & 96.8 & - & - \\ COIL (Gao et al., 2021) & ✓ & 35.5 & - & 96.3 & 70.4 & - \\ Condenser (Gao and Callan, 2021) & & & 36.6 & - & 97.4 & 69.8 & - \\ RocketQA (Qu et al., 2021) & ✓ & & 37.0 & 85.5 & 97.9 & - & - \\ PAIR (Ren et al., 2021) & ✓ & & 37.9 & 86.4 & 98.2 & - & - \\ coCondenser (Gao and Callan, 2022) & & & 38.2 & 86.5 & 98.4 & 71.7 & 68.4 \\ RocketQAv2 (Ren et al., 2021) & ✓ & & 38.8 & 86.2 & 98.1 & - & - \\ AR2 (Zhang et al., 2022) & ✓ & & 39.5 & 87.8 & 98.6 & - & - \\ ColBERTv2 (Santhanam et al., 2022) & ✓ & ✓ & 39.7 & 86.8 & 98.4 & - & - \\ UnifieR\({}_{\text{dense}}\) (Shen et al., 2022) & & & 38.8 & - & 97.6 & 71.1 & - \\ \hline RetroMAE (Xiao et al., 2022) & ✓ & & 41.6 & 88.5 & 98.8 & 68.1 & 70.6 \\ SimLM (Wang et al., 2022) & ✓ & & 41.1 & 87.8 & 98.7 & 71.2 & 69.7 \\ CoT-MAE (Wu et al., 2022) & & & 39.4 & 87.0 & 98.7 & - & 70.4 \\ MASTER (Zhou et al., 2022) & ✓ & & 41.5 & **88.6** & 98.8 & 72.7 & **71.7** \\ \hline CDMAE & ✓ & & **41.7** & **88.6** & **98.9** & **73.0** & 71.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Main results on MS MARCO passage ranking and TREC Deep Learning tracks.
\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & **Pretraining** & \multicolumn{2}{c}{**BM25 Negatives**} & \multicolumn{2}{c}{**Hard Negatives**} & \multicolumn{2}{c}{**Reranker-Distilled**} \\ \cline{3-8} & **Setting** & \multicolumn{1}{c}{MRR@10} & \multicolumn{1}{c}{Recall@1k} & \multicolumn{1}{c}{MRR@10} & \multicolumn{1}{c}{Recall@1k} & \multicolumn{1}{c}{MRR@10} & \multicolumn{1}{c}{Recall@1k} \\ \hline coCondenser (Gao and Callan, 2022) & unsupervised & 35.7 & 97.8 & 38.2 & 98.4 & 40.2 & 98.3 \\ RetroMAE (Xiao et al., 2022) & unsupervised & 37.7 & 98.5 & 39.3 & 98.5 & 41.6 & 98.8 \\ SimLM (Wang et al., 2022) & unsupervised & 38.0 & 98.3 & 39.1 & 98.6 & 41.1 & 98.7 \\ CoT-MAE (Wu et al., 2022) & unsupervised & 36.8 & 98.3 & 39.2 & 98.7 & 40.2 & 98.3 \\ MASTER (Zhou et al., 2022) & semi-supervised & 38.3 & **98.8** & **40.4** & **98.8** & 41.5 & 98.8 \\ \hline CDMAE & unsupervised & **38.8** & 98.6 & 40.2 & 98.7 & **41.7** & **98.9** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance at different stages of the fine-tuning pipeline on MS MARCO dev set. Other model results are borrowed from the corresponding papers. We rerun RetroMAE first stage results using their code since it is not reported in their paper.

When training with self-mined hard negatives, our model outperforms previous unsupervised baselines by a large margin and is competitive with MASTER, which was pre-trained longer using more supervision signals. When using knowledge distillation from a cross-encoder reranker, our model achieves the best overall results. We also observe that the improvement diminishes with the incorporation of knowledge distillation due to its more accurate labeling. However, running each training instance through the cross-encoder reranker is costly. Our model is more robust to the label noise present in the training data, given its strong performance in the first two stages. In terms of retrieval efficiency, our model uses a dual-encoder architecture, which brings no extra inference overhead.

Zero-shot Evaluation.Next, we evaluate the out-of-domain robustness of our pre-training approach on the BEIR benchmark. We apply the same pre-training procedure on the unlabeled corpus and use the in-domain supervised training data from the BEIR benchmark for fine-tuning. Table 3 shows the zero-shot retrieval performance of the retriever on 14 out-of-domain datasets when fine-tuned with only in-domain data from MS MARCO. We observe that the contrastively pre-trained Contriever is a strong baseline, which outperforms SimLM and MASTER but underperforms RetroMAE by a small margin. Our model shows the best overall performance, outperforming MASTER by 4 points of nDCG@10 and RetroMAE by 2 points, performing best on 9 out of 14 datasets. This verifies the strong generalization ability of our pre-training approach.

### Ablations and Analysis

In this section, we ablate and analyze several design choices adopted in model training. By default, we report models' downstream retrieval performance when fine-tuned using only BM25 hard negatives.5

Footnote 5: MRR@10 on MS MARCO dev (MS dev) and nDCG@10 on TREC DL tracks are used for ablation evaluation.

Projection Head.Adding a projection head to transform the representation space is commonly used in contrastive representation learning (Chen et al., 2020). We investigate whether a similar approach can contribute to representation learning in the MAE framework.
Table 4 compares the downstream fine-tuning retrieval performance with and without the linear projection head between the encoder and decoder during pre-training. We observe that adding a linear projection head consistently improves the downstream retrieval performance. In the MAE framework, the projection head acts as a task adapter that can extract task-specific information from hidden representations. Therefore, the representation bottleneck before the projection head can focus more on learning task-agnostic information, which generalizes better to downstream tasks.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Method** & BM25 & DPR & ColBERT & Contriever & Condenser & RetroMAE & SimLM & MASTER & CDMAE \\ \hline TREC-COVID & 65.6 & 33.2 & 67.7 & 59.6 & 75.0 & 77.2 & 63.7 & 62.0 & **79.0** \\ NFCorpus & 32.5 & 18.9 & 30.5 & 32.8 & 27.7 & 30.8 & 32.3 & 33.0 & **33.4** \\ NQ & 32.9 & 47.4 & 52.4 & 49.8 & 48.6 & 51.8 & 47.7 & 51.6 & **55.9** \\ HotpotQA & 60.3 & 39.1 & 59.3 & 63.8 & 53.8 & 63.5 & 58.1 & 58.9 & **65.5** \\ FiQA-2018 & 23.6 & 11.2 & 31.7 & 32.9 & 25.9 & 31.6 & 29.2 & 32.8 & **33.6** \\ ArguAna & 31.5 & 17.5 & 23.3 & 44.6 & 29.8 & 43.3 & 42.1 & 39.5 & **49.1** \\ Touche-2020 & **36.7** & 13.1 & 20.2 & 23.0 & 24.8 & 23.7 & 29.2 & 32.0 & 21.1 \\ CQADupStack & 29.9 & 15.3 & 35.0 & 34.5 & 34.7 & 31.7 & 33.2 & 32.7 & **36.1** \\ Quora & 78.9 & 24.8 & 85.4 & **86.5** & 85.3 & 84.7 & 77.3 & 79.1 & 86.0 \\ DBPedia & 31.3 & 26.3 & 39.2 & 41.3 & 33.9 & 39.0 & 34.5 & 39.9 & **42.0** \\ SCIDOCS & 15.8 & 7.7 & 14.5 & 16.5 & 13.3 & 15.0 & 14.5 & 14.1 & **17.0** \\ FEVER & 75.3 & 56.2 & 77.1 & 75.8 & 69.1 & **77.4** & 65.7 & 69.2 & 76.1 \\ Climate-FEVER & 21.3 & 14.8 & 18.4 & **23.7** & 21.1 & 23.2 & 16.3 & 21.5 & 23.6 \\ SciFact & 66.5 & 31.8 & 67.1 & **67.7** & 59.3 & 65.3 & 58.8 & 63.7 & 67.1 \\ \hline Best on & 1 & 0 & 0 & 3 & 0 & 1 & 0 & 0 & **9** \\ Average & 43.0 & 25.5 & 44.4 & 46.6 & 43.0 & 47.0 & 43.0 & 45.0 & **49.0** \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot transfer performance (nDCG@10) on BEIR benchmark. The best score on a given dataset is marked in **bold**, and the second best is underlined. Baseline results are taken from the corresponding papers.

**Pre-training Corpus.** Pre-training language models on a general corpus or continued pre-training on a domain-specific unlabeled corpus is beneficial for downstream domain-specific tasks Gururangan et al. (2020); Xiao et al. (2022); Wang et al. (2022). An unanswered question is whether pre-training on the general corpus is still beneficial in the presence of a domain-specific corpus. To validate this, we ablate the MAE-style pre-training on the general corpus by changing the initialization checkpoint before pre-training and fine-tuning it on the MS MARCO corpus. Table 5 illustrates the downstream fine-tuning retrieval performance when initialized from different pre-trained checkpoints. We find that adding a warm-up pre-training phase on the general corpus before pre-training on the domain-specific corpus is still beneficial to downstream tasks.

Scaling Hard Negatives.More and harder negatives help in contrastive learning by providing a better gradient estimation, but they suffer from the problem of false negatives Xiong et al. (2021); Qu et al. (2021). We empirically study the effect of the number of hard negatives used in retriever training.
In a training configuration of batch size \(B\) and group size \(N\), the total number of negatives for a given query is \(B\cdot N-1\), of which \(N-1\) are hard negatives and the remaining \((B-1)\cdot N\) are in-batch random negatives. We keep the total number of negatives \(B\cdot N-1\) fixed and scale the proportion of hard negatives by changing \(N\) and \(B\) accordingly. Results are shown in Figure 4. We find that, even without denoising false negatives, continuing to increase the number of hard negatives during fine-tuning generally leads to better performance.

Masking Latency.We list the theoretical complexity and empirical time latency of the different masking operations adopted by various models in Table 6. Empirical latencies are measured at max sequence length \(n=150\) and a batch size of 128 on one V100 GPU card. One advantage of our masking strategy is that it is more efficient compared to previous approaches. In contrast, RetroMAE Xiao et al. (2022) utilizes token-specific masks, which involve sampling a mask for each token, and SimLM Wang et al. (2022) needs two extra forward computations of an ELECTRA generator for sampling masked token replacements. This poses no problem for short sequences but is potentially problematic for longer sequences.

## 5 Conclusion

In this paper, we propose a novel token importance-aware masking strategy to sample more salient co-occurring tokens based on pointwise mutual information, which can be implemented efficiently in an unsupervised manner. This masking strategy is applied asymmetrically to the encoder and decoder sides for better bottleneck representation learning. Experiments on both in-domain and zero-shot retrieval benchmarks demonstrate the effectiveness of our method, outperforming recently proposed dense retrieval pre-training approaches.

\begin{table} \begin{tabular}{l r r r} \hline \hline Model & Complexity & CPU & GPU \\ \hline MAE & \(O(n)\) & 0.4ms & n.a \\ RetroMAE & \(O(n^{2})\) & 11.6ms & n.a \\ SimLM & \(O(n^{2})\) & 0.4ms & 2.8ms \\ CDMAE & \(O(n\log n)\) & 1.1ms & n.a \\ \hline \hline \end{tabular} \end{table} Table 6: Time complexity comparison of different masking strategies adopted in various pre-training approaches. \(n\) is the max sequence length.

\begin{table} \begin{tabular}{c c c c} \hline \hline Setting & MS dev & DL 19 & DL 20 \\ \hline \hline w/o projection & 38.4 & 66.3 & 65.9 \\ w/ projection & 38.8 & 67.0 & 67.9 \\ \hline \end{tabular} \end{table} Table 4: Ablation of the projection head.

Figure 4: Analysis on the number of hard negatives used in fine-tuning on MS MARCO dev set and TREC DL tracks (averaged).

\begin{table} \begin{tabular}{c c c c} \hline \hline Corpus & MS dev & DL 19 & DL 20 \\ \hline \hline MS & 37.9 & 64.4 & 67.6 \\ General \(\rightarrow\) MS & 38.8 & 67.0 & 67.9 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation of pre-training corpus.
2310.17147
Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment
Point clouds are widely used in 3D content representation and have various applications in multimedia. However, compression and simplification processes inevitably result in the loss of quality-aware information under storage and bandwidth constraints. Therefore, there is an increasing need for effective methods to quantify the degree of distortion in point clouds. In this paper, we propose simple baselines for projection-based point cloud quality assessment (PCQA) to tackle this challenge. We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks. Quality-aware features are extracted with popular vision backbones. The FR quality representation is computed as the similarity between the feature maps of reference and distorted projections, while the NR quality representation is obtained by simply squeezing the feature maps of distorted projections with average pooling. The corresponding quality representations are regressed into visual quality scores by fully-connected layers. Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks.
Zicheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
2023-10-26T04:42:57Z
http://arxiv.org/abs/2310.17147v1
Simple Baselines for Projection-Based Full-Reference and No-Reference Point Cloud Quality Assessment ###### Abstract Point clouds are widely used in 3D content representation and have various applications in multimedia. However, compression and simplification processes inevitably result in the loss of quality-aware information under storage and bandwidth constraints. Therefore, there is an increasing need for effective methods to quantify the degree of distortion in point clouds. In this paper, we propose simple baselines for projection-based point cloud quality assessment (PCQA) to tackle this challenge. We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks. Quality-aware features are extracted with popular vision backbones. The FR quality representation is computed as the similarity between the feature maps of reference and distorted projections while the NR quality representation is obtained by simply squeezing the feature maps of distorted projections with average pooling The corresponding quality representations are regressed into visual quality scores by fully-connected layers. Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks. Zicheng Zhang*\({}^{1}\), Yingjie Zhou*\({}^{1}\), Wei Sun\({}^{1}\), Xiongkuo Min\({}^{1}\), and Guangtao Zhai\({}^{1,2}\)\({}^{1}\)Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, China \({}^{2}\)MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China Point cloud, quality assessment, projection-based, full-reference, no-reference ## 1 Introduction Point clouds have emerged as an effective means of representing 3D content and have found extensive applications in immersive domains such as virtual reality [1], mesh representation [2], and metaverse [3]. However, due to constraints in storage space and transmission bandwidth, point clouds are subject to lossy processes such as compression and simplification, which can result in the loss of quality-aware information to balance bit rates. Consequently, there is a pressing need for methods that can effectively quantify the degree of distortion in point clouds, to enable the development of compression systems and enhance the Quality of Experience (QoE) for viewers. According to the feature extraction types, the point cloud quality assessment (PCQA) methods can be categorized into model-based and projection-based methods. The model-based methods directly extract quality-aware features from the point clouds while the projection-based methods infer the visual quality of point clouds via the rendered projections. Additionally, the PCQA methods can also be divided into full-reference (FR), reduced-reference (RR), and no-reference (NR) methods according to the involved content of reference. Early FR-PCQA methods simply focus on the point level, which includes p2point [4] and p2plane [5]. However, these methods only take geometry information into consideration, thus some FR-PCQA methods such as PointSSIM [6], GraphSIM [7], and PCQM [8] are proposed to predict the quality difference between the reference and distorted point clouds by including color features and taking advantage of various features. The NR-PCQA method 3D-NSS [9, 10] uses several statistical distributions to estimate quality-aware parameters from the geometry and color attributes' distributions. 
Later, some researchers further proposed using 2D projections to evaluate the visual quality of point clouds, achieving competitive performance with the assistance of mature IQA methods. Namely, PQA-net [11] involves extracting features through multi-view projection techniques. Meanwhile, Fan _et al._ [12, 13] evaluate the visual quality of point clouds by analyzing captured video sequences. More recently, MM-PCQA [7] takes advantage of both the point cloud and its projections and extracts features from both modalities. Although projection-based methods are highly dependent on the viewpoints, we can ease the viewpoint bias by employing multi-projections [16, 17]. Furthermore, benefiting from the mature development of 2D vision backbones, the effectiveness and efficiency of projection-based methods can be further boosted. Therefore, in this paper, we propose simple baselines for projection-based FR and NR PCQA. Specifically, the common cube-like projection process is utilized to obtain multi-projections from the point clouds. Then the well-performing 2D vision backbones ConvNeXt V2 [14] and Swin Transformer [15] are both used to extract quality-aware features from the projections. For the FR baseline, the similarity between the feature maps of reference and distorted projections is computed as the quality representation. For the NR baseline, the feature maps of the distorted projections are simply squeezed into a quality representation with average pooling. Finally, the FR and NR quality representations are regressed into visual quality scores with the assistance of fully-connected layers. Participating in the ICIP 2023 PCVQA Challenge, we emerged victorious in four out of the five competition tracks, which reveals that the proposed baselines are competitive for both FR-PCQA and NR-PCQA tasks.

## 2 Proposed Method

The framework of the proposed method is illustrated in Fig. 1, which includes the projection module, feature extraction module, and quality regression module.

### Cube-Projection Process

In order to ensure that we cover a wide range of viewing perspectives, we have chosen to use the widely-used cube-like viewpoint setting, which is also utilized in the popular MPEG V-PCC point cloud compression standard [18]. Our approach involves using six different viewpoints that are perpendicular to each other, allowing us to capture rendered projections corresponding to the six surfaces of a cube, as illustrated in the projection module of Fig. 1. Given a point cloud \(\mathcal{P}\), the cube-projection process can be described as:

\[\begin{split}\mathbf{P}=\psi(\mathcal{P}),\\ \mathbf{P}=\{P_{k}|k=1,\cdots,6\},\end{split} \tag{1}\]

where \(\mathbf{P}\) represents the set of the 6 rendered projections, \(\psi(\cdot)\) stands for the rendering process, and \(P_{k}\) indicates the \(k\)-th rendered projection.
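As a rough illustration of the cube-like projection \(\psi(\cdot)\) in Equation 1, the following numpy sketch orthographically rasterizes a colored point cloud onto the six axis-aligned cube faces with a simple z-buffer. The actual renderer, resolution, and camera model are not specified by the paper; this is a simplified stand-in.

```python
import numpy as np

def cube_projections(points, colors, res=512):
    """Six axis-aligned orthographic projections of a colored point
    cloud; a simplified stand-in for the renderer psi(.) in Eq. (1)."""
    # Normalize the cloud into the unit cube.
    p = points - points.min(axis=0)
    p = p / (np.ptp(points, axis=0).max() + 1e-9)
    views = []
    for axis in range(3):              # project along x, y and z
        for sign in (1.0, -1.0):       # the two opposite cube faces per axis
            depth = p[:, axis] * sign
            uv = np.delete(p, axis, axis=1)
            pix = np.clip((uv * (res - 1)).astype(int), 0, res - 1)
            img = np.ones((res, res, 3), dtype=np.float32)   # white background
            zbuf = np.full((res, res), -np.inf)
            for (u, v), z, c in zip(pix, depth, colors):
                if z > zbuf[v, u]:     # keep the point nearest this face
                    zbuf[v, u] = z
                    img[v, u] = c
            views.append(img)
    return views                        # P = {P_1, ..., P_6}
```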
#### 2.2.1 FR Feature Extraction

Given the projection sets \(\mathbf{P}^{r}\) and \(\mathbf{P}^{d}\) rendered from the reference and distorted point cloud pair, we first extract the feature maps with the backbones introduced above:

\[\begin{split} CF_{k}^{r}=\mathcal{C}(P_{k}^{r}),\quad SF_{k}^{r}=\mathcal{S}(P_{k}^{r}),\\ CF_{k}^{d}=\mathcal{C}(P_{k}^{d}),\quad SF_{k}^{d}=\mathcal{S}(P_{k}^{d}),\end{split} \tag{2}\]

where \(\mathcal{C}(\cdot)\) and \(\mathcal{S}(\cdot)\) represent the feature extraction operations of the ConvNeXt V2 and Swin Transformer backbones, \(CF_{k}^{r}\) and \(SF_{k}^{r}\) stand for the extracted reference feature maps of the \(k\)-th reference projection \(P_{k}^{r}\), and \(CF_{k}^{d}\) and \(SF_{k}^{d}\) stand for the extracted distorted feature maps of the \(k\)-th distorted projection \(P_{k}^{d}\), respectively.

Figure 1: The framework of the proposed method. The projections are first rendered from the point clouds. Then the ConvNeXt V2 [14] and the Swin Transformer [15] are employed to extract quality-aware information from the projections. Afterward, the extracted features are regressed into quality scores. In addition, the ConvNeXt V2 and the Swin Transformer backbones are trained separately.

Inspired by the perceptual similarity calculation form described in [20], we compute the texture and structure similarities as defined below:

\[\begin{split}\alpha(A_{k},B_{k})=\frac{2\mu_{A_{k}}\mu_{B_{k}}+\gamma_{1}}{(\mu_{A_{k}})^{2}+(\mu_{B_{k}})^{2}+\gamma_{1}},\\ \beta(A_{k},B_{k})=\frac{2\sigma_{A_{k},B_{k}}+\gamma_{2}}{(\sigma_{A_{k}})^{2}+(\sigma_{B_{k}})^{2}+\gamma_{2}},\\ \overline{CF_{k}}=\alpha(CF_{k}^{r},CF_{k}^{d})\oplus\beta(CF_{k}^{r},CF_{k}^{d}),\\ \overline{SF_{k}}=\alpha(SF_{k}^{r},SF_{k}^{d})\oplus\beta(SF_{k}^{r},SF_{k}^{d}),\end{split} \tag{3}\]

where \(\alpha(\cdot)\) and \(\beta(\cdot)\) indicate the texture and structure similarity calculation operations [20]; \(\mu_{A_{k}}\), \(\mu_{B_{k}}\), \((\sigma_{A_{k}})^{2}\), and \((\sigma_{B_{k}})^{2}\) are the global means and variances of feature maps \(A_{k}\) and \(B_{k}\), and \(\sigma_{A_{k},B_{k}}\) is the global covariance between \(A_{k}\) and \(B_{k}\); \(\oplus\) represents the concatenation operation; \(\gamma_{1}\) and \(\gamma_{2}\) are small constants to avoid instability; and \(\overline{CF_{k}}\) and \(\overline{SF_{k}}\) denote the final FR quality features extracted by the ConvNeXt V2 and Swin Transformer backbones from the \(k\)-th projections respectively. (\(CF_{k}^{r},CF_{k}^{d}\in\mathbb{R}^{H_{c}\times W_{c}\times C_{c}}\); \(SF_{k}^{r},SF_{k}^{d}\in\mathbb{R}^{H_{s}\times W_{s}\times C_{s}}\); \(\overline{CF_{k}}\in\mathbb{R}^{1\times 2C_{c}}\); and \(\overline{SF_{k}}\in\mathbb{R}^{1\times 2C_{s}}\), where \(C_{c}\) and \(C_{s}\) stand for the number of channels of the ConvNeXt V2 and Swin Transformer feature maps respectively.)

#### 2.2.2 NR Feature Extraction

Given the projection set \(\mathbf{P}^{d}\) rendered from the distorted point cloud, we can similarly obtain the quality-aware features:

\[\begin{split}\widetilde{CF_{k}}=Avg(\mathcal{C}(P_{k}^{d})),\\ \widetilde{SF_{k}}=Avg(\mathcal{S}(P_{k}^{d})),\end{split} \tag{4}\]

where \(\widetilde{CF_{k}}\) and \(\widetilde{SF_{k}}\) represent the NR quality features extracted by the ConvNeXt V2 and Swin Transformer backbones from the \(k\)-th projection, and \(Avg(\cdot)\) indicates the average pooling operation. (\(\widetilde{CF_{k}}\in\mathbb{R}^{1\times C_{c}}\) and \(\widetilde{SF_{k}}\in\mathbb{R}^{1\times C_{s}}\).)
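A minimal PyTorch sketch of the texture/structure similarity in Equation 3 for one pair of feature maps might look as follows; the global pooling granularity and the \(\gamma\) constants are assumptions.

```python
import torch

def fr_similarity(f_ref, f_dist, gamma1=1e-6, gamma2=1e-6):
    """Texture (alpha) and structure (beta) similarity of Eq. (3),
    computed from global statistics of two feature maps [H, W, C]."""
    a, b = f_ref.flatten(), f_dist.flatten()
    mu_a, mu_b = a.mean(), b.mean()
    var_a = a.var(unbiased=False)
    var_b = b.var(unbiased=False)
    cov = ((a - mu_a) * (b - mu_b)).mean()
    alpha = (2 * mu_a * mu_b + gamma1) / (mu_a**2 + mu_b**2 + gamma1)
    beta = (2 * cov + gamma2) / (var_a + var_b + gamma2)
    # Concatenating the two similarities forms the FR quality feature.
    return torch.stack([alpha, beta])
```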
### Quality Regression Module

Once the feature extraction module has extracted the quality-aware feature representation, we require a regression model to map these features to quality scores. To accomplish this, we utilize two-layer fully-connected (FC) networks to obtain the projection-level quality scores, which are subsequently averaged into the final point cloud quality scores:

\[\begin{split} Q_{FR}=\omega_{c}\frac{1}{6}\sum_{k=1}^{6}\mathcal{FC}(\overline{CF_{k}})+\omega_{s}\frac{1}{6}\sum_{k=1}^{6}\mathcal{FC}(\overline{SF_{k}}),\\ Q_{NR}=\omega_{c}\frac{1}{6}\sum_{k=1}^{6}\mathcal{FC}(\widetilde{CF_{k}})+\omega_{s}\frac{1}{6}\sum_{k=1}^{6}\mathcal{FC}(\widetilde{SF_{k}}),\end{split} \tag{5}\]

where \(\mathcal{FC}(\cdot)\) stands for the FC-layer regression operation, \(\omega_{c}\) and \(\omega_{s}\) are the weights that control the contribution proportions of the ConvNeXt V2 and Swin Transformer, and \(Q_{FR}\) and \(Q_{NR}\) are the corresponding predicted FR and NR quality scores. The Mean Squared Error (MSE) is utilized as the loss function:

\[Loss=\frac{1}{n}\sum_{m=1}^{n}\left(Q_{m}-Q_{label,m}\right)^{2}, \tag{6}\]

where \(n\) indicates the number of point clouds in a mini-batch, and \(Q_{m}\) and \(Q_{label,m}\) are the predicted quality score and the subjective quality label of the \(m\)-th point cloud, respectively.

## 3 Experiment

### Database & Evaluation Criteria

We participate in all 5 tracks of the ICIP 2023 PCVQA Challenge. The proposed method is validated on the BASICS database [21], which is targeted at the quality assessment of compressed point clouds. The BASICS database contains 75 reference point clouds and employs 4 types of compression methods to generate the compressed point clouds. 5 criteria are included to evaluate the performance: Pearson Linear Correlation Coefficient (PLCC), Spearman Rank Order Correlation Coefficient (SRCC), Difference/Similar Analysis quantified by Area Under the Curve (D/S\({}_{auc}\)) [22], Better/Worse Analysis quantified by Correct Classification percentage (B/W\({}_{cc}\)) [22], and Runtime Complexity (RC).

### Implementation Details

The ConvNeXt V2 [14] base and the Swin Transformer [15] base are selected as the backbones, both initialized with weights pretrained on the ImageNet-22K database [23]. The white background of the projections is removed. We then resize the minimum dimension of the projections to 520 while maintaining their aspect ratios, and 384\(\times\)384 patches are cropped as the input. The Adam optimizer [24] with an initial learning rate of 4e-5 is utilized and the batch size is set to 6. The two backbones are trained separately and the proposed method is trained on a server with an NVIDIA 3090 GPU. Specifically, the BASICS database provides a fixed train-validation-test split. We conduct a \(k\)-fold training strategy on the training set and evaluate the performance on the validation set. The top-performing models are then saved for evaluation on the testing set. The testing set is not available during the development phase. The final competition results are based only on the performance on the testing set.
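For concreteness, a possible implementation of the regression head in Equation 5 is sketched below; the hidden width and the way the two backbone scores are combined with \(\omega_{c}\) and \(\omega_{s}\) are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Two-layer FC head of Eq. (5): per-projection scores are
    averaged over the six views of one point cloud."""
    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: [6, in_dim], one quality feature per projection.
        return self.fc(feats).mean()

# The two backbone heads are then combined with the weights w_c, w_s:
#   Q = w_c * regressor_convnext(CF) + w_s * regressor_swin(SF)
```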
### Experiment Performance

The competition results for all five tracks are exhibited in Table 1, Table 2, Table 3, Table 4, and Table 5 respectively, from which we can make several observations. a) The proposed method achieves 1st place in Tracks 2-4 in terms of PLCC and 3rd place in Track 1; b) the proposed method outperforms all the compared teams by a large performance margin in the NR tracks, which shows the superiority of the proposed method for NR-PCQA tasks; c) the proposed method consumes much less time than the compared teams on the FR tracks, being about 7\(\times\) faster than the competitor with the second-best RC ranking (8.60 vs. 42.80). In all, the proposed method is both effective and efficient for both FR-PCQA and NR-PCQA tasks. Additionally, with the development of 2D vision backbones, its effectiveness can be further boosted. The proposed framework can also adopt lightweight vision backbones to suit application scenarios where computation resources are limited.

## 4 Conclusion

In conclusion, this paper proposes simple yet effective baselines for point cloud quality assessment (PCQA) through cube-like projection and feature extraction using popular vision backbones. Our approach utilizes multi-projections to generate full-reference (FR) and no-reference (NR) quality representations and regresses the quality representations into visual quality scores through fully-connected layers. The experimental results demonstrate the competitive performance of our proposed baselines for both FR and NR PCQA tasks in the ICIP 2023 PCVQA Challenge. Our work paves the way for future research to enhance compression and simplification processes while improving the Quality of Experience (QoE) of viewers for point clouds.
2301.01064
PIE-QG: Paraphrased Information Extraction for Unsupervised Question Generation from Small Corpora
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
Dinesh Nagumothu, Bahadorreza Ofoghi, Guangyan Huang, Peter W. Eklund
2023-01-03T12:20:51Z
http://arxiv.org/abs/2301.01064v1
# PIE-QG: Paraphrased Information Extraction for Unsupervised Question Generation from Small Corpora ###### Abstract Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources. ## 1 Introduction Question Answering systems (QA systems) provide answers to input questions posed in natural language. Answering questions from unstructured text can be performed using Machine Reading Comprehension (MRC). Given a passage, several sentences or a paragraph, and a question posed, the QA system produces the best suitable answer. Extractive Question Answering systems (EQA systems) are a subset of QA systems and involve an MRC task where the predicted answer is a span of words from the passage. With pre-trained language models (Radford et al., 2018), EQA systems achieve excellent results, surpassing even human performance. Pre-trained language models, such as BERT (Devlin et al., 2019) and GPT (Radford et al.), can be fine-tuned to perform downstream tasks such as QA. However, huge amounts of data are required to train these models for specific domains, making the task labor-intensive, in terms of the effort required to assemble suitable domain-specific training data. A single training instance for an EQA system dataset requires a question, a passage, and an answer. Domain-relevant documents can be collected with advanced information retrieval tools, and passages are formed by splitting documents into several related sentences or a paragraph. However, generating the question and answer pairs, that provide the training set for the QA system from a given passage, is considered the most difficult challenge, an approach known as unsupervised QA (Cui et al., 2004). Existing unsupervised QA system techniques such as (Lewis et al., 2019) and (Lyu et al., 2021) use an out-of-domain dataset for question generation, namely, they require additional training sources beyond what can be provided by the target corpus and a pre-trained generic model. On the other hand, rule-based QA system methods, those constrained to generate question-answer pairs from only the corpus itself, run the risk of generating questions with high lexical overlap with the passage, at risk of forcing the model to learn word matching patterns. The work of (Fabbri et al., 2020) and (Li et al., 2020) use information retrieval-based methods, such as elastic search and citation navigation, to create questions from passages other than those presented within the target dataset. 
However, these methods may not generate sufficient training questions, especially when the corpus is small and has no citation or inter-document linking structure. In this paper, we focus on addressing the limitations of EQA systems using a novel unsupervised Paraphrased Information Extraction for Question Generation (PIE-QG) method that generates synthetic training data through the extraction of <subject, relation, object> triples from a given corpus. We use the original passage to produce question answer training pairs by generating a paraphrased version of the original passage to avoid lexical overlap between the passage and the question-answers. We adopt Open Information Extraction Kolluru et al. (2020) to extract <subject, relation, object> triples from every sentence of the paraphrased passage. These triples are rich in semantics and represent raw facts; therefore, generating question-answer pairs from triples results in well-formed and effective training data. Furthermore, many sentences in the passage contribute to generating meaningful extractions, thus helping to pose questions in different ways from a single passage. An example of the question generation process we propose (called PIE-QG for Paraphrasing, Information Extraction Question Generation) is shown in Figure 1. The contributions of this paper are as follows: 1. We describe the PIE-QG method in which paraphrased passages from the original corpus are used to generate question-answer pairs without reliance on external reference data sources, such as retrieval-based or inter-document link navigation methods. Paraphrasing passages reduces the effect of lexical overlap between the passage and the question. 2. We generate multiple questions from a single paraphrased passage by adopting Open Information Extraction to extract facts, thus increasing the number of question-answer pairs extracted from the corpus. We have conducted experiments on four Extractive QA datasets and demonstrate that the proposed PIE-QG method achieves comparable performance in terms of Exact Match (EM) and F1 score while requiring significantly fewer passages. The remainder of this paper is organized as follows. We present related work in Section 2. In Section 3, we describe the proposed PIE-QG method. Section 4 discusses the experimental setup. In Section 5, we evaluate the performance of our method. Section 6 presents the limitations of the proposed method and Section 7 offers some concluding remarks. ## 2 Related Work Pre-trained language models, such as BERT Devlin et al. (2019), can be fine-tuned for downstream tasks like Extractive QA systems (EQA systems). A comprehensive natural language (NL) passage, which might be several sentences or a paragraph of NL-text, is considered as the context where the model finds the answer span. The input question and the context are represented as a single sequence, passed to a pre-trained model and the answer is predicted by calculating the probabilities of the first and last tokens of the answer span. Pre-trained language models such as BERT Devlin et al. (2019), T5 transformer Raffel et al. (2020) and XLNet Yang et al. (2019), achieve exceptional performance in EQA systems, however at the cost of reliance on large human-annotated supervised datasets. The Stanford Question Answering Dataset (SQuAD) Rajpurkar et al. (2016) is a widely used dataset for EQA systems. Lewis et al. (2019), Fabbri et al. (2020), Li et al. (2020), and Lyu et al. 
(2021) used randomly sampled passages from Wikipedia, where named entities, or noun chunks, are identified as answers as these tend to be useful for question answering. The questions are then formed in natural language according to the passage and a selected answer phrase.

Figure 1: Question Generation from a context (left) by paraphrasing followed by information extraction using OpenIE. Note: The text in green indicates the selected answer.

Unsupervised EQA is achieved using the cloze-translation method Lewis et al. (2019) by forming passage, question-answer triples from a given target corpus. The answers present in the passages are masked to form "fill in the blanks" styled questions, so-called cloze questions. The authors translate natural language questions using a neural machine translation (NMT) model trained with different corpora that contain cloze questions and natural question pairs. Questions generated directly from the passage can only answer simple cloze questions by matching text within the passage, an approach that cannot give correct answers for differently phrased questions. In an effort to broaden the questions used to train an EQA system, Fabbri et al. (2020) generated questions using a similar sentence taken from a different passage. The actual passage is considered a query and sentences are retrieved using elastic search. The most similar sentence that contains the answer, excluding the original query passage and sentences with more than 95% similarity (to avoid plagiarised sentences), is used to form the question-answer pairs. The answer from these sentences is masked and a question is formed using a "Wh+B+A?" rule, where "wh" (one of what, when, or who) is selected based on the answer-entity type, "B" is the fragment of the sentence that comes after the answer mask, and "A" is the fragment that is present before the answer mask. Li et al. (2020) use citations to form a summary of the passage. The cited passage is considered the context, and the sentence where the citation appeared is used for question generation, to avoid lexical overlap. The question generation process involves masking the answer with a cloze mask, where the mask mentions only the type of the answer entity. The dependency tree of the sentence is altered in such a way that the cloze mask is brought to the beginning. The question is then created by replacing the cloze mask with the suitable "wh" word, again determined by the type of the answer entity. Lyu et al. (2021) perform unsupervised QA by creating a question generation model from text summaries. The model uses dependency trees and semantic role labels extracted from the summary to generate a question. A neural encoder-decoder model is then trained to translate articles to summary-informed questions. The trained model is applied to the actual passages to create questions. However, we consider this method a transfer learning task rather than unsupervised question generation due to its dependency on a text-summary dataset. Compared to Fabbri et al. (2020) and Li et al. (2020), our method avoids sentence- and citation-based retrieval and minimizes the requirement of having a large corpus to generate question-answer pairs.

## 3 Paraphrased Information Extraction for Question Generation

To overcome the reliance on external reference data sources with a large number of passages, we made use of OpenIE and paraphrased passages for unsupervised synthetic question generation.

Figure 2: The general pipeline of PIE-QG for question generation using paraphrasing and OpenIE. Note: Blue indicates named entities, red merged triples with a common subject, and green the selected answers.
The actual passages are first altered to a paraphrased form, and <subject, predicate, object> triples are then extracted from the paraphrased passages. These triples, combined with certain heuristics, form question-answer pairs which are then used alongside the original passage as context to fine-tune the QA model. The pipeline of our proposed EQA question generation process is illustrated in Figure 2. The steps in this pipeline are detailed as follows.

_(i) Paraphrasing:_ Question-answer pairs generated directly from the passage result in inferior QA system performance, as they produce models that have little ability to generalize Fabbri et al. (2020). Paraphrasing is therefore adopted to alter the passage without changing its actual meaning. The intuition behind this is to create questions from passages that are semantically similar but lexicographically different from the original passage. Paraphrasing question-answer pairs themselves has been shown to cause semantic drift Pan et al. (2021). By contrast, in our approach, the passage is paraphrased rather than the question-answer pair. This improves the model's performance. The effect of paraphrasing is discussed in Section 5.

_(ii) Co-reference resolution:_ As we aim to make use of every sentence in the passage to generate questions, some sentences are ineffective due to the presence of pronouns Ma et al. (2021). This problem is solved by implementing co-reference resolution, replacing pronouns in the paraphrased passages with the proper name of the referring noun.

_(iii) Information Extraction:_ OpenIE is applied on the paraphrased passages to generate extractions in the form of arguments and relations from natural language text Mausam (2016). Given a sentence \(w_{i}\) in the passage, \(\{w_{1},w_{2},w_{3},...,w_{N}\}\), OpenIE generates extractions \(\{T_{1},T_{2},T_{3},...,T_{M}\}\), where each extraction is in the form <subject, predicate, object>, namely triples. OpenIE has proven to be an efficient solution for downstream tasks such as complex question answering Khot et al. (2017).

_(iv) Question formation:_ OpenIE extractions produced from a passage are used to form questions as a synthetic training set for QA system fine-tuning.

_(v) Named entity filtering:_ Since triples extracted from a passage have different types of extractions, we select the triples that contain named entities in the answer. In other words, the subject (or object) is selected as an answer only if it is a named entity.

_(vi) Eliminating duplicate triples:_ One downside of open information extraction is the presence of duplicate or semantically redundant triples. Generating separate questions from similar or duplicate triples causes inferior performance in the EQA system model, hence redundant triples are sorted and the longest triple from the sort is selected as the single source for final question generation.

_(vii) Merging triples:_ Questions generated from the triples using the above methods result in simple and easy-to-answer questions. For robust model training, we generate more complex questions from multiple triples by grouping triples with the same subject or object.
For instance, if there are two triples of the form \(\{\langle s_{1},r_{1},o_{1}\rangle,\langle s_{2},r_{2},o_{2}\rangle\}\) and \(s_{1}=s_{2}\), we form a question-answer pair with "Wh + \(r_{1}\) + \(o_{1}\), \(r_{2}\) + \(o_{2}\)?" as the question and \(s_{1}\) (or \(s_{2}\)) as the answer. Each triple extracted from a paraphrased passage can form two questions, with either the subject or the object as the answer. When a subject is selected as the answer, the question is formulated as "Wh + relation + object?". Conversely, when an object is selected as the answer, the question generated is of the form "Wh + subject + relation?". "Wh" is the question word in these formulations, and the appropriate form is selected from a list based on the answer-entity type, as described earlier (a minimal sketch of this template-based formation is given below, after the implementation details).

## 4 Experimental Platform

DatasetsThe performance of our question generation method is evaluated in terms of Exact-Match (EM) and F-1 score using existing EQA datasets, namely the SQuAD v1.1 Rajpurkar et al. (2016) development set, and the NewsQA Trischler et al. (2016), BioASQ Tsatsaronis et al. (2015) and DuoRC Saha et al. (2018) test sets. SQuAD version 1.1 is acquired from the official version1 while the versions published by Fisch et al. (2019) are considered for the NewsQA, BioASQ, and DuoRC test sets. The more recent SQuAD v2.0 Rajpurkar et al. (2018) is considered unsuitable for our experiments as the synthetic training set does not contain unanswerable questions.

Footnote 1: [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)

Question GenerationWe take a relatively small subset of 30,000 passages from the Li et al. (2020) sampled Wikipedia dataset for question generation and for training the model. The pseudo-code for the proposed question generation technique is presented in Algorithm 1. Some of the questions resulting from this process can be grammatically incorrect. Since we rely on questions posed to the model during inference being in natural language with correct grammar, we experimented with introducing a grammar correction module in the pipeline to synthesize syntactically accurate questions, but later removed it due to the effect discussed in Section 5. Sourced Wikipedia passages are transformed into paraphrased passages with a pre-trained model2 based on the PEGASUS transformer Zhang et al. (2020). Pronouns in the paraphrased passage are replaced with the nouns they refer to. We used neuralcoref3 for this purpose, the spaCy implementation of pre-trained co-reference resolution based on reinforcement learning Clark and Manning (2016). OpenIE6 is used to extract <subject, predicate, object> triples from the pronoun-replaced paraphrased passages. OpenIE6 uses Iterative Grid Labeling and is based on BERT. A spaCy-based named-entity recognition (NER) module Honnibal et al. (2020) is used to generate a list of named entities from the passage. Named-entity recognition (NER) is particularly helpful for filtering triples and determining the answer-entity type for appropriate "wh" word selection. The simplest version of the "Wh" word is selected for a particular named entity based on Fabbri et al. (2020). Questions generated from this process are grammatically corrected using a RoBERTa-based Liu et al. (2019) grammar correction module named "GECToR" Omelianchuk et al. (2020). All models are applied from the above-mentioned sources out-of-the-box, namely with no domain-specific fine-tuning.
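To illustrate the "Wh + relation + object?" / "Wh + subject + relation?" templates described above, a minimal Python sketch of question formation from a single triple might look as follows. The entity-type lookup `ent_type_of` and the wh-word table are hypothetical stand-ins for the spaCy NER module and the wh-selection list.

```python
WH_BY_ENTITY = {"PERSON": "Who", "ORG": "What", "GPE": "Where",
                "LOC": "Where", "DATE": "When"}  # simplest "wh" per type

def questions_from_triple(subj, rel, obj, ent_type_of):
    """Form up to two question-answer pairs from one <subj, rel, obj>
    triple; an answer is kept only if it contains a named entity.
    `ent_type_of` is a hypothetical helper returning the NER label of
    a span (or None), e.g. backed by a spaCy pipeline."""
    pairs = []
    if ent_type_of(subj):   # subject as answer: "Wh + relation + object?"
        wh = WH_BY_ENTITY.get(ent_type_of(subj), "What")
        pairs.append((f"{wh} {rel} {obj}?", subj))
    if ent_type_of(obj):    # object as answer: "Wh + subject + relation?"
        wh = WH_BY_ENTITY.get(ent_type_of(obj), "What")
        pairs.append((f"{wh} {subj} {rel}?", obj))
    return pairs
```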
Footnote 2: [https://huggingface.co/tuner007/pegasus_paraphrase](https://huggingface.co/tuner007/pegasus_paraphrase)

QA fine-tuningWe use pre-trained BERT models from Devlin et al. (2019) as the baseline and fine-tune the models for the downstream QA task with the generated training data. The generated question and its context (the actual NL-passage that contains both the question and its answer) are represented as a single sequence, separated by different segment masks and the "[SEP]" token. The final linear layer of the model is trained to identify the start and end spans of the answer by computing the log-likelihood for each token. All experiments are performed on the uncased version of the BERT-base model with a learning rate of 3e-5, a maximum sequence length of 384, a batch size of 12, a document stride of 128 for 2 epochs, and a checkpoint at every 500 steps. The best checkpoint was selected by validating each against 5000 QA pairs randomly sampled from the synthetic training data. We use the Huggingface4 implementation for input tokenization, model initialization, and training. For comparison with the state-of-the-art EQA models, we also experimented on the BERT-large whole-word-masking version with the same training data. All models are trained and validated on a single NVIDIA Tesla A100 GPU.

Footnote 3: [https://spacy.io/universe/project/neuralcoref](https://spacy.io/universe/project/neuralcoref)

Footnote 4: [https://huggingface.co](https://huggingface.co)

## 5 Results and Discussion

The effectiveness of the question-answer data generated using the PIE-QG method is measured by training the BERT-base model and evaluating it against existing EQA development and test sets. The Exact Match (EM) and F-1 scores are selected as the metrics to evaluate the effectiveness of each component in the QA models. The initial set of questions is created using OpenIE, where the passage is directly used to form triples and generate questions as described in Section 3. The intuition behind using OpenIE is to generate multiple questions from a single passage. However, as previously described, such a simple-minded approach suffers from having pronouns as answers, ungrammatical questions, and a high degree of lexical similarity between passage and question, making most extracted triples suitable for word matching only.

Effect of ParaphrasingUsing paraphrased passages for question generation avoids lexical overlap with the passage and improves model performance. Ten different paraphrases are generated for each sentence in the passage using the PEGASUS (Zhang et al., 2020) paraphrase generation model. Jensen-Shannon Divergence (JSD) is calculated for each paraphrase against the original sentence. JSD computes a divergence score based on the word distributions of two sentences: a higher JSD value indicates a more divergent sentence, while a lower JSD value indicates higher lexical overlap. In our PIE-QG pipeline, the sentences with the highest JSD values are selected for question generation to make the questions syntactically different. Paraphrasing has a strong positive effect on the model, improving the EM and F-1 scores by at least 4% and 7% respectively on all evaluation sets.
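A small sketch of the JSD-based paraphrase selection described above, using bag-of-words distributions and scipy, is shown below; the whitespace tokenization is a simplifying assumption.

```python
from collections import Counter
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd(sentence: str, paraphrase: str) -> float:
    """Jensen-Shannon divergence between the bag-of-words
    distributions of a sentence and one of its paraphrases."""
    ta, tb = sentence.lower().split(), paraphrase.lower().split()
    vocab = sorted(set(ta) | set(tb))
    ca, cb = Counter(ta), Counter(tb)
    p = np.array([ca[w] for w in vocab], dtype=float)
    q = np.array([cb[w] for w in vocab], dtype=float)
    # scipy returns the JS distance, i.e. the square root of the divergence.
    return jensenshannon(p / p.sum(), q / q.sum()) ** 2

# Of the ten candidate paraphrases per sentence, the one with the
# highest jsd(...) against the original is kept for question generation.
```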
Effect of Co-reference ResolutionThe presence of pronouns in passages results in meaningless question-answer pairs. For instance, suppose _"Vaso Sepashvili (born 17 December 1969) is a retired Georgian professional footballer. He made his professional debut in the Soviet Second League B in 1990 for FC Aktyubinets Aktyubinsk"_ is the passage. This produces the triple _<He, made, his professional debut in the Soviet Second League B in 1990 for FC Aktyubinets Aktyubinsk>_. The relation and object form the question "Who made his professional debut in the Soviet Second League B in 1990 for FC Aktyubinets Aktyubinsk?" with the subject "He" selected as the answer. The best answer for this question is found co-referenced in the previous sentence, where the pronoun "He" refers to "Vaso Sepashvili". To address this, we alter the passage with co-reference resolution to replace all pronouns with the referring proper noun. The above sentence is changed in such a way that the extracted triple becomes _<Vaso Sepashvili, made, his professional debut in the Soviet Second League B in 1990 for FC Aktyubinets Aktyubinsk>_ and the ideal answer is selected. Pronouns are replaced with their referring nouns using this method to generate meaningful questions, while the original passage is retained for training the QA model. In this way, co-reference resolution has a positive impact on model performance, increasing the EM by 2%-6% across all the sets.

Effect of Named Entity FilteringWe use named-entity recognition (NER) to extract all named entities from the passage. To become a candidate for the question generation process, either the subject or the object of a triple must contain at least one named entity. This NER filtering method is beneficial to the model: it eliminates many impractical question-answer pairs from the training set and improves the overall Exact Match (EM) and F-1 score by 2%, except on NewsQA.

Effect of Filtering Identical TriplesSemantically similar triples are formed using OpenIE6 with a high degree of lexical overlap. Constructing questions from these triples causes question duplication and has the potential to deteriorate model performance and even result in over-fitting. To filter similar or duplicate triples, each triple is checked against the other triples extracted from the passage to discover lexical overlaps between them. If a triple formed as a sentence is a sub-string of another, the shorter is removed from the training set to avoid the production of redundant questions. From Figure 1, triples such as _<the deals, could violate, EU antitrust laws>_ and _<The European Commission, is worried, that the deals could violate EU antitrust laws>_ convey the same meaning with a high degree of lexical overlap, hence the former is removed. _Filtering identical triples_ in this way has a small but favorable impact on the model, as shown in the ablation summary in Table 1.

Effect of Merging TriplesA subject (or object) in a passage can exhibit relations to multiple objects (or subjects). Triples with common subjects are merged to form complex questions so that the QA model can learn complex relationships. Merging triples has a small but positive effect on model performance, improving EM by 1.2% and F-1 by 0.9%, as shown in Table 1.

Effect of Grammar CorrectionQuestions generated from the above process often contain grammatical errors, which can negatively impact model performance. We experimented with "GECToR"5, a grammar correction module that tags and corrects input questions with grammar errors. For instance, the question "What is is worried that the deals could violate EU antitrust laws?" is formulated. The repeated occurrence of the verb "is" is an obvious error.
The grammar correction module alters the question so that the final question is correctly formulated as "What is worried that the deals could violate EU antitrust laws?". Based on the heuristics presented in Table 1, all incremental upgrades up to "Merging Triples" improve the model performance, but grammar correction does not, and it is hence removed from the pipeline.

Footnote 5: [https://github.com/grammarly/gector](https://github.com/grammarly/gector)

Effect of Training Data SizeExperiments were conducted to measure the EM and F-score at different synthetic data sizes to identify the optimal number of training questions. Figure 3 presents the results of these experiments and reveals that PIE-QG achieves peak performance between 30K-50K training questions using the BERT-large model and begins to over-fit beyond that number. The same effect is also observed in (Fabbri et al., 2020). The method to determine the optimal number of training questions is to split the generated question-answer pairs into blocks of 10K each. These are then split into training and validation sets. At fixed points of 500 training steps, the validation set is measured against the QA model. This incrementally informs the process of when the model optimizes against the number of question-answer pairs used to train it. It is observed, as shown in Figure 3, that this occurs for each of the datasets in the range 30-50K. Increasing the number of template-styled training questions negatively affects the evaluation performance after a certain point because of memorisation of synthetic data patterns.

\begin{table} \begin{tabular}{l c c|c c|c c|c c|c} \multicolumn{1}{c}{} & \multicolumn{2}{c}{SQuAD1.1} & \multicolumn{2}{c}{NewsQA} & \multicolumn{2}{c}{BioASQ} & \multicolumn{2}{c}{DuoRC} \\ \hline **Fine-tuning Models** & **EM** & **F-1** & **EM** & **F-1** & **EM** & **F-1** & **EM** & **F-1** & **\#Training** \\ & & & & & & & & & **Contexts** \\ \hline _BERT-base_ & & & & & & & & & \\ Sentence Retrieval (Fabbri et al., 2020) & 46.1\(\dagger\) & 56.8\(\dagger\) & 20.1 & 31.1 & 29.4 & **38.1** & 28.8 & 35.0 & 45K \\ **PIE-QG (Ours)** & **48.6** & **58.7** & **21.8** & **32.5** & **29.6** & 37.5 & **34.3** & **40.1** & **20-28K** \\ \hline _BERT-large_ & & & & & & & & & \\ Cloze Translation (Lewis et al., 2019) \(\dagger\) & 45.4 & 55.6 & 19.6 & 28.5 & 18.9 & 27.0 & 26.0 & 32.6 & 782K \\ RefQA (Li et al., 2020) & 57.1 \(\dagger\) & 66.8 \(\dagger\) & 27.6 & 41.0 & 42.0 & 54.9 & 41.6 & 49.7 & 178K \\ + Iterative Data Refinement & **62.5 \(\dagger\)** & **72.6 \(\dagger\)** & **32.1** & **45.1** & **44.1** & **57.4** & **45.7** & **54.2** & 240K \\ **PIE-QG (Ours)** & 61.2 & **72.6** & 29.7 & 44.1 & 43.6 & 55.1 & 44.6 & 52.9 & **20-28K** \\ \hline \end{tabular} \end{table} Table 2: Comparison of PIE-QG with state-of-the-art unsupervised QA models. Note: Iterative refinement achieves the best performance through structural analysis of the corpus via citation and intra-document links, a model that requires \(8\times\) as many contexts as the PIE-QG model we propose; \(\dagger\) indicates results taken from the existing literature, and all other figures are evaluated with published synthetic training data or pre-trained models. “#Training Contexts” are measured based on the respective published synthetic datasets. Each model uses the same synthetic training data sourced from Wikipedia for fine-tuning and is evaluated against the standard EQA datasets.

Comparison with the State-of-the-Art Fabbri et al.
(2020) use a BERT-base model as the backbone for their experiments while Lewis et al. (2019) and Li et al. (2020) employed the BERT-large whole word masking pre-trained model. Questions generated from the PIE-QG model performed better than the information retrieval-based method presented by Fabbri et al. (2020) and produced an absolute improvement of 2.5% on EM and 1.9% on F-1 on the SQuAD 1.1 development set. Comparing BERT-large models, the PIE-QG model outperforms citation retrieval-based RefQA, a method that involves dependency tree reconstruction. However, RefQA, which includes a refinement technique, achieves the best performance, with a 1-2.5% higher F-1 score than that of PIE-QG, but at the cost of using \(8\times\) more passages and \(10\times\) more training questions. Also, refinements in RefQA are performed on the training data through iterative cross-validations on the SQuAD 1.1 development set, whereas the PIE-QG model does not involve such a process. The number of passages and questions used by each method is presented in detail in Table 3. PIE-QG outperforms retrieval-based question generation on every dataset and produces comparable performance with RefQA with \(8\times\) fewer passages. To summarise, the experimental results demonstrate the advantages of the PIE-QG method: 1. Paraphrasing the original passage eliminates the need for external knowledge sources to avoid lexical overlap; 2. Multiple questions generated using OpenIE with our proposed method minimize the requirement of a large corpus without sacrificing performance. Figure 3: Evaluation of the PIE-QG model F-score for different datasets against the number of questions in the training set using the BERT-large model; the optimal number for each dataset is in the range 30-50K. \begin{table} \begin{tabular}{l l l} \hline **System** & **\#Contexts** & **\#Questions** \\ \hline Fabbri et al. (2020) & 45K & 50K \\ RefQA Li et al. (2020) & 178K & 300K \\ + IDR & 240K & 480K \\ PIE-QG & 20-28K & 30-50K \\ \hline \end{tabular} \end{table} Table 3: Comparison of statistics of the synthetic training data generated by existing unsupervised question generation methods with PIE-QG. ## 6 Limitations The downside of the PIE-QG unsupervised question generation pipeline is the use of external modules like paraphrasing, OpenIE, and NER, which may not exist in languages other than English. The quality of question-answer pairs generated to train the QA model is therefore dependent on the performance of these modules on the selected corpus. It is, however, anticipated that PIE-QG will perform similarly well on any English language corpus. It is future work to apply these modules within the PIE-QG pipeline to other languages where comparable language-specific models can be sourced and performance outcomes analyzed. ## 7 Conclusion With no reliance on any external reference corpora, the PIE-QG model uses paraphrasing and Open Information Extraction (OpenIE) to generate synthetic training questions for fine-tuning the language model in a QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from paraphrased passages, and questions are formed with subjects (or objects) as answers. Pronoun co-referents are resolved and, where possible, triples are merged, and duplicate and highly similar triples are removed. Furthermore, triples that do not contain named entities are eliminated. The PIE-QG pipeline results in a high-quality question-answer training set that informs the QA model. 
Using the PIE-QG pipeline results in a QA model that achieves performance comparable to the state of the art while using significantly fewer passages. It is only narrowly outperformed by RefQA, an approach that uses iterative data refinement and therefore relies on the citation structure of corpora and \(10\times\) more training questions.
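As a concrete illustration of the triple-level steps described above — NER filtering, identical-triple removal, and template-style question formation — the following minimal Python sketch reproduces the paper's Figure 1 example. It assumes spaCy with the `en_core_web_sm` model is available; the triple representation and helper names are our own illustration, not the authors' code.

```python
# Minimal sketch of PIE-QG's triple filtering and question formation.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def has_named_entity(text: str) -> bool:
    """NER gate: a triple qualifies only if subject or object mentions an entity."""
    return len(nlp(text).ents) > 0

def filter_triples(triples):
    # 1) NER filtering: subject or object must contain at least one named entity.
    kept = [t for t in triples if has_named_entity(t[0]) or has_named_entity(t[2])]
    # 2) Identical-triple filtering: drop a triple whose sentence form is a
    #    sub-string of another triple's sentence form (the shorter is removed).
    sentences = [" ".join(t) for t in kept]
    deduped = []
    for i, (t, s) in enumerate(zip(kept, sentences)):
        if not any(i != j and s in other for j, other in enumerate(sentences)):
            deduped.append(t)
    return deduped

def triple_to_qa(triple):
    """Form a 'Who/What <relation> <object>?' question with the subject as answer."""
    subj, rel, obj = triple
    wh = "Who" if any(e.label_ == "PERSON" for e in nlp(subj).ents) else "What"
    return f"{wh} {rel} {obj}?", subj

triples = [
    ("The European Commission", "is worried",
     "that the deals could violate EU antitrust laws"),
    ("the deals", "could violate", "EU antitrust laws"),  # sub-string: removed
]
for t in filter_triples(triples):
    print(triple_to_qa(t))
# ('What is worried that the deals could violate EU antitrust laws?',
#  'The European Commission')
```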
2310.08067
GameGPT: Multi-agent Collaborative Framework for Game Development
The large language model (LLM) based agents have demonstrated their capacity to automate and expedite software development processes. In this paper, we focus on game development and propose a multi-agent collaborative framework, dubbed GameGPT, to automate game development. While many studies have pinpointed hallucination as a primary roadblock for deploying LLMs in production, we identify another concern: redundancy. Our framework presents a series of methods to mitigate both concerns. These methods include dual collaboration and layered approaches with several in-house lexicons, to mitigate the hallucination and redundancy in the planning, task identification, and implementation phases. Furthermore, a decoupling approach is also introduced to achieve code generation with better precision.
Dake Chen, Hanbin Wang, Yunhao Huo, Yuzhao Li, Haoyang Zhang
2023-10-12T06:31:43Z
http://arxiv.org/abs/2310.08067v1
# GameGPT: Multi-agent Collaborative Framework for Game Development ###### Abstract The large language model (LLM) based agents have demonstrated their capacity to automate and expedite software development processes. In this paper, we focus on game development and propose a multi-agent collaborative framework, dubbed GameGPT, to automate game development. While many studies have pinpointed hallucination as a primary roadblock for deploying LLMs in production, we identify another concern: redundancy. Our framework presents a series of methods to mitigate both concerns. These methods include dual collaboration and layered approaches with several in-house lexicons, to mitigate the hallucination and redundancy in the planning, task identification, and implementation phases. Furthermore, a decoupling approach is also introduced to achieve code generation with better precision. ## 1 Introduction Artificial intelligence's applications in game development can be traced back to classic games such as _Starcraft_ and _Diablo_[1; 2; 3]. Developers have consistently required AI systems for crafting interactive virtual worlds and characters. These systems have become standard in the development of such interactive platforms. Early game AI research emphasized controlling non-player characters (NPCs) and pathfinding [4]. With the advancement of natural language processing (NLP), some pioneering works that focus on generating levels using deep learning techniques have emerged. A representative is MarioGPT [5], which successfully generates levels in _Super Mario Bros_ by fine-tuning GPT2 [6]. Recently, transformer-based large language models (LLMs) have achieved substantial advancements, making notable strides in both natural language processing and computer vision [7; 8; 9; 10; 11; 12]. The training of LLMs is a multi-phase process. The initial phase involves training these models on extensive corpora, fostering the acquisition of fundamental language capabilities. In the subsequent phase, which is of considerable significance, the models are fine-tuned via data for a diverse set of NLP tasks that are delineated through instructions [13; 14; 15; 16]. This instruction tuning enhances the models' ability to generalize across a wide range of applications, leading to noteworthy zero-shot performance on unseen tasks. Lastly, the reinforcement learning from human feedback (RLHF) phase guarantees the models' structural integrity and reliability [17]. More importantly, this phase also grants the model the capacity to generate content that emulates human style, thereby enhancing its versatility as an agent. Moreover, the advancement of LLMs has catalyzed the utilization of agents in automating software development processes. Various studies have explored the deployment of a single LLM-based agent to perform diverse tasks. AutoGPT, for instance, employs an LLM agent to tackle real-world decision-making tasks [18], while HuggingGPT employs a single LLM as a controller to orchestrate the completion of complicated AI tasks [19]. Despite these approaches relying on a sole LLM agent, they incorporate reviewer roles to refine decision-making. In AutoGPT, a secondary opinion is obtained from a supervised learner to augment performance, and HuggingGPT integrates GPT-4 as a critic to evaluate decision accuracy. Furthermore, multiple works utilize multiple agents in their frameworks to make LLMs competent for complex tasks [20, 21, 22, 23, 24]. 
MetaGPT [21] introduces a multi-agent framework, which can be used for automating the development of various software. CHATDEV [20] presents a novel software development framework that harnesses agents to enhance collaboration among the various roles involved in the software development process. When employing LLM agents for automated software development, these studies encounter inherent limitations associated with LLMs, notably the issue of hallucination. This challenge manifests particularly during the planning and code generation phases [19, 20]. Distinct from generic software development, the game development industry operates under a stringent demand to keep up with the trends, necessitating heightened precision and conciseness throughout the development process for optimal efficiency. Moreover, tuning and employing one single LLM to serve the whole development cycle of game development without hallucination and with high precision is impractical and costly. As a result, the framework requires multiple critic and reviewer roles to effectively mitigate the hallucinatory tendencies inherent in language models. Furthermore, in the context of game development, we identify an additional limitation of LLMs, that of redundancy. Particularly in the game development domain, LLMs can generate unnecessary and uninformative tasks or code snippets. To effectively address both hallucination and redundancy, the GameGPT framework strategically employs a combination of approaches, including dual collaboration, instruction tuning through in-house lexicons, and code decoupling. Notably, dual collaboration involves the interaction between LLMs and small expert deep learning models, alongside the collaborative engagement between execution roles and review roles. These synergistic mechanisms have empirically demonstrated their effectiveness in mitigating both hallucination and redundancy within the framework. In summary, our contributions are as follows: * We introduce an innovative multi-agent framework tailored to facilitate automated game development. * Beyond hallucination, we identify the issue of redundancy inherent to LLM-based agents in the context of game development. * To address the hallucination and redundancy concerns of LLMs within game development, several mitigations including dual collaboration and code decoupling are proposed. * Empirical results demonstrate GameGPT's capability in effective decision-making and decision-rectifying throughout the game development process. ## 2 GameGPT ### Overview The GameGPT framework is designed as a specialized multi-agent system for game development. To address the limitations of the LLM and the temporal constraints of game development, we integrate multiple agents with distinct roles into the framework. This integration aims to enhance precision and scalability. The scalability aspect of GameGPT offers the potential to create games of medium to large sizes. Moreover, GameGPT operates in a collaborative manner, exhibiting a dual collaboration approach. Firstly, it involves cooperation between the LLMs and smaller expert models dedicated to specific tasks, thereby enhancing the decision-making process. Secondly, collaboration occurs among agents assigned different roles, contributing to decision rectification and minimizing the hallucination of LLMs. 
Figure 1 provides an overview of the proposed GameGPT framework, which operates through five distinct stages: game development planning, task classification, code generation, task execution, and result summarization. Upon receiving a client's request, the game development manager initiates the game development planning stage, resulting in the creation of a task list. Subsequently, game development engineers utilize smaller expert models to accurately determine the task type and its associated parameters. Following this, game engine engineers proceed to generate code and scripts in alignment with the designated game engine. Throughout the initial three stages, three critics are incorporated to mitigate concerns related to hallucination and redundancy. Concluding these stages, the game engine testing engineer undertakes the execution of tasks and subsequently produces a comprehensive result summary. Figure 1: Overview of the proposed framework ### Multi-agent Framework In GameGPT, each agent maintains a private memory system and can access the shared public discussion to acquire the necessary information for guiding their decision-making process. For agent \(i\) at time step \(t\), this process can be formally represented as follows: \[p_{\theta_{i}}(O_{it}|M_{it},P_{t}), \tag{1}\] where \(p_{\theta_{i}}\) corresponds to the LLM or an expert model associated with agent \(i\), \(O_{it}\) denotes the output or deliverable produced by the agent \(i\) at time step \(t\), and \(M_{it}\) and \(P_{t}\) refer to its private memory and the requisite public record up to time step \(t\), respectively. The presence of multiple agents with distinct roles is crucial in GameGPT due to the unique characteristics of the game development industry and the limitations of LLMs. Given that game development cycles typically span several months, relying on a solitary agent with comprehensive memory and contextual information would render language models, including LLMs, inefficient. This approach could also result in scalability challenges as projects become more complicated over time. Furthermore, considering the limitations on the number of tokens processed by LLMs, employing a solitary agent with an all-encompassing memory for large-scale game development projects is not pragmatic. Additionally, the inherent issues of hallucination and redundancy observed in LLMs underscore the significance of collaboration among multiple agents, particularly those with critic roles. This collaboration is instrumental in mitigating the challenges posed by LLM hallucination and redundancy. GameGPT utilizes a diverse set of roles to facilitate its operations, including responsibilities across the game development cycle. These roles comprise the game content designer, game development manager, plan reviewer, game development engineer, task reviewer, game engine engineer, code reviewer, and game engine testing engineer. Each of these roles works on distinct missions throughout the game development process. ### Game Development Planning The initial phase of GameGPT, upon receiving a user request, involves generating a task plan. This planning phase stands as one of the pivotal steps, greatly influencing the seamless progression of the entire development process. Orchestrated by an LLM-based game development manager, this phase entails proposing an initial plan that is subsequently disassembled into a list of tasks. 
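The agent abstraction of Eq. (1) — a backbone model conditioned on the agent's private memory plus the shared public record — can be sketched concretely. In the following minimal Python sketch, the class, method, and role names are illustrative, and the backbone LLM is stubbed; it is not GameGPT's actual implementation.

```python
# Minimal sketch of Eq. (1): agent i produces output O_it from its private
# memory M_it and the public record P_t via a backbone model p_theta_i.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    role: str                        # e.g. "game development manager"
    backbone: Callable[[str], str]   # p_theta_i: prompt -> output
    memory: List[str] = field(default_factory=list)  # private memory M_i

    def act(self, public_record: List[str]) -> str:
        # Condition the model on private memory plus the shared record.
        prompt = "\n".join(self.memory + public_record)
        output = self.backbone(prompt)
        self.memory.append(output)   # memory stays private to this agent
        return output

def stub_llm(prompt: str) -> str:
    return f"[output derived from {len(prompt.splitlines())} context lines]"

public_record: List[str] = ["User request: build a 2D platformer level."]
manager = Agent("game development manager", stub_llm)
reviewer = Agent("plan reviewer", stub_llm)

plan = manager.act(public_record)
public_record.append(plan)              # deliverables enter the public record
feedback = reviewer.act(public_record)  # reviewer sees the record, not the manager's memory
print(plan, feedback, sep="\n")
```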
Notably, due to the limitations inherent in LLMs, this initial plan often exhibits instances of hallucinations, which give rise to unexpected tasks, and includes redundant tasks that are uninformative or unnecessary. To address these challenges, we present four strategies designed to alleviate these concerns. All four strategies are orthogonal to each other and can be layered to achieve better effectiveness. Game Genre Classification with TemplateThe first strategy is to perform classification for the incoming request, aimed at discerning the genre intended for the game. Presently, the GameGPT framework accommodates development for five distinct game genres, namely action, strategy, role-playing, simulation, and adventure. For each genre, we provide a standardized plan template, guiding the game development manager in completing the template with relevant information. By adopting this approach, the frequency of redundant tasks is notably reduced, while simultaneously mitigating the likelihood of hallucination occurrences. Plan ReviewThe second strategy involves the engagement of a plan reviewer, which is another LLM-based agent. This plan reviewer operates through strategically crafted prompts, facilitating a comprehensive review of the task plan. Its primary objective is to minimize occurrences of hallucination and redundancy. The plan reviewer assesses the plan and furnishes feedback, aiming to refine and enhance its precision, efficiency, and conciseness. The insights provided by the plan reviewer serve as input to the game development manager, empowering the shaping of a task plan that is notably more accurate and refined. Instruction Tuning the Language Model for PlanningThe third approach aims to tune and tailor the LLM of the game development manager for game development planning through specialized instructions. This fine-tuning process endeavors to yield a plan that is both more accurate and concise. To facilitate this, we collect and consolidate an in-house dataset comprising numerous input-output pairs. While these pairs do not conform to a standardized format concerning length or structure, they uniformly revolve around requests for game development. The corresponding outputs are supplied by adept game development practitioners. By adopting this approach, we effectively bridge the gap between the LLM's general linguistic capabilities and its aptitude for game development planning. User Presentation and RectificationThe fourth and final strategy serves as a safety net within the planning phase. Throughout the planning process, the game development manager consistently shares interim outcomes with the users on the frontend interface, enabling them to remain informed about ongoing developments. To augment this, an interactive method is integrated, empowering users to actively review, rectify, and enhance the plan in accordance with their expectations. This approach safeguards alignment between the devised plan and the users' desires. ### Game Development Task Classification The process of task classification within GameGPT demands high accuracy in identifying both the task type and its corresponding arguments. Consequently, to ensure the accuracy of this phase, an agent in the game development engineer role is allocated. This role is supported by the utilization of two expert models, which collaboratively engage in task classification. This collaborative approach enhances the accuracy and effectiveness of task and argument identification. 
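Returning to the first planning strategy above (genre classification with templates), a minimal sketch follows. The genre names match the paper, but the templates, the keyword-rule classifier standing in for the actual model, and all function names are illustrative assumptions.

```python
# Minimal sketch of the genre-template strategy: classify the incoming
# request into a supported genre, then fill that genre's plan template.
# Constraining the manager to a fixed template is what curbs redundant
# and hallucinated tasks.
PLAN_TEMPLATES = {
    "action":   ["design combat system for: {req}", "build level layout for: {req}"],
    "strategy": ["design resource system for: {req}", "build map generator for: {req}"],
}

def classify_genre(request: str) -> str:
    # Stand-in for the genre classifier; a keyword rule replaces the model here.
    return "action" if "fight" in request.lower() else "strategy"

def draft_plan(request: str):
    genre = classify_genre(request)
    return [step.format(req=request) for step in PLAN_TEMPLATES[genre]]

print(draft_plan("A medieval city-building game with trade routes"))
```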
Task Classifier and Task Argument IdentifierTo circumvent the hallucination of language models and enhance the accuracy of the task classification, we provide a list of possible types of tasks in game development. In order to perform the classification, a BERT model is employed to effectively categorize each task. The BERT model has been trained with an in-house dataset. This dataset contains data entries uniquely tailored to the tasks of game development. The input is a task drawn from the predetermined list, while the output corresponds to the task's designated category. Identifying the argument involves another LLM. The agent provides a template that corresponds to the identified task type and subsequently prompts the LLM to populate this template. The incorporation of the template can elevate the accuracy of argument identification and significantly reduce hallucination. Task Type and Argument ReviewIn this phase, a task reviewer agent is employed. It is prompted to double-confirm that the identified type and arguments of each task are reasonable. The review process includes checking whether the task type is within the predetermined range and is the best fit for the corresponding task. It also involves reviewing the argument list to see if it aligns with the task. In some scenarios where inferring the argument based on the contextual task information and the user's request is infeasible, GameGPT adopts a proactive approach. The task reviewer engages the user by initiating a prompt on the frontend interface and requests additional information necessary for the argument. This interactive method ensures the completeness of argument details even in instances where automated inference falls short. Task Dependency and Execution SequenceThe task reviewer agent is also responsible for discerning task dependencies and constructing a directed acyclic graph that encapsulates these relationships. Subsequent to the establishment of this graph, a breadth-first search algorithm is employed to traverse it. The traversal yields a determined sequence for task execution. This process ensures an orderly and systematic execution of tasks in accordance with their dependencies, resulting in a coherent and structured development progression. ### Code generation Generating lengthy code scripts with an LLM inherently carries a greater risk of encountering hallucinations and redundancy. In response, we introduce a novel approach to decoupling the code for game design, simplifying the inference process for the LLM and consequently mitigating both hallucination and redundancy. Decouple the Script for Game Development The hallucination and redundancy tend to be more frequent when generating lengthy scripts. In response, our proposed framework introduces a novel decoupling approach specifically designed for game development, aimed at separating the Lua script. To achieve this, we strategically divide the expected script into numerous code snippets of manageable length for LLM processing. This decoupling approach significantly eases the work of the LLM and thereby mitigates the occurrence of hallucination and redundancy. Post Decoupling In-context InferenceIn [10], an effective inference method called in-context learning is proposed to mitigate hallucination. In GameGPT, we adopt a similar post-training strategy built upon our decoupling method. As our decoupling approach breaks down task-related code into smaller code snippets, we no longer depend on lengthy example scripts for inference. 
Instead, we incorporate multiple example snippets into the prompt, effectively reducing both hallucination and redundancy. Candidate selectionMoreover, another technique integrated into GameGPT to counteract hallucinations involves generating a set of \(K\) code snippets for each task. These snippets are subsequently tested within a virtual environment and simultaneously presented to the user. Both the testing process and user feedback are leveraged to identify and eliminate problematic candidates, leaving only the most viable option for execution. This approach serves to further minimize the occurrence of hallucinations. Instruction TuningFurthermore, we have collected an in-house lexicon comprising an extensive repository of code snippets designed for game development. Each of these snippets is annotated by labelers, providing clear instructions that specify their intended purpose. This high-quality lexicon serves as a valuable resource for fine-tuning our model. Code Review and EnhancementFollowing the code generation by the game engine engineer, a code reviewer agent is engaged to thoroughly review and inspect the codebase. The code reviewer performs a comprehensive assessment, actively seeking out any instances of deviation from the original request or unintended hallucinations present within the code. Upon thorough scrutiny, the code reviewer not only flags potential discrepancies but also furnishes recommendations for refining the code, ultimately yielding a more reasonable version. Subsequent to the review process, the revised code, along with the code reviewer's feedback, is shared with both the game engine engineer and the user through the frontend interface. If the user deems it necessary, they can provide suggestions for code revision directly via the frontend interface. These suggestions are relayed to the code reviewer, who subsequently assesses and incorporates them as appropriate, fostering a collaborative and iterative approach to code enhancement. ### Game Development Task Execution and Result Summary Once the code generation and enhancement are finished, the responsibility transitions to a game engine testing engineer, tasked with executing the generated tasks. During this phase, the testing engineer adheres to the execution sequence formulated in the preceding stage. The execution process involves sending the code of each individual task to the game engine. Subsequently, the testing engineer performs the execution and keeps track of the logs during the execution. Upon the completion of all tasks specified in the execution sequence, the testing engineer consolidates all the logs generated throughout the execution process. This compilation results in the creation of a succinct and comprehensive summary, which is then presented to the user through the frontend interface. Additionally, the testing engineer also identifies and reports any observed tracebacks during the execution. These tracebacks serve as critical indicators that may necessitate adjustments to the execution or code, enabling the refinement of the overall process and contributing to the generation of a polished end product.
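The task-ordering step described in the task classification phase — building a dependency DAG and traversing it breadth-first to obtain an execution sequence — can be sketched directly. A minimal Python version (Kahn's algorithm, which is the standard breadth-first formulation) follows; the task names are invented for illustration.

```python
# Minimal sketch of deriving an execution sequence from a task-dependency
# DAG via breadth-first traversal (Kahn's algorithm). Every task runs only
# after all of its prerequisites.
from collections import deque

def execution_order(tasks, depends_on):
    """tasks: list of task ids; depends_on: {task: [prerequisite tasks]}."""
    indegree = {t: len(depends_on.get(t, [])) for t in tasks}
    children = {t: [] for t in tasks}
    for task, prereqs in depends_on.items():
        for p in prereqs:
            children[p].append(task)
    # Breadth-first traversal starting from tasks with no prerequisites.
    queue = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                queue.append(c)
    if len(order) != len(tasks):
        raise ValueError("dependency cycle: the graph is not acyclic")
    return order

tasks = ["create_scene", "spawn_player", "spawn_enemies", "attach_ai"]
depends_on = {
    "spawn_player": ["create_scene"],
    "spawn_enemies": ["create_scene"],
    "attach_ai": ["spawn_enemies"],
}
print(execution_order(tasks, depends_on))
# ['create_scene', 'spawn_player', 'spawn_enemies', 'attach_ai']
```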
2310.07354
Give and Take: Federated Transfer Learning for Industrial IoT Network Intrusion Detection
The rapid growth in Internet of Things (IoT) technology has become an integral part of today's industries forming the Industrial IoT (IIoT) initiative, where industries are leveraging IoT to improve communication and connectivity via emerging solutions like data analytics and cloud computing. Unfortunately, the rapid use of IoT has made it an attractive target for cybercriminals. Therefore, protecting these systems is of utmost importance. In this paper, we propose a federated transfer learning (FTL) approach to perform IIoT network intrusion detection. As part of the research, we also propose a combinational neural network as the centerpiece for performing FTL. The proposed technique splits IoT data between the client and server devices to generate corresponding models, and the weights of the client models are combined to update the server model. Results showcase high performance for the FTL setup between iterations on both the IIoT clients and the server. Additionally, the proposed FTL setup achieves better overall performance than contemporary machine learning algorithms at performing network intrusion detection.
Lochana Telugu Rajesh, Tapadhir Das, Raj Mani Shukla, Shamik Sengupta
2023-10-11T10:11:54Z
http://arxiv.org/abs/2310.07354v1
# Give and Take: Federated Transfer Learning for Industrial IoT Network Intrusion Detection ###### Abstract The rapid growth in Internet of Things (IoT) technology has become an integral part of today's industries forming the Industrial IoT (IIoT) initiative, where industries are leveraging IoT to improve communication and connectivity via emerging solutions like data analytics and cloud computing. Unfortunately, the rapid use of IoT has made it an attractive target for cybercriminals. Therefore, protecting these systems is of utmost importance. In this paper, we propose a federated transfer learning (FTL) approach to perform IIoT network intrusion detection. As part of the research, we also propose a combinational neural network as the centerpiece for performing FTL. The proposed technique splits IoT data between the client and server devices to generate corresponding models, and the weights of the client models are combined to update the server model. Results showcase high performance for the FTL setup between iterations on both the IIoT clients and the server. Additionally, the proposed FTL setup achieves better overall performance than contemporary machine learning algorithms at performing network intrusion detection. Internet of Things, Industrial IoT, Network Intrusion Detection, Transfer Learning, Federated Learning, Cybersecurity 
Privacy regulations such as the General Data Protection Regulation (GDPR) do not allow the collection of data without the consent of data subjects. Collection of IoT device data can be challenging as this data can contain sensitive and private user data. For instance, a personal voice assistant might contain overheard personal information from a user's conversation. Additionally, another limitation is the willingness to share the data with relevant individuals and organizations. To maintain confidentiality, organizations prohibit the sharing of their data with the outside world. In some organizations, data sharing is restricted to a subset of individuals who work directly with the data and is kept hidden from other employees. To ensure security, certain practices restrict data sharing across international borders. Therefore, finding a data source collected from IIoT devices is a non-trivial task due to the restrictions placed on data-sharing practices by organizations. The second limitation lies with the traditional ML algorithms that have been investigated to create ML-based NIDS for IIoT. Due to the high-dimensional nature of IoT data, existing ML algorithms tend to lose their efficiency. Also, most IoT data is collected from heterogeneous sources like water sensors and soil moisture sensors. This can make it difficult to create a comprehensive ML algorithm capable of analyzing the different robust statistics and variances of the data sources [3]. Additionally, developing ML or deep learning (DL) techniques to perform IIoT intrusion detection is challenging as new attack types and vectors are getting discovered frequently. These new attack types are not being effectively detected by existing ML and DL methods and can also require more computation power to operate [4]. To address the issue of data availability, federated learning (FL) has been seen as an effective mechanism. FL helps to build personalized models for IIoT devices that do not violate user and organizational privacy through novel training methods. In addition, FL provides a privacy protection scheme that utilizes the computing resources of the IIoT device to train the models, thereby preventing information leakage during transmissions [5]. Similarly, to address the limitation of heterogeneous IIoT data sources, transfer learning (TL) can be an effective solution. 
TL focuses on transferring knowledge from previously learned IIoT data to another domain to reduce computational complexity and redundant training of ML models [6]. Through the usage of FL and TL, ML-NIDS can be generated that protect IoT user privacy and can also accommodate ML algorithms that account for heterogeneous data sources. In this paper, we propose a novel federated transfer learning (FTL) approach to perform IIoT network intrusion detection. As the centerpiece for our proposed approach, we propose a novel combinational neural network (Combo-NN) architecture to perform effective intrusion detection. For this analysis, we utilize a comprehensive dataset consisting of data collected from diverse IIoT devices. The main contributions of this work include: * Performing data processing of the IIoT-based network intrusion detection dataset to enhance performance. * Proposing an FTL-based approach to detect data breaches in IIoT environments, and introducing a novel neural network approach to perform the FTL. * Analyzing and evaluating the performance of the proposed technique with traditional ML approaches. The rest of the paper is organized as follows: Section II introduces the related work for this research. Our proposed methodology is shown in Section III. Experimentation, results, and analysis are provided in Section IV. Finally, conclusions are drawn in Section V. ## II Related Work The development of network intrusion detection approaches has been a critical component in protecting today's networking infrastructures. With the emergence of IoT, these infrastructures need security contingencies now more than ever. Contemporary IoT NIDS centered on the usage of signature-based mechanisms [7][8][9]. In [7], the authors proposed a signature-based NIDS that involved the usage of both centralized and distributed intrusion detection models. The work in [8] developed CBSigIDS, a generic framework of collaborative blockchain signature-based intrusion detection systems, which can incrementally build and update a trusted signature database in a collaborative IoT environment. Researchers in [9] demonstrated the use of an optimized pattern recognition algorithm to detect such attacks within IoT. The limitation of signature-based methods is that they are static in operational capability and can be circumvented if attackers introduce a level of variability in their malware. With the rise of ML, ML-based NIDS has also become a trending technology to protect IoT systems [10][11][12][13][14]. The authors in [10] used a deep neural network to model various NIDS datasets to assess their performance at detecting attacks. This work did not take into account the impact of class imbalance on classifier performance. The work in [11] proposed a bi-directional long short-term memory neural network on a NIDS dataset to decipher between normal and attack conditions. The limitation of this work lay in the small size of the dataset that was used to create the model. Additionally, the trade-off between the decision parameters was not considered. Researchers in [12] applied ML to various open-sourced NIDS datasets. This work did not consider any fog or edge computing devices, which is not representative of real-life practices. In [13], the researchers observed multiple ML models on an IoT dataset. Though this work formed the basis of contemporary research for NIDS for IoT environments, the authors did not perform any model hyperparameter optimization, which impacts performance. 
In [14], the authors evaluated six ML models for detecting MQTT-based IoT attacks. A limitation of this work is that it only observed performance after training on an MQTT protocol-based dataset. Additionally, to preserve privacy efforts in IoT, the usage of FL on IoT has seen some attention from researchers [15][16][17]. In [15], the authors performed centralized FL on an IIoT dataset. The work done in [16] proposed an FL-based NIDS that trained on multiple views of IoT network data in a decentralized format to detect, classify, and defend against attacks. Lastly, the researchers in [17] proposed an FL-based intrusion detection system for securing agricultural-IoT infrastructures using local learning. The limitation of these works is that they do not exploit the advantages of a transfer learning-based FL setup, which is the primary aim of this paper. ## III Methodology The proposed approach for the FTL algorithms consists of a client-server architecture, where the models generated from the client data are utilized to update the server model in the setup. The steps of this proposed methodology are illustrated in Figure 2. ### _IoT Dataset_ The proposed approach has wide IIoT applications and is agnostic to dataset features. For this research, any open-sourced IoT dataset can be utilized for analysis. Let the dataset be denoted using the variable \(Z\), consisting of \(E\) total samples. \(z_{i}\) represents a single data sample and \(z_{i}\in Z,i\in\{1,\ldots,E\}\). After the aggregation of the dataset, the first step is to perform data pre-processing, where redundant and/or incomplete information from the datasets is eliminated so as not to hamper the performance of any trained ML model. Additionally, we also perform encoding in this step to ensure that no categorical features are being fed into the ML model. Once pre-processed, feature selection is performed to select the features that optimally affect the performance of the trained ML model. Following this, we split the pre-processed and feature-selected dataset into train and test samples. Finally, we parse the trained models into our FTL module. ### _Pre-process_ In this stage, we evaluate the contents of the acquired datasets. Any features that seem to have redundancy or do not actively help in the classification task are removed. Examples of such features include data features where all values are the same in magnitude regardless of the labels associated with that data sample. Additionally, we also evaluate the dataset features and eliminate columns that consistently have "infinity" or "NaN" type values. These inputs will not be accepted by an ML classifier and can hamper system performance if used. Additionally, we also perform feature encoding in this stage, where categorical features are encoded to have numerical values to ensure that these inputs are accepted by ML classifiers. After data pre-processing, the original dataset \(Z\) is now denoted by \(Z^{\prime}\). ### _Feature Scaling_ Once the dataset \(Z^{\prime}\) has been pre-processed, it moves into the feature scaling stage. The aim here is to identify dataset features that can be essential to the classification task and prioritize them, along with scaling them to fit within an operational range. To begin, we employ a correlation matrix. This is an algebraic and statistical tool that can be utilized to observe the relationship between two variables in a dataset. A correlation score is given for each pair of features, which helps us to determine features that are not important. 
If it is positive, the features are directly correlated. If it is negative, the features are inversely correlated. The higher the magnitude of the score, the higher the correlation [18]. We parse the dataset \(Z^{\prime}\) through the correlation matrix to unearth which data features are essential to the classification task. After analysis of the resultant matrix, the features with the lowest correlation scores get dropped. The threshold for dropping features is dependent on the variance of the correlation scores for the data features. These features get dropped as they result in higher cardinality for the dataset, while not providing any valuable insights into the data. Finally, we scale the resultant features to ensure that all feature magnitudes fit within a prescribed range. This makes sure that any trained model is not biased towards features that have higher magnitudes. This tactic is essential in datasets that have imbalanced magnitudes. After the feature scaling step, the resultant dataset is denoted by \(X\). ### _Data Split_ After data collection, pre-processing, and feature selection, the full data suite \(X\) is partitioned into the training \(X_{train}\) and test \(X_{test}\) sets. We assume the total number of training samples is \(U\), while the total number of testing samples is \(V\). It is also ensured that \(U>V\). ### _Federated Transfer Learning_ Once the data set has been split appropriately, the FTL approach can commence. In this situation, we are utilizing the combination of a deep neural network (DNN) and a convolutional neural network (CNN) as the base for our FTL in the IIoT setup. We shall refer to this machine learning architecture as a combinational neural network (Combo-NN). To conduct our FTL, we split the existing training data \(X_{train}\) into two parts: the client data, denoted by \(X_{c}\), and the server data, denoted by \(X_{s}\). The proportion for splitting the data depends on multiple IoT characteristics like the size of the network and the number of devices that are in the IoT setup. Next, the server trains a Combo-NN with \(X_{s}\). Once trained, the server models get deployed to all the IIoT clients. Within each client, the trained Combo-NN model gets retrained with the local client data. Once all the models are trained, clients send the models back to the server. The server performs federated averaging on these models and deploys the resultant model back to all the clients, and the procedure continues periodically till convergence. An overview of the FTL procedure is illustrated in Figure 3. The following sections highlight the important steps in the proposed approach. Fig. 2: Proposed technique for the FTL Setup #### III-E1 Client Data The client data, \(X_{c}\), is then further split between all the clients in the IIoT FTL setup. Here we denote the number of IoT devices by \(N\), each of which gets a subset of the client data. The local client data for each IoT client is denoted by \(X_{c_{i}}\), where \(i\) represents an IoT client in the infrastructure. #### III-E2 Server Data and Model Training As mentioned above, the training data is also split into the server data, denoted by \(X_{s}\). The server data is essential as this forms the basis for training our proposed Combo-NN model. The model trained on \(X_{s}\) will be initially deployed to all of the client devices in the IoT infrastructure to begin the FTL procedure. The proposed Combo-NN architecture consists of a CNN and a DNN. In our methodology, we are using Resnet-50 as our CNN [19]. 
We are utilizing Resnet-50 as it is an effective tool for generalizing nonlinear data in literature [19]. However, other CNN architectures can also be utilized, depending on the proposed application of the FTL. CNNs can be beneficial for performing network intrusion detection as they tend to train fast due to their underlying architecture. Also, CNNs have the capability of training multi-layer networks with gradient descent that is competent to learn complex, high-dimensional, and nonlinear data [20]. Additionally, we are augmenting multiple hidden layers of a DNN to Resnet-50 for the proposed FTL approach. A DNN is chosen as it provides benefits to performing network intrusion detection as the layers of depth in the network are effective at modeling nonlinear interactions between the various observable features in the network [21]. The proposed Combo-NN architecture is illustrated in Figure 4. Let our proposed untrained Combo-NN in the server be denoted by \(M_{s}\). The network, then, gets trained using the server data \(X_{s}\) by: \[T_{s}=train(M_{s},X_{s}) \tag{1}\] where \(T_{s}\) denotes the trained Combo-NN in the server device. After training, \(T_{s}\) is duplicated \(N\) times, each copy denoted by \(T_{c_{i}}\), where \(i\) refers to each of the \(N\) IoT clients in the infrastructure. Fig. 3: Overview and Steps in the FTL Process ### _Model Transfer to Clients_ After training the server model, \(T_{s}\), in the previous stage, \(T_{s}\) is duplicated \(N\) times. Each duplicated copy is denoted by \(T_{c_{i}}\), where \(i\) refers to each of the \(N\) IoT clients in the infrastructure. In this stage, all the trained duplicated copies are sent to all of the IoT client devices. Each client device deploys \(T_{c_{i}}\) in their environment. ### _Model Training with Local Data_ After each model trained model \(T_{c_{i}}\) is deployed in each IoT client \(i\), in this stage, the model gets retrained using the local IoT data \(X_{c_{i}}\). The purpose of this is to generalize the model with the local data for the IoT to provide better network intrusion detection performance for the client. The network gets trained using the client data by: \[M_{c_{i}}=train(T_{c_{i}},X_{c_{i}}) \tag{2}\] where \(M_{c_{i}}\) denotes the trained Combo-NN model that was trained with the local client data of the IoT device. ### _Model Transfers to Server_ Once all the models in all \(N\) clients are trained with the local data, these models are transferred to the IoT server. The purpose of this step is to accumulate all the trained models into a centralized location to perform the calibration of the global Combo-NN model located on the server. ### _Federated Averaging_ Once all of the trained models \(M_{c}\) are transferred to the server, it can commence the federated averaging step to aggregate all the client models into one. FL is an iterative procedure and is traditionally conducted in multiple rounds. Here, \(t\) denotes the number of rounds for the FL. In this stage, we perform the Federated Stochastic Gradient Descent (FedSGD) approach [22]. 
This is conducted through the following equation: \[F_{i}(w_{s_{t}})=\frac{1}{n_{i}}\sum_{j\in\rho_{i}}f_{j}(w_{s_{t}}) \tag{3}\] where \(w_{s_{t}}\) are the weights of the current server model \(T_{s}\) at round \(t\), \(n_{i}\) represents the number of data points on the IoT client \(i\), \(\rho_{i}\) represents the set of data points on the IoT client \(i\), and \(f_{j}\) denotes the loss achieved for the data sample (\(x_{j}\), \(y_{j}\)) with the current model weights \(w_{s_{t}}\). Next, the FedSGD step is finished using: \[g_{i}=\nabla F_{i}(w_{s_{t}}) \tag{4}\] where \(g_{i}\) is the computed gradient for each IoT client model. ### _Server Model Weight Updates_ The final step for our FTL approach is to update the weights of our server model based on the results achieved from the federated averaging step. In this stage, the central server aggregates the achieved gradients from the previous stage and applies the update using: \[\forall i,\;w_{i,t+1}\leftarrow w_{s_{t}}-\eta g_{i} \tag{5}\] Here, \(\eta\) represents the learning rate. In this step, each client \(i\) takes a step of gradient descent \(g_{i}\) on the current server model using its local data with the learning rate \(\eta\). The last step in this stage is: \[w_{s_{t+1}}\leftarrow\sum_{i=1}^{N}\frac{n_{i}}{n}w_{i,t+1} \tag{6}\] In this step, the server takes the weighted average of the resulting models (with \(n=\sum_{i=1}^{N}n_{i}\) the total number of data points) and updates the current server model in the system. Fig. 4: Proposed Combo-NN Architecture ## IV Experimentation and Results ### _Experiment Setup_ For the experimentation, we utilized Jupyter Notebook as our development environment, with Python as the development language. Some of the essential libraries used were pandas, scikit-learn, matplotlib, seaborn, keras, tensorflow, and psutil. The dataset that is used in our experiments is the Edge-IIoTset Cyber Security Dataset of IoT & IIoT [15]. For our experiment, we assume \(N=2\). For evaluation, we are using Accuracy (A) and Macro-Average scores of Precision (MAP), Recall (MAR), and F-Measure (MAF). Additionally, we are evaluating the FTL approach against Logistic Regression (LR), Gaussian Naive Bayes (GNB), Random Forest (RF), and Stochastic Gradient Descent (SGD) classifiers. These algorithms have been chosen as they are customarily used to study IoT network intrusion detection in the literature [23][24][25]. ### _Performance Evaluation and Analysis_ First, we observe the performance achieved by the IIoT client models and the server model when they experience multiple iterations of the FTL approach. The results are illustrated in Figure 6. It should be noted that the server model's performance is dependent on that of the client models in the FTL setup. Here, we note that after the first iteration, Client 2 achieves a high accuracy of 89.8% while Client 1 achieves a high accuracy of 89.7%. Hence, after the first iteration, the server achieves high performance with an accuracy of 90%. After the second iteration, we note that Client 2 maintains its performance with an 89.8% accuracy. Client 1 slightly decreases in performance but still maintains a high accuracy score of 88.2%. The server's performance after the second iteration diminishes negligibly, but it still achieves a high accuracy score of 86%. In Client 1 and the server, we notice a slight decrease in performance between iterations, which could be attributed to multiple reasons like inter-client class imbalance and overall class imbalance in the dataset [26][27]. 
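To make the client update and aggregation of Eqs. (3)-(6) concrete, the following is a minimal NumPy sketch of federated rounds on a toy least-squares problem. The data, the quadratic loss, and every name in it are illustrative assumptions; the paper itself trains the Combo-NN on Edge-IIoTset rather than this toy model.

```python
# Minimal NumPy sketch of one federated round following Eqs. (3)-(6):
# each client takes a gradient step from the current server weights on its
# own data (Eqs. 4-5), and the server averages the resulting models weighted
# by client data counts (Eq. 6).
import numpy as np

rng = np.random.default_rng(0)
clients = [rng.normal(size=(40, 3)), rng.normal(size=(60, 3))]   # local data
targets = [x @ np.array([1.0, -2.0, 0.5]) for x in clients]      # toy labels

def client_gradient(w, x, y):
    # Gradient of the mean squared loss F_i(w) = (1/n_i) * sum_j f_j(w),
    # with f_j(w) = (x_j . w - y_j)^2  (Eqs. 3 and 4).
    return 2.0 / len(x) * x.T @ (x @ w - y)

def federated_round(w_server, eta=0.1):
    n = sum(len(x) for x in clients)
    client_weights = []
    for x, y in zip(clients, targets):
        g = client_gradient(w_server, x, y)        # Eq. (4)
        client_weights.append(w_server - eta * g)  # Eq. (5)
    # Eq. (6): data-size-weighted average of the client models.
    return sum(len(x) / n * w for x, w in zip(clients, client_weights))

w = np.zeros(3)
for t in range(50):
    w = federated_round(w)
print(np.round(w, 2))  # approaches the generating coefficients [1, -2, 0.5]
```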
Next, we evaluate the performance that the server IoT device experiences when undergoing the proposed FTL approach. The results from this experiment are illustrated in Figure 5. Here, we observe that when the server model has experienced the first iteration of the FTL approach, it achieves high performance with A, MAP, MAR, and MAF scores of 90%, 89%, 90%, and 88%, respectively. After the second iteration, the FTL approach achieved comparable scores with A, MAP, MAR, and MAF scores of 86%, 84%, 86%, and 83%, respectively. We observe a slight decrease from the previous iteration, as previously noted in Figure 6. This can be attributed to multiple reasons like inter-client class imbalance and overall class imbalance in the dataset [26]. Finally, we analyze the performance of the proposed FTL approach in performing multi-class classification. For this, we evaluate the effectiveness of the proposed approach against traditional ML algorithms that are customarily used for IoT network intrusion detection. The results are illustrated in Figure 7. From the observed results, we notice that the GNB classifier achieves the least performance in being able to detect network intrusions, with an \(A\) score of 75%, and MAP, MAR, and MAF scores of 72%, 80%, and 69%, respectively. We also note that contemporary ML algorithms achieve better performance than the GNB classifier, but their achieved performances are not optimal. The proposed Combo-NN-based FTL approach provides the best overall performance compared to the other contemporary ML algorithms. It achieves a 90% accuracy score, and MAP, MAR, and MAF scores of 89%, 90%, and 88%, respectively. This is because the FTL technique achieves optimal performance iteratively, making it more suitable for performing network intrusion detection over a wide set of network data. Additionally, the usage of a neural network-based architecture ensures that any deeply correlated network metrics that are not easily visible are identified by the neural network. This showcases that the proposed FTL techniques can effectively model the intrusion types in the dataset and better differentiate between the various labels. Therefore, using the proposed FTL approach can be a more effective mechanism to detect network intrusions in an IIoT setup versus other contemporary ML algorithms. Fig. 5: Progression of Server Performance through Iterations Fig. 6: Accuracy Progression of Client and Server Performances through Iterations ## V Conclusion In this paper, we propose a new type of intrusion detection mechanism for Industrial IoT devices using the concept of Federated Transfer Learning. As part of this proposed work, we have introduced a novel neural network-based architecture called Combo-NN which conducts network intrusion detection utilizing a combination of a CNN and DNN. To evaluate the efficacy of our proposed approach, we analyze its performance on an open-source IIoT network intrusion detection dataset. Results showcase high performance for the FTL setup between iterations on both the IIoT clients and the server. Additionally, we observe that the proposed FTL setup can achieve better overall performance than contemporary ML algorithms that have previously been used to perform network intrusion detection in the literature. To the best of our knowledge, this is the first time FTL is being used to perform network intrusion detection for IIoT. 
Future work will focus on analyzing the performance of the proposed technique on other network intrusion detection datasets, along with developing potential security features to protect the approach from adversarial attacks.
2308.05709
A photonic source of heralded GHZ states
Generating large multiphoton entangled states is of main interest due to enabling universal photonic quantum computing and all-optical quantum repeater nodes. These applications exploit measurement-based quantum computation using cluster states. Remarkably, it was shown that photonic cluster states of arbitrary size can be generated by using feasible heralded linear optics fusion gates that act on heralded three-photon Greenberger-Horne-Zeilinger (GHZ) states as the initial resource state. Thus, the capability of generating heralded GHZ states is of great importance for scaling up photonic quantum computing. Here, we experimentally demonstrate this required building block by reporting a polarisation-encoded heralded GHZ state of three photons, for which we build a high-rate six-photon source ($547{\pm}2$ Hz) from a solid-state quantum emitter and a stable polarisation-based interferometer. The detection of three ancillary photons heralds the generation of three-photon GHZ states among the remaining particles with fidelities up to $\mathcal{F}=0.7278{\pm}0.0106$. Our results initiate a path for scalable entangling operations using heralded linear-optics implementations.
H. Cao, L. M. Hansen, F. Giorgino, L. Carosini, P. Zahalka, F. Zilk, J. C. Loredo, P. Walther
2023-08-10T17:17:28Z
http://arxiv.org/abs/2308.05709v1
# A photonic source of heralded GHZ states

###### Abstract

Generating large multiphoton entangled states is of main interest due to enabling universal photonic quantum computing and all-optical quantum repeater nodes. These applications exploit measurement-based quantum computation using cluster states. Remarkably, it was shown that photonic cluster states of arbitrary size can be generated by using feasible heralded linear optics fusion gates that act on heralded three-photon Greenberger-Horne-Zeilinger (GHZ) states as the initial resource state. Thus, the capability of generating heralded GHZ states is of great importance for scaling up photonic quantum computing. Here, we experimentally demonstrate this required building block by reporting a polarisation-encoded heralded GHZ state of three photons, for which we build a high-rate six-photon source (547\(\pm\)2 Hz) from a solid-state quantum emitter and a stable polarisation-based interferometer. The detection of three ancillary photons heralds the generation of three-photon GHZ states among the remaining particles with fidelities up to \(\mathcal{F}\)=0.7278\(\pm\)0.0106. Our results initiate a path for scalable entangling operations using heralded linear-optics implementations.

_Introduction.--_Quantum entanglement enables the exploration of unique phenomena absent in the classical world, such as non-locality [1; 2; 3] and teleportation [4; 5; 6]; and it ultimately provides an advantage to quantum systems over classical ones for various tasks [7; 8], ranging from metrology and sensing [9; 10; 11; 12; 13], to computation [14; 15; 16; 17]. Photonics is among the leading physical platforms for creating entangled quantum systems [18]; it exploits non-classical states of light and is divided mainly into two categories: continuous-variable [19; 20], and discrete-variable approaches [21; 22]. Both approaches have advantages and disadvantages: continuous-variable implementations, for instance, are highly sensitive to losses, which degrade the quality of the quantum state. In this regard, discrete-variable encoding constitutes an appealing alternative, as even in the absence of deterministic entangling gates, there exist loss-tolerant schemes for the generation of large entangled quantum states using only probabilistic, but heralded, linear-optics quantum gate operations [23; 24]. Additional approaches exist that aim to generate discrete-variable cluster states [25] directly by exploiting quantum emitters [26; 27; 28; 29; 30; 31]. However, this requires complex control of atomic structures, solid-state materials, and electro-magnetic fields, to name a few technological challenges. In contrast, linear-optics alone also provides a path for the scalable generation of multiphoton cluster states universal for quantum computation [32; 24; 33]. Thereon, a ballistic (without feed-forward requirements) and loss-tolerant (where losses do not induce logical errors) model for universal quantum computing, named fusion-based quantum computation [34], exploits small resource states made up of a handful of entangled particles [35], and combines them into larger entangled states via boosted (heralded, and at the expense of ancillary photons) entangling gates called fusion operations [32]. The smallest building block in these protocols [24] is the heralded three-photon Greenberger-Horne-Zeilinger (GHZ) state [36; 37]. Creating these states requires the quantum interference of six separable single photons [23], or a minimum of ten photons from non-linear frequency conversion processes [38; 39].
The efficient generation of the necessary input multiphoton states has to date remained a major challenge. In this regard, semiconductor quantum dots have recently matured to a point where one handles the interference of single photons at scales of eight particles [40] and even beyond with lost-photons boosted protocols [41], thus now reaching the necessary scales for these more advanced experiments. Here, we experimentally demonstrate a heralded three-photon polarisation-encoded GHZ state based on the interference of six single photons. We employ a 28.7% fibre-efficient quantum dot single-photon source, actively demultiplexed to produce a source of six indistinguishable photons with 547\(\pm\)2 Hz detected rates. The high quality of the source and interferometric apparatus enable producing heralded three-photon GHZ states at a detection rate of 0.914\(\pm\)0.006 Hz, and fidelities up to \(\mathcal{F}\)=0.7278\(\pm\)0.0106. Our results mark an important step for enabling the realisation of future fusion-based quantum computing protocols.

_High-rate multiphoton source.--_We first describe our source of multiphoton states. An InAs/GaAs quantum dot coupled to a micropillar cavity is kept in a cryostat at \(\sim\)4 K, and is resonantly driven using a standard crossed-polarised excitation scheme. An efficient collection setup allows us to measure 19.5 MHz of single photons with simultaneous purity \(1-g^{(2)}(0)\)=0.981\(\pm\)0.003 and indistinguishability [42] \(\mathcal{I}\)=0.941\(\pm\)0.002, see Fig. 1, when pumped with \(\pi\)-pulses at 80 MHz repetition rate and using a detection system of 85% efficiency, thus corresponding to a 28.7% fibre-efficient single-photon source. Subsequently, we utilise a time-to-space demultiplexer composed of resonant-enhanced electro-optic modulators (r-EOMs) and polarising beamsplitters (PBSs) arranged in a tree-structure [43; 44], see Fig. 2(a). As a result, a number of input time bins separated from each other by 12.5 ns are deterministically routed towards, in this implementation, eight different outputs, where suitable fibre-based temporal delays then result in a source of eight indistinguishable single photons travelling simultaneously. Figure 2(b) shows the measured multiphoton coincidence rates using eight superconducting nanowire single-photon detectors (SNSPDs) directly at the output of the demultiplexed source. In particular, the resulting six-photon source is detected at a rate of 547\(\pm\)2 Hz, and the eight-photon source at 15.7\(\pm\)0.4 Hz. We note that these are the highest rates of multiphoton sources reported to date.

_Heralded entanglement.--_We use six single photons from this source as input to a polarisation-based interferometer, as depicted in Fig. 3(a), such that the detection of three photons heralds an entangled GHZ state among the other three [23]. The six input photons are labelled \(i_{1},...i_{6}\), and are first initialised in horizontal polarisation. When one, and only one, photon propagates towards each of the heralding outputs \(o_{4},o_{5}\), and \(o_{6}\), then the signal output photons \(o_{1},o_{2}\), and \(o_{3}\) are left in a three-particle entangled state. Note that the successful implementation of this protocol requires that all six photons are highly indistinguishable from each other. To confirm that this is the case, we measure all 15 cases of pair-wise indistinguishabilities among the six input photons, and find an average indistinguishability of 0.923\(\pm\)0.009 across all combinations, see Fig. 3(b).

Figure 1: **Single-photon quality.** (a) Normalised second-order auto-correlation function \(g^{(2)}(\Delta t)\) from a Hanbury Brown and Twiss setup, and (b) at the output of a Hong-Ou-Mandel experiment at \(\pi\)-pulse excitation. We measure the single-photon purity \(1-g^{(2)}(0)\)=0.981\(\pm\)0.003, and two-photon indistinguishability \(\mathcal{I}\)=0.941\(\pm\)0.002.

Figure 2: **Multiphoton source.** (a) Resonant demultiplexer. Seven synchronised r-EOMs—one driven at 40 MHz, two at 20 MHz, and four at 10 MHz—and polarising beam splitters deterministically demultiplex eight consecutive time bins. Fibre-based delays and translation stages are used to correct the time bins’ initial temporal mismatch. (b) Measured coincidence rates. Multiphoton rates at the output of the demultiplexer, up to a number of \(n\)=8 photons.

Figure 3: **Polarisation six-photon interferometer.** (a) Depiction of experimental setup. Three pairs of single photons \(\left(i_{1},i_{6}\right),\left(i_{2},i_{5}\right),\left(i_{3},i_{4}\right)\) probabilistically generate three Bell pairs, together with other unwanted terms, after interfering on PBSs 1,2,3. Subsequently, a non-heralded six-photon entangled state, along with further unwanted states, is probabilistically generated after PBSs 4,5. The polarisation and number-resolved detection at the output of the type I and type II fusion operation among outputs \(o_{4},o_{5}\), and \(o_{6}\) corrects for the unwanted terms and leaves the remaining photons at outputs \(o_{1},o_{2}\), and \(o_{3}\) in a probabilistic but heralded three-photon GHZ state. Quarter-wave plates (QWP) and half-wave plates (HWP) together with extra PBSs are used to perform three-qubit quantum state tomography. (b) Photons’ indistinguishability. We use the same six-photon interferometer to measure all 15 pair-wise two-photon indistinguishabilities, resulting in an average value of 0.923\(\pm\)0.009 across all combinations.
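As a small aside on the demultiplexer of Figure 2, the timing bookkeeping is simple: with the 80 MHz pump, consecutive photons arrive 12.5 ns apart, so earlier time bins need proportionally longer fibre delays for all outputs to fire simultaneously. The sketch below only illustrates this arithmetic; apart from the 12.5 ns bin spacing, the details (e.g., that output \(k\) receives bin \(k\)) are assumptions, not taken from the experiment:

```python
BIN_SPACING_NS = 12.5  # 80 MHz pump -> consecutive photons 12.5 ns apart

def synchronising_delays(n_outputs=8):
    # Assumption for illustration: output k receives time bin k (k = 0..7).
    # Bin 0 must wait for the remaining 7 bins, bin 7 for none, so earlier
    # bins get longer compensating fibre delays.
    return [(n_outputs - 1 - k) * BIN_SPACING_NS for k in range(n_outputs)]

print(synchronising_delays())  # [87.5, 75.0, 62.5, 50.0, 37.5, 25.0, 12.5, 0.0]
```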
The heralded generation of the entangled quantum state requires that no more than one photon is detected at each heralding output. For example, without number resolution, a pattern with four photons among the three heralding spatial trajectories cannot be discerned from another pattern with exactly three photons. In such cases, the state produced at the signal outputs is not solely the target GHZ state, but also contains other components with a different number of photons. Therefore, only non-heralded (post-selected) states are generated in the absence of number-resolving detection. Our implementation makes use of pseudo photon-number resolution (PPNR) at every heralding output, \(o_{4,h},o_{4,v},o_{5,h},o_{5,v},o_{6,h},o_{6,v}\), by further splitting each of them into two new outputs and SNSPDs; where \(h\) and \(v\) denote horizontal and vertical polarisation, respectively. Therefore, we use 18 SNSPDs in total: six detectors to cover both polarisations of the three signal outputs, and twelve detectors for implementing the polarisation and pseudo number-resolved measurement of the three heralding outputs. At each PPNR stage (six in total), we condition a measurement such that one of the detectors clicks and the other one does not, which performs the pseudo number resolution of one, and no more, photon. The heralded generation of the target state then occurs by imposing that each of the heralding stages measures at most one photon. The specific 3-GHZ state generated depends on the polarisation pattern that clicks at the heralding outputs. For instance, the patterns \(\{hhh,hvv,vhv,vvh\}\), where \(ijk\) denotes the condition \(o_{4,i},o_{5,j},o_{6,k}\), herald the signal state \(|\text{GHZ}^{+}\rangle\)=\(\left(|000\rangle+|111\rangle\right)/\sqrt{2}\) with an accumulated success probability of 1/64, with \(|0\rangle\) (\(|1\rangle\)) denoting horizontal (vertical) polarisation. Conversely, the complementary patterns \(\{hhv,hvh,vhh,vvv\}\) herald the state \(|\text{GHZ}^{-}\rangle\)=\(\left(|000\rangle-|111\rangle\right)/\sqrt{2}\) with an equal success probability. Accordingly, the total success probability of generating a three-GHZ state with this protocol is 1/32. In our experiment, we start by measuring the witness \(\mathcal{W}_{\text{GHZ}}=3\mathbb{1}/2-X_{1}X_{2}X_{3}-\left(Z_{1}Z_{2}+Z_{2}Z_{3}+Z_{1}Z_{3}\right)/2\), whose negative value verifies the presence of genuine three-particle entanglement for GHZ states [45].
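To see why a negative value of this witness certifies genuine entanglement for the target state, one can evaluate \(\mathcal{W}_{\text{GHZ}}\) on the ideal \(|\text{GHZ}^{+}\rangle\). The small NumPy sketch below reproduces the theoretical value \(-1\); our measured value of \(-0.2613\) is degraded from this ideal by experimental imperfections:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# |GHZ+> = (|000> + |111>) / sqrt(2)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

W = 1.5 * np.eye(8) - kron3(X, X, X) \
    - 0.5 * (kron3(Z, Z, I2) + kron3(I2, Z, Z) + kron3(Z, I2, Z))

print(ghz @ W @ ghz)  # -1.0 for the ideal state; any negative value certifies GME
```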
Figure 4 displays the measured mean values of the involved observables, from where we obtain \(\langle\mathcal{W}_{\text{GHZ}}\rangle\)= \(-\) 0.2613\(\pm\)0.0335, confirming three-partite entanglement by more than seven standard deviations. Moreover, given the high count rates of the available six-photon source, we are also able to perform over-complete three-qubit quantum state tomography at the signal outputs, with all heralding patterns simultaneously. That is, by using 18 SNSPDs, we reconstruct both \(|\text{GHZ}^{+}\rangle\) and \(|\text{GHZ}^{-}\rangle\) from the measurement of 3\({}^{3}\)=27 three-qubit observables that result from all combinations of \(Z,X,Y\) Pauli matrices among the 3 signal qubits. Figure 5 shows the reconstructed density matrices of the heralded entangled states, from where we extract a fidelity of \(\mathcal{F}^{+}\)=0.7189\(\pm\)0.0109 to the \(|\text{GHZ}^{+}\rangle\) state when using the corresponding four heralding conditions, as well as \(\mathcal{F}^{-}\)=0.6995\(\pm\)0.0116 to \(|\text{GHZ}^{-}\rangle\) by using the respective other four heralding patterns. Note that small terms are present in the imaginary part of the density matrices, showing that the prepared states contain a small relative phase between the basis states, which can be compensated for via local unitaries. Taking this into account, we obtain fidelities of \(\overline{\mathcal{F}}^{+}\)=0.7278\(\pm\)0.0106 and \(\overline{\mathcal{F}}^{-}\)=0.7083\(\pm\)0.0120 to the GHZ states \(\left(|000\rangle\pm e^{i(0.04\times 2\pi)}|111\rangle\right)/\sqrt{2}\), respectively. We measure both heralded states at a combined rate of 0.914\(\pm\)0.006 Hz. This value is expected considering: \(\sim\)80% throughput efficiency of the polarisation interferometer (affecting six photons), \(\sim\)85% throughput efficiency of the pseudo number-resolving detection setup (three photons), and \(\sim\)85% throughput given by multiple fibre mating connections (six photons). Together with the 1/32 success probability of producing both GHZ states, this results in an expected rate of (547 Hz) \(\times\)0.8\({}^{6}\times\)0.85\({}^{3}\times\)0.85\({}^{6}\)/32\(\sim\)1 Hz.
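This budget is straightforward to verify (a one-line check of the arithmetic above):

```python
rate = 547 * 0.8**6 * 0.85**3 * 0.85**6 / 32
print(f"expected heralded-GHZ rate: {rate:.2f} Hz")  # ~1.04 Hz vs. measured 0.914 Hz
```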
_Discussion.--_We have experimentally demonstrated a building block for ballistic and all linear-optical photonic quantum computing: the heralded three-photon Greenberger-Horne-Zeilinger state. First, we developed a high-rate (547\(\pm\)2 Hz) source of six photons from a solid-state quantum emitter, with an average pairwise indistinguishability of 0.923\(\pm\)0.009. Subsequently, these photons propagated through a polarisation-based multimode interferometer, where the pseudo number-resolved detection of three of them heralded the generation of three-GHZ states among the remaining particles. Thanks to the high rate of the generated multiphoton source, we were able to perform three-qubit overcomplete quantum state tomography, reaching fidelities up to \(\mathcal{F}\)=0.7278\(\pm\)0.0106. Moreover, the efficient multiphoton source presented here reached an eight-photon rate of 15.7\(\pm\)0.4 Hz, readily enabling the implementation of more complex experiments at scales soon beyond ten photons. As a result, we anticipate that near-term further improvements in our source performance will enable a plethora of quantum photonics protocols that remained heretofore out of experimental reach, with particular emphasis on scalable all-optical entangling operations. This research was funded in whole, or in part, from the European Union's Horizon 2020 and Horizon Europe research and innovation programme under grant agreement No 899368 (EPIQUS), the Marie Sklodowska-Curie grant agreement No 956071 (AppQInfo), and the QuantERA II Programme under Grant Agreement No 101017733 (PhoMemtor); from the Austrian Science Fund (FWF) through [F7113] (BeyondC), and [FG5] (Research Group 5); from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. _Note added.--_During the writing of this manuscript, we became aware of a similar work [46].
2301.08726
Continuous Newton-like Methods featuring Inertia and Variable Mass
We introduce a new dynamical system, at the interface between second-order dynamics with inertia and Newton's method. This system extends the class of inertial Newton-like dynamics by featuring a time-dependent parameter in front of the acceleration, called variable mass. For strongly convex optimization, we provide guarantees on how the Newtonian and inertial behaviors of the system can be non-asymptotically controlled by means of this variable mass. A connection with the Levenberg--Marquardt (or regularized Newton's) method is also made. We then show the effect of the variable mass on the asymptotic rate of convergence of the dynamics, and in particular, how it can turn the latter into an accelerated Newton method. We provide numerical experiments supporting our findings. This work represents a significant step towards designing new algorithms that benefit from the best of both first- and second-order optimization methods.
Camille Castera, Hedy Attouch, Jalal Fadili, Peter Ochs
2023-01-20T18:46:18Z
http://arxiv.org/abs/2301.08726v2
# Continuous Newton-like Methods featuring Inertia and Variable Mass

###### Abstract

We introduce a new dynamical system, at the interface between second-order dynamics with inertia and Newton's method. This system extends the class of inertial Newton-like dynamics by featuring a time-dependent parameter in front of the acceleration, called _variable mass_. For strongly convex optimization, we provide guarantees on how the Newtonian and inertial behaviors of the system can be non-asymptotically controlled by means of this variable mass. A connection with the Levenberg-Marquardt (or regularized Newton's) method is also made. We then show the effect of the variable mass on the asymptotic rate of convergence of the dynamics, and in particular, how it can turn the latter into an accelerated Newton method. We provide numerical experiments supporting our findings. This work represents a significant step towards designing new algorithms that benefit from the best of both first- and second-order optimization methods.

## 1 Introduction

### Problem Statement

A major challenge in modern unconstrained convex optimization consists in building fast algorithms while maintaining low computational cost and memory footprint. This plays a central role in many key applications such as large-scale machine learning problems or data processing. The problems we aim to study are of the form \[\min_{x\in\mathbb{R}^{n}}f(x).\] Large values of \(n\) demand algorithms at the interface of first- and second-order optimization. Limited computational capabilities explain why gradient-based (first-order) algorithms remain prominent in practice. Unfortunately, they often require many iterations, which is true even for the provably best algorithms for certain classes of optimization problems; for example that of convex and strongly convex functions with Lipschitz continuous gradient [37, 33, 34]. On the other hand, algorithms using second-order information (the Hessian of \(f\))--with Newton's method as prototype--adapt locally to the geometry of the objective, allowing them to progress much faster towards a solution. However, each iteration comes with high computational and memory costs, which highlights a challenging trade-off. It is therefore essential to develop algorithms that take the best of both worlds. They are commonly referred to as (limited-memory) quasi-Newton methods. Several quasi-Newton algorithms partly address this issue, for example BFGS methods [18, 23, 25, 39, 31]; yet, in very large-scale applications, first-order algorithms often remain the preferred choice. In order to reach a new level of efficiency, deep insights into the mechanism and relations between algorithms are required. To that aim, an insightful approach is to see optimization algorithms as discretizations of ordinary differential equations (ODEs): for small-enough step-sizes, iterates can be modeled by a continuous-time trajectory [32, 13]. Obtaining a fast algorithm following this strategy depends on two ingredients: choosing an ODE for which rapid convergence to a solution can be proved, and discretizing it with an appropriate scheme that preserves the favorable properties of the ODE. Both steps are highly challenging; our work focuses on the ODE side.
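To illustrate the ODE viewpoint, here is a minimal sketch (not from the paper's experiments) showing gradient descent as the explicit Euler discretization of the gradient flow \(\dot{x}(t)=-\nabla f(x(t))\) on a simple quadratic:

```python
import numpy as np

# f(x) = 0.5 * x^T A x; gradient descent with step h is exactly the explicit
# Euler discretization of the gradient flow x'(t) = -A x(t).
A = np.diag([1.0, 10.0])
h, n_steps = 0.01, 500
x = np.array([1.0, 1.0])
for _ in range(n_steps):
    x = x - h * (A @ x)                     # one gradient-descent step
x_flow = np.exp(-np.diag(A) * h * n_steps)  # exact flow at t = h * n_steps, x0 = (1, 1)
print(x, x_flow)                            # the iterates shadow the continuous trajectory
```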
We study the following second-order dynamical system in a general setting: \[\varepsilon(t)\ddot{x}(t)+\alpha(t)\dot{x}(t)+\beta\nabla^{2}f(x(t))\dot{x}(t) +\nabla f(x(t))=0,\quad t\geq 0,\] (VM-DIN-AVD) where \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) is a smooth convex twice continuously differentiable function, with gradient \(\nabla f\) and Hessian \(\nabla^{2}f\), defined on \(\mathbb{R}^{n}\) equipped with scalar product \(\langle\cdot,\cdot\rangle\) and induced norm \(\|\cdot\|\). Additionally, \(f\) is assumed to be coercive, and strongly convex on bounded subsets of \(\mathbb{R}^{n}\). The functions \(\varepsilon,\alpha\colon\mathbb{R}_{+}\to\mathbb{R}_{+}\) (where \(\mathbb{R}_{+}=[0,+\infty[\)) are differentiable, non-increasing, and \(\varepsilon(t)>0\) for all \(t\geq 0\). Together with \(\beta>0\), they are control parameters that define the type of dynamics that drives the trajectory (or solution) \(x\colon\mathbb{R}_{+}\to\mathbb{R}^{n}\), whose first- and second-order derivatives are denoted \(\dot{x}\) and \(\ddot{x}\) respectively. We call the above dynamics (VM-DIN-AVD), which stands for "Variable Mass Dynamical Inertial Newton-like system with Asymptotically Vanishing Damping", since it generalizes a broad class of ODEs whose original member is DIN [2], where \(\varepsilon\) and \(\alpha\) were constant. DIN was then extended to the case of non-constant _asymptotically vanishing dampings_ (AVD) \(\alpha\) [9]. In this work we introduce the non-constant parameter \(\varepsilon\), called _variable mass_ (VM), in front of the acceleration \(\ddot{x}\), in the same way that \(\alpha\) is called (viscous) _damping_ by analogy with classical mechanics. A key feature of these ODEs, which positions them at the interface of first- and second-order optimization, is that they possess equivalent forms involving only \(\nabla f\) but not \(\nabla^{2}f\), significantly reducing computational costs and hence enabling the design of practical algorithms, see e.g., [20, 10, 21]. The key idea behind this is the relation \(\nabla^{2}f(x(t))\dot{x}(t)=\frac{\mathrm{d}}{\mathrm{d}t}\nabla f(x(t))\); see Section 2 for an equivalent formulation of (VM-DIN-AVD) exploiting this. This paper emphasizes the relation between (VM-DIN-AVD) and well-studied special cases. Indeed, taking \(\varepsilon=\alpha=0\), one obtains1 the Continuous Newton (CN) method [24] \[\beta\nabla^{2}f(x_{N}(t))\dot{x}_{N}(t)+\nabla f(x_{N}(t))=0,\quad t\geq 0,\] (CN) known notably for being invariant to affine transformations and yielding fast vanishing of the gradient (see Section 3). In fact, this observation shows that (VM-DIN-AVD) is a singular perturbation of (CN), which also justifies the terminology "Newton-like" in DIN.

Footnote 1: CN is usually considered with \(\beta=1\); we put \(\beta\) in the system to ease the discussions below.

When \(\alpha\neq 0\) but \(\varepsilon=0\), we recover the Levenberg-Marquardt (LM) method, \[\alpha(t)\dot{x}_{LM}(t)+\beta\nabla^{2}f(x_{LM}(t))\dot{x}_{LM}(t)+\nabla f(x _{LM}(t))=0,\quad t\geq 0,\] (LM) also known as the regularized Newton method since it stabilizes (CN). In the rest of the paper, the solutions of (CN) and (LM) will always be denoted by \(x_{N}\) and \(x_{LM}\) respectively. Alvarez et al. [2] showed that for \(\alpha=0\), \(\beta=1\), and \(\varepsilon\) constant and small, (VM-DIN-AVD) is a "perturbed" Newton method, since the distance between the solutions of (VM-DIN-AVD) and (CN) is at most proportional to \(\sqrt{\varepsilon}\) at all times.
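As a quick illustration of this perturbed-Newton behavior, the sketch below numerically integrates (VM-DIN-AVD) on an ill-conditioned quadratic and compares it with the closed-form (CN) trajectory \(x_{N}(t)=e^{-t/\beta}x_{0}\) (valid on quadratics). The parameter choices here are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])              # f(x) = 0.5 * x^T A x, badly conditioned
beta = 1.0
eps   = lambda t: 1.0 / (1.0 + t)**2  # vanishing variable mass
alpha = lambda t: 1.0 / (1.0 + t)**2  # damping of the same order as eps

def rhs(t, s):                        # s = (x, v) with v = x'
    x, v = s[:2], s[2:]
    a = -(alpha(t) * v + beta * (A @ v) + A @ x) / eps(t)
    return np.concatenate([v, a])

x0 = np.array([2.0, 1.0])
sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([x0, np.zeros(2)]),
                method="Radau", rtol=1e-8, atol=1e-10, dense_output=True)
t = 5.0
x_N = np.exp(-t / beta) * x0          # (CN) on quadratics: x_N(t) = exp(-t/beta) x0
print(np.linalg.norm(sol.sol(t)[:2] - x_N))  # small: x shadows the Newton trajectory
```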
Yet, despite the benefits of this class of ODEs, such as stabilization properties, see e.g., [9, 10], no improvement2 of the rate of convergence (in values) has been shown compared to inertial first-order dynamics [37, 41]. This raises the question: "_are these ODEs really of Newton type?_", which is crucial in view of designing faster algorithms from them.

Footnote 2: DIN-like systems were thought to yield faster vanishing of the gradient compared to inertial first-order dynamics, until recently [4].

Figure 1: Left: phase diagram on distances from (VM-DIN-AVD) to (CN) and (LM) (see Section 3). For each patch, the color indicates which of the distances \(\|x_{N}(t)-x(t)\|\) and \(\|x_{LM}(t)-x(t)\|\) is considered, and the scaling of a corresponding upper-bound on this distance is written; in white for prior work and in black for our contributions. The green line separates the cases \(\varepsilon\geq\alpha\) (above) and \(\varepsilon\leq\alpha\) (below). Right: 2D illustration of the trajectories of (VM-DIN-AVD) for several choices of \(\varepsilon\) on a quadratic function. Using fast-vanishing \(\varepsilon(t)\) (dark-blue solid curves), one can bring the solution of (VM-DIN-AVD) close to that of (CN), making it, for example, more robust to bad conditioning compared to first-order dynamics (such as gradient descent).

### Main Contributions

We show that the answer to this question is partially positive, and closely related to the choices of \(\varepsilon\) and \(\alpha\). We provide general results on the role played by these two control parameters and how they can be chosen to control (VM-DIN-AVD), and make it close to (CN) _for all time_, as illustrated on the right-hand side of Figure 1, but also to obtain fast convergence. This represents a first step towards building new fast practical algorithms. Our main contributions are the following:

- We provide a first-order equivalent formulation of (VM-DIN-AVD), and address the questions of existence and uniqueness of the solutions of (VM-DIN-AVD) under mild assumptions.

- We generalize the perturbed Newtonian property discussed above to non-constant and possibly vanishing variable masses \(\varepsilon\), and "not too large" positive dampings \(\alpha\), and derive bounds that (formally) take the form \(\|x(t)-x_{N}(t)\|=O(\sqrt{\varepsilon(t)})\). We then extend these results to larger dampings \(\alpha\) and make the connection between (VM-DIN-AVD) and (LM). This contribution is summarized in the phase diagram of Figure 1.

- Using quadratic functions as a model for strongly convex functions, we shed light on techniques to efficiently approximate solutions of (VM-DIN-AVD). We then show how \(\varepsilon\) and \(\alpha\) affect the speed of convergence. Depending on their setting, the solutions of (VM-DIN-AVD) may either converge as fast as that of (CN), _faster_, or rather have a (LM) nature, as summarized in Table 1.

- We provide numerical experiments supporting our theoretical findings.

### Related work

The system (VM-DIN-AVD) belongs to the class of inertial systems with viscous and geometric ("Hessian-driven") dampings, initially introduced with constant \(\varepsilon=1\) and constant \(\alpha\) in [2] and called DIN (for Dynamical Inertial Newton-like system). Except in a few cases [20, 19], most of the follow-up work then considered extensions of DIN with non-constant AVD \(\alpha\), with in particular the DIN-AVD system with \(\alpha(t)=\alpha_{0}/t\) as introduced in [9].
The reason for this popular choice for \(\alpha\) is its link with Nesterov's method [41]. Non-constant choices for \(\beta\) have been considered [10, 1, 29, 11]. We keep it constant here, and rather focus on non-constant \(\varepsilon\), unlike prior work that used constant \(\varepsilon=1\). The mass \(\varepsilon\) was only considered in the original work [2], but only for fixed \(\varepsilon\), \(\beta=1\) and constant \(\alpha=0\). (VM-DIN-AVD) is, however, closely related to the IGS system considered in [11], as it is actually equivalent to the latter after dividing both sides of (VM-DIN-AVD) by \(\varepsilon(t)\). Our approach--which consists in studying the connections with other second-order dynamics as \(\varepsilon\) vanishes asymptotically--is however different from the one followed in [11], which is of independent interest. The literature on DIN is rich; let us mention further connections with Nesterov's method [40, 1], extensions with Tikhonov regularization [16] and closed-loop damping [12, 29]. The non-smooth and possibly non-convex cases have been considered in [5, 6, 20]. Finally, avoidance of strict saddle points in smooth non-convex optimization has been shown in [19]. The influence of the damping \(\alpha\) on the (LM) dynamics has been studied in [7, 8]. Interestingly, the conditions enforced on \(\alpha\) in these papers (formally a sub-exponential decay) are very similar to those we make on \(\varepsilon\) and \(\alpha\) for (VM-DIN-AVD) (see Assumptions 1 and 2). Regarding the second part of our analysis, which deals with the case where \(f\) is quadratic, Attouch et al. [10] provided closed-form solutions to (VM-DIN-AVD) for \(\varepsilon\equiv 1\) and special choices of \(\alpha\). Our work rather deals with approximate solutions, which allows considering a wide class of functions \(\varepsilon\) and \(\alpha\). We rely on the Liouville-Green (LG) method [30, 26] presented in Section 4. Generalizations of LG are also often referred to as WKB methods [44, 28, 17] and seem to be mostly used in physics so far. To the best of the authors' knowledge, the current work seems to be one of the first to use the LG method in optimization, and the first for DIN-like systems.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{Parameters of (VM-DIN-AVD)} & \multicolumn{2}{c}{Speed of convergence} \\ \hline Dominant parameter & Integrability in \(+\infty\) & w.r.t. (CN) & w.r.t. (LM) \\ \hline variable mass \(\varepsilon\) & yes & as fast & as fast \\ & no & faster & faster \\ viscous damping \(\alpha\) & yes & as fast & only depends on \(\varepsilon\) \\ & no & slower & only depends on \(\varepsilon\) \\ \hline \hline \end{tabular} \end{table} Table 1: Informal summary of Section 4. Comparison of (VM-DIN-AVD) with other dynamics.

### Organization

The paper is organized as follows. We discuss the existence of solutions in Section 2. Our main results, from a non-asymptotic control perspective, are then presented in Section 3. An analysis of the role played asymptotically by \(\varepsilon\) and \(\alpha\) is then carried out on quadratic functions in Section 4. Finally, numerical experiments are presented in Section 5, and some conclusions are then drawn.

## 2 Existence and Uniqueness of Solutions

In the sequel, we fix \(x_{0}\in\mathbb{R}^{n}\) and \(\dot{x}_{0}\in\mathbb{R}^{n}\), such that, unless stated otherwise, (VM-DIN-AVD) is always considered with initial condition \((x(0),\dot{x}(0))=(x_{0},\dot{x}_{0})\), and (CN) and (LM) with initial condition \(x_{N}(0)=x_{LM}(0)=x_{0}\).
We also fix initial values for the control parameters \(\varepsilon(0)=\varepsilon_{0}>0\), \(\varepsilon^{\prime}(0)=\varepsilon^{\prime}_{0}\leq 0\) and \(\alpha(0)=\alpha_{0}\geq 0\). In addition to the definitions of \(\varepsilon\) and \(\alpha\) in Section 1.1, we assume that \(\varepsilon\) is twice differentiable, with bounded second derivative. In order to use the Cauchy-Lipschitz Theorem, we reformulate (VM-DIN-AVD) into a first-order (in time) system by introducing an auxiliary variable \(y:\mathbb{R}_{+}\to\mathbb{R}^{n}\). Notably, our reformulation does not involve \(\nabla^{2}f\), in the same fashion as [2, 9]. For all \(t\), defining \(\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)\), we show in Appendix A that (VM-DIN-AVD) is equivalent to \[\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))+\nu(t)x(t)+y(t)&=0 \\ \dot{y}(t)+\nu^{\prime}(t)x(t)+\frac{\nu(t)}{\beta}x(t)+\frac{1}{\beta}y(t)&=0 \end{cases}\] (gVM-DIN-AVD) with initial conditions \((x(0),y(0))=\Big{(}x_{0},-\varepsilon_{0}\dot{x}_{0}-\beta\nabla f(x_{0})-( \alpha_{0}-\varepsilon^{\prime}_{0}-\frac{1}{\beta}\varepsilon_{0})x_{0} \Big{)}\). One can notice that in the special case where \(\varepsilon\) is taken constant and equal to \(1\) (that is when (VM-DIN-AVD) is simply the DIN-AVD system [9]), we recover the same first-order formulation as that in [9]. We can then apply the Cauchy-Lipschitz Theorem. For all \(t\geq 0\) and \((u,v)\in\mathbb{R}^{n}\times\mathbb{R}^{n}\), define the mapping \[G\left(t,(u,v)\right)=\begin{pmatrix}\frac{1}{\varepsilon(t)}(-\beta\nabla f (u)-\nu(t)u-v)\\ -\nu^{\prime}(t)u-\frac{\nu(t)}{\beta}u-\frac{1}{\beta}v\end{pmatrix},\] so that (gVM-DIN-AVD) rewrites \((\dot{x}(t),\dot{y}(t))=G\left(t,(x(t),y(t))\right)\) for all \(t\geq 0\). Since \(f\) is twice continuously differentiable, one can see that \(G\) is continuously differentiable w.r.t. its second argument \((u,v)\). Consequently \(G\) is locally Lipschitz continuous w.r.t. \((u,v)\) and by the Cauchy-Lipschitz Theorem, for each initial condition, there exists a unique local solution to (gVM-DIN-AVD) and thus to (VM-DIN-AVD). We then show that this solution is actually global (in Appendix A) by proving the boundedness of \((x,y)\). We omit the existence and uniqueness of the solutions of (CN) and (LM) since these are standard results, obtained with similar arguments. ## 3 Non-asymptotic Control of (VM-DIN-AVD) The purpose of this section is to understand how close \(x\) might be to \(x_{N}\) and \(x_{LM}\), as a function of \(\alpha\) and \(\varepsilon\). Since \(f\) is coercive and strongly convex on bounded sets, it has a unique minimizer \(x^{*}\in\mathbb{R}^{n}\). Consequently, any two trajectories that converge to \(x^{*}\) will eventually be arbitrarily close to each other. Thus, asymptotic results of the form \(\|x(t)-x_{N}(t)\|\xrightarrow[t\to+\infty]{}0\) are not precise enough to claim, for example, that \(x\) has a "Newtonian behavior". Instead, we will derive upper bounds on the distance between trajectories that hold _for all time \(t\geq 0\)_, and which typically depend on \(\varepsilon\) and/or \(\alpha\). We first present the case where \(\alpha\) is small relative to \(\varepsilon\) and then generalize. ### Comparison with (Cn) under Moderate Viscous Dampings When the viscous damping \(\alpha\) remains moderate w.r.t. the variable mass \(\varepsilon\), one can expect the solutions of (VM-DIN-AVD) to be close to that of (CN). We make the following assumptions. 
**Assumption 1**.: _There exist \(c_{1},c_{2}\geq 0\) such that for all \(t\geq 0\), \(|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)\) and \(\alpha(t)\leq c_{2}\varepsilon(t)\)._ The assumption states that \(\alpha\) must decrease at least as fast as \(\varepsilon\) (up to a constant).3 The reason for assuming \(|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)\) is technical and will appear more clearly in the proofs below. It formally means that \(\varepsilon\) can decrease at most exponentially fast.4 This is a relatively mild assumption that holds, for example, for any polynomial decay \(\varepsilon_{0}/(t+1)^{a}\), \(a\in\mathbb{N}\). Footnote 3: Assumption 1 can actually hold only after some large-enough \(t_{0}\geq 0\), we take \(t_{0}=0\) for the sake of simplicity. Footnote 4: This is a consequence of Gronwall’s lemma, see e.g., [22]. We start with the main result of this section. **Theorem 3.1**.: _Let \(x_{N}\) be the solution of (CN), and let \(c_{1},c_{2}\geq 0\). There exist \(C_{0},C_{1},C_{2}\geq 0\), depending on \(c_{1}\), \(c_{2}\), such that for all \((\varepsilon,\alpha)\) for which Assumption 1 holds with constants \(c_{1}\) and \(c_{2}\), the corresponding solution \(x\) of (VM-DIN-AVD) is such that for all \(t\geq 0\),_ \[\|x(t)-x_{N}(t)\|\leq C_{0}e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+ C_{1}\sqrt{\varepsilon(t)}+C_{2}\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)} \sqrt{\varepsilon(s)}\,\mathrm{d}s. \tag{1}\] This extends a previous result from [2, Proposition 3.1], which states a similar bound for constant \(\varepsilon\), \(\alpha\equiv 0\) and \(\beta=1\). Theorem 3.1 corresponds to the blue parts in the phase diagram of Figure 1 (see also Corollary 3.6 below). **Remark 3.2**.: _The strength of the above result comes from the fact that the constants \(C_{0},C_{1},C_{2}\) do not depend on \(\varepsilon\) and \(\alpha\), and that the result is non-asymptotic. This allows in particular choosing \((\varepsilon,\alpha)\) to control the distance from \(x\) to \(x_{N}\), for all time \(t\geq 0\)._ **Remark 3.3**.: _Under Assumption 1, the dynamics (VM-DIN-AVD) is dominated by the variable mass. The damping \(\alpha\) does not appear in Theorem 3.1._ The above theorem and remarks emphasize the "Newtonian nature" of (VM-DIN-AVD). We present two lemmas before proving Theorem 3.1, and then state a simpler bound than (1), see Corollary 3.6. **Lemma 3.4**.: _Let \((\varepsilon,\alpha)\), and let \(x\) be the corresponding solution of (VM-DIN-AVD).
For all \(t\geq 0\), define the function,_ \[U(t)=\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star}).\] _Then \(U\) is differentiable and for all \(t>0\),_ \[\frac{\mathrm{d}U}{\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x} (t)\|^{2}-\alpha(t)\|\dot{x}(t)\|^{2}-\beta\langle\nabla^{2}f(x(t))\dot{x}(t), \dot{x}(t)\rangle\leq 0.\] _Therefore, in particular, \(U\) is non-increasing._ Proof.: Let \(t\geq 0\), since \(x\) is twice differentiable, \(U\) is differentiable and, \[\frac{\mathrm{d}U}{\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x }(t)\|^{2}+\varepsilon(t)\langle\dot{x}(t),\ddot{x}(t)\rangle+\langle\dot{x}(t ),\nabla f(x(t))\rangle.\] We use the fact that \(x\) is solution of (VM-DIN-AVD), to substitute \(\varepsilon(t)\ddot{x}(t)\) by its expression, \[\frac{\mathrm{d}U}{\mathrm{d}t}(t)=\frac{\varepsilon^{\prime}(t)}{2}\|\dot{x} (t)\|^{2}-\alpha(t)\|\dot{x}(t)\|^{2}-\beta\langle\nabla^{2}f(x(t))\dot{x}(t ),\dot{x}(t)\rangle.\] By assumption \(\varepsilon\) is non-increasing so for all \(t>0\), \(\varepsilon^{\prime}(t)\leq 0\). Furthermore \(f\) is convex so \(\langle\nabla^{2}f(x(t))\dot{x}(t),\dot{x}(t)\rangle\geq 0\). Hence \(U\) is non-increasing. We then state the following bound. **Lemma 3.5**.: _There exists \(C\geq 0\) such that for any \((\varepsilon,\alpha)\) and the corresponding solution \(x\) of (VM-DIN-AVD), for all \(t\geq 0\) it holds that,_ \[\varepsilon(t)\|\dot{x}(t)\|\leq C\sqrt{\varepsilon(t)}.\] Proof.: Let \(t\geq 0\), according to Lemma 3.4, \(U\) is non-increasing so \(U(t)\leq U(0)\), or equivalently, \[\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star})\leq\frac{ \varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}+f(x_{0})-f(x^{\star}).\] This implies in particular that, \[\varepsilon(t)\|\dot{x}(t)\|^{2}\leq\varepsilon_{0}\|\dot{x}_{0}\|^{2}+2(f(x_ {0})-f(x^{\star})),\] and hence by multiplying both sides by \(\varepsilon(t)\) and composing with the square-root we obtain that, \[\varepsilon(t)\|\dot{x}(t)\|\leq C\sqrt{\varepsilon(t)},\] where \(C=\sqrt{\varepsilon_{0}\|\dot{x}_{0}\|^{2}+2(f(x_{0})-f(x^{\star}))}\). Proof of Theorem 3.1.: Let \((\varepsilon,\alpha)\) as defined in Sections 1.1 and 2, and let \(x\) be the corresponding solution of (VM-DIN-AVD). Then, according to Lemma 3.4, for all \(t\geq 0\), \(U(t)\leq U(0)\), so in particular \[f(x(t))\leq f(x_{0})+\frac{\varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}.\] Denoting \(M_{0}=f(x_{0})+\frac{\varepsilon_{0}}{2}\|\dot{x}_{0}\|^{2}\), the set \(\mathsf{K}_{0}=\{y\in\mathbb{R}^{n}\mid f(y)\leq M_{0}\}\) is bounded, since \(f\) is coercive (\(\lim_{\|y\|\to+\infty}f(y)=+\infty\)). So for all \(t\geq 0\), \(x(t)\in\mathsf{K}_{0}\). Since \(M_{0}\) (and hence \(\mathsf{K}_{0}\)) depends only on \(\varepsilon_{0}\), \(x_{0}\) and \(\dot{x}_{0}\), we have proved that for any choice \((\varepsilon,\alpha)\), the corresponding solution \(x\) of (VM-DIN-AVD) is inside \(\mathsf{K}_{0}\) at all time. Let \(x_{N}\) be the solution of (CN). One can see similarly that for all \(t\geq 0\), \(f(x_{N}(t))\leq f(x_{N}(0))=f(x_{0})\leq M_{0}\). So we also have \(x_{N}(t)\in\mathsf{K}_{0}\) for all \(t\geq 0\). Now, fix \(c_{1},c_{2}>0\), and let \((\varepsilon,\alpha)\) such that Assumption 1 is satisfied with constants \(c_{1},c_{2}\). Let \(x\) be the corresponding solution of (VM-DIN-AVD). Since \(f\) is strongly convex on bounded sets, it is strongly convex on \(\mathsf{K}_{0}\). We denote \(\mu>0\) the strong-convexity parameter of \(f\) on \(\mathsf{K}_{0}\). 
Equivalently, we have that \(\nabla f\) is strongly monotone on \(\mathsf{K}_{0}\), that is, \(\forall y_{1},y_{2}\in\mathsf{K}_{0}\), \[\langle\nabla f(y_{1})-\nabla f(y_{2}),y_{1}-y_{2}\rangle\geq\mu\|y_{1}-y_{2} \|^{2}. \tag{2}\] Let \(t\geq 0\), since \(x(t)\in\mathsf{K}_{0}\) and \(x_{N}(t)\in\mathsf{K}_{0}\), by combining (2) with the Cauchy-Schwarz inequality, we deduce that \[\|x(t)-x_{N}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla f(x_{N}(t))\|. \tag{3}\] Therefore, it is sufficient to bound the difference of gradients in order to bound \(\|x(t)-x_{N}(t)\|\). First, remark that (CN) can be rewritten as follows: \(\frac{\mathrm{d}}{\mathrm{d}t}\nabla f(x_{N}(t))+\frac{1}{\beta}\nabla f(x_{N }(t))=0\). So we can integrate, for all \(t\geq 0\), \[\nabla f(x_{N}(t))=e^{-\frac{t}{\beta}}\nabla f(x_{0}). \tag{4}\] We now turn our attention to \(\nabla f(x(t))\), for which we cannot find a closed-form solution in general. We rewrite (VM-DIN-AVD) in the following equivalent form \[\frac{\mathrm{d}}{\mathrm{d}t}\left[\varepsilon(t)\dot{x}(t)+\beta\nabla f(x (t))\right]+\frac{1}{\beta}\varepsilon(t)\dot{x}(t)+\nabla f(x(t))=\left( \frac{1}{\beta}\varepsilon(t)+\varepsilon^{\prime}(t)-\alpha(t)\right)\dot{x }(t).\] Introducing the variable \(\omega(t)=\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))\), the latter is thus solution to \[\begin{cases}\dot{\omega}(t)+\frac{1}{\beta}\omega(t)=\left(\frac{1}{\beta} \varepsilon(t)+\varepsilon^{\prime}(t)-\alpha(t)\right)\dot{x}(t),\quad t\geq 0,\\ \omega(0)=\varepsilon_{0}\dot{x}_{0}+\beta\nabla f(x_{0}).\end{cases}\] This is a non-homogeneous first-order ODE in \(\omega\), whose solution can be expressed using the integrating factor \[\omega(t)=e^{-\frac{t}{\beta}}(\varepsilon_{0}\dot{x}_{0}+\beta\nabla f(x_{0 }))+e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\left(\frac{1}{\beta} \varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\,\mathrm{d}s.\] We thus have the following expression for \(\nabla f(x)\), for all \(t\geq 0\), \[\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla f(x_{0})+e^{-\frac{t}{ \beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+e^{-\frac{t}{ \beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+ \varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\,\mathrm{d}s. \tag{5}\] We can now use (4) and (5) in (3) to get \[\|x(t)-x_{N}(t)\|\leq\frac{1}{\beta\mu}\left\|e^{-\frac{t}{\beta}}\varepsilon_{0} \dot{x}_{0}-\varepsilon(t)\dot{x}(t)+e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{ s}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)- \alpha(s)\right)\dot{x}(s)\,\mathrm{d}s\right\|.\] Using the triangle inequality, we obtain, \[\|x(t)-x_{N}(t)\|\leq\frac{\varepsilon_{0}\|\dot{x}_{0}\|}{\beta\mu}e^{-\frac{t }{\beta}}+\frac{\varepsilon(t)\|\dot{x}(t)\|}{\beta\mu}+\frac{1}{\beta\mu}\int _{0}^{t}e^{\frac{1}{\beta}(s-t)}\left|\frac{1}{\beta}\varepsilon(s)+ \varepsilon^{\prime}(s)-\alpha(s)\right|\|\dot{x}(s)\|\,\mathrm{d}s. \tag{6}\] The first term in (6) corresponds to the first one in (1) with \(C_{0}=1/(\beta\mu)\). As for the second one, by direct application of Lemma 3.5, there exists \(C>0\) such that for all \(t\geq 0\), \(\frac{\varepsilon(t)\|\dot{x}(t)\|}{\beta\mu}\leq C\sqrt{\varepsilon(t)}\), so we set \(C_{1}=C/(\beta\mu)\).
Regarding the last term in (6), using Assumption 1 and again Lemma 3.5, it holds that, for all \(s\geq 0\), \[\left|\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right| \|\dot{x}(s)\|\leq\left(\frac{1}{\beta}+c_{1}+c_{2}\right)\varepsilon(s)\| \dot{x}(s)\|\leq\left(\frac{1}{\beta}+c_{1}+c_{2}\right)C\sqrt{\varepsilon(s)}.\] This proves the theorem with \(C_{2}=\frac{C}{\beta\mu}\left(\frac{1}{\beta}+c_{1}+c_{2}\right)\). Let us analyze the bound in Theorem 3.1. The first term in (1) decays exponentially fast and can even be zero if the initial speed is \(\dot{x}_{0}=0\); the second one decays like \(\sqrt{\varepsilon(t)}\); however, the rate at which the last term decreases is less obvious. The following corollary gives a less-tight but easier-to-understand estimate. **Corollary 3.6**.: _Consider the same assumptions and variables as in Theorem 3.1. If furthermore \(c_{1}<\frac{2}{\beta}\), then there exists \(C_{3}>0\) such that for all \(t\geq 0\),_ \[\|x(t)-x_{N}(t)\|\leq C_{0}e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\| +C_{3}\sqrt{\varepsilon(t)}.\] Proof of Corollary 3.6.: For all \(t\geq 0\), define \(J(t)=\int_{0}^{t}e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\,\mathrm{d}s\). An integration by parts yields \[J(t)=\left[\beta e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\right]_{s=0}^{t}- \int_{s=0}^{t}\beta e^{\frac{s}{\beta}}\frac{\varepsilon^{\prime}(s)}{2\sqrt{ \varepsilon(s)}}\,\mathrm{d}s=\beta e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)} -\beta\sqrt{\varepsilon_{0}}+\int_{s=0}^{t}\beta e^{\frac{s}{\beta}}\frac{- \varepsilon^{\prime}(s)}{2\varepsilon(s)}\sqrt{\varepsilon(s)}\,\mathrm{d}s. \tag{7}\] By assumption, \(0\leq c_{1}<\frac{2}{\beta}\) such that for all \(s>0\), \(|\varepsilon^{\prime}(s)|\leq c_{1}\varepsilon(s)\), which in our setting is equivalent to \(\frac{-\varepsilon^{\prime}(s)}{\varepsilon(s)}\leq c_{1}\). So we deduce from (7) that \[J(t)\leq\beta e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}+c_{1}\frac{\beta}{2} \int_{s=0}^{t}e^{\frac{s}{\beta}}\sqrt{\varepsilon(s)}\,\mathrm{d}s=\beta e^{ \frac{t}{\beta}}\sqrt{\varepsilon(t)}+c_{1}\frac{\beta}{2}J(t).\] So, \(\left(1-c_{1}\frac{\beta}{2}\right)J(t)\leq\beta e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}\). By assumption \(1-c_{1}\frac{\beta}{2}>0\), therefore, \(J(t)\leq\frac{2}{2-c_{1}\beta}e^{\frac{t}{\beta}}\sqrt{\varepsilon(t)}\). Finally, using this in (1) and setting \(C_{3}=C_{1}+C_{2}\frac{2}{2-c_{1}\beta}\), we obtain the result. **Remark 3.7**.: _The above proofs suggest that an extension to the case where \(f\) is non-smooth but strongly convex is possible using regularization techniques. This is left for future work._ So far our results only cover the case where \(\alpha\) is "not too large" w.r.t. \(\varepsilon\), and do not study (LM). We now state a more general result that covers these cases. ### Generalization to Arbitrary Viscous Dampings with Sub-exponential Decay This time we do not assume a link between \(\varepsilon\) and \(\alpha\) but only sub-exponential decays. **Assumption 2**.: _There exist \(c_{1},c_{2}\geq 0\) such that for all \(t\geq 0\), \(|\varepsilon^{\prime}(t)|\leq c_{1}\varepsilon(t)\) and \(|\alpha^{\prime}(t)|\leq c_{2}\alpha(t)\)._ We are now in a position to state the main result of this section. **Theorem 3.8**.: _Let \(x_{N}\) and \(x_{LM}\) be the solutions of (CN) and (LM) respectively, and let \(c_{1},c_{2}\geq 0\).
There exist constants \(C,\tilde{C}\geq 0\), depending on \(c_{1}\), \(c_{2}\), such that for all \(\varepsilon\) and \(\alpha\) for which Assumption 2 holds with \(c_{1}\) and \(c_{2}\), the corresponding solution \(x\) of (VM-DIN-AVD) is such that for all \(t\geq 0\),_ \[\|x(t)-x_{N}(t)\|\leq C\left[e^{-\frac{t}{\beta}}+\sqrt{\varepsilon(t)}+ \alpha(t)+\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)}(\sqrt{\varepsilon(s)}+\alpha( s))\,\mathrm{d}s\right], \tag{8}\] _and,_ \[\|x(t)-x_{LM}(t)\|\leq\tilde{C}\left[e^{-\frac{t}{\beta}}+\sqrt{\varepsilon(t )}+\alpha(t)+\int_{s=0}^{t}e^{\frac{1}{\beta}(s-t)}(\sqrt{\varepsilon(s)}+ \alpha(s))\,\mathrm{d}s\right]. \tag{9}\] The proof is postponed to Appendix B. Although it follows a similar reasoning as that of Theorem 3.1, more involved estimates are needed. Let us comment on these results. The bound (8) generalizes Theorem 3.1, although the constants involved will, in general, be larger than those in (1) (see the proof of Theorem 3.8 in appendix). Theorem 3.8 allows for far more flexibility in the choice of \(\varepsilon\) and \(\alpha\) in order to control \(x\) and make it possibly close to \(x_{N}\). The bound in (9) is the same as that in (8) (up to a constant), but this time w.r.t. \(x_{LM}\), thus connecting (VM-DIN-AVD) to (LM). We make the following two important remarks. First, (9) involves \(\alpha\), suggesting that making \(x\) close to \(x_{LM}\) requires not only \(\varepsilon\) but also \(\alpha\) to vanish asymptotically. Additionally, Theorem 3.8 does not state to which of \(x_{N}\) and \(x_{LM}\) the solution of (VM-DIN-AVD) is the closest. It remains an open question to know whether one can make (9) independent of \(\alpha\), and to state to which trajectory \(x\) is the closest. Yet, the numerical experiments in Section 5 suggest that neither is possible. Indeed, we observe that for some functions \(f\), \(x\) is _sometimes_ closer to \(x_{N}\) than to \(x_{LM}\), even when \(\varepsilon(t)\leq\alpha(t)\). Nevertheless, Theorem 3.8 answers the question asked in the introduction: yes, (VM-DIN-AVD) is really of second-order nature since it can be brought close to the second-order dynamics (CN) and (LM). Doing so, it benefits from the good properties of these methods, such as the robustness to bad conditioning, as previously illustrated on the right of Figure 1. This concludes the analysis from a control perspective. We will now derive an approximation of the solution \(x\) in order to study the impact that \(\varepsilon\) and \(\alpha\) have on the speed of convergence of \(x\) to \(x^{\star}\) compared to the speeds of convergence of \(x_{N}\) and \(x_{LM}\). ## 4 Approximate Solutions and Asymptotic Analysis on Quadratic Functions We consider the particular case where \(f\) is a strongly-convex quadratic function in order to study the asymptotic behavior of (VM-DIN-AVD) w.r.t. (CN) and (LM). Quadratic functions are the prototypical example of strongly-convex functions. In particular, any strongly-convex function can be locally approximated by a quadratic one around its minimizer, making the latter a good model for understanding the local behavior of dynamics. In this section, \(f\) is quadratic: \(f(y)=\frac{1}{2}\|Ay-b\|_{2}^{2}\) for all \(y\in\mathbb{R}^{n}\) where \(A\in\mathbb{R}^{n\times n}\) is symmetric positive definite and \(b\in\mathbb{R}^{n}\). Without loss of generality, we take \(b=0\), so that the unique minimum is \(x^{\star}=0\).
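Before specializing, here is a small numerical illustration of the two distances appearing in Theorem 3.8, on a one-dimensional quadratic. This is a sketch with illustrative parameter choices; it integrates (VM-DIN-AVD) and (LM) numerically and uses the closed form of (CN):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, x0 = 1.0, 1.0, 1.0              # f(x) = lam * x^2 / 2, so grad f = lam * x
eps   = lambda t: 1.0 / (1.0 + t)**2
alpha = lambda t: 1.0 / (1.0 + t)

def vm(t, s):                              # (VM-DIN-AVD): eps x'' + (alpha + beta*lam) x' + lam x = 0
    x, v = s
    return [v, -((alpha(t) + beta * lam) * v + lam * x) / eps(t)]

def lm(t, s):                              # (LM): (alpha + beta*lam) x' + lam x = 0
    return [-lam * s[0] / (alpha(t) + beta * lam)]

T = np.linspace(0.0, 8.0, 9)
x  = solve_ivp(vm, (0, 8), [x0, 0.0], t_eval=T, method="Radau",
               rtol=1e-8, atol=1e-10).y[0]
xl = solve_ivp(lm, (0, 8), [x0], t_eval=T, rtol=1e-8, atol=1e-10).y[0]
xn = x0 * np.exp(-T / beta)                # (CN) in closed form
for t, a, b in zip(T, np.abs(x - xn), np.abs(x - xl)):
    print(f"t={t:.0f}  |x-x_N|={a:.3e}  |x-x_LM|={b:.3e}")
```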
### Setting: the Special Case of Quadratic Functions

Quadratic functions are particularly interesting in our setting since DIN-like ODEs take a simpler form (as observed in [10, 40]). Indeed, \(\forall y\in\mathbb{R}^{n}\), \(\nabla f(y)=A^{T}Ay\) and \(\nabla^{2}f(y)=A^{T}A\). Since \(\nabla^{2}f(y)\) is independent of \(y\) we can rewrite (VM-DIN-AVD) in an eigenspace5 of \(A^{T}A\). That is, we can study the ODE coordinate-wise by looking at one-dimensional problems of the form \[\varepsilon(t)\ddot{x}(t)+(\alpha(t)+\beta\lambda)\dot{x}(t)+\lambda x(t)=0, \quad t\geq 0.\] (Q1-VM-DIN-AVD)

Footnote 5: This can be generalized to the case where \(A^{T}A\) is only semi-definite by considering orthogonal projections on an eigenspace spanned by the positive eigenvalues of \(A^{T}A\).

Here (and throughout what follows) \(\lambda>0\) denotes any eigenvalue of \(A^{T}A\) and \(x\colon\mathbb{R}_{+}\to\mathbb{R}\) now denotes the corresponding coordinate (function) of the solution of (VM-DIN-AVD) in an eigenspace of \(A^{T}A\). The dynamics (Q1-VM-DIN-AVD) is a _linear_ second-order ODE in \(x\) with non-constant coefficients. Similarly, (LM) can be rewritten coordinate-wise as \[(\alpha(t)+\beta\lambda)\dot{x}_{LM}(t)+\lambda x_{LM}(t)=0,\quad t\geq 0,\] (Q1-LM) where \(x_{LM}\colon\mathbb{R}_{+}\to\mathbb{R}\), and (CN) becomes \[\beta\dot{x}_{N}(t)+x_{N}(t)=0,\quad t\geq 0,\] (Q1-CN) where again, \(x_{N}\colon\mathbb{R}_{+}\to\mathbb{R}\) is one-dimensional. Observe in particular that (CN) and (LM) are now first-order _linear_ ODEs, whose solutions have the closed forms, for all \(t\geq 0\), \[x_{N}(t)=x_{0}e^{-\frac{t}{\beta}}\quad\text{and}\quad x_{LM}(t)=x_{0}\exp \left(-\int_{0}^{t}\frac{\lambda}{\alpha(s)+\beta\lambda}\,\mathrm{d}s\right). \tag{10}\] Since the minimizer is \(x^{\star}=0\), we directly see that \(x_{N}\) converges exponentially fast to \(x^{\star}\), with a rate independent of \(\lambda\). The rate of \(x_{LM}\) depends however on \(\lambda\) and how fast \(\alpha\) vanishes. Unfortunately, except for some special choices of \(\varepsilon\) and \(\alpha\) (see [10]), one cannot solve the second-order linear ODE (Q1-VM-DIN-AVD) in closed form in general. Additionally, it is hopeless to circumvent the difficulty by finding a closed form for \(\nabla f(x)\), as we did in Section 3, since here \(\nabla f(x)=\lambda x\). In order to study the speed of convergence of \(x\) despite not having access to a closed form, we will approximate it with a controlled error, via a method that we now present.

### The Liouville-Green Method

In what follows, we rely on the Liouville-Green method [30, 26], a technique for obtaining _non-asymptotic_ approximations to solutions of linear second-order ODEs with non-constant coefficients. First, we give the intuition behind the method, following the presentation of [35]. Consider the differential equation \[\ddot{z}(t)-r(t)z(t)=0,\quad t\geq 0, \tag{11}\] where \(r\) is real-valued, positive, and twice continuously differentiable. Any linear second-order ODE can be reformulated in the form (11), see Lemma 4.5 below. Since for all \(t\geq 0\), \(r(t)\neq 0\), we can use the changes of variables \(\tau=\int_{0}^{t}\sqrt{r(s)}\,\mathrm{d}s\) and \(w=r^{1/4}z\) and show that \(w\) is solution to \[\ddot{w}(\tau)-(1+\psi(\tau))w(\tau)=0,\quad\tau\geq 0, \tag{12}\] where6 \(\psi(\tau)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{3}}\).
The LG method consists in neglecting the term \(\psi(\tau)\) in (12), which simply yields two approximate solutions \(\hat{w}_{1}(\tau)=e^{\tau}\) and \(\hat{w}_{2}(\tau)=e^{-\tau}\). Expressing this in terms of \(z\) and \(t\), we obtain \[\hat{z}_{1}(t)=r(t)^{-1/4}\exp\left(\int_{0}^{t}\sqrt{r(s)}\,\mathrm{d}s\right) \quad\text{and}\quad\hat{z}_{2}(t)=r(t)^{-1/4}\exp\left(\int_{0}^{t}-\sqrt{r(s )}\,\mathrm{d}s\right). \tag{13}\]

Footnote 6: We express \(\psi(\tau)\) using \(t\) for the sake of readability, using the one-to-one correspondence between \(\tau\) and \(t\).

These are the LG approximations of the solutions of (11). They are formally valid on any interval \([0,T]\), \(T>0\), when \(\psi\) is "not too large", provided that \(\sqrt{r}\) is integrable on \([0,T]\). **Remark 4.1**.: _There exist other (but less intuitive) ways to derive the approximations above, which allow for generalization to higher-order linear ODEs, see e.g., [14, Chapter 10]._ The advantage of this approach is the possibility to estimate the error made using (13) w.r.t. the true solutions of (11). This is expressed in the following theorem which gathers results from [15, 36, 42]. **Theorem 4.2** (Olver [35]).: _Let \(r\colon\mathbb{R}_{+}\to\mathbb{R}\) be a real, positive, twice continuously differentiable function, and define \(\varphi(t)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{5/2}}\) for all \(t\geq 0\). Then for any \(T>0\), the differential equation,_ \[\ddot{z}(t)-r(t)z(t)=0,\quad t\in[0,T], \tag{14}\] _has two real and twice continuously differentiable solutions defined for all \(t\in[0,T]\) by,_ \[z_{1}(t)=\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}\sqrt{r(s)} \,\mathrm{d}s\right)\quad\text{and}\quad z_{2}(t)=\frac{1+\delta_{2}(t)}{r(t) ^{1/4}}\exp\left(-\int_{0}^{t}\sqrt{r(s)}\,\mathrm{d}s\right),\] _where \(|\delta_{1}(t)|\leq\exp\left(\frac{1}{2}\int_{0}^{t}|\varphi(s)|\,\mathrm{d}s \right)-1\) and \(|\delta_{2}(t)|\leq\exp\left(\frac{1}{2}\int_{t}^{T}|\varphi(s)|\,\mathrm{d}s \right)-1\). If in addition \(\int_{0}^{+\infty}|\varphi(s)|\,\mathrm{d}s<+\infty\), then the results above also hold for \(T=+\infty\)._ **Remark 4.3**.: _We make the following remarks regarding the above result._

_- Note that \(z_{1}\) and \(z_{2}\) in Theorem 4.2 are exact solutions to (14). The LG approximations \(\hat{z}_{1}\) and \(\hat{z}_{2}\) are obtained by neglecting the unknown functions \(\delta_{1}\) and \(\delta_{2}\) in \(z_{1}\) and \(z_{2}\). The theorem gives a non-asymptotic bound for the errors \(|z_{1}(t)-\hat{z}_{1}(t)|\) and \(|z_{2}(t)-\hat{z}_{2}(t)|\), \(t\geq 0\)._

_- Since we assumed \(r\) to be twice continuously differentiable and positive, \(\varphi\) is continuous, so it is integrable except maybe for \(t\to+\infty\)._

_- For the sake of simplicity, the formulation of Theorem 4.2 slightly differs from that in [35]; the original formulation can be recovered by a change of variable._

### Liouville-Green Approximation of (VM-DIN-AVD)

We now proceed to make use of the LG method for approximating the solutions of (Q1-VM-DIN-AVD). The reader only interested in the result can jump directly to Section 4.4. We first make the following assumption.
### Liouville-Green Approximation of (VM-DIN-AVD)

We now proceed to make use of the LG method for approximating the solutions of (Q1-VM-DIN-AVD). The reader only interested in the result can jump directly to Section 4.4. We first make the following assumption.

**Assumption 3**.: _The functions \(\alpha\) and \(\varepsilon\) are three times continuously differentiable, and \(\varepsilon_{0}\) is such that \(\forall t\geq 0\), \(\varepsilon_{0}<\frac{(\beta\lambda)^{2}}{2|\alpha^{\prime}(t)|+4\lambda}\)._

**Remark 4.4**.: _The condition on \(\varepsilon_{0}\) in Assumption 3 is only technical, so that \(r\) defined below is positive. It can be easily satisfied since \(|\alpha^{\prime}(t)|\) is uniformly bounded. Indeed, \(\alpha\) is non-increasing and non-negative (see Section 1.1), from which one can deduce that \(\int_{0}^{+\infty}|\alpha^{\prime}(s)|\,\mathrm{d}s\leq\alpha_{0}\)._

We now rewrite (Q1-VM-DIN-AVD) in the form (14).

**Lemma 4.5**.: _Suppose that Assumption 3 holds, and let \(x\) be the solution of (Q1-VM-DIN-AVD). For all \(t\geq 0\), define_

\[p(t)=\frac{\alpha(t)+\beta\lambda}{\varepsilon(t)}\quad\text{and}\quad r(t)=\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}. \tag{15}\]

_Then, \(p\) and \(r\) are twice continuously differentiable, \(r\) is positive and the function \(y\) defined for all \(t\geq 0\) by \(y(t)=x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right)\) is a solution to_

\[\ddot{y}(t)-r(t)y(t)=0,\quad t\geq 0, \tag{16}\]

_with initial condition \((y(0),\dot{y}(0))=(x_{0},\dot{x}_{0}+\frac{p(0)}{2}x_{0})\)._

Proof.: We first check that for all \(t\geq 0\), \(r(t)\) is positive. Let \(t>0\),

\[r(t)>0\iff\frac{(\alpha(t)+\beta\lambda)^{2}}{4\varepsilon(t)^{2}}+\frac{\alpha^{\prime}(t)}{2\varepsilon(t)}-\frac{(\alpha(t)+\beta\lambda)\varepsilon^{\prime}(t)}{2\varepsilon(t)^{2}}-\frac{\lambda}{\varepsilon(t)}>0. \tag{17}\]

Since \(\varepsilon^{\prime}(t)\leq 0\) and \(\alpha^{\prime}(t)\leq 0\), one can check that a sufficient condition for (17) to hold is

\[\frac{(\alpha(t)+\beta\lambda)^{2}}{4}>\left(\frac{|\alpha^{\prime}(t)|}{2}+\lambda\right)\varepsilon(t),\]

which is in turn implied by \(\frac{(\beta\lambda)^{2}}{2|\alpha^{\prime}(t)|+4\lambda}>\varepsilon_{0}\), since \(\varepsilon(t)\leq\varepsilon_{0}\) and \(\alpha(t)\geq 0\). So under Assumption 3, for all \(t\geq 0\), \(r(t)>0\).

We then check that \(y\) is indeed a solution to (16). Let \(t>0\),

\[\dot{y}(t)=\frac{p(t)}{2}x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right)+\dot{x}(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right),\]

and

\[\ddot{y}(t)=\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right)\left[\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}\right)x(t)+p(t)\dot{x}(t)+\ddot{x}(t)\right].\]

Since \(x\) solves (Q1-VM-DIN-AVD), it holds that \(\ddot{x}(t)=-p(t)\dot{x}(t)-\frac{\lambda}{\varepsilon(t)}x(t)\), so,

\[\ddot{y}(t)=\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right)\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}\right)x(t)=\left(\frac{p(t)^{2}}{4}+\frac{p^{\prime}(t)}{2}-\frac{\lambda}{\varepsilon(t)}\right)y(t)=r(t)y(t).\]

Lemma 4.5 gives a reformulation of (Q1-VM-DIN-AVD) suited to apply Theorem 4.2.
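The reformulation of Lemma 4.5 is easy to sanity-check numerically: one can integrate (Q1-VM-DIN-AVD) directly, solve \(\ddot{y}=r(t)y\) with the stated initial condition, and map \(y\) back to \(x\). A minimal sketch of our own, where the decays and all parameter values are illustrative assumptions satisfying Assumption 3:

```python
# Numerical sanity check of Lemma 4.5 (our own sketch, not from the paper):
# solve (Q1-VM-DIN-AVD) directly, solve y'' = r(t) y separately, map y back
# to x via x(t) = y(t) exp(-int_0^t p/2), and compare the two trajectories.
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, x0, dx0 = 1.0, 1.0, 1.0, 0.0
eps0, alpha0 = 0.1, 1.0          # eps0 < (beta*lam)^2/(2|alpha'|+4*lam) = 1/6

eps    = lambda t: eps0 / (1.0 + t) ** 2
deps   = lambda t: -2.0 * eps0 / (1.0 + t) ** 3
alpha  = lambda t: alpha0 / (1.0 + t)
dalpha = lambda t: -alpha0 / (1.0 + t) ** 2

p  = lambda t: (alpha(t) + beta * lam) / eps(t)
dp = lambda t: (dalpha(t) * eps(t) - (alpha(t) + beta * lam) * deps(t)) / eps(t) ** 2
r  = lambda t: p(t) ** 2 / 4 + dp(t) / 2 - lam / eps(t)     # as in (15)

ode_x = lambda t, u: [u[1], -p(t) * u[1] - lam / eps(t) * u[0]]
ode_y = lambda t, v: [v[1], r(t) * v[0], p(t) / 2]          # v[2] tracks int p/2

T = np.linspace(0.0, 1.0, 6)
sx = solve_ivp(ode_x, [0, 1], [x0, dx0], t_eval=T, rtol=1e-10, atol=1e-12)
sy = solve_ivp(ode_y, [0, 1], [x0, dx0 + p(0) / 2 * x0, 0.0],
               t_eval=T, rtol=1e-10, atol=1e-12)

x_from_y = sy.y[0] * np.exp(-sy.y[2])
print(np.max(np.abs(sx.y[0] - x_from_y)))   # should be ~0 up to solver error
```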
To use the theorem for all \(t\geq 0\), we need to ensure that \(\varphi(t)=\frac{4r(t)r^{\prime\prime}(t)-5r^{\prime}(t)^{2}}{16r(t)^{5/2}}\) is integrable. To this aim we make the following assumption.

**Assumption 4**.: _The functions \(\varepsilon\) and \(\alpha\) have first, second and third-order derivatives that are integrable on \([0,+\infty[\). In addition, \(\lim\limits_{t\to\infty}\varepsilon(t)=0\) and \(\varepsilon^{\prime}(t)^{2}/\varepsilon(t)\) is integrable on \([0,+\infty[\)._

**Remark 4.6**.: _Assumption 4 holds for most decays used in practice, in particular any polynomial decay of the form \(\frac{\varepsilon_{0}}{(t+1)^{a}}\) and \(\frac{\alpha_{0}}{(t+1)^{b}}\), \(a\in\mathbb{N}\setminus\{0\}\) and \(b\in\mathbb{N}\). Note that \(\varepsilon\) and \(\alpha\) need not be integrable and \(\alpha\) can even be constant._

The next lemma states the integrability of \(\varphi\) on \([0,+\infty[\).

**Lemma 4.7**.: _Under Assumptions 3 and 4, \(\int_{0}^{+\infty}|\varphi(s)|\,\mathrm{d}s<+\infty\)._

The proof of this lemma relies on relatively simple arguments but involves long computations and is thus postponed to Appendix C. We can now use Theorem 4.2 to obtain an exact form for the solution of (Q1-VM-DIN-AVD) based on the LG approximations.

**Theorem 4.8**.: _Suppose that Assumptions 3 and 4 hold. There exist \(A,B\in\mathbb{R}\) such that \(x(0)=x_{0}\), \(\dot{x}(0)=\dot{x}_{0}\) and for all \(t\geq 0\), the solution of (Q1-VM-DIN-AVD) is_

\[x(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right)\]
\[+B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\frac{\sqrt{\varepsilon(t)}}{\sqrt{\alpha(t)+\beta\lambda}}\exp\left(\int_{0}^{t}-\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}+\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right), \tag{18}\]

_where for all \(t\geq 0\),_

\[|\delta_{1}(t)|\leq\exp\left(\frac{1}{2}\int_{0}^{t}|\varphi(s)|\,\mathrm{d}s\right)-1\quad\text{and}\quad|\delta_{2}(t)|\leq\exp\left(\frac{1}{2}\int_{t}^{+\infty}|\varphi(s)|\,\mathrm{d}s\right)-1. \tag{19}\]

Thanks to the bounds (19), we now have an approximation of \(x\). We will use it in particular to compare \(x\) asymptotically to the solutions of (Q1-LM) and (Q1-CN). Before this, we prove Theorem 4.8.

Proof of Theorem 4.8.: Let \(x\) be the solution of (Q1-VM-DIN-AVD) and define \(p,r\) as in (15). Let us also define \(y(t)\stackrel{{\text{\tiny def}}}{{=}}x(t)\exp\left(\int_{0}^{t}\frac{p(s)}{2}\,\mathrm{d}s\right)\). According to Lemma 4.5, \(r\) is positive and \(y\) is a solution to (16). Then, from Lemma 4.7, \(\int_{0}^{+\infty}|\varphi(s)|\,\mathrm{d}s<+\infty\), so we can apply Theorem 4.2 to \(y\) on \([0,+\infty[\). Therefore, there exist \(A,B\in\mathbb{R}\) such that \(\forall t\geq 0\),

\[y(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}\sqrt{r(s)}\,\mathrm{d}s\right)+B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\sqrt{r(s)}\,\mathrm{d}s\right),\]

where \(A\) and \(B\) are determined by the initial conditions, and \(\delta_{1}\), \(\delta_{2}\) are such that (19) holds. Going back to \(x(t)=y(t)\exp\left(\int_{0}^{t}-\frac{p(s)}{2}\,\mathrm{d}s\right)\), we obtain that for all \(t\geq 0\),

\[x(t)=A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\frac{p(s)}{2}+\sqrt{r(s)}\,\mathrm{d}s\right)+B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\exp\left(\int_{0}^{t}-\frac{p(s)}{2}-\sqrt{r(s)}\,\mathrm{d}s\right). \tag{20}\]

It now remains to expand the terms in the two exponentials in (20) in order to obtain (18).
To this aim, we approximate \(\sqrt{r(s)}\); let \(s\geq 0\),

\[\sqrt{r(s)} =\frac{p(s)}{2}\sqrt{1+\frac{2p^{\prime}(s)}{p(s)^{2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{2}}}\]
\[=\frac{p(s)}{2}\left(1+\frac{p^{\prime}(s)}{p(s)^{2}}-\frac{2\lambda}{\varepsilon(s)p(s)^{2}}-\frac{1}{8}\left(\frac{2p^{\prime}(s)}{p(s)^{2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{2}}\right)^{2}+o(\varepsilon(s)^{2})\right)\]
\[=\frac{p(s)}{2}+\frac{p^{\prime}(s)}{2p(s)}-\frac{\lambda}{\varepsilon(s)p(s)}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda}{\varepsilon(s)p(s)^{3/2}}\right)^{2}+o(\varepsilon(s))\]
\[=\frac{p(s)}{2}+\frac{p^{\prime}(s)\varepsilon(s)}{2(\alpha(s)+\beta\lambda)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\]
\[=\frac{p(s)}{2}+\frac{\alpha^{\prime}(s)}{2(\alpha(s)+\beta\lambda)}-\frac{\varepsilon^{\prime}(s)}{2\varepsilon(s)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s)). \tag{21}\]

Focusing on the first exponential term in (20), we deduce from (21) that for all \(t\geq 0\),

\[\exp\left(\int_{0}^{t}-\frac{p(s)}{2}+\sqrt{r(s)}\,\mathrm{d}s\right)\]
\[=\exp\left(\int_{0}^{t}\frac{\alpha^{\prime}(s)/2}{\alpha(s)+\beta\lambda}-\frac{\varepsilon^{\prime}(s)}{2\varepsilon(s)}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\,\mathrm{d}s\right)\]
\[=\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\alpha_{0}+\beta\lambda}}\frac{\sqrt{\varepsilon_{0}}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}\frac{-\lambda}{\alpha(s)+\beta\lambda}-\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{\varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}+o(\varepsilon(s))\,\mathrm{d}s\right)\]
\[=\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\alpha_{0}+\beta\lambda}}\frac{\sqrt{\varepsilon_{0}}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}\frac{-\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right),\]

where the last line relies on further computations postponed to Lemma C.1 in Appendix C. Performing the exact same type of computations on \(\exp\left(\int_{0}^{t}-\frac{p(s)}{2}-\sqrt{r(s)}\,\mathrm{d}s\right)\), and up to redefining \(A\) and \(B\) so as to encompass all the constants, we obtain (18) and the result is proved.

### Comparison of \(x\) with \(x_{LM}\) and \(x_{N}\)

We now have an expression for \(x\) which is almost explicit: we do not know \(\delta_{1}\) and \(\delta_{2}\) in closed form, but they are uniformly bounded (by Lemma 4.7). We will now compare the asymptotic behavior of (18) with those of the solutions of (Q1-LM) and (Q1-CN), which we denoted \(x_{LM}\) and \(x_{N}\) respectively. Our main result of Section 4 is the following, where \(\sim_{+\infty}\) denotes the asymptotic equivalence between two functions as \(t\to\infty\).

**Theorem 4.9**.: _Let \(x\) be the solution of (Q1-VM-DIN-AVD), given in (18), and let \(x_{LM}\) and \(x_{N}\) be given by their closed forms stated in (10)._
_Under Assumptions 3 and 4, there exists \(C>0\) such that the following asymptotic equivalences hold:_

\[\begin{split} x(t)&\sim_{+\infty}x_{LM}(t)C\exp\left(\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right),\quad\text{and}\\ x(t)&\sim_{+\infty}x_{N}(t)C\exp\left(\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right).\end{split} \tag{22}\]

_As a consequence, the convergence of \(x\) to \(x^{\star}\) is:_

1. _Faster than that of \(x_{LM}\) if \(\varepsilon\) is non-integrable, and as fast otherwise._
2. _Slower than that of \(x_{N}\) if \(\alpha\) is non-integrable, and as fast if \(\alpha\) is integrable, in the case where \(\forall t\geq 0\), \(\alpha(t)>\varepsilon(t)\)._
3. _Faster than that of \(x_{N}\) if \(\varepsilon\) is non-integrable, and as fast if \(\varepsilon\) is integrable, in the case where \(\forall t\geq 0\), \(\alpha(t)<\varepsilon(t)\)._

While the results of Section 3 were related to the closeness of (VM-DIN-AVD) w.r.t. (CN) and (LM) from a control perspective, Theorem 4.9 provides a different type of insight. First, the results are asymptotic, so they only allow to control (VM-DIN-AVD) for large \(t\). They provide however a clear understanding of the nature of the solutions of (VM-DIN-AVD) and their convergence. The conclusions (summarized in Table 1) are in accordance with what we would expect: when the viscous damping is larger than the variable mass, (VM-DIN-AVD) behaves more like the Levenberg-Marquardt method than the Newton one, but it actually becomes an accelerated Levenberg-Marquardt dynamics when \(\varepsilon\) is non-integrable but vanishing. However, when the variable mass \(\varepsilon\) is larger than \(\alpha\), the dynamics is closer to that of the Newton method, and can actually be an accelerated Newton dynamics, again for non-integrable \(\varepsilon\). This is analogous to the necessary condition that \(\alpha\) must be non-integrable in order to accelerate first-order methods in convex optimization (see [3]). We conclude this section by proving Theorem 4.9.

Proof of Theorem 4.9.: Thanks to Assumptions 3 and 4, Theorem 4.8 tells us that \(x\) has the form (18). We now analyze the two terms in (18). First, we know from Theorem 4.8 that \(\delta_{1}(0)=0\) and \(\lim_{t\to+\infty}\delta_{2}(t)=0\). In addition, by Lemma 4.7, \(\delta_{1}\) and \(\delta_{2}\) are uniformly bounded by some positive constant. Then \(r(t)^{-1/4}\) decays asymptotically like \(\sqrt{\varepsilon(t)}\) and \(\alpha\) is bounded. So \(A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}\) is asymptotically equivalent to some constant \(c_{1}\in\mathbb{R}\) as \(t\to+\infty\). Similarly, \(B\frac{1+\delta_{2}(t)}{r(t)^{1/4}}\frac{\sqrt{\varepsilon(t)}}{\sqrt{\alpha(t)+\beta\lambda}}\) is equivalent to \(c_{2}\varepsilon(t)\), with \(c_{2}\in\mathbb{R}\). We now analyze the “exponential factors” in (18). On the one hand, \(\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\) converges to \(\frac{1}{\beta}\) as \(s\to\infty\), while \(\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}\) diverges to \(+\infty\).
Therefore, we deduce that

\[\exp\left(\int_{0}^{t}-\frac{\alpha(s)+\beta\lambda}{\varepsilon(s)}+\frac{\lambda}{\alpha(s)+\beta\lambda}+\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right)=o\left(\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right)\right).\]

As a consequence, the second term in (18) will decrease to \(0\) faster than the first one (let alone the additional \(\varepsilon(t)\) decay that we have just discussed). The asymptotic behavior of \(x\) will thus be governed by the first term in (18).

Let us now focus on the first term in (18). Observe that \(\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}\,\mathrm{d}s\right)\) is exactly the decay of \(x_{LM}\) in (10). Thus, we have proved that there exists \(C>0\) such that the following asymptotic equivalence holds:

\[A\frac{1+\delta_{1}(t)}{r(t)^{1/4}}\frac{\sqrt{\alpha(t)+\beta\lambda}}{\sqrt{\varepsilon(t)}}\exp\left(\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right)\sim_{+\infty}x_{LM}(t)C\exp\left(\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\right),\]

which proves the first part of (22). The second equivalence in (22) is obtained using the identity

\[\int_{0}^{t}-\frac{\lambda}{\alpha(s)+\beta\lambda}\,\mathrm{d}s=\int_{0}^{t}-\frac{1}{\beta}+\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\,\mathrm{d}s=-\frac{t}{\beta}+\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\,\mathrm{d}s, \tag{23}\]

and \(e^{-t/\beta}\) is precisely the rate at which \(x_{N}\) decreases. So (22) holds. It finally remains to deduce the conclusions of the theorem from (22).

- Regarding the comparison with \(x_{LM}\), the integral \(\int_{0}^{t}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\) converges if and only if \(\varepsilon\) is integrable on \([0,+\infty[\), and diverges to \(-\infty\) when \(\varepsilon\) is not. So \(x\) converges to \(0\) at least as fast as \(x_{LM}\), and faster when \(\varepsilon\) is not integrable.
- As for the comparison with \(x_{N}\), if \(\alpha(s)>\varepsilon(s)\geq 0\) for all \(s\geq 0\), then the integral \(\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}-\frac{\lambda^{2}\varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s))\,\mathrm{d}s\) is convergent in \(+\infty\) if and only if \(\alpha\) is integrable, and diverges to \(+\infty\) when \(\alpha\) is non-integrable. So when \(\alpha\) is integrable, the speed of convergence of \(x\) is the same as that of \(x_{N}\). When \(\alpha\) is not integrable, the convergence to \(0\) is slower but still holds. Indeed, for all \(s\geq 0\), \(\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}<\frac{\alpha(s)}{\beta\alpha(s)}=\frac{1}{\beta}\). Thus for all \(t>0\), \(-\frac{t}{\beta}+\int_{0}^{t}\frac{\alpha(s)}{\beta(\alpha(s)+\beta\lambda)}\,\mathrm{d}s<0\).
- Finally, the comparison with \(x_{N}\) in the case \(\varepsilon(s)>\alpha(s)\) is exactly the same as the comparison with \(x_{LM}\) using (23).
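The dichotomy of Theorem 4.9 can be observed directly on the one-dimensional model: integrating (Q1-VM-DIN-AVD) numerically and dividing by the closed form \(x_{LM}\) of (10), the ratio should keep decreasing when \(\varepsilon\) is non-integrable and level off when it is integrable. A minimal sketch of ours (all parameter values are illustrative choices compatible with Assumption 3):

```python
# Numerical illustration of Theorem 4.9 (our own sketch): integrate the 1-D
# model (Q1-VM-DIN-AVD) and divide by the closed form x_LM of (10). All values
# (beta, lam, eps0, alpha0, horizons) are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp, quad

beta, lam, x0, dx0, eps0, alpha0 = 1.0, 1.0, 1.0, 0.0, 0.15, 1.0
alpha = lambda t: alpha0 / (1.0 + t)
times = [10.0, 20.0, 40.0]

def x_LM(t):
    integral, _ = quad(lambda s: lam / (alpha(s) + beta * lam), 0.0, t)
    return x0 * np.exp(-integral)

for a, label in [(1, "non-integrable eps"), (2, "integrable eps    ")]:
    eps = lambda t, a=a: eps0 / (1.0 + t) ** a
    rhs = lambda t, u: [u[1], -((alpha(t) + beta * lam) * u[1] + lam * u[0]) / eps(t)]
    sol = solve_ivp(rhs, [0, times[-1]], [x0, dx0], t_eval=times,
                    method="LSODA", rtol=1e-9, atol=1e-12)
    ratios = [sol.y[0][i] / x_LM(t) for i, t in enumerate(times)]
    # The first row should keep decreasing (acceleration w.r.t. x_LM),
    # the second should level off, as in conclusion 1. of Theorem 4.9.
    print(label, ["%.3e" % rho for rho in ratios])
```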
## 5 Numerical Experiments

We present two sets of experiments that illustrate our main results from Sections 3 and 4. We first detail the general methodology.

### Methodology

We compare the solutions of (CN), (LM) and (VM-DIN-AVD) obtained for strongly-convex functions in dimension \(n=100\). Since closed-form solutions are not available, they are estimated via discretization schemes with a small step-size \(\gamma=10^{-1}\). We used Euler semi-implicit schemes, where a linear system is solved at each iteration, for the sake of stability. The resulting algorithms are detailed in Appendix D.

### First Experiment: Distance between Trajectories

We begin with an empirical validation of the results of Section 3 on the distance between \(x\), \(x_{LM}\) and \(x_{N}\). Each of Figures 2, 3 and 4 (as well as Figure 6 in Appendix D) corresponds to a different strongly-convex function, specified below its corresponding figure. In order to ensure strong convexity, each function contains a quadratic term of the form \(\|Ax\|^{2}\), where \(A\) is symmetric positive definite. Several observations can be made from the numerical results, but we first note on the right plots of Figures 2 to 4 that \(x_{N}\) always converges asymptotically linearly (i.e., exponentially fast). This is also the case for \(x\) and \(x_{LM}\) in some (but not all) cases. This is important because \(\|x(t)-x_{N}(t)\|\leq\|x(t)-x^{\star}\|+\|x_{N}(t)-x^{\star}\|\), so if both \(x\) and \(x_{N}\) converge linearly, then the bounds of Theorems 3.1 and 3.8 need not be tight asymptotically. That being said, the strength of these results is to be non-asymptotic, and this is highlighted by the experiments as we now explain.

Looking at the left and middle plots of Figures 2, 3 and 4, we observe that Theorems 3.1 and 3.8 seem empirically validated, since the distances \(\|x(t)-x_{N}(t)\|\) and \(\|x(t)-x_{LM}(t)\|\) decrease relatively fast to zero. Again, when \(x\) converges rapidly to \(x^{\star}\) this is not very insightful; however, the main interest of our theorems appears on the left of Figures 3 and 4: the blue and green curves, corresponding to slowly decaying choices of \(\varepsilon\), converge more slowly than other trajectories. However, when taking faster decays, we recover fast convergence and closeness to \(x_{N}\) (this is particularly true for the purple curve). Very similar observations are made w.r.t. \(x_{LM}\) on the middle plots. Despite not being stated in the theorems of Section 3, the experiments match the intuition that when \(\varepsilon>\alpha\), \(x\) may be closer to \(x_{N}\), and when \(\varepsilon\leq\alpha\), \(x\) would rather be closer to \(x_{LM}\). This is more noticeable on the top rows of the figures, where \(\alpha\) is not integrable.

Figure 2: Comparison of the solutions \(x_{N}\), \(x_{LM}\) and \(x\) of (CN), (LM) and (VM-DIN-AVD) respectively, for a strongly-convex function of the form \(f(x)=e^{-\|x\|^{2}}+\frac{1}{2}\|Ax\|^{2}\). Left figures: distance \(\|x(t)-x_{N}(t)\|\) versus time \(t\), each curve corresponds to a different choice of \(\varepsilon\); middle figures: distance \(\|x(t)-x_{LM}(t)\|\), again for several \(\varepsilon\). Right figures: distance to the optimum \(x^{\star}\) for reference; \(x_{N}\) and \(x_{LM}\) are in dotted and dashed lines, other curves correspond to (VM-DIN-AVD) for several choices of \(\varepsilon\). The brown curve is often hidden behind the purple (and sometimes the pink) curve. Top and bottom rows show results respectively for non-integrable and integrable viscous dampings \(\alpha\). The theoretical bounds from Theorem 3.8 are only displayed on Figure 4 below, for the sake of readability.
Figure 4 suggests that the bounds in Theorem 3.8 are rather tight for small \(t\), since, for example, the blue and green curves on the left show a relatively slow vanishing of \(\|x(t)-x_{N}(t)\|\) for slowly decaying \(\varepsilon\). The experiments show that the bounds seem, however, often too pessimistic for large \(t\), for which the second part of our study provides better insights (see Section 4 and below). Interestingly, slow decays of \(\varepsilon\) may sometimes result in faster convergence for \(x\) than fast decays (and also faster convergence than \(x_{LM}\)), notably on Figure 2. We also note that \(\varepsilon(t)=\varepsilon_{0}/t\) combined either with \(\alpha(t)=\alpha_{0}/t\) or \(\alpha(t)=\alpha_{0}/t^{2}\) seems to always yield fast convergence in these experiments (and sometimes the fastest of all dynamics).

### Second Experiment: Empirical Validation of Theorem 4.9

We now turn our attention to the solutions \(x\), \(x_{N}\) and \(x_{LM}\) for a quadratic function of the form \(f(y)=\frac{1}{2}\|Ay\|^{2}\), \(y\in\mathbb{R}^{n}\), and for several choices of \(\varepsilon\) and \(\alpha\). The results in Figure 5 exactly match the expected behavior summarized in Table 1. Indeed, looking first at the right-hand side of Figure 5, \(x\) is as fast as the corresponding8 \(x_{LM}\) when \(\varepsilon\) is integrable and regardless of \(\alpha\), and \(x\) is faster when \(\varepsilon\) is non-integrable. Then on the left-hand side, when comparing to \(x_{N}\), \(x\) is slower in settings where \(\alpha\) is larger than \(\varepsilon\) and non-integrable (red curves), or almost as fast when \(\alpha\) is integrable (pink curve). However, acceleration w.r.t. \(x_{N}\) is indeed achieved for non-integrable \(\varepsilon\) regardless of \(\alpha\) (first two blue curves), and the rate is the same as that of \(x_{N}\) when \(\varepsilon\) is integrable (third blue curve).

Footnote 8: That is, the solution of (LM) for the same \(\alpha\) as that considered for (VM-DIN-AVD).

Figure 3: Similar experiment and figures as those described in Figure 2, but for the function \(f(x)=\log\left(\sum_{i=1}^{n}e^{x_{i}}+e^{-x_{i}}\right)+\frac{1}{2}\|Ax\|^{2}\).

## 6 Conclusions and Perspectives

We introduced a general ODE (VM-DIN-AVD) featuring variable mass, and provided a deep understanding of the behavior of its solutions w.r.t. the time-dependent control parameters \(\varepsilon\) and \(\alpha\), both asymptotically and non-asymptotically. We can conclude that (VM-DIN-AVD) is indeed of (regularized) Newton type, since it can be controlled to be close to both (CN) and (LM). Yet we also showed that (VM-DIN-AVD) fundamentally differs from the other two dynamics in its nature. In particular, Theorem 4.9 and the numerical experiments emphasized that \(\varepsilon\) and \(\alpha\) can accelerate (or slow down) (VM-DIN-AVD) w.r.t. (CN) and (LM). We also note that our bounds in Theorems 3.1 and 3.8 seem relatively tight, in particular for functions with large gradients (see Figure 4). Our contribution yields a complete and satisfying picture of the relation between the three systems, which was only partially understood. We believe that our results build a strong foundation for the development of algorithms that combine the best properties of first- and second-order optimization methods.

Figure 4: Similar experiment and figures as those described in Figure 2, but for the function \(f(x)=\|x\|^{50}+\frac{1}{2}\|Ax\|^{2}\).
The thin “dash dotted” curves represent the theoretical bounds from Theorem 3.8 for each choice of \((\varepsilon,\alpha)\) considered.

As for future work, we showed that (VM-DIN-AVD) is promising from an optimization perspective. So far we approximated solutions of (VM-DIN-AVD) via schemes that required solving a linear system at each iteration (this is also true for (CN) and (LM)). Our new understanding of \((\varepsilon,\alpha)\) paves the way towards designing new Newton-like algorithms with a significantly reduced computational cost, which is crucial for large-scale optimization. Another open question is whether it is possible to preserve the properties evidenced in this work when \(\varepsilon\) is defined in a closed-loop manner (formally, depending on \(x\) rather than on \(t\)). Finally, it would be worth investigating how the current work can be extended to general convex and/or non-smooth functions.

## Acknowledgment

C. Castera, J. Fadili and P. Ochs are supported by the ANR-DFG joint project TRINOM-DS under number ANR-20-CE92-0037-01. The numerical experiments were made thanks to the development teams of the following libraries: Python [38], Numpy [43] and Matplotlib [27].

Figure 5: Numerical validation of Theorem 4.9: distance to the optimum \(x^{\star}\) as a function of time, on a quadratic function \(f(x)=\frac{1}{2}\|Ax\|^{2}\). Left: speed comparison w.r.t. (CN) for several choices of \(\varepsilon\) and \(\alpha\). Right: comparison with (LM) for \(\alpha\) integrable or not, and several choices of \(\varepsilon\). Shades of blue represent cases where \(\varepsilon(t)>\alpha(t)\), while shades of red represent the opposite setting.

## Appendix A Equivalent First-order System and Global Existence of Solutions

### First-order Equivalent Formulation

We reformulate (VM-DIN-AVD) as a system of ODEs involving only first-order time derivatives and the gradient of \(f\). For this purpose, notice that for all \(t>0\), (VM-DIN-AVD) can be rewritten as

\[\frac{\mathrm{d}}{\mathrm{d}t}\left[\varepsilon(t)\dot{x}(t)\right]+\beta\frac{\mathrm{d}}{\mathrm{d}t}\nabla f(x(t))+\alpha(t)\dot{x}(t)-\varepsilon^{\prime}(t)\dot{x}(t)+\nabla f(x(t))=0,\quad t\geq 0. \tag{24}\]

We then integrate (24) for all \(t\geq 0\),

\[\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))-\varepsilon_{0}\dot{x}_{0}-\beta\nabla f(x_{0})+\int_{0}^{t}(\alpha(s)-\varepsilon^{\prime}(s))\dot{x}(s)+\nabla f(x(s))\,\mathrm{d}s=0. \tag{25}\]

For all \(t\geq 0\), we define the variable

\[z(t)=\int_{0}^{t}(\alpha(s)-\varepsilon^{\prime}(s))\dot{x}(s)+\nabla f(x(s))\,\mathrm{d}s-\varepsilon_{0}\dot{x}_{0}-\beta\nabla f(x_{0}).\]

We differentiate \(z\): for all \(t>0\), \(\dot{z}(t)=(\alpha(t)-\varepsilon^{\prime}(t))\dot{x}(t)+\nabla f(x(t))\), so that we can rewrite (25) as

\[\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))+z(t)=0\\ \dot{z}(t)-(\alpha(t)-\varepsilon^{\prime}(t))\dot{x}(t)-\nabla f(x(t))=0\end{cases},\quad t\geq 0.\]

We substitute the first line into the second one,

\[\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))+z(t)=0\\ \beta\dot{z}(t)-\beta(\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t))\dot{x}(t)+z(t)=0\end{cases},\quad t\geq 0. \tag{26}\]

To ease readability, we recall the notation \(\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)\) from Section 2. Then define for all \(t\geq 0\), \(y(t)=z(t)-\nu(t)x(t)\), and differentiate: \(\dot{y}(t)=\dot{z}(t)-\nu(t)\dot{x}(t)-\nu^{\prime}(t)x(t)\).
We finally rewrite (26) as

\[\begin{cases}\varepsilon(t)\dot{x}(t)+\beta\nabla f(x(t))&+\nu(t)x(t)+y(t)=0\\ \dot{y}(t)+\nu^{\prime}(t)x(t)&+\frac{\nu(t)}{\beta}x(t)+\frac{1}{\beta}y(t)=0\end{cases},\]

which is (gVM-DIN-AVD). Finally, the initial condition on \(y\) is

\[y(0)=z(0)-\nu(0)x(0)=-\varepsilon_{0}\dot{x}_{0}-\beta\nabla f(x_{0})-(\alpha_{0}-\varepsilon^{\prime}_{0}-\frac{1}{\beta}\varepsilon_{0})x_{0}.\]

**Remark A.1**.: _Notice that the quantity \(\nu(t)=\alpha(t)-\varepsilon^{\prime}(t)-\frac{1}{\beta}\varepsilon(t)\) involved in (gVM-DIN-AVD) also plays a key role in our analysis of Section 3, see e.g., (6). In particular, the sign of \(\nu(t)\) changes the nature of (VM-DIN-AVD) and is related to Assumption 1._

### Local Solutions are Global

Using the formulation (gVM-DIN-AVD), we proved local existence and uniqueness of solutions of (VM-DIN-AVD) in Section 2. Using the same notations, we justify that the local solution \((x,y)\) actually exists globally. According to Lemma 3.4, the Lyapunov function \(U(t)=\frac{\varepsilon(t)}{2}\|\dot{x}(t)\|^{2}+f(x(t))-f(x^{\star})\) is non-negative and decreasing. Thus, it is uniformly bounded on \(\mathbb{R}_{+}\), and the same holds for \(t\mapsto f(x(t))\) since for all \(t\geq 0\), \(U(t)\geq f(x(t))-f(x^{\star})\). Then, \(f\) is coercive by assumption, so \(x\) is uniformly bounded on \(\mathbb{R}_{+}\) (otherwise \(f(x)\) could not remain bounded). We now prove that \(y\) is also uniformly bounded. From (gVM-DIN-AVD), for all \(t>0\), \(\dot{y}(t)=-\frac{1}{\beta}y(t)-(\frac{\nu(t)}{\beta}+\nu^{\prime}(t))x(t)\), so we can use the following integrating factor:

\[y(t)=e^{-\frac{t}{\beta}}y_{0}-e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}(\nu(s)+\beta\nu^{\prime}(s))x(s)\,\mathrm{d}s.\]

Using triangle inequalities, for all \(t\geq 0\),

\[\|y(t)\|\leq e^{-\frac{t}{\beta}}\|y_{0}\|+\sup_{s\geq 0}\|(\nu(s)+\beta\nu^{\prime}(s))x(s)\|e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}\,\mathrm{d}s\leq\|y_{0}\|+\sup_{s\geq 0}\|(\nu(s)+\beta\nu^{\prime}(s))x(s)\|. \tag{27}\]

Using the definition of \(\varepsilon\) and \(\alpha\) from Sections 1.1 and 2, observe that \(\varepsilon\), \(\alpha\), \(\varepsilon^{\prime}\) and \(\alpha^{\prime}\) are all bounded on \(\mathbb{R}_{+}\), and \(\varepsilon^{\prime\prime}\) is assumed to be bounded. So \(\nu\) and \(\nu^{\prime}\) are bounded, and since we also proved that \(x\) is uniformly bounded on \(\mathbb{R}_{+}\), we deduce from (27) that \(y\) is uniformly bounded as well. Hence, the unique local solution \((x,y)\) is global.
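Since (gVM-DIN-AVD) involves only first-order derivatives and \(\nabla f\), it is also convenient for off-the-shelf integrators. The sketch below integrates it for an illustrative quadratic \(f(x)=\frac{1}{2}\|Ax\|^{2}\); the decays and all numerical values are our own assumptions, chosen only to make the system well-behaved, not values from the paper.

```python
# Minimal sketch integrating the first-order system (gVM-DIN-AVD) with scipy,
# for an illustrative f(x) = (1/2)||Ax||^2 and decays of our own choosing.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n, beta, eps0, alpha0 = 5, 1.0, 0.5, 1.0
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                      # symmetric positive definite
grad_f = lambda x: A.T @ (A @ x)             # gradient of f(x) = 0.5*||Ax||^2

eps    = lambda t: eps0 / (1.0 + t)
deps   = lambda t: -eps0 / (1.0 + t) ** 2
d2eps  = lambda t: 2.0 * eps0 / (1.0 + t) ** 3
alpha  = lambda t: alpha0 / (1.0 + t)
dalpha = lambda t: -alpha0 / (1.0 + t) ** 2
nu  = lambda t: alpha(t) - deps(t) - eps(t) / beta
dnu = lambda t: dalpha(t) - d2eps(t) - deps(t) / beta

def rhs(t, u):
    x, y = u[:n], u[n:]
    dx = -(beta * grad_f(x) + nu(t) * x + y) / eps(t)   # first line of (gVM-DIN-AVD)
    dy = -dnu(t) * x - nu(t) / beta * x - y / beta      # second line
    return np.concatenate([dx, dy])

x_init, dx_init = np.ones(n), np.zeros(n)
y_init = -eps(0) * dx_init - beta * grad_f(x_init) - nu(0) * x_init
sol = solve_ivp(rhs, [0, 10], np.concatenate([x_init, y_init]),
                method="LSODA", rtol=1e-8, atol=1e-10)
print("||x(10)|| =", np.linalg.norm(sol.y[:n, -1]))     # should be close to 0
```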
## Appendix B Proof of Theorem 3.8

This section is devoted to proving the general result of Section 3. Fix some constants \(c_{1},c_{2}>0\) and let \(\varepsilon\) and \(\alpha\) be such that Assumption 2 is satisfied with these constants. Let \(x\) be the corresponding solution of (VM-DIN-AVD), and let \(x_{N}\) and \(x_{LM}\) be those of (CN) and (LM), respectively. Following the same arguments as in the beginning of the proof of Theorem 3.1, for all \(t\geq 0\), \(x(t)\), \(x_{N}(t)\) and \(x_{LM}(t)\) belong to the bounded set \(\mathsf{K}_{0}\) defined in that proof. Since \(f\) is \(\mu\)-strongly convex on \(\mathsf{K}_{0}\), the proof relies again on bounding differences of gradients; indeed, for all \(t\geq 0\),

\[\|x(t)-x_{N}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla f(x_{N}(t))\|\text{ and }\|x(t)-x_{LM}(t)\|\leq\frac{1}{\mu}\|\nabla f(x(t))-\nabla f(x_{LM}(t))\|. \tag{28}\]

Recall also that the closed form of \(\nabla f(x_{N})\) is given in (4).

**Expressing \(\nabla f(x)\).** We follow the exact same steps as in the proof of Theorem 3.1 to obtain the expression of \(\nabla f(x)\) given in (5), which we recall: for all \(t\geq 0\),

\[\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)-\alpha(s)\right)\dot{x}(s)\,\mathrm{d}s.\]

Here we do not assume any relation between \(\varepsilon\) and \(\alpha\), and we thus need to find a more suitable expression for \(\nabla f(x(t))\). We first expand the terms in the integral: for all \(t\geq 0\),

\[\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\,\mathrm{d}s-\int_{0}^{t}e^{\frac{s-t}{\beta}}\alpha(s)\dot{x}(s)\,\mathrm{d}s. \tag{29}\]

Then, for all \(s\geq 0\), we have the identity

\[e^{\frac{s}{\beta}}\dot{x}(s)=e^{\frac{s}{\beta}}\dot{x}(s)+\frac{1}{\beta}e^{\frac{s}{\beta}}x(s)-\frac{1}{\beta}e^{\frac{s}{\beta}}x(s)=\frac{\mathrm{d}}{\mathrm{d}s}(e^{\frac{s}{\beta}}x(s))-\frac{1}{\beta}e^{\frac{s}{\beta}}x(s), \tag{30}\]

which we use to perform an integration by parts on the last integral in (29),

\[\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}(s)\,\mathrm{d}s=\left[\alpha(s)e^{\frac{s}{\beta}}x(s)\right]_{0}^{t}-\int_{0}^{t}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)e^{\frac{s}{\beta}}x(s)\,\mathrm{d}s.\]

Therefore,

\[e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}(s)\,\mathrm{d}s=\alpha(t)x(t)-e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\,\mathrm{d}s, \tag{31}\]

and we can substitute in (29),

\[\beta\nabla f(x(t))=\beta e^{-\frac{t}{\beta}}\nabla f(x_{0})+e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\,\mathrm{d}s-\alpha(t)x(t)+e^{-\frac{t}{\beta}}\alpha_{0}x_{0}+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\,\mathrm{d}s. \tag{32}\]

**Uniform boundedness.** In view of exploiting (32), we recall that for all \((\varepsilon,\alpha)\), \(x\) is uniformly bounded. So there exists \(R>0\) such that for all \((\varepsilon,\alpha)\), the corresponding solution \(x\) of (VM-DIN-AVD) is such that

\[\sup_{t\geq 0}\|x(t)\|\leq R. \tag{33}\]

We are now in position to prove (8).

**Distance from \(x\) to \(x_{N}\).** We first gather (4) and (32).
For all \(t\geq 0\),

\[\beta\nabla f(x(t))-\beta\nabla f(x_{N}(t))=e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\,\mathrm{d}s+e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-\alpha(t)x(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)x(s)\,\mathrm{d}s.\]

We then use (28) and triangle inequalities,

\[\beta\mu\|x(t)-x_{N}(t)\|\leq e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+\varepsilon(t)\|\dot{x}(t)\|+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left|\frac{\varepsilon(s)}{\beta}+\varepsilon^{\prime}(s)\right|\|\dot{x}(s)\|\,\mathrm{d}s+e^{-\frac{t}{\beta}}\alpha_{0}\|x_{0}\|+\alpha(t)\|x(t)\|+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left|\frac{\alpha(s)}{\beta}+\alpha^{\prime}(s)\right|\|x(s)\|\,\mathrm{d}s. \tag{34}\]

By Assumption 2, for all \(s\geq 0\), \(|\frac{\varepsilon(s)}{\beta}+\varepsilon^{\prime}(s)|\leq(\frac{1}{\beta}+c_{1})\varepsilon(s)\) and \(|\frac{\alpha(s)}{\beta}+\alpha^{\prime}(s)|\leq(\frac{1}{\beta}+c_{2})\alpha(s)\). We then use Lemma 3.5 (denoting by \(C>0\) the constant stated in the lemma) on the first line of (34), and we use the boundedness (33) on the second line to obtain

\[\beta\mu\|x(t)-x_{N}(t)\|\leq e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C\sqrt{\varepsilon(t)}+C\left(\frac{1}{\beta}+c_{1}\right)\int_{0}^{t}e^{\frac{s-t}{\beta}}\sqrt{\varepsilon(s)}\,\mathrm{d}s+e^{-\frac{t}{\beta}}\alpha_{0}\|x_{0}\|+R\alpha(t)+R\left(\frac{1}{\beta}+c_{2}\right)\int_{0}^{t}e^{\frac{s-t}{\beta}}\alpha(s)\,\mathrm{d}s.\]

This proves (8).

**Expressing \(\nabla f(x_{LM})\).** We now repeat the previous arguments but for (LM). First, (LM) is equivalent to

\[\frac{\mathrm{d}}{\mathrm{d}t}\nabla f(x_{LM}(t))+\frac{1}{\beta}\nabla f(x_{LM}(t))=-\frac{\alpha(t)}{\beta}\dot{x}_{LM}(t).\]

So using an integrating factor one can check that for all \(t\geq 0\),

\[\nabla f(x_{LM}(t))=e^{-\frac{t}{\beta}}\nabla f(x_{0})-e^{-\frac{t}{\beta}}\int_{0}^{t}\frac{1}{\beta}e^{\frac{s}{\beta}}\alpha(s)\dot{x}_{LM}(s)\,\mathrm{d}s.\]

We can then follow exactly steps (30) to (31) so as to obtain

\[e^{-\frac{t}{\beta}}\int_{0}^{t}e^{\frac{s}{\beta}}\alpha(s)\dot{x}_{LM}(s)\,\mathrm{d}s=\alpha(t)x_{LM}(t)-e^{-\frac{t}{\beta}}\alpha_{0}x_{0}-e^{-\frac{t}{\beta}}\int_{0}^{t}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)e^{\frac{s}{\beta}}x_{LM}(s)\,\mathrm{d}s.\]

Finally, remark that for all \(t\geq 0\),

\[\frac{\mathrm{d}}{\mathrm{d}t}f(x_{LM}(t))=-\alpha(t)\|\dot{x}_{LM}(t)\|^{2}-\beta\langle\dot{x}_{LM}(t),\nabla^{2}f(x_{LM}(t))\dot{x}_{LM}(t)\rangle\leq 0.\]

So \(f(x_{LM}(t))\leq f(x_{0})\), and using the coercivity of \(f\) as before, we deduce that for all choices of \(\alpha\),

\[\sup_{t\geq 0}\|x_{LM}(t)\|\leq R. \tag{35}\]

**Distance from \(x\) to \(x_{LM}\).** We subtract gradients,

\[\beta\nabla f(x(t))-\beta\nabla f(x_{LM}(t))=e^{-\frac{t}{\beta}}\varepsilon_{0}\dot{x}_{0}-\varepsilon(t)\dot{x}(t)+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\frac{1}{\beta}\varepsilon(s)+\varepsilon^{\prime}(s)\right)\dot{x}(s)\,\mathrm{d}s-\alpha(t)(x(t)-x_{LM}(t))+\int_{0}^{t}e^{\frac{s-t}{\beta}}\left(\alpha^{\prime}(s)+\frac{\alpha(s)}{\beta}\right)(x(s)-x_{LM}(s))\,\mathrm{d}s,\]

and we proceed as before using (28), Assumption 2 and Lemma 3.5.
It holds that

\[\beta\mu\|x(t)-x_{LM}(t)\|\leq e^{-\frac{t}{\beta}}\varepsilon_{0}\|\dot{x}_{0}\|+C\sqrt{\varepsilon(t)}+C\left(\frac{1}{\beta}+c_{1}\right)\int_{0}^{t}e^{\frac{1}{\beta}(s-t)}\sqrt{\varepsilon(s)}\,\mathrm{d}s+\alpha(t)\|x(t)-x_{LM}(t)\|+\left(\frac{1}{\beta}+c_{2}\right)\int_{0}^{t}e^{\frac{1}{\beta}(s-t)}\|x(s)-x_{LM}(s)\|\,\mathrm{d}s.\]

Finally, using (33) and (35), for all \(s\geq 0\), \(\|x(s)-x_{LM}(s)\|\leq 2R\), which concludes the proof.

## Appendix C Integrability of \(\varphi\) and Additional Asymptotic Computations

Below we prove Lemma 4.7.

Proof.: We suppose that Assumptions 3 and 4 hold. As stated in Remark 4.3, since \(\varphi\) is continuous, we only need to check its integrability as \(t\) tends to \(+\infty\). Let \(t>0\). We first establish some useful identities; we omit the dependence on \(t\) for the sake of readability:

\[p^{\prime}=\frac{\alpha^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon^{2}},\]
\[p^{\prime\prime}=\frac{\alpha^{\prime\prime}\varepsilon^{2}-2\alpha^{\prime}\varepsilon^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime\prime}\varepsilon+2(\alpha+\beta\lambda)(\varepsilon^{\prime})^{2}}{\varepsilon^{3}}.\]

Then,

\[r=\frac{p^{2}}{4}\left(1+\frac{2p^{\prime}}{p^{2}}-\frac{4\lambda}{\varepsilon p^{2}}\right)=\frac{(\alpha+\beta\lambda)^{2}}{4\varepsilon^{2}}\left(1+\frac{2p^{\prime}\varepsilon^{2}}{(\alpha+\beta\lambda)^{2}}-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}\right)=\frac{(\alpha+\beta\lambda)^{2}}{4\varepsilon^{2}}\left(1+\frac{2\alpha^{\prime}\varepsilon}{(\alpha+\beta\lambda)^{2}}-\frac{2\varepsilon^{\prime}}{(\alpha+\beta\lambda)}-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}\right). \tag{36}\]

An important consequence of Assumption 4 is that \(|\varepsilon^{\prime}(t)|=o(\varepsilon(t))\) and \(|\varepsilon^{\prime\prime}(t)|=o(\varepsilon^{\prime}(t))\) (and the same holds for \(\alpha\) w.r.t. its derivatives). Therefore, we deduce from (36) that

\[r(t)\sim_{+\infty}\frac{(\alpha(t)+\beta\lambda)^{2}}{4\varepsilon(t)^{2}},\]

and we note that \(1/r\) decays at the same speed as \(\varepsilon^{2}\), which will be useful later. In order to study \(\varphi\), we now differentiate \(r\),

\[r^{\prime}=\frac{p^{\prime}p}{2}\left(1+\frac{2p^{\prime}}{p^{2}}-\frac{4\lambda}{\varepsilon p^{2}}\right)+\frac{1}{4}\left(2p^{\prime\prime}-\frac{4(p^{\prime})^{2}}{p}+\frac{8\lambda p^{\prime}}{\varepsilon p}+\frac{4\lambda\varepsilon^{\prime}}{\varepsilon^{2}}\right)=\frac{2p^{\prime}}{p}r+\frac{1}{4}\left(2p^{\prime\prime}-\frac{4(p^{\prime})^{2}}{p}+\frac{8\lambda p^{\prime}}{\varepsilon p}+\frac{4\lambda\varepsilon^{\prime}}{\varepsilon^{2}}\right),\]

and

\[r^{\prime\prime}=2\frac{p^{\prime\prime}p-(p^{\prime})^{2}}{p^{2}}r+\frac{2p^{\prime}}{p}r^{\prime}+\frac{1}{4}\left(2p^{\prime\prime\prime}+4\frac{(p^{\prime})^{3}-2p^{\prime\prime}p^{\prime}p}{p^{2}}+8\lambda\frac{p^{\prime\prime}p\varepsilon-(p^{\prime})^{2}\varepsilon-p^{\prime}p\varepsilon^{\prime}}{\varepsilon^{2}p^{2}}+\frac{4\lambda\varepsilon^{\prime\prime}}{\varepsilon^{2}}-\frac{8\lambda(\varepsilon^{\prime})^{2}}{\varepsilon^{3}}\right). \tag{37}\]

Then, to justify that \(\varphi\) is integrable, we prove that \(\frac{r^{\prime\prime}}{r^{3/2}}\) and \(\frac{(r^{\prime})^{2}}{r^{5/2}}\) are integrable.
Since we know that \(1/r\) decays at the same speed as \(\varepsilon^{2}\), we can equivalently show that \(\varepsilon^{3}r^{\prime\prime}\) and \(\varepsilon^{5}(r^{\prime})^{2}\) are integrable. To this aim we fully expand all the terms in (37) and compute \((r^{\prime})^{2}\), which is extremely long and involved. On the one hand, it holds that

\[r^{\prime 2}\varepsilon^{5}=\left[-\frac{(\alpha+\beta\lambda)^{2}\left(-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}+1+\frac{(-2(\alpha+\beta\lambda)\varepsilon^{\prime}+2\alpha^{\prime}\varepsilon)}{(\alpha+\beta\lambda)^{2}}\right)\varepsilon^{\prime}}{2\sqrt{\varepsilon}}+\frac{(\alpha+\beta\lambda)^{2}\sqrt{\varepsilon}}{4}\left(-\frac{4\lambda\varepsilon^{\prime}}{(\alpha+\beta\lambda)^{2}}+\frac{8\lambda\alpha^{\prime}\varepsilon}{(\alpha+\beta\lambda)^{3}}\right.\right.\]
\[\left.\left.+\frac{2\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}+2\alpha^{\prime}\varepsilon\right)\varepsilon^{\prime}}{(\alpha+\beta\lambda)^{2}}+\frac{\left(\frac{4(\alpha+\beta\lambda)\varepsilon^{\prime 2}}{\varepsilon}-2(\alpha+\beta\lambda)\varepsilon^{\prime\prime}-4\alpha^{\prime}\varepsilon^{\prime}+2\alpha^{\prime\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}-\frac{2\left(-2(\alpha+\beta\lambda)\varepsilon^{\prime}+2\alpha^{\prime}\varepsilon\right)\alpha^{\prime}}{(\alpha+\beta\lambda)^{3}}\right)\]
\[\left.+\frac{(\alpha+\beta\lambda)}{2}\bigg{(}-\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}+1+\frac{(-2(\alpha+\beta\lambda)\varepsilon^{\prime}+2\alpha^{\prime}\varepsilon)}{(\alpha+\beta\lambda)^{2}}\bigg{)}\alpha^{\prime}\sqrt{\varepsilon}\quad\right]^{2}.\]

On the other hand, it holds that

\[r^{\prime\prime}\varepsilon^{3}=\]
\[-\lambda\varepsilon^{\prime\prime}\varepsilon+\frac{4\lambda\alpha^{\prime}\varepsilon^{\prime}\varepsilon}{\alpha+\beta\lambda}+2\frac{2\lambda\alpha^{\prime\prime}\varepsilon^{2}}{\alpha+\beta\lambda}-\frac{6\lambda\alpha^{\prime 2}\varepsilon^{2}}{(\alpha+\beta\lambda)^{2}}+\frac{(\alpha+\beta\lambda)^{2}}{2}\left(-\frac{3\varepsilon^{\prime 2}}{\varepsilon}+\varepsilon^{\prime\prime}\right)\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)\]
\[+2(\alpha+\beta\lambda)\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)\alpha^{\prime}\varepsilon^{\prime}-\frac{\left((\alpha+\beta\lambda)\alpha^{\prime\prime}+\alpha^{\prime 2}\right)}{2}\left(\frac{4\lambda\varepsilon}{(\alpha+\beta\lambda)^{2}}-1+\frac{2\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)}{(\alpha+\beta\lambda)^{2}}\right)\varepsilon\]
\[-\left(\frac{(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime}\right)\varepsilon^{\prime 2}-\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)\varepsilon^{\prime\prime}-2\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime 2}}{\varepsilon}+(\alpha+\beta\lambda)\varepsilon^{\prime\prime}+2\alpha^{\prime}\varepsilon^{\prime}-\alpha^{\prime\prime}\varepsilon\right)\varepsilon^{\prime}\]
\[+2\left(2\lambda\varepsilon^{\prime}-\frac{4\lambda\alpha^{\prime}\varepsilon}{\alpha+\beta\lambda}+2\left(\frac{(\alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime}\right)\varepsilon^{
\prime}+\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{\prime 2}}{\varepsilon}+(\alpha+\beta\lambda)\varepsilon^{\prime\prime}+2\alpha^{ \prime}\varepsilon^{\prime}-\alpha^{\prime\prime}\varepsilon\right)-\frac{2 \left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon \right)\alpha^{\prime}}{\alpha+\beta\lambda}\right)\varepsilon^{\prime}\] \[-\frac{1}{2}\left(\frac{6(\alpha+\beta\lambda)\varepsilon^{ \prime 3}}{\varepsilon}-6(\alpha+\beta\lambda)\varepsilon^{\prime}\varepsilon^{ \prime\prime}+(\alpha+\beta\lambda)\varepsilon^{\prime\prime}\varepsilon-6 \alpha\varepsilon^{\prime}\varepsilon^{2}+3\alpha^{\prime}\varepsilon^{\prime \prime}\varepsilon-\alpha^{\prime\prime\prime}\varepsilon^{2}\right)\] \[+\frac{1}{\alpha+\beta\lambda}\left(4\left((\alpha+\beta\lambda) \varepsilon^{\prime}-\alpha^{\prime}\varepsilon\right)\alpha^{\prime}\varepsilon^ {\prime}+\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime} \varepsilon\right)\alpha^{\prime\prime}\varepsilon+2\left(-2(\alpha+\beta \lambda)\varepsilon^{\prime 2}+(\alpha+\beta\lambda)\varepsilon^{\prime\prime}\varepsilon+2\alpha^{ \prime}\varepsilon^{\prime}\varepsilon-\alpha^{\prime\prime}\varepsilon ^{2}\right)\alpha^{\prime}\right)\] \[-\frac{2}{\alpha+\beta\lambda}\left(2\lambda\varepsilon^{\prime}- \frac{4\lambda\alpha^{\prime}\varepsilon}{\alpha+\beta\lambda}+2\left(\frac{( \alpha+\beta\lambda)\varepsilon^{\prime}}{\varepsilon}-\alpha^{\prime} \right)\varepsilon^{\prime}+\left(-\frac{2(\alpha+\beta\lambda)\varepsilon^{ \prime 2}}{\varepsilon}+(\alpha+\beta\lambda)\varepsilon^{\prime\prime}+2\alpha^{ \prime}\varepsilon^{\prime}-\alpha^{\prime\prime}\varepsilon\right)-\frac{2 \left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{\prime}\varepsilon \right)\alpha^{\prime}}{\alpha+\beta\lambda}\right)\alpha^{\prime}\varepsilon\] \[-\frac{3\left((\alpha+\beta\lambda)\varepsilon^{\prime}-\alpha^{ \prime}\varepsilon^{2}\right)\alpha^{\prime 2}}{(\alpha+\beta\lambda)^{2}}.\] We then analyze the integrability of each of the terms above. By Assumption 4, \(\varepsilon^{\prime}\), \(\varepsilon^{\prime\prime}\) and \(\varepsilon^{\prime\prime\prime}\) are integrable and the same goes for \(\alpha^{\prime}\), \(\alpha^{\prime\prime}\) and \(\alpha^{\prime\prime\prime}\), which is enough to justify the integrability of almost all the terms above. We finally see that we also need \(\frac{(\varepsilon^{\prime})^{2}}{\varepsilon}\) and \(\frac{(\varepsilon^{\prime})^{3}}{\varepsilon}\) to be integrable, which holds by Assumption 4. Overall, \(\varphi\) is integrable on \(\mathbb{R}_{+}\). We now state and prove the following result which was used at the end of the proof of Theorem 4.8. **Lemma C.1**.: _Under Assumptions 3 and 4, for all \(s\geq 0\),_ \[\frac{1}{16}\left(\frac{2p^{\prime}(s)}{p(s)^{3/2}}-\frac{4\lambda\sqrt{ \varepsilon(s)}}{(\alpha(s)+\beta\lambda)^{3/2}}\right)^{2}=\frac{\lambda^{2} \varepsilon(s)}{(\alpha(s)+\beta\lambda)^{3}}+o(\varepsilon(s)).\] Proof.: We omit the time dependence on \(s\geq 0\) for the sake of readability. 
Using Assumption 3, we can define and expand the following quantity:

\[\frac{1}{16}\left(\frac{2p^{\prime}}{p^{3/2}}-\frac{4\lambda\sqrt{\varepsilon}}{(\alpha+\beta\lambda)^{3/2}}\right)^{2}=\frac{\lambda^{2}\varepsilon}{(\alpha+\beta\lambda)^{3}}-\frac{p^{\prime}\lambda\sqrt{\varepsilon}}{p^{3/2}(\alpha+\beta\lambda)^{3/2}}+\frac{(p^{\prime})^{2}}{4p^{3}}\]
\[=\frac{\lambda^{2}\varepsilon}{(\alpha+\beta\lambda)^{3}}-\frac{\lambda\left(\alpha^{\prime}\varepsilon-(\alpha+\beta\lambda)\varepsilon^{\prime}\right)}{(\alpha+\beta\lambda)^{3}}+\frac{1}{4(\alpha+\beta\lambda)^{3}}\left((\alpha^{\prime})^{2}\varepsilon+\frac{(\varepsilon^{\prime})^{2}}{\varepsilon}(\alpha+\beta\lambda)^{2}-2\alpha^{\prime}\varepsilon^{\prime}(\alpha+\beta\lambda)\right).\]

Assumption 4 implies in particular that \(|\varepsilon^{\prime}(t)|=o(\varepsilon(t))\) and that \(\alpha^{\prime}(t)\to 0\), which we use in the equality above to obtain the desired conclusion.

## Appendix D Additional Experiments and Details

We first detail the discretization that we used for approximating the solutions of the three ODEs considered in Section 5. We use Euler discretization schemes with fixed step-size \(\gamma>0\) and approximate the solutions at times \(t_{k}=\gamma k\), for all \(k\in\mathbb{N}\). For a trajectory \(x\), we use the notation \(x(t_{k})\stackrel{{\text{def}}}{{=}}x^{(k)}\). The approximation of (CN) is obtained by explicit discretization, so that for all \(k\in\mathbb{N}\), we have

\[x_{N}^{(k+1)}=x_{N}^{(k)}-\gamma\left[\beta\nabla^{2}f(x_{N}^{(k)})\right]^{-1}\nabla f(x_{N}^{(k)}). \tag{38}\]

Then, defining \(\varepsilon_{k}=\varepsilon(t_{k})\) and \(\alpha_{k}=\alpha(t_{k})\), (LM) and (VM-DIN-AVD) are obtained via Euler semi-implicit discretization. The solution of (LM) is approximated by

\[x_{LM}^{(k+1)}=x_{LM}^{(k)}-\gamma\left[\alpha_{k}I_{n}+\beta\nabla^{2}f(x_{LM}^{(k)})\right]^{-1}\nabla f(x_{LM}^{(k)}), \tag{39}\]

where \(I_{n}\) is the identity matrix on \(\mathbb{R}^{n}\). The solution of (VM-DIN-AVD) is obtained similarly,

\[x^{(k+1)}=x^{(k)}+\left[(\varepsilon_{k}+\gamma\alpha_{k})I_{n}+\gamma\beta\nabla^{2}f(x^{(k)})\right]^{-1}\left(\varepsilon_{k}(x^{(k)}-x^{(k-1)})-\gamma^{2}\nabla f(x^{(k)})\right). \tag{40}\]

As a safety check, one can see that for \(\varepsilon_{k}=0\), (40) is equivalent to (39), which is itself equivalent to (38) when \(\alpha_{k}=0\).

Figure 6: Similar experiment and figures as those described in Figure 2, but for a poorly conditioned quadratic \(f(x)=\frac{1}{2}\|Ax\|^{2}\) (first two rows) and the function \(f(x)=\log\left(\sum_{i=1}^{n}e^{x_{i}}\right)+\frac{1}{2}\|Ax\|^{2}\) (last two rows).
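For concreteness, here is a minimal sketch of the updates (38)-(40) for a quadratic \(f(x)=\frac{1}{2}\|Ax\|^{2}\), so that the Hessian is constant. The dimensions, decays and iteration count are illustrative choices of ours, not the exact experimental settings.

```python
# Minimal sketch of the updates (38)-(40) for f(x) = (1/2)||Ax||^2, so that
# grad f(x) = A^T A x and the Hessian H = A^T A is constant (values are ours).
import numpy as np

rng = np.random.default_rng(1)
n, gamma, beta, K = 20, 1e-1, 1.0, 300
A = rng.standard_normal((2 * n, n))
H = A.T @ A                                   # Hessian of f
grad = lambda x: H @ x
eps   = lambda k: 0.5 / (1.0 + gamma * k)     # epsilon(t_k), t_k = gamma*k
alpha = lambda k: 1.0 / (1.0 + gamma * k)     # alpha(t_k)
I = np.eye(n)

xN = xLM = x_prev = x_cur = np.ones(n)
for k in range(K):
    # (38): explicit scheme for (CN)
    xN = xN - gamma * np.linalg.solve(beta * H, grad(xN))
    # (39): semi-implicit scheme for (LM)
    xLM = xLM - gamma * np.linalg.solve(alpha(k) * I + beta * H, grad(xLM))
    # (40): semi-implicit scheme for (VM-DIN-AVD)
    M = (eps(k) + gamma * alpha(k)) * I + gamma * beta * H
    step = np.linalg.solve(M, eps(k) * (x_cur - x_prev) - gamma**2 * grad(x_cur))
    x_prev, x_cur = x_cur, x_cur + step

print(f"||xN||={np.linalg.norm(xN):.2e}  ||xLM||={np.linalg.norm(xLM):.2e}  "
      f"||x||={np.linalg.norm(x_cur):.2e}")
```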
2307.04292
A Demand-Driven Perspective on Generative Audio AI
To achieve successful deployment of AI research, it is crucial to understand the demands of the industry. In this paper, we present the results of a survey conducted with professional audio engineers, in order to determine research priorities and define various research tasks. We also summarize the current challenges in audio quality and controllability based on the survey. Our analysis emphasizes that the availability of datasets is currently the main bottleneck for achieving high-quality audio generation. Finally, we suggest potential solutions for some revealed issues with empirical evidence.
Sangshin Oh, Minsung Kang, Hyeongi Moon, Keunwoo Choi, Ben Sangbae Chon
2023-07-10T00:58:28Z
http://arxiv.org/abs/2307.04292v1
# A Demand-Driven Perspective on Generative Audio AI

###### Abstract

To achieve successful deployment of AI research, it is crucial to understand the demands of the industry. In this paper, we present the results of a survey conducted with professional audio engineers, in order to determine research priorities and define various research tasks. We also summarize the current challenges in audio quality and controllability based on the survey. Our analysis emphasizes that the availability of datasets is currently the main bottleneck for achieving high-quality audio generation. Finally, we suggest potential solutions for some revealed issues with empirical evidence.

## 1 Introduction

The use of audio generative models has the potential to significantly impact a variety of industries. Although essential, the process of creating foley effects is often tedious, non-reproducible, and lacks scalability. Moreover, the utilization of pre-recorded sounds is not conducive to real-time or interactive applications, rendering it inadequate for fields like gaming, the metaverse, or any domain requiring the simulation of lifelike environments. The advent of generative audio AI offers a promising solution to address these limitations, significantly impacting areas like film production, gaming, social platforms, and more.

Audio synthesis research has a long history (Dudley, 1955; Chowning, 1973), but we will focus on the data-driven approaches as they are the recent pioneers with huge potential. The current generative audio AI is still in its early stages, necessitating further advancements in various aspects. We present this paper to provide a demand-driven perspective on task definitions, challenges, and potential solutions within audio generation. Specifically, our focus is on general audio, excluding speech and music. The key contributions of this paper include:

* A survey with individuals working in movie sound productions to share insights into the industry-side demands.
* Detailed definitions and review of distinct tasks in audio generation regarding input types and conditions.
* A summary of the related challenges towards industrial demands and a proposal on potential solutions supported by empirical evidence, including a method with which we achieved 2nd place in the foley synthesis challenges at DCASE 2023.

## 2 Demands from Industry

To gather insights regarding the impact of audio generative models on the industry, we first interviewed two professionals from the field of movie sound production. They highlighted that their role extends beyond that of sound technicians, as they contribute to the artistic dimension of creating immersive and captivating sound experiences. Despite the inevitable laborious nature of foley and sound effect recording, they are compelled to record new sounds since existing sounds are hardly reusable. While they have a vast library of previous sound stems, there is effectively no efficient method at hand for searching and finding suitable sounds. Even if they find a suitable sound, they have to spend time editing the time synchronization and sound tone.

Based on this knowledge, we conducted a survey involving 18 individuals working in movie sound production, addressing the topic of AI audio generation. We first presented them with some examples of AI image generation applications and a demo page1 of a recent text-to-audio model (Liu et al., 2023). We then asked the following three primary questions with multiple-choice options.
Footnote 1: [https://audioldm.github.io/](https://audioldm.github.io/)

**Q1.**_What are the major challenges faced in foley recording?_ The most frequently selected option for this question was the time synchronization problem. Following that, respondents expressed the importance of audio quality and consistency in tone with the synchronous recording. In the additional comments, respondents emphasized again that for foley sound, audio quality, synchronization with the scene, and consistency in tone with other sound sources are crucial, to the point that without good synchronization, some might only consider using AI generation for ambient sounds. This indicates that relying solely on text-based conditioning may not be sufficient for a majority of use-cases.

**Q2.**_What is the limitation(s) of the current text-conditioned audio generation as a product?_ The survey result is plotted in Figure 1. In this question, it was found that audio quality presents the most significant challenge for practical usage. According to their comments, the concerns about _quality_ encompass other aspects such as low fidelity, low sampling rate, roughness, and other related factors. A majority of respondents expressed complaints regarding the sample rate. It is noteworthy that while the industry requires full-band signals at 48kHz or higher, most of the current systems still operate within the 16kHz-24kHz range (Kreuk et al., 2022; Huang et al., 2023; Liu et al., 2023). _Creativity_, the second most frequently chosen category, refers to the generation of new sounds that fulfill artistic intentions, e.g., creating "the sound of a lightsaber in Star Wars." Terms such as _edit_ and _text_, which received the third and fourth highest numbers of votes, indicate the problems of controllability.

**Q3.**_How would you like to condition the audio generation?_ As in Figure 2, the most frequently chosen option is the utilization of video for time synchronization and achieving an appropriate sound tone. More than half of the respondents were interested in generating sounds similar to reference audio samples. The third and fourth most popular options, namely _interp._ and _consistn._, are related to refining the generated audio based on reference audio samples. In Q3, the respondents expressed their hopes for a more efficient workflow, in contrast to the expectations they stated in Q2.

This survey result presents important remarks on generative audio research. First, texts and videos are complementary to each other towards a more complete generative audio system. Second, sound and event synchronization is an important topic that deserves more attention. Third, although it deviates somewhat from our topic, high-quality audio indexing, search, and separation may also be a solution for some of the problems generative audio AI aims to solve. Based on this understanding, we delve into the current state and challenges of the audio generation field in the following sections.

## 3 Task Definitions

In a recent proposal paper on the foley sound synthesis challenge (Choi et al., 2022), the audio generative AI task is specified based on the input and output types. The authors outline three distinct input types: i) category index, ii) text description, and iii) videos.

Figure 1: Answers of Q2: _What is the limitation(s) of the current text-conditioned audio generation as a product?_

Figure 2: Answers of Q3: _How would you like to condition the audio generation by?_
While the categorization of output types is not explicitly stated, it can be inferred as follows: i) individual foley sounds representing a single event, ii) a combination of multiple events and/or ambient sounds, and iii) a comprehensive soundtrack comprising foley sounds, ambient elements, and spatially enhanced mixing. We focus on the input types, since the determination of output types is primarily governed by technical feasibility, allowing only a limited scope with the current technology.

### Input Types

First, a category index, which indicates a single type of audio event, is the simplest form of input for a sound synthesis system. This was adopted in some previous works (Ghose and Prevost, 2020, 2022) and this year's DCASE Task 7 (Choi et al., 2023). Solutions with this approach would improve foley recording processes for some popular categories such as dog barks, door slams, or footsteps.

The second type is text descriptions, as employed in recent research (Kreuk et al., 2022; Yang et al., 2023; Liu et al., 2023; Huang et al., 2023) relying on audio caption datasets. There are several promising aspects associated with this text-to-audio approach. i) Extensive research has already been conducted on text-to-X generation (e.g., text-to-image generation studies (Ramesh et al., 2021; Saharia et al., 2022; Rombach et al., 2022)), which simplifies its adaptation for audio generation purposes. ii) The familiarity of users with UI/UX utilizing text inputs further supports the feasibility of this approach. However, there are difficulties as well. i) Compared to text-image pairs, there is a scarcity of text-audio pairs available for training models (Huang et al., 2023). For example, the number of items in AudioCaps (Kim et al., 2019), the largest audio captioning dataset, is 0.013% of (or 7561 times smaller than) that of LAION-400M, a text-image pair dataset (Schuhmann et al., 2021). ii) Text input has limitations in providing highly detailed descriptions at a professional level, as audio engineers rely on precise controls like knobs and sliders to make fine adjustments to the sound (e.g., equalizers).

Third, video input types have pros and cons. Unlike the previous input types, videos may provide the exact timings of events (Zhou et al., 2018; Ghose and Prevost, 2022; Cui et al., 2023). As its importance was discussed in Section 2, there is huge potential for improving the workflow of video creation in this scenario through efficient time synchronization. However, the video itself does not provide complete information, because not everything visible should sound, and not everything that sounds is visible. Additionally, there are deliberate artistic intentions involved in video creation, such as muting or exaggerating certain sounds. These artistic decisions may vary significantly. Therefore, when developing video-to-sound generation methods, the ability to edit and manipulate the generated audio becomes crucial, just as it is for text-based generation approaches, as we will discuss in the following section.

### Conditioning

Conditioning can be viewed as a form of input in a broader sense and is deeply related to controllability and editability.
AudioLDM pioneered sound editing through text-based approaches (Liu et al., 2023), and we believe that this direction of research will continue toward more diverse, intuitive, and fine-grained conditioning. For example, users may want to control factors such as sound bandwidth, F0 contours, temporal and spectral envelopes, etc. Our exploration of these product development considerations will continue in the following sections.

## 4 Challenges

### Dataset Improvement for Audio Quality

Recently, some generative AI products have been successfully deployed for language and image (Touvron et al., 2023; Chowdhery et al., 2022; OpenAI, 2023; Ramesh et al., 2021). However, the current state of audio generation research does not seem mature enough to be adopted into professional sound production. As audio quality was the most prominent issue in Figure 1, in this section we focus on dataset issues and potential solutions for improving the generated audio quality.

First of all, the current data scarcity degrades model training and the resulting audio quality. Compared to image generation datasets that go beyond a few billion pairs (Ramesh et al., 2022), there is far less text-paired audio data available (Kreuk et al., 2022; Huang et al., 2023). Moreover, most such paired datasets are _weakly labeled_, i.e., their labels or captions lack time resolution. This is problematic, as it is common practice to slice audio signals for ease of training and memory-related issues.

| Name | AQ | Dura. | N. Files | Lb | Cp | Vd |
| --- | --- | --- | --- | --- | --- | --- |
| **AudioSet** | | | | | | |
| AudioSet (Gemmeke et al., 2017) | _noisy_ | 5420 h | 1,951,460 | ✓ | | ✓ |
| AudioCaps (Kim et al., 2019) | _noisy_ | 144.9 h | 52,904 | ✓ | ✓ | ✓ |
| **Freesound** | | | | | | |
| Freesound (Font et al., 2013) | _noisy_ | 3003 h | 515,581 | △ | | |
| UrbanSound8K (Salamon et al., 2014) | _noisy_ | 8.75 h | 8,732 | ✓ | | |
| ESC-50 (Piczak, 2015) | _noisy_ | 2.78 h | 2,000 | ✓ | | |
| Clotho (Drossos et al., 2020) | _noisy_ | | | | ✓ | |
| FSD50K (Fonseca et al., 2021) | _noisy_ | 108.3 h | 51,197 | ✓ | | |
| **Others** | | | | | | |
| VGG Sound (Chen et al., 2020) | _noisy_ | 550 h | ≈200,000 | ✓ | | ✓ |
| BBC Sound Effects | _clean_ | 463.5 h | 15,973 | ✓ | | |
| Epidemic Sound Effects | _clean_ | 220.4 h | 75,645 | ✓ | | |
| Free To Use Sounds | _noisy_ | 175.7 h | 6,370 | ✓ | | |
| Sonniss Game Effects | _clean_ | 84.6 h | 5,049 | △ | | |
| WeSoundEffects | _clean_ | 12.0 h | 488 | △ | | |
| Odeon Sound Effects | _clean_ | 19.5 h | 4,420 | △ | | |

Table 1: A list of audio datasets. AQ: audio quality; Dura.: duration; N. Files: number of files. The modality columns refer to the existence of labels (Lb), captions (Cp), and videos (Vd), respectively. _Clean_: audio is recorded in well-treated environments and mastered for professional content production. _Noisy_: the dataset contains environmental noises or interference signals. △: textual information included, not necessarily captions. This table is partially from (Kreuk et al., 2022) and (Wu et al., 2023).
Since the text in these pairs describes the audio only coarsely along the time axis, there are potential risks of mismatch when the audio signal is sliced into smaller segments for practical reasons. Augmentation methods (Kreuk et al., 2022; Huang et al., 2023) or the use of a contrastive embedding network (Liu et al., 2023) can help, but not as an absolute treatment. The characteristics of the audio itself even exacerbate the problem. Separating foreground and background audio sources is a difficult problem, and obtaining isolated audio recordings remains costly. The spatial characteristics of the recording environment often have negative effects on the recording quality. Altogether, there are many factors that make it tricky to create a studio-quality audio dataset.

We list available audio datasets in Table 1. Since the largest datasets in the list are collected or curated from crowd-sourced audio (Font et al., 2013) or video (Gemmeke et al., 2017; Chen et al., 2020), their recording conditions may vary and are usually not good. Thus, the samples from those datasets often suffer from severe background noises, low recording bandwidth / bit rate, and various types of distortion. _Clean_ datasets are limited to several commercial sound effect libraries.

For this trade-off between _more_ data and _clean_ data, we propose a solution called _quality-aware training_ (QAT). This can be done simply by prompting, i.e., appending labels indicating the quality of the dataset to the text input. QAT enables the use of a broader range of datasets. During the training phase, a model can learn from both _clean_ and _noisy_ datasets with quality labels. As a result, the model would learn not only the concepts of different audio events but also their audio quality; i.e., the model would have _compositionality_ of audio events and audio quality. During the inference phase, we can force the model to generate clean signals by conditioning the model, i.e., by appending 'clean' labels to the text input. This enabled us to use all data pairs regardless of their quality without deteriorating the output quality. In our experience, this approach let us control the audio quality, reverberation, signal bandwidth, and audio events independently, and it helped us achieve 2nd place in the recent foley synthesis challenge at DCASE 2023 (Kang et al., 2023a;b). Details about experiments are provided in Appendix B.

### Methodological Improvement for Controllability

Controllability was another major concern in our survey, as the audio engineers have specific intent about how the generated output should sound. Audio generation may take a long time, hence it is crucial for deployable audio AI systems to have effective controllability. Classifier-free guidance is a widely adopted solution for the problem across diffusion-based and Transformer-based generative models. At the cost of sample quality, by extrapolating intermediate features or logits, it introduces diversity, which makes exploration easier for the users of generative audio AI systems. Most of the recent text-to-audio generation research adopted this technique (Kreuk et al., 2022; Liu et al., 2023; Huang et al., 2023). Controllability can also be attained by introducing new features or new modalities, for example, a reference audio or a conditioning video as in Figure 2.
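To make the quality-aware training recipe above concrete, here is a minimal sketch of prompt-based quality tagging; the `DATASET_QUALITY` mapping and the exact tag wording are illustrative assumptions, not the labels used in our DCASE 2023 system.

```python
# A minimal sketch of quality-aware training (QAT) via prompt tags.
# The DATASET_QUALITY mapping and the tag wording are illustrative
# assumptions, not the exact labels used in the DCASE 2023 system.

DATASET_QUALITY = {
    "bbc_sound_effects": "clean",
    "audiocaps": "noisy",
    "freesound": "noisy",
}

def tag_caption(caption: str, dataset: str) -> str:
    """Append a quality label to the text condition during training."""
    quality = DATASET_QUALITY.get(dataset, "noisy")
    return f"{caption}, {quality} recording"

# Training: every (audio, caption) pair keeps its dataset's quality tag.
print(tag_caption("a dog barking in the distance", "freesound"))
# -> "a dog barking in the distance, noisy recording"

# Inference: condition on the 'clean' tag to push the model toward
# high-quality output, regardless of the noisy training mixture.
inference_prompt = "a dog barking in the distance, clean recording"
```

Because the quality tag is just text, the same conditioning pathway used for event descriptions carries the quality information, which is what allows the model to compose audio events with audio quality.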
As AudioLDM demonstrated audio manipulation without fine-tuning (Liu et al., 2023), we believe text-guided audio-to-audio generation is a compelling research direction towards deployable generative audio AI. Video-based foley generation has been less popular, but it would be an interesting direction for future research, building on the existing research (Zhou et al., 2018; Ghose and Prevost, 2020, 2022). Finally, conventional signal features such as F0 contours or envelopes can be a great user interface for experienced audio engineers. As those features are easy to extract from audio signals, it is plausible to use them as inputs during the training phase and then build a user interface that allows control of the generated output by modifying the features.

## 5 Conclusion

In this paper, we presented a survey conducted with sound engineers in the movie industry. Based on the survey results, we have provided task definitions for audio generation research and identified related research challenges. Our objective was to bridge the gap between current research and industry practices, offering potential solutions to address the challenges of audio quality and controllability. Surprisingly, there are limited opportunities for researchers to gain insights from the industry side. We believe that this work serves as a valuable starting point for understanding the difficulties faced by both researchers and potential users, ultimately aligning our efforts to solve real-world problems. While our perspective focuses on the movie industry, it is important to acknowledge that neighboring industries may face different challenges with varying priorities. For example, the demand for real-time generation systems may be stronger in the virtual reality or gaming industry, while the standards for audio quality or artistic intent may be lower for non-professional movie creation platforms such as YouTube. We hope that our work represents a meaningful step towards comprehending the diverse demands placed on generative audio AI across its many applications.
2310.16259
Kerr-Nonlinearity Assisted Exceptional Point Degeneracy in a Detuned PT-Symmetric System
Systems operating at exceptional points (EPs) are highly responsive to small perturbations, making them suitable for sensing applications. However, this feature also impedes the system from working exactly at an EP, due to imperfections arising during the fabrication process. We propose a fast self-tuning scheme based on Kerr nonlinearity in a coupled dielectric resonator excited through a waveguide placed in the near-field of the resonators. We show that in a coupled resonator with unequal Kerr-coefficients, an initial distortion from the EP regime can be completely compensated. This provides an opportunity to reach very close to the EP in a coupled resonator with detuned resonant frequencies by tuning the intensity of the incident wave. Using time-modulation of the incident wave in nonlinear systems to control both the gain or loss and the resonant frequencies can be a possible approach to fully control the parameters close to an EP.
Shahab Ramezanpour
2023-10-25T00:29:32Z
http://arxiv.org/abs/2310.16259v2
# Kerr-Nonlinearity Assisted Exceptional Point Degeneracy in a Detuned \(PT\)-Symmetric System

###### Abstract

Systems operating at exceptional points (EPs) are highly responsive to small perturbations, making them suitable for sensing applications. However, this feature also impedes the system from working exactly at an EP, due to imperfections arising during the fabrication process. We propose a fast self-tuning scheme based on Kerr nonlinearity in a coupled dielectric resonator excited through a waveguide placed in the near-field of the resonators. We show that in a coupled resonator with unequal Kerr-coefficients, an initial distortion from the EP regime can be completely compensated. This provides an opportunity to reach very close to the EP in a coupled resonator with detuned resonant frequencies by tuning the intensity of the incident wave. Using time-modulation of the incident wave in nonlinear systems to control both the gain or loss and the resonant frequencies can be a possible approach to fully control the parameters close to an EP.

## I Introduction

_Exceptional Points_ (EPs) are degeneracies in non-Hermitian systems that have attracted much interest in recent years due to their broad applications in photonics [1, 2, 3, 4]. In practice, achieving the EP regime can be challenging, since EP-based systems are highly responsive to imperfections arising in the fabrication process. Though many applications do not require working exactly at an EP, it is desirable to have a way to control the system's status. Therefore, a subsidiary mechanism to tune these systems close to an EP is investigated here. Secondary mechanisms include a heating scheme to tune the resonant frequencies of coupled resonators [2, 4] or a nanoparticle acting as a perturbation to tune the (radiation) loss and resonant frequency of resonators [5, 6]. For example, [7] derives general conditions for an EP in a perturbed coupled resonator which contains one or two nanoparticles, and a highly chiral EP is achieved in a coupled resonator perturbed by a nanoscatterer [8].

Another interesting scheme is using nonlinearity in the coupled resonators. The eigenvalue analysis of such a system is performed in [9], which shows that a coupled resonator with unequal Kerr-nonlinearities can exhibit an EP even when the resonators have unequal resonant frequencies. To have an EP in coupled resonators, specific relations between the system parameters, including gain or loss, coupling rate, etc., are required. However, instead of using explicit gain or loss, in [10] time-modulation is applied as an equivalent means to produce gain or loss. Virtual PT-symmetric systems [11] can have applications in critical coupling in high-Q resonators [12] and in the pulling force for a passive resonant object of any shape and composition [13]. Furthermore, instead of using gain or loss in a two-resonator system, an EP degeneracy has been realized in a single resonator with a time-periodic component [14, 15], via time modulation of a single resonator [16], or via a distributed parameter in a transmission line [17]. Exceptional degeneracies are also observed in waveguides loaded with discrete gain and loss elements [18], and in lossless/gainless coupled-resonators optical waveguides [19]. Another way to get EPs is based on two resonators coupled by a gyrator [20].
Instead of material properties, the shape and dimensions of resonators can be manipulated to achieve desired optical responses, tune their energy spectra, and create specific resonances such as EPs and Bound States in the Continuum (BICs). By engineering the height and radius of a cylinder with a high refractive index, a high-Q supercavity mode can be achieved by realizing the regime of BIC [21, 22]. This structure has shown a considerable enhancement of second harmonic generation [23]. The supercavity mode as well as the EP can also be realized in a deformed shape of a cylindrical resonator [24]. By manipulating the shape of a semiconductor particle, near-unity absorption can be achieved over a broadband optical spectrum [25]. Meanwhile, the energy spectra as well as the whispering gallery modes of a semiconductor quantum dot can be controlled in a lateral and corrugated shape [26]. A deformed disc enables the light to travel between different angular momenta, which can effectively enhance third-harmonic generation [27]. An EP can also be achieved in a deformed disc resonator through the coupling of modes with different angular momenta [28, 29].

Among several tuning mechanisms, such as material properties and the shape of resonators, utilizing Kerr-nonlinearity in resonators is particularly attractive considering its efficient and fast tuning via input power. Incorporating both nonlinearity and non-Hermiticity is intriguing, and new functionalities can emerge, such as tuning topological insulators [30, 31, 32] and the transition from the PT-symmetric to the broken PT-symmetric phase [33, 34]. Besides, nonlinearity may provide a suitable platform to improve the sensitivity of EP-based systems in the presence of quantum noise [35].

The eigenvalues of a nonlinear non-Hermitian system show that the EP can be tuned with respect to a detuning parameter related to the resonant frequencies [9]. In this paper, we investigate the lasing application of an EP in such systems, and show that the detuning can be completely compensated by a suitable amount of Kerr-nonlinearity. We study a pair of coupled resonators with balanced gain and loss, and calculate the field modes with a nonlinear coupled mode theory approach. For one of the resonators, a detuning parameter and a Kerr-nonlinearity are considered, and we show that suitable amounts of detuning and nonlinearity can compensate each other. This provides the opportunity to reach very close to the EP in a coupled resonator with detuned resonant frequencies in a fast self-tuning scheme, through tuning the intensity of the incident wave.

## II Theoretical framework of two coupled Kerr resonators

Due to imperfections which may arise in the fabrication process, achieving an EP regime at which both the real and imaginary parts of the eigenvalues degenerate is challenging. Although there are some methods to alleviate these imperfections, such as using a heating scheme to tune the resonant frequencies of the resonators and employing a scatterer in the vicinity of a resonator to tune both its loss and resonant frequency, they can either be a slow process, in the former case [2], or require fine positioning and sizing of the scatterer, in the latter case [5]. Here, we propose an efficient, fast-process self-tuning scheme using resonators containing different Kerr materials, in which imperfections in the resonant frequencies can be compensated by tuning the intensity of the incident wave. This process is shown schematically in Fig.
1, in which the difference of the resonant frequencies, which appears in the eigenmodes \(a_{-}\) and \(a_{+}\), is compensated by increasing the power of the incident wave through the tapered waveguide coupled to the resonators.

Figure 1: Tuning a coupled resonator with different resonant frequencies to the EP via the intensity of the incident wave.

The equation of motion for the mode amplitudes of a dimer made of two resonators with different Kerr nonlinearities, where one of them is coherently driven, can be described by a generalized Gross-Pitaevskii (GP) equation [9; 36] as

\[i\frac{d}{dt}\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}=H\begin{pmatrix}\psi_{1}\\ \psi_{2}\end{pmatrix}-\begin{pmatrix}S_{1}\\ 0\end{pmatrix} \tag{1}\]

with

\[H=\begin{pmatrix}\omega_{0}-i\delta_{1}+\chi_{1}|\psi_{1}|^{2}&\kappa_{12}\\ \kappa_{21}&\omega_{0}+i\delta_{2}-\epsilon+\chi_{2}|\psi_{2}|^{2}\end{pmatrix}\]

where \(\omega_{0}\) is the resonant frequency of the uncoupled resonators, \(\delta_{1}\) is the dissipation of the first resonator, \(\delta_{2}\) is the gain of the second resonator, \(\epsilon\) is the detuning of the resonant frequencies, \(S_{1}\) is the excitation, and \(\chi_{1}\) and \(\chi_{2}\) are the nonlinear Kerr coefficients of the first and second resonators, respectively. According to energy conservation in the system, the coupling strengths should satisfy the relation \(\kappa_{21}=\kappa_{12}^{*}\). Therefore, we write \(\kappa_{12}=\kappa_{21}=\kappa\) for real values of the coupling strengths. Under a continuous monochromatic excitation \(S_{1}=s_{1}e^{-i\omega t}\), by using the ansatz \(\psi_{j}=a_{j}e^{-i\omega t}\) (\(j=1,2\)) and \(\frac{d\psi_{j}}{dt}=-i\omega a_{j}e^{-i\omega t}+\frac{da_{j}}{dt}e^{-i\omega t}\), the time-independent GP equations

\[\omega a_{1} =(\omega_{0}-i\delta+\chi_{1}|a_{1}|^{2})a_{1}+\kappa a_{2}-s_{1} \tag{2a}\]
\[\omega a_{2} =\kappa a_{1}+(\omega_{0}+i\delta-\epsilon+\chi_{2}|a_{2}|^{2})a_{2} \tag{2b}\]

can be obtained by considering the slowly varying wave approximation [2; 36; 37], where coupled resonators with balanced gain and loss (\(\delta_{1}=\delta_{2}=\delta\)) are assumed. The mode amplitudes are calculated from Eq. (2) as

\[a_{1} =\frac{(-\Delta+i\delta-\epsilon+\chi_{2}|a_{2}|^{2})s_{1}}{(-\Delta-i\delta+\chi_{1}|a_{1}|^{2})(-\Delta+i\delta-\epsilon+\chi_{2}|a_{2}|^{2})-\kappa^{2}} \tag{3a}\]
\[a_{2} =\frac{-\kappa s_{1}}{(-\Delta-i\delta+\chi_{1}|a_{1}|^{2})(-\Delta+i\delta-\epsilon+\chi_{2}|a_{2}|^{2})-\kappa^{2}} \tag{3b}\]

where \(\Delta=\omega-\omega_{0}\). From Eq. (3), we deduce that in the linear case, \(\chi_{1}=\chi_{2}=0\), and at the EP condition associated with PT symmetry, \(\epsilon=0\), \(\delta=\kappa\), the field amplitudes tend to infinity when \(\Delta\to 0\). Therefore, small perturbations can significantly affect the mode amplitudes, and we can observe the effect of detuning in the field modes. To avoid divergence of \(a_{1}\) and \(a_{2}\) in the linear case, and for better convergence in the nonlinear one, (\(\delta\),\(\kappa\)) are considered to be equal to \((1,1-10^{-3})\), i.e., the system is very close to, but not exactly at, the EP. After numerically solving Eq. (2) [9], Fig. 2(a) shows the field amplitudes for the linear case without the detuning parameter (i.e., \(\epsilon=0\)), whereas Fig. 2(b) shows the modes for the linear case with a small detuning of \(\epsilon=0.004\). The applied source \(s_{1}\) is equal to \(0.01\). Comparing Figure 2(b) (perturbed) to Fig. 2(a) (unperturbed), we can observe that the amplitude (linewidth) of the field modes decreases (increases) considerably.
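To illustrate how the steady-state amplitudes in Eq. (3) can be computed, the following is a minimal Python sketch of a self-consistent fixed-point iteration; the damping factor `eta` and the iteration scheme are our assumptions, in the spirit of the self-consistent field and iteration methods mentioned in the Conclusion.

```python
# A minimal sketch of a self-consistent fixed-point iteration for the
# steady-state amplitudes a1, a2 in Eq. (3). The damping factor `eta`
# is our assumption, added to stabilize convergence near the EP.

def steady_state_modes(Delta, s1=0.01, delta=1.0, kappa=1 - 1e-3,
                       eps=0.002, chi1=0.0, chi2=4e-5,
                       eta=0.1, n_iter=20000):
    """Delta = omega - omega_0; returns the complex amplitudes (a1, a2)."""
    a1 = a2 = 0j
    for _ in range(n_iter):
        d1 = -Delta - 1j * delta + chi1 * abs(a1) ** 2        # first-resonator term
        d2 = -Delta + 1j * delta - eps + chi2 * abs(a2) ** 2  # second-resonator term
        den = d1 * d2 - kappa ** 2
        # Damped update toward the right-hand sides of Eqs. (3a) and (3b)
        a1 = (1 - eta) * a1 + eta * d2 * s1 / den
        a2 = (1 - eta) * a2 + eta * (-kappa * s1 / den)
    return a1, a2

a1, a2 = steady_state_modes(Delta=0.0)
print(abs(a1), abs(a2))
```

Sweeping `Delta` and `chi2` with this routine reproduces the kind of scan shown in Fig. 3, where the field amplitudes are evaluated as the nonlinearity is varied at fixed detuning.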
In order to compensate for the detuning with a nonlinearity, we consider a specific example with detuning \(\epsilon=0.002\), and calculate the field modes versus the nonlinearity \(\chi_{2}\) for \(\Delta=0\) (Fig. 3(a)). Figure 3(a) shows that the maximum of the modes occurs at \(\chi_{2}=4\times 10^{-5}\), and Fig. 3(b) shows the field modes versus frequency for \(\epsilon=0.002\) and \(\chi_{2}=4\times 10^{-5}\). The detuned system compensated with a nonlinearity is represented by Fig. 3(b), where the field amplitudes are almost identical to those in Fig. 2(a) (the linear system without the detuning parameter), but with slightly reduced linewidth. Figure 3(c) shows a 3D plot of the \(a_{1}\) field amplitude versus \(\chi_{2}\) and \(\Delta\) for \(\epsilon=0.002\), which shows that the maximum occurs at \(\Delta=0\) and \(\chi_{2}=4\times 10^{-5}\). The decay above \(\chi_{2}=4\times 10^{-5}\) is much faster than below it.

## III Numerical analysis

The Kerr nonlinearity is an efficient mechanism to tune the resonant frequency of a resonator by setting the power of the incident wave. This tuning is crucial in highly sensitive EP-based systems, since small perturbations can considerably affect their performance. We use the finite element method implemented in Comsol Multiphysics [38] to analyze a single ring resonator as well as a coupled ring resonator containing a Kerr nonlinearity, in a 2D, z-invariant model. Figure 4(a) shows the \(z\)-component of the electric field, \(|E_{z}|\), of a single resonator coupled to a waveguide, at its resonant wavelength \(1.565\,\mu\)m. The radius of the ring is \(r_{0}/\lambda_{0}=4\) with \(\lambda_{0}=1.55\,\mu\)m, the refractive indices of the core and cladding are 2.5 and 1.5, respectively, and the width of the core (including the waveguide) is \(0.2\,\mu\)m. Figure 4(b) shows the transmittance \(T=|S_{21}|^{2}\), where \(S_{21}\) is the S-parameter relating port 1 to port 2, for two radii: \(r_{0}/\lambda_{0}=4\) (solid line) and \(r_{0}/\lambda_{0}=4-10^{-3}\) (dashed-dotted line). The case with \(r_{0}/\lambda_{0}=4-10^{-3}\) also contains the Kerr nonlinearity (dotted line), where the nonlinearity is tuned so that the resonant frequency equals that of the resonator with radius \(r_{0}/\lambda_{0}=4\). The incident power in the simulation was \(P_{in}=1\) W, which led to a tuning nonlinearity of \(\chi=2.6\times 10^{-7}\,\mathrm{cm}^{2}/\mathrm{W}\), where \(\chi\) enters the refractive index of the nonlinear resonator as

\[n=n_{\mathrm{core}}+\frac{c\epsilon_{0}n_{\mathrm{core}}\chi}{2}|E|^{2} \tag{4}\]

with \(n_{\mathrm{core}}=2.5\) and \(c=3\times 10^{8}\,\mathrm{m}/\mathrm{s}\). In the previous section, we have shown that a detuning in the degenerate resonant frequency of a coupled resonator can be compensated by utilizing Kerr nonlinearity. In Fig. 5, we first tune a coupled resonator with balanced gain and loss to the EP, then we apply a detuning in the radius of one resonator, followed by introducing the nonlinearity to compensate for the detuning. Figure 5(a) shows the transmittance \(|S_{41}|^{2}\) of the coupled resonators with radii \(r_{1,2}/\lambda_{0}=4\), and refractive indices \(n_{1}=2.5-i0.016\), \(n_{2}=2.5+i0.016\) (dotted line) and \(n_{1}=2.5-i0.0174\), \(n_{2}=2.5+i0.0174\) (solid line).
Figure 4: A single resonator without and with Kerr-nonlinearity. (a) Field distribution. (b) Transmittance.

The solid line has the characteristic of an EP, with one resonance close to the resonance of a single resonator. Figure 5(b) (solid line) shows the transmittance of the same structure with radii \(r_{1}/\lambda_{0}=4\), \(r_{2}/\lambda_{0}=4-10^{-3}\), and the same refractive indices as in Fig. 5(a), in which the magnitude of the transmittance at the EP is reduced drastically. To compensate for this detuning, Kerr nonlinearity is considered in the second resonator with \(\chi=8.07\times 10^{-9}\,\mathrm{cm^{2}/W}\) (dotted line). This value of \(\chi\) is found by analyzing the system with different values of \(\chi\) and incident power \(P_{in}=1\) W. We note that we can fix the value of the nonlinearity and find the suitable amount of incident power in the same manner. For large values of \(\chi\), convergence of the numerical simulation could hardly be achieved at an EP, and the number of iterations (a setting in Comsol) was increased to solve this problem. The \(|E_{z}|\) distribution of the coupled resonator containing the nonlinearity, at the EP, is shown in Fig. 5(c).

In a realistic configuration, we tune the system to an EP by adjusting the input power. Figures 6(a) and 6(b) show the transmittance \(|S_{41}|^{2}\) and \(|S_{21}|^{2}\), respectively, in which \(\chi=8.07\times 10^{-9}\,\mathrm{cm^{2}/W}\) is considered. This amount of Kerr nonlinearity can be obtained in, for instance, an oil-filled cavity [39]. Analogous to Fig. 3(c), at \(P_{in}=1\) W (the default value in Figs. 4 and 5), the transmittance is maximum, while below (above) this value, the transmittance decreases drastically (smoothly).

## IV Dynamics

In this section, we analyze the dynamics of the system around the EP, and solve the time-dependent coupled mode equation, Eq. (1), numerically. First, a linear system (\(\chi_{1}=\chi_{2}=0\)) with a sinusoidal excitation \(S_{1}=A\mathrm{sin}(\omega t)\), and dimensionless parameters \(A=0.01\), \(\omega=\omega_{0}=3\), \(\delta_{1}=-\delta_{2}=1\), and \(\epsilon=0\) is studied. Figure 7 shows the time evolution of the field amplitudes \(|\psi_{j}|\) for different values of the coupling around \(\kappa=1\). We note that for \(\kappa>1\), the eigenvalues of the system are entirely real (PT-symmetric region), while for \(\kappa<1\) the eigenvalues become complex (broken PT-symmetric region). The boundary of these two regions, \(\kappa=1\), has the characteristic of an EP. For the coupling \(\kappa=1.1\), Fig. 7(a) (left) depicts the field amplitudes oscillating in time, and Fig. 7(a) (right) shows the field amplitudes spanning an elliptical route. This numerical result is very close to the analytical one in the supplementary information. As the coupling decreases to \(\kappa=1+10^{-3}\), approaching the EP, the rate of oscillation decreases (Fig. 7(b) (left)), and the area spanned by \(|\psi_{2}|\) vs \(|\psi_{1}|\) shrinks. At the EP, \(\kappa=1\), Fig. 7(c) (left) shows the field amplitudes growing in time, while \(|\psi_{2}|\) vs \(|\psi_{1}|\) shows a linear dependence, \(|\psi_{1}|=|\psi_{2}|\) (Fig. 7(c) (right)). For \(\kappa=1-10^{-4}\), the field amplitudes grow in time (Fig. 7(d) (left)), and \(|\psi_{2}|\) vs \(|\psi_{1}|\) shows a linear dependence (Fig. 7(d) (right)). In Fig. 8, we investigate the interplay between detuning and nonlinearity. The parameter values are chosen carefully to obtain better convergence of the numerical method. For instance, for larger detuning values, higher nonlinearity must be employed to compensate its effect, which can lead to divergence of the problem.
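For reference, a minimal sketch of such a time-domain integration of Eq. (1) for the linear dimer (\(\chi_{1}=\chi_{2}=0\)) is given below; splitting \(\psi\) into real and imaginary parts and taking \(\delta_{1}=\delta_{2}=1\) as balanced gain and loss are our implementation choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A minimal sketch of integrating the time-dependent GP equation, Eq. (1),
# for the linear dimer of Fig. 7 (chi1 = chi2 = 0). We take delta1 =
# delta2 = 1 as balanced gain and loss, and split psi into real and
# imaginary parts; both are our implementation choices.

w0, d1, d2, eps, kappa, A, w = 3.0, 1.0, 1.0, 0.0, 1.0, 0.01, 3.0
chi1 = chi2 = 0.0

def rhs(t, y):
    p1, p2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    s1 = A * np.sin(w * t)
    # dpsi/dt = -i (H psi - S), with H from Eq. (1)
    dp1 = -1j * ((w0 - 1j * d1 + chi1 * abs(p1)**2) * p1 + kappa * p2 - s1)
    dp2 = -1j * (kappa * p1 + (w0 + 1j * d2 - eps + chi2 * abs(p2)**2) * p2)
    return [dp1.real, dp1.imag, dp2.real, dp2.imag]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0, 0.0, 0.0], max_step=0.01)
amp1 = np.hypot(sol.y[0], sol.y[1])  # |psi_1|(t)
amp2 = np.hypot(sol.y[2], sol.y[3])  # |psi_2|(t), grows at the EP (kappa = 1)
```

Varying `kappa` around 1 in this sketch reproduces the qualitative regimes of Fig. 7: oscillation for \(\kappa>1\), growth at and below the EP.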
The time evolution of the field amplitudes is shown in Fig. 8(a) for a nonzero detuning \(\epsilon=0.001\), while the other parameters are the same as in Fig. 7(b). Figure 8(b) shows that adding a nonlinearity \(\chi_{2}=10^{-6}\) causes some resonances to appear in the evolution path of the field amplitudes, and Fig. 8(c) shows that with \(\chi_{2}=4.4\times 10^{-5}\) dynamics similar to those in Fig. 7(b) (the linear system with zero detuning) can be obtained, representing the compensation of the detuning with nonlinearity. In Figure 8(d), we considered the coupling \(\kappa=1\), which shows that the evolution of the system with (\(\epsilon\),\(\chi_{2}\))=(\(0.001,9\times 10^{-7}\)) (green dash-dotted line) is almost identical to that with (\(\epsilon\),\(\chi_{2}\))=(0,0).

Figure 5: Coupled resonator: (a) Transmittance far from the EP and at the EP. (b) Transmittance of the detuned and nonlinearity-assisted coupled resonators. (c) Field distribution at the EP.

Figure 6: Transmittance calculated (a) from port 4 and (b) from port 2, versus input power.

## V Conclusion

In conclusion, we studied a coupled resonator with gain and loss, with a detuning parameter and nonlinearity for one of the resonators. The field modes are calculated by solving nonlinear coupled mode theory with self-consistent field and iteration methods. We have shown an excellent compensation of the detuning of the two-coupled-cavity system by using an appropriate amount of nonlinearity. We have provided a full-wave numerical simulation to confirm the proposed approach. The method enables the system to work very close to the EP by tuning the intensity of the incident wave in a fast-process self-tuned scheme.

###### Acknowledgements.

I am grateful to Prof. Filippo Capolino (University of California, Irvine), Alireza Nikzamir (University of California, Irvine) and Prof. Andrey Bogdanov (Harbin Engineering University) for helpful discussions.
2303.16361
Dynamical Modularity in Automata Models of Biochemical Networks
Given the large size and complexity of most biochemical regulation and signaling networks, there is a non-trivial relationship between the micro-level logic of component interactions and the observed macro-dynamics. Here we address this issue by formalizing the existing concept of pathway modules, which are sequences of state updates that are guaranteed to occur (barring outside interference) in the dynamics of automata networks after the perturbation of a subset of driver nodes. We present a novel algorithm to automatically extract pathway modules from networks and we characterize the interactions that may take place between modules. This methodology uses only the causal logic of individual node variables (micro-dynamics) without the need to compute the dynamical landscape of the networks (macro-dynamics). Specifically, we identify complex modules, which maximize pathway length and require synergy between their components. This allows us to propose a new take on dynamical modularity that partitions complex networks into causal pathways of variables that are guaranteed to transition to specific states given a perturbation to a set of driver nodes. Thus, the same node variable can take part in distinct modules depending on the state it takes. Our measure of dynamical modularity of a network is then inversely proportional to the overlap among complex modules and maximal when complex modules are completely decouplable from one another in the network dynamics. We estimate dynamical modularity for several genetic regulatory networks, including the Drosophila melanogaster segment-polarity network. We discuss how identifying complex modules and the dynamical modularity portrait of networks explains the macro-dynamics of biological networks, such as uncovering the (more or less) decouplable building blocks of emergent computation (or collective behavior) in biochemical regulation and signaling.
Thomas Parmer, Luis M. Rocha
2023-03-29T00:01:30Z
http://arxiv.org/abs/2303.16361v2
# Dynamical Modularity in Automata ###### Abstract Given the large size and complexity of most biochemical regulation and signaling networks, there is a non-trivial relationship between the micro-level logic of component interactions and the observed macro-dynamics. Here we address this issue by formalizing the concept of pathway modules developed by _Marques-Pita and Rocha_[1], which are sequences of state updates that are guaranteed to occur (barring outside interference) in the causal dynamics of automata networks after the perturbation of a subset of driver nodes. We present a novel algorithm to automatically extract pathway modules from networks and characterize the interactions that may take place between the modules. This methodology uses only the causal logic of individual node variables (micro-dynamics) without the need to compute the dynamical landscape of the networks (macro-dynamics). Specifically, we identify complex modules, which maximize pathway length and require synergy between their components. This allows us to propose a new take on dynamical modularity that partitions complex networks into causal pathways of variables that are guaranteed to transition to specific dynamical states given a perturbation to a set of driver nodes. Thus, the same node variable can take part in distinct modules depending on the state it takes. Our measure of dynamical modularity of a network is then inversely proportional to the overlap among complex modules and maximal when complex modules are completely decouplable from one another in the network dynamics. We estimate dynamical modularity for several genetic regulatory networks, including the full _Drosophila melanogaster_ segment-polarity network. We discuss how identifying complex modules and the dynamical modularity portrait of networks explains the macro-dynamics of biological networks, such as uncovering the (more or less) decouplable building blocks of emergent computation (or collective behavior) in biochemical regulation and signaling. ## 1 Introduction Biological intracellular networks are often composed of modules that are formed from molecular components and perform a specific function within the cell [2, 3, 4, 5, 6, 7]. These separable components are thought to provide the biological organism with robustness to environmental perturbations and genotypic mutations while still allowing enough flexibility for evolution [8, 9, 10, 11]. Though this kind of modularity is a general and intuitive concept, there are many formal definitions and it is not clear how to best model this phenomenon in living systems. A popular method to model biological systems is with discrete dynamic network models because they are simple to understand and do not rely on the estimation of kinetic parameters (unlike continuous models such as ordinary differential equations) [12, 13]. The network nodes (automata) typically represent genes, transcripts, proteins, molecular species, external inputs, or other qualitative states affecting the biological cell, and they are connected to one another by an edge if they interact in some way. Automata can take any number of discrete states, but the simplest example is Boolean automata which are in one of two states at any given time [14, 15, 16]. These models are often applied to intracellular signaling networks because they can recreate phenotypical states using discrete variables, while ignoring kinetic details. 
Even given a simple model like a Boolean network (BN), it is unclear how to optimally decompose the system into separable modules. Furthermore, networks observed in the real world may contain a large number of nodes and interactions and be arbitrarily complex. Although the interactions between components in such a BN are well defined, it is unclear how the individual node interactions give rise to global properties of the network such as its modularity or attractor landscape. It is, therefore, useful to understand how the micro-dynamics of component interactions give rise to the macro-dynamic behavior observed in networks.

Modularity methods have become a standard approach to decompose graphs and solve problems such as community detection. These approaches typically focus on topological modules, where a module is defined by a higher density of links between nodes in the same module and a lower density of links to nodes in different modules [17, 18]. Such structural modularity measures ignore the dynamical information of the network and may, therefore, be limited when it comes to discovering functional modules [19, 20]. It has also been shown that relying on structural information alone cannot account well for dynamics in control problems on BNs [21]. Additionally, the same circuit topologies in biological systems can switch between distinct steady states and dynamical behaviors (multi-stability and multifunctionality), which indicates that there is not a clear relationship between structure and function [22]. For these reasons, dynamical modularity measures have also been proposed. _Irons et al._ proposed a method to decompose BNs into dynamical modules based on attractors of the system without the details of the underlying network, although with a suitable mathematical model the method can also determine how robust the subsystems are and how they are regulated [23]. _Kolchinsky et al._ proposed a method based on the idea that modules constrain the spread of perturbations and defined perturbation modularity as the autocovariance of perturbed trajectories [24]. Other methods combine structural and dynamical approaches by decomposing a network into structural components and associating these components with dynamical properties [25, 26]. _Zanudo and Albert_ decompose an expanded version of a BN (taking into account its dynamics) into stable motifs, which are strongly-connected components representing partial fixed points of the system. They show that controlling these motifs can drive a network to a desired attractor [27].

We use the threshold network representation of automata network dynamics proposed by _Marques-Pita and Rocha_, the _dynamics canalization map_ (DCM), which represents both Boolean states of every node. They simplified the description of a network's dynamics by explicitly removing redundancy in the logical expressions governing each node's update (described in the following subsections). This redescription makes it possible to describe dynamical modularity, critical nodes that guarantee convergence to steady states (with incomplete knowledge of initial conditions), and measures of macro-level canalization [1]. Using the DCM, they identified the dynamical modules on the _Drosophila melanogaster_ single-cell segment polarity network (SPN) [28] that are controlled by the network's inputs. Here we formally define modules on the DCM.
We expand the analysis of [1] by finding such modules not just for the inputs but for all possible nodes and low-order interactions on the _drosophila_ segment polarity network (SPN). We develop theoretical results to explain the interaction of different dynamical modules and define _synergy_ between modules. The concept of synergy is used to define a subset of modules, called complex modules, that capture all unique dynamical information in the BN. We then use complex modules to define a measure of dynamical modularity for BNs. This measure is driven purely by the network's dynamics rather than its structure. Unlike other measures, our dynamical modularity measure differentiates between different states of a variable. It accounts for the fact that a single variable may be part of multiple functional modules depending on its behavior and thus may be useful in models of multifunctionality in biological systems. Additionally, our measure makes explicit the influence of each variable in each state and the control that a given variable has on the rest of the network. We also use the mean dynamical modularity of a network to quantify its decomposability and use this metric to compare different biological networks. This paper is organized as follows: In the remainder of the Introduction, we give some background on schematic redescription, canalyzing maps, and the DCM introduced in [1]. In section II, we give theoretical results concerning the formal definition of pathway modules (based on the perturbation of seed nodes), complex modules (which have synergy between seed nodes and are of maximal size), and dynamical modularity (based on an optimal partition of the DCM). In section III, we apply our methodology to the _drosophila_ single-cell and parasegment SPN to find complex modules on these networks, quantify their dynamical modularity, and compare them to other biological networks of comparable size. In section IV, we offer our conclusions and note some limitations to our methodology. ### _Boolean Networks_ A Boolean automaton \(x\) can be in one of two states, \(x\in\{0,1\}\) (traditionally ON or OFF, representing activation of the molecule of interest above or below a certain threshold) [14, 15, 16]. The state of an automaton is updated in discrete time steps governed by a logical function of its inputs (e.g., \(A=B\lor C\)). More formally, \(x^{t+1}=f(x^{t}_{1},...,x^{t}_{k})\) for automaton \(x\) with \(k\) inputs at time step \(t\), where the mapping \(f:\{0,1\}^{k}\rightarrow\{0,1\}\) can be read from the automaton's look-up table \(F\) that denotes the output \(x^{t+1}\) for each of its \(2^{k}\) input combinations (refer to Figure 1). A \(BN\) is a graph \(B=(X,E)\) where \(X\) is a set of \(n\) Boolean automata nodes and \(E\) is a set of directed edges \(e_{i,j}\in E\) that indicate that node \(x_{i}\) is an input to node \(x_{j}\)[14, 29]. The _configuration_ of a BN is the value of all automata states \(x\in X\). Automata nodes within the network may be updated synchronously in discrete time steps or asynchronously, where each node has a different update schedule (deterministic asynchronous BNs) or is selected at random with some probability [30, 29]. Deterministic BNs are guaranteed to eventually resolve in some attractor, either a fixed point or a limit cycle of repeating states. In biological networks, attractors are typically associated with some phenotype (such as a wildtype expression pattern) [16]. 
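To make the update scheme concrete, below is a minimal sketch of synchronous Boolean network dynamics using the example GRN of Figure 2; the rules for \(g_{1}\) and \(g_{2}\) (\(g_{1}=i_{1}\), \(g_{2}=P1\)) are our reading of the figure caption, while the P1 and P2 rules are as stated there.

```python
# A minimal sketch of synchronous Boolean network dynamics for the
# example GRN of Figure 2. The rules g1 = i1 and g2 = P1 are inferred
# from the figure caption; P1 and P2 follow the caption's stated logic.

RULES = {
    "i1": lambda s: s["i1"],                  # inputs hold their state
    "i2": lambda s: s["i2"],
    "g1": lambda s: s["i1"],
    "P1": lambda s: s["g1"] and not s["P2"],  # g1 AND (NOT P2)
    "g2": lambda s: s["P1"],
    "P2": lambda s: s["g2"] and s["i2"],      # g2 AND i2
}

def step(state):
    """Synchronous update: every node reads the full time-t configuration."""
    return {node: int(bool(rule(state))) for node, rule in RULES.items()}

state = {"i1": 1, "i2": 1, "g1": 0, "P1": 0, "g2": 0, "P2": 0}
for t in range(8):  # the negative feedback loop produces a limit cycle
    print(t, state)
    state = step(state)
```

With both inputs ON, the negative feedback through P2 drives the network into a limit cycle rather than a fixed point, illustrating why attractors, not single configurations, characterize the long-term behavior.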
### _Node canalization_

_Canalization_ refers to the ability of a subset of inputs to control an automaton's transition [31, 32, 33, 34, 1, 21, 35]. The Boolean transition function of the automaton is thus satisfied by this subset of inputs, and the logical states of its other inputs can be ignored. This reveals inherent redundancy in the transition function that can be removed via schematic redescription, a process introduced in [1]. The Quine-McCluskey Boolean minimization algorithm is used to reduce an automaton's look-up table F to a set of prime implicants [36] that are then combined to form a set of wildcard schemata F\({}^{\prime}\). Within the wildcard schemata, only essential inputs (enputs) are defined as ON or OFF. Every other input is replaced with the wildcard symbol (#), which indicates that those inputs are redundant (given the states of the enputs) and do not affect the automaton's state transition [1]. Additionally, permutations of inputs that leave the transition unchanged are marked with a position-free symbol (∘) and combined to form group-invariant enputs. This captures the input-symmetry within the wildcard schemata and further reduces them to a set of two-symbol schemata without permutation redundancy, F\({}^{\prime\prime}\) (see Figure 1). It is not surprising to find input redundancy and input-symmetry at the automaton level (known as micro-canalization) in biological networks because they are known to be robust, which buffers perturbations to individual inputs and can help maintain wildtype phenotypes [37, 33, 10].

After all redundancy is removed, the necessary and sufficient logic of automata nodes can also be represented by a canalyzing map (CM) [1], a type of threshold network [38]. The CM of a variable x represents all logical transitions (two-symbol schemata) that can result in an update of the state of x. It is composed of _s-units_, which represent nodes (x and its inputs) in a particular discrete state (e.g., ON or OFF), _t-units_ that implement the transition function of x via thresholds, and fibres that connect s-units and t-units together. A fibre may connect one s-unit to one t-unit (indicating that the s-unit sends one _signal_ to the t-unit), or multiple fibres may be merged together so that several s-units send one signal to a t-unit (representing permutation redundancy). T-units have a threshold value \(\tau\) and only fire if they receive at least \(\tau\) simultaneous incoming signals from s-units; s-units fire if they receive any incoming signal from a t-unit. All logical interactions between x and its inputs are represented such that each t-unit corresponds to one schema in F\({}^{\prime\prime}\). Thus the CM is equivalent to the schemata necessary to ensure the logical transition from x\({}^{t}\) to x\({}^{t+1}\) (see Figure 1).
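As an illustration of the first redescription step, the following sketch computes wildcard schemata (prime implicants) from a look-up table by Quine-McCluskey-style merging; it is a simplified merging pass (a full Quine-McCluskey implementation also groups terms by their number of ON bits), shown here on the OR automaton of Figure 1.

```python
from itertools import combinations

# A minimal sketch of the first redescription step: merging look-up
# table entries into wildcard schemata (prime implicants). '#' marks a
# wildcard input. This is a simplified merging pass; a full
# Quine-McCluskey implementation also groups terms by their ON-bit count.

def merge(a, b):
    """Merge two schemata that differ in exactly one position."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "#" + a[i + 1:]
    return None

def prime_implicants(rows):
    rows, primes = set(rows), set()
    while rows:
        merged, used = set(), set()
        for a, b in combinations(rows, 2):
            m = merge(a, b)
            if m is not None:
                merged.add(m)
                used.update({a, b})
        primes |= rows - used   # terms that merged no further are prime
        rows = merged
    return primes

# The OR automaton of Figure 1: input rows where x transitions to 1 (or 0).
print(prime_implicants(["01", "10", "11"]))  # -> {'#1', '1#'}
print(prime_implicants(["00"]))              # -> {'00'}
```

The two schemata {'#1', '1#'} say that a single ON input suffices to turn x ON regardless of the other input, exactly the wildcard redescription F′ shown in Figure 1; detecting the permutation symmetry between them is what the second (two-symbol) redescription step adds.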
### Dynamics canalization maps

The canalyzing map of each automaton can be integrated into a larger threshold network called the DCM [1]. This parsimonious network, with input and symmetry redundancy removed, represents the control logic and dynamics of the entire BN. Each possible state of each variable is represented in the DCM; thus there are \(2n\) s-units for a BN of size \(n\), and t-units that represent each schema, as defined for each individual CM. Because the DCM captures the state updates of all automata in the network, it is possible to determine a logical sequence of signals based on the deterministic firing of s-units and t-units. Given an initial set of s-units (and barring any outside interference), a logical sequence of state updates is guaranteed to occur based on the logic embedded in the schemata F\({}^{\prime\prime}\) and represented by the t-units in the DCM. This sequence of state updates is referred to as a _pathway module_ because it is a sequence of s-units and t-units firing in the DCM, and the state updates occur independently of all other node states that are not involved in the firing of t-units [1]. Importantly, the initial conditions of a pathway module may only define the state of a (small) subset of nodes in the network, while the complement of this set is assumed to be in an unknown state (represented by '#').

Figure 1: **Example automaton with a logical 'OR' transition function**. Left: The node x has two inputs. Center: The look-up table F describing the transition of x and the one- and two-symbol schemata redescriptions (F\({}^{\prime}\) and F\({}^{\prime\prime}\); # is a wildcard input, ∘ is a group-invariant enput). Right: The CM for x with the associated logical function representing both state updates. Black s-units represent nodes in their ON state, white s-units represent nodes in their OFF state, and blue diamonds represent t-units labeled with their threshold value (\(\tau\)). Each edge (fibre) represents the logical contribution of a source unit to an end unit and thus dictates how x will turn ON or OFF. The CM shows that one ON input is sufficient to turn x ON but that two OFF inputs are needed to turn x OFF.

These known and unknown node states are referred to as a _partial configuration_ \(\hat{x}\) of the network. Because dynamics are deterministic, this partial configuration causes a sequence of state updates (which we refer to as dynamical unfolding) that results in an outcome configuration P, which may be either another partial configuration or an attractor of the network. If P is an attractor, then we know that \(\hat{x}\) controls the full network dynamics, because the attractor is guaranteed to occur based on the subset of known s-units in the initial condition if the initial node states are sustained [1]. Thus, we can compute global (macro-scale) dynamical information from only partial knowledge of the network configuration by integrating knowledge of the local (micro-scale) dynamics. We note that the calculation of dynamical unfolding is similar to the calculation of partial steady states in the logical interaction hypergraph [39], the cascading effects of node removal and the calculation of elementary signaling modes on the expanded network [40], and the calculation of the logical domain of influence on the expanded network [41]. However, none of these methods explicitly remove input symmetry from the transition functions of nodes or consider different types of node perturbation. In the next section we give a formal treatment of pathway modules and provide definitions of novel concepts such as pathway module interaction, complex modules, and measures of dynamical modularity.

## 2 Formal Results

Pathway modules can be discovered using a breadth-first search algorithm on the DCM starting from the initial seeds. However, unlike a search on a traditional graph, the different types of units on the DCM must be considered. For a t-unit to be included, its threshold \(\tau\) must be met.
Furthermore, the threshold is met only if its inputs fire simultaneously; therefore, the firing of a t-unit with multiple inputs requires confidence that all of its inputs can fire at the same time. If the initial seeds are pinned (not allowed to change), then they fire at every time step, and all downstream s-units fire repeatedly as well. However, if the initial seeds are only perturbed once (pulse perturbation), then their future state cannot be guaranteed after time \(t=0\). The type of initial perturbation, therefore, affects the pathway module that unfolds on the DCM. As an example, consider the case of the _engrailed_ (_en_) protein from the _drosophila_ SPN (see Fig. 3). If _en_ is perturbed to ON for only a single time step, the subsequent pathway module takes four time steps to unfold (panel a). However, if _en_ is constitutively active, then the resulting pathway module takes six time steps to unfold and reaches two extra node states (_hh_-1 and HH-1), which can fire because the logical condition for _hh_-1 (\(hh_{t+1}=EN_{t}\wedge\neg CIR_{t}\)) is satisfied by the ongoing firing of EN-1 (panel b).

Perturbations may be more complicated than these two extremes of pulse or pinning perturbation. For example, an s-unit may be fired in k-time-step intervals, or may be controlled for a specific length of time and then released. It is also important to factor logical contradiction into the search algorithm when determining pathway modules. If an s-unit representing the state of variable x fires at time t, then an s-unit representing a different state of x cannot also fire at time t. For pinning perturbation, this guarantees that only one state of any variable is ever reached, while for pulse perturbation s-units representing different states of x may fire. The consequence of this is that modules based on pinning perturbation may uncover fixed points in the network but not limit cycles.

Figure 2: **Example GRN**. The interaction graph is shown on the left, the DCM on the right. This network has two inputs (\(i_{1}\), \(i_{2}\)), two genes (\(g_{1}\), \(g_{2}\)), and two proteins (P1, P2). There is a negative feedback loop present, as P1 promotes the creation of P2, an inhibitory protein. P1 requires both \(g_{1}\) and the absence of P2, while P2 requires both \(g_{2}\) and \(i_{2}\). The exact logic of the dynamics is captured in the DCM, where black s-units represent nodes in their active (ON) state, white s-units represent nodes in their inactive (OFF) state, and grey diamonds represent t-units. The seeds of pathway modules \(\mathbf{M}_{P2-1}\) and \(\mathbf{M}_{i1-0}\) are indicated by double edges; the pathway module \(\mathbf{M}_{i1-0}\) is shown in blue. Note that t-units with \(\tau=1\) and no permutation redundancy are left out of the DCM for simplicity.

In the example GRN in Fig. 2, the module \(\mathbf{M}_{i1-0}\) includes the same s-units (g1-0, P1-0, g2-0, and P2-0) and t-units (not shown in the figure), regardless of perturbation type. The module \(\mathbf{M}_{P2-1}\), however, includes the s-units P1-0, g2-0, and P2-0 under pulse perturbation but only P1-0 and g2-0 under pinning perturbation (because P2-0 is a logical contradiction to P2-1). Multiple pathway modules may simultaneously unfold within the complex dynamics of a network. Thus, it is natural to explore how different pathway modules may interact, assuming that their initial conditions are simultaneously activated. This leads to concepts such as independence, logical obstruction, and synergy.
Synergy is used to define complex modules, which are maximal pathway modules whose seeds have synergy between them; complex modules are further used to define the dynamical modularity of the network. Formal definitions of these concepts are given in the following subsections.

Figure 3: **Example of different perturbations in the _drosophila_ SPN.** Black represents nodes in their active (ON) state, white represents nodes in their inactive (OFF) state, and grey represents nodes whose state is unknown. (a) The dynamical unfolding of the pathway module \(\mathbf{M}_{en-1}\) in the _drosophila_ single-cell SPN under pulse perturbation. Node states are not maintained over time and, as a result, cannot activate _hh_ or HH, which require the activation of EN. (b) \(\mathbf{M}_{en-1}\) unfolding under pinning perturbation. The s-units _hh_-1 and HH-1 fire due to the ongoing activation of EN.

### Defining Pathway Modules

The definition of the DCM is provided in the Introduction. Formally, let \(X\) be the set of all nodes in an automata network \(B\) and \(\mathcal{S}\) be the set of all s-units in the DCM (which represents all node states) of \(B\). For Boolean networks, \(|\mathcal{S}|=2|X|\). Let \(\Theta\) be the set of all threshold units in the DCM (its size \(|\Theta|\) depends on the number of redescribed schemata for each node in \(B\)). At any given time \(t\), a set of s-units, \(S^{t}\subset\mathcal{S}\), and a set of t-units, \(\theta^{t}\subset\Theta\), fire in the DCM. A t-unit fires at time \(t\) if its logical conditions are met (determined by its threshold value \(\tau\)), causing its neighboring s-unit to fire at time \(t+1\). The firing of a t-unit means that there is sufficient information to satisfy a schema in the look-up table of the variable associated with the downstream s-unit. The output state of this schema is the state associated with the s-unit (i.e., 0 or 1). An s-unit fires if its predecessor (a t-unit) fires; when it fires, it sends signals to all of its neighboring t-units. An s-unit may also fire at time \(t\) due to an external signal, i.e., a perturbation that is applied to that node variable of \(B\). Such a signal can immediately trigger downstream t-units to fire if their logical conditions are met. In other words, while signals from t-units to s-units propagate with a one time step delay (dt = 1), signals from s-units to t-units propagate immediately (dt = 0) 1. This asymmetry guarantees that the original dynamics of state transitions in \(B\) is preserved in the DCM. Here, we denote the state of a Boolean variable \(x\) by x-1 if \(x\) is ON and x-0 if \(x\) is OFF.

Footnote 1: Alternatively, the s-unit to t-unit transition can have delay dt = 1 and the t-unit to s-unit transition can have delay dt = 0. The choice of which one has dt = 1 is arbitrary because t-units can only connect to s-units in the DCM and vice versa (i.e., either preserves the dynamics of B).

Importantly, it takes one time step for an s-unit to fire after its inputs have fired. There is an important constraint on the set \(S^{t}\) of s-units that fire at time \(t\). If an s-unit that represents the state of \(x\) fires at time \(t\), the s-unit that represents the negation of the state of \(x\) cannot fire. Thus, x-1 \(\in S^{t}\implies\) x-0 \(\notin S^{t}\) and vice versa. \(S^{t}\) represents a partial configuration whereby the logical state of the corresponding node variables \(x\in X\) is known, but any other variable may be in either state (ON or OFF).
The number of (full) configurations that a partial configuration \(S^{t}\) describes is \(2^{|\mathcal{X}|-|S^{t}|}\). As a corollary, \(\max(|S^{t}|)=|\mathcal{X}|\) denotes the situation when the logical state of all variables is known, and we know the precise configuration the network is in at time \(t\).

A _seed set_ is a set of s-units \(S^{0}\subset\mathcal{S}\) that, without loss of generality, are assumed to fire at time \(t=0\). This functions as an external perturbation applied to a subset of variables \(X^{0}\subset\mathcal{X}\) of network B. Note that these s-units can represent variables in either the ON or OFF state. Canalized dynamics is represented by a function \(\mu(S^{t})\to S^{t+1}\), whereby the set of s-units \(S^{t}\) that fire at time \(t\) transitions to another set of s-units that fire at time \(t+1\). Naturally, this function can be applied sequentially for \(t=0,1,\ldots,\infty\), which we refer to as dynamical unfolding (see below). Note also that \(S^{t+1}\cap S^{t}\neq\emptyset\) if any s-units in \(S^{t}\) fire again due to external perturbation, regulation by other members of \(S^{t}\), or self-reinforcement. Importantly, unlike the computation of dynamics in the original Boolean network B, dynamical unfolding in the DCM via \(\mu\) is typically computed with no information about the logical state of many variables. This is possible because the schemata redescription of logical functions introduces the wildcard and position-free symbols.

\(S^{0}\) can be treated differently in the dynamical unfolding process. When studying _pulse perturbations_, s-units \(s\in S^{0}\) are assumed to fire only at \(t=0\), unless the logic of the canalized dynamics (\(\mu\)) leads them to fire again. In contrast, _pinning perturbations_ lock the s-units \(s\in S^{0}\) into firing at every time step \(t\)². In this case, we always have \(S^{t}\subset S^{t+1}\) in the canalized dynamics. Unless otherwise noted, we use pinning perturbation, which represents situations where the perturbation signal is assumed to be in steady state and to last much longer than the transient dynamics that it affects. This is a realistic scenario, especially for gene regulatory network models.

Footnote 2: In control theory, pinning may refer to controlling a variable to any pattern of set states (e.g., an oscillation), but here we use the stricter sense of pinning to a single state. This is equivalent to the node-state overrides assumed by feedback vertex set theory and applied to biological networks [42, 43, 44].

A _module_ in the DCM is defined as \(M^{t}\equiv(S^{t},\theta^{t})\). It is a tuple consisting of a set of s-units \(S^{t}\subset\mathcal{S}\) and t-units \(\theta^{t}\subset\Theta\) that fire at time \(t\) due to a perturbation applied to seed set \(S^{0}\) at \(t=0\). For convenience, we define set operations on modules by applying the operation to both elements of the tuple. Thus, \(M_{i}^{t}\subset M_{j}^{u}\) if and only if \(S_{i}^{t}\subset S_{j}^{u}\) and \(\theta_{i}^{t}\subset\theta_{j}^{u}\), for modules \(M_{i}\) and \(M_{j}\) at time steps \(t\) and \(u\). Other operators, such as union and intersection, are defined similarly. Modules _dynamically unfold_ in time via the function \(\mu(S^{t})\to S^{t+1}\) until a time \(t=T\) when the module either repeats (\(M^{T}\equiv M^{t<T}\)) or is empty (\(M^{T}\equiv(\emptyset,\emptyset)\)) because no s- or t-units can logically fire with the known information.
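Continuing the sketch above, dynamical unfolding under the two perturbation types can be written as a short loop. The helper names (`negate`, `unfold`) are ours, and the contradiction handling implements the rule that pinned external signals take precedence over the natural dynamics.

```python
def negate(s_unit: str) -> str:
    """Opposite state of the same variable, e.g. 'x-1' <-> 'x-0'."""
    name, state = s_unit.rsplit("-", 1)
    return f"{name}-{'0' if state == '1' else '1'}"

def unfold(seed: set, t_units: list, pinning: bool = True, max_t: int = 1000):
    """Unfold the sequence of module sets [S^0, S^1, ..., S^T] from seed S^0.

    Pinning keeps the seeds firing forever, so S^t grows monotonically and
    s-units contradicting a pinned seed are suppressed; under a pulse the
    seeds fire only at t = 0.
    """
    trajectory = [set(seed)]
    contradictions = {negate(s) for s in seed}
    for _ in range(max_t):
        nxt = step(trajectory[-1], t_units)
        if pinning:
            nxt -= contradictions         # pinned signals take precedence
            nxt |= trajectory[-1]         # S^t is a subset of S^(t+1)
        if not nxt or nxt in trajectory:  # empty or repeating module: stop
            break
        trajectory.append(nxt)
    return trajectory
```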
Dynamical unfolding leads to a _pathway module_, a time-ordered sequence of all modules that unfold due to a perturbation given by seed set \(S^{0}\): \(M(S^{0})=[M^{0},M^{1},\ldots,M^{T}]\) for all time steps \(t\leq T\). In turn, the _pathway module set_ includes all s-units that fire in at least one module \(M^{t}\in M(S^{0})\), and is defined as \(S=S^{0\to T}=\bigcup_{t\leq T}S^{t}\). Under pulse perturbation (or other perturbation types), not all elements of \(S\) are guaranteed to fire at time \(T\). However, in the case of pinning perturbation, \(S=S^{T}=S^{\infty}\), since \(S^{t}\subset S^{t+1}\)³. The _length_ of a pathway module \(M(S^{0})\) is \(T\), whereas its _size_ is simply the cardinality of its corresponding pathway module set, \(|S|\). Finally, \(P_{s}\) denotes the set of all pathway modules \(M(S^{0})\) that unfold in a DCM from seed sets of size \(s=|S^{0}|\).

Footnote 3: We assume that external signals take precedence over the natural dynamics of the network. As such, a perturbation that causes an s-unit to fire will prevent any logically contradictory s-units from firing during the same time step. For pinning perturbation, a variable may only fire in one state during the dynamical unfolding of a pathway module; that is, \(x\text{-}1\in S\implies x\text{-}0\notin S\) and vice versa. Thus, pinning perturbation cannot lead to limit cycles or other oscillations.

### Interaction of pathway modules

Given that there is an exponential number of pathway modules, for computational purposes we must narrow down the possible dynamical interactions of the system. Therefore, we define how pathway modules may interact, in order to home in on those that best define the computational dynamics of the network. Specifically, we consider that two pathway modules \(M_{i}\) and \(M_{j}\) can be _combined_ by considering the dynamical unfolding of the union of their seed sets, \(M_{i,j}=\mu(S^{0}_{i}\cup S^{0}_{j})\)⁴.

Footnote 4: Seed sets can, of course, be combined via set-theoretical operations at distinct time steps, but here we only consider unions of seed sets at time \(t=0\).

Pathway module \(M_{i}\) is _subsumed_ by \(M_{j}\) if \(S_{i}\subset S_{j}\). \(M_{i}\) is _partially subsumed_ by \(M_{j}\) if \(S_{i}\cap S_{j}\neq\emptyset\) but \(S_{i}\not\subset S_{j}\) (i.e., there is some overlap between the dynamical unfolding of the two modules). \(M_{i}\) is _temporally subsumed_ by \(M_{j}\) if there exists a \(k\geq 0\) such that \(M_{i}^{t}\subset M_{j}^{t+k}\) for all \(t\leq T\) (the sequence is preserved temporally, where the constant \(k\geq 0\) indicates that the dynamical unfolding of the subsequence \(M_{i}\) within \(M_{j}\) begins \(k\) time steps after \(t=0\)). Pathway module \(M_{i}\) is a _submodule_ of \(M_{j}\) if \(S_{i}^{0}\subset S_{j}^{0}\) and \(S_{i}^{0}\neq\emptyset\). Note that when \(M_{i}\) is a submodule of \(M_{j}\), it does not imply that \(M_{i}\) is subsumed by \(M_{j}\), due to the possibility of logical obstruction (see below). Pathway module \(M_{i}\) is considered _maximal_ if it cannot be temporally subsumed by any other pathway module with the same seed set size. For pinning perturbation, \(S_{i}\not\subset S_{j}\) for all pathway modules \(M_{j}\) with seed set size \(|S_{j}^{0}|\leq|S_{i}^{0}|\). In other words, \(\mu(S_{i}^{0})\) results in a unique set of s-units that cannot be obtained from dynamically unfolding any other pathway module with the same seed set size. \(\Lambda_{s}\subset P_{s}\) is the set of all maximal pathway modules with seed set size \(s\).
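For brevity, the sketch below tracks only the s-unit halves of modules (the \(\theta^{t}\) halves would be compared the same way); under that simplification, the subsumption relations reduce to set comparisons on unfolded trajectories. Function names are again ours, and the edge handling of temporal subsumption (offsets into a longer trajectory) is our interpretation of the definition.

```python
def module_set(trajectory) -> set:
    """Pathway module set S: every s-unit that fires in at least one step."""
    return set().union(*trajectory)

def subsumed(traj_i, traj_j) -> bool:
    """M_i is subsumed by M_j when S_i is contained in S_j."""
    return module_set(traj_i) <= module_set(traj_j)

def temporally_subsumed(traj_i, traj_j) -> bool:
    """M_i^t subset of M_j^(t+k) for all t <= T and some fixed offset k >= 0."""
    return any(
        all(traj_i[t] <= traj_j[t + k] for t in range(len(traj_i)))
        for k in range(len(traj_j) - len(traj_i) + 1)
    )
```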
_Logical obstruction_ exists between pathway modules \(M_{i}\) and \(M_{j}\) if their combination \(M_{i,j}\) results in an s-unit not firing during time step \(t=k\) when that s-unit does fire in either pathway module \(M_{i}\) or \(M_{j}\) at time \(t=k\). For pinning perturbation, this situation occurs when \(\mu(S_{i}^{0})\cup\mu(S_{j}^{0})-\mu(S_{i}^{0}\cup S_{j}^{0})\neq\emptyset\). In cases where there is also no synergy between \(M_{i}\) and \(M_{j}\) (see below), \(\mu(S_{i}^{0}\cup S_{j}^{0})\subset\mu(S_{i}^{0})\cup\mu(S_{j}^{0})\). Logical obstruction can only occur when the union of the seed sets results in a logical contradiction whereby s-units for the state of at least one node variable and its negation are both included (in the seed sets or their dynamical unfolding logic), for example, \(x\text{-}1\in S_{i}^{0}\wedge x\text{-}0\in S_{j}^{0}\). This means that combining the pathway modules results in one or more s-units not firing that would have fired had the pathway modules been separate.

_Synergy_ exists between pathway modules \(M_{i}\) and \(M_{j}\) if their combination \(M_{i,j}\) results in an s-unit (or t-unit) firing during time step \(t=k\) when that s-unit (or t-unit) does not fire in either pathway module \(M_{i}\) or \(M_{j}\) at time \(t=k\). For pinning perturbation we have \(\mu(S_{i}^{0}\cup S_{j}^{0})-\mu(S_{i}^{0})\cup\mu(S_{j}^{0})\neq\emptyset\). If there is no logical obstruction (see above) between \(M_{i}\) and \(M_{j}\), then \(\mu(S_{i}^{0}\cup S_{j}^{0})\supset\mu(S_{i}^{0})\cup\mu(S_{j}^{0})\). This means that additional s-units fire as a result of combining the pathway modules.

Pathway modules \(M_{i}\) and \(M_{j}\) are _decoupled_ if there is no partial subsumption (s-unit overlap), logical obstruction, or synergy between them. Under pinning perturbation, \(\mu(S_{i}^{0}\cup S_{j}^{0})\equiv\mu(S_{i}^{0})\cup\mu(S_{j}^{0})\) and \(|\mu(S_{i}^{0}\cup S_{j}^{0})|=|\mu(S_{i}^{0})|+|\mu(S_{j}^{0})|\) are together necessary and sufficient conditions for decoupling.

Pathway module \(\mathbf{M}_{i}\) is a _complex module_ if \(\mathbf{M}_{i}\in\Lambda_{s}\) and any submodule \(\mathbf{M}_{j}\) of \(\mathbf{M}_{i}\) is synergistic with the submodule \(\mathbf{M}_{k}\) whose seed set is \(S_{k}^{0}=S_{i}^{0}-S_{j}^{0}\). For \(s=1\), this means that all modules \(\mathbf{M}_{i}\in\Lambda_{1}\) are complex. We define \(I_{s}\subset P_{s}\) as the set of all complex modules with seed size \(s\). To be a complex module, then, a pathway module must be maximal and every submodule (seed combination) within it must add synergy. For a given \(s\), each complex module \(\mathbf{M}_{i}\in I_{s}\) represents the introduction of a unique firing sequence that is not part of any other pathway module with seed set size \(x\leq s\) (otherwise it would not be maximal) or any combination of pathway modules with seed set sizes \(x\leq s\) (because every submodule adds synergy). As a consequence, complex modules are the building blocks of a network's dynamics. Importantly, complex modules allow the vast number of possible dynamical interactions to be reduced to a smaller set of unique signaling sequences that may represent (full or partial) biologically functional pathways. Given a DCM, its complex modules are defined using only parameter \(s\), the size of the seed set. The pinning-perturbation conditions above translate directly into code, as the sketch below shows.
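Under pinning perturbation, the interaction tests reduce to comparisons between \(\mu(S_{i}^{0})\), \(\mu(S_{j}^{0})\), and \(\mu(S_{i}^{0}\cup S_{j}^{0})\); the per-time-step versions of the definitions would require comparing trajectories instead of final sets. This continues our illustrative sketch (all names are ours).

```python
def pinned_set(seed: set, t_units: list) -> set:
    """mu(S^0): the full pathway module set reached under pinning."""
    return module_set(unfold(seed, t_units, pinning=True))

def has_obstruction(seed_i, seed_j, t_units) -> bool:
    """Some s-unit fires in M_i or M_j alone but not in the combination."""
    si, sj = pinned_set(seed_i, t_units), pinned_set(seed_j, t_units)
    return bool((si | sj) - pinned_set(seed_i | seed_j, t_units))

def has_synergy(seed_i, seed_j, t_units) -> bool:
    """The combination fires s-units that neither module fires alone."""
    si, sj = pinned_set(seed_i, t_units), pinned_set(seed_j, t_units)
    return bool(pinned_set(seed_i | seed_j, t_units) - (si | sj))

def decoupled(seed_i, seed_j, t_units) -> bool:
    """No overlap, no obstruction, no synergy: fully independent unfolding."""
    si, sj = pinned_set(seed_i, t_units), pinned_set(seed_j, t_units)
    sij = pinned_set(seed_i | seed_j, t_units)
    return sij == si | sj and len(sij) == len(si) + len(sj)
```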
Moreover, the complex modules are hierarchical in the sense that modules with larger seed set sizes are composed of (partially or fully subsumed) modules with smaller seed set sizes. Higher-level complex modules thus capture the emergent behavior (e.g., synergy) that results from combining lower-level modules. For \(s=1\), the set of complex modules \(I_{1}=\Lambda_{1}\) is the minimal set of pathway modules that covers the DCM. However, synergy allows the DCM to be covered using fewer pathway modules (with larger seed sets).

Possible interactions between pathway modules are demonstrated in Fig. 4. In this example network, module \(\mathbf{M}_{\mathrm{P1-0}}\) is subsumed by module \(\mathbf{M}_{\mathrm{i1-0}}\) with either pinning or pulse perturbation (panel a). Thus, \(\mathbf{M}_{\mathrm{i1-0}}\in\Lambda_{1}\) is a maximal module, whereas \(\mathbf{M}_{\mathrm{P1-0}}\) is not. Modules \(\mathbf{M}_{\mathrm{P1-1}}\) and \(\mathbf{M}_{\mathrm{i1-1}}\) are decoupled (dynamically independent), as they share no overlap or synergy regardless of perturbation type (panel b). Module \(\mathbf{M}_{\mathrm{P1-1}}\) logically obstructs module \(\mathbf{M}_{\mathrm{P2-1}}\) with pinning perturbation, as the pinning of P1 prevents the firing of P1-0 and the subsequent unfolding of \(\mathbf{M}_{\mathrm{P2-1}}\) (panel c). By contrast, there is synergy between the modules \(\mathbf{M}_{\mathrm{P1-1}}\) and \(\mathbf{M}_{\mathrm{i2-1}}\) that results in the firing of P2-1 (panel d). If the input i2 is constant, but the node P1 is not (pulse perturbation), several additional downstream s-units will be reached. Panel e shows a limit cycle of the network reached by the combination of modules \(\mathbf{M}_{\mathrm{P1-1}}\), \(\mathbf{M}_{\mathrm{i1-1}}\), and \(\mathbf{M}_{\mathrm{i2-1}}\). In this case, the inputs i1 and i2 are held constant but the node P1 is not (pulse perturbation). Finally, all six complex modules of seed set size \(s=1\) are shown, assuming pinning perturbation (panel f).

### Dynamical Modularity

With the characterization of pathway module interaction and the introduction of complex modules, we may now characterize the macro-scale properties of the network, such as its organization and modularity. The concept of dynamical modularity is analogous to structural modularity in graphs, whereby modularity is considered to occur when interactions within modules are strong and interactions between modules are weak. Dynamical modularity is, however, a more general concept derived from Simon's framing of complex systems in terms of near-decomposability [45], which includes both structural and dynamical interactions [46, 24, 21]. We use it to find subsets of variables and their dynamical states that are more or less easy to decouple in a given multivariate dynamical system. Thus, we can compare complex (dynamical) networks (or systems) to determine which ones are easier to decouple into independently functioning parts.

The _cover_ of a DCM is a set of pathway modules \(\Pi=\{\mathbf{M}_{i},\mathbf{M}_{j},\ldots\}\) such that the union of their corresponding pathway module sets is equivalent to the set of all s-units in the DCM; that is, \(S_{\Pi}=\bigcup_{i}S_{i}=\mathcal{S}\) for each set \(S_{i}\) with corresponding pathway module \(\mathbf{M}_{i}\in\Pi\). Because pathway module sets may overlap, this results in a fuzzy partition of s-units. \(|\Pi|\) is the size of the cover in terms of the number of pathway modules that comprise it.
In practice, we are interested in finding a cover \(\Pi_{s}\) that is composed of modules with no more than \(s\) seeds; additionally, we focus on small \(s\) because modules with smaller seed sets are generally easier to control.

Figure 4: **Example GRN interactions.** The DCM shown here is the same as in Figure 2. Individual pathway modules are differentiated by color; the seed s-unit has a double edge. (a) \(\mathbf{M}_{\mathrm{P1-0}}\) is subsumed by module \(\mathbf{M}_{\mathrm{i1-0}}\). (b) \(\mathbf{M}_{\mathrm{P1-1}}\) and \(\mathbf{M}_{\mathrm{i1-1}}\) are decoupled. (c) There is logical obstruction between \(\mathbf{M}_{\mathrm{P1-1}}\) and \(\mathbf{M}_{\mathrm{P2-1}}\). (d) There is synergy between \(\mathbf{M}_{\mathrm{P1-1}}\) and \(\mathbf{M}_{\mathrm{i2-1}}\). (e) A limit cycle of the network due to \(\mathbf{M}_{\mathrm{P1-1}}\), \(\mathbf{M}_{\mathrm{i1-1}}\), and \(\mathbf{M}_{\mathrm{i2-1}}\). The inputs are held constant but the node P1 is not (pulse perturbation). (f) All six complex modules with seed set size \(s=1\). The nodes shown in yellow are subsumed by multiple other modules.

The _independence_ of a pathway module \(\mathbf{M}_{i}\) from a set of pathway modules \(\Sigma\) is the fraction of unique s-units that fire within the pathway module:

\[\operatorname{ind}(\mathbf{M}_{i},\Sigma)=|S_{i}-S_{\Sigma}|/|S_{i}| \tag{1}\]

where \(S_{\Sigma}=\bigcup_{j}S_{j}\) for each set \(S_{j}\) with corresponding pathway module \(\mathbf{M}_{j}\in\Sigma\). If \(\{\mathbf{M}_{i}\}\cup\Sigma=\Pi\), then \(S_{i}\cup S_{\Sigma}=S_{\Pi}\) and \(\operatorname{ind}(\mathbf{M}_{i},\Sigma)\) gives us the independence of pathway module \(\mathbf{M}_{i}\) within a cover \(\Pi\). The _dynamical modularity_ of a cover \(\Pi\) is the distribution of independence scores for all its pathway modules: \(\operatorname{D}(\Pi)\equiv\{\operatorname{ind}(\mathbf{M}_{i},\Pi-\mathbf{M}_{i})\}\), \(\forall\,\mathbf{M}_{i}\in\Pi\). One can derive several meaningful statistics from this distribution, but here we use its mean value as a characterization of dynamical modularity:

\[\overline{\operatorname{D}}(\Pi)=\Big(\sum_{\mathbf{M}_{i}\in\Pi}\operatorname{ind}(\mathbf{M}_{i},\Pi-\mathbf{M}_{i})\Big)/|\Pi| \tag{2}\]

As there are many possible covers of the DCM, it is useful to find one that is optimal for measuring modularity. One can define optimality of a cover in different ways, for example, as the cover with the minimal size, \(\Pi_{\min}\). However, as the goal is to find modules that are decoupled from one another, here we define optimality by maximizing the mean dynamical modularity \(\overline{\operatorname{D}}(\Pi)\). We define \(\Pi_{s}^{*}\) as the _optimal cover_ of the DCM, where \(\Pi_{s}^{*}\) has the maximum mean dynamical modularity among covers that satisfy the property that \(\mathbf{M}_{i}\in I_{s}\) for all pathway modules \(\mathbf{M}_{i}\in\Pi_{s}\) (i.e., the cover is composed only of complex modules with seed set size \(|S^{0}|\leq s\)). The corresponding mean dynamical modularity of this cover indicates how decouplable a network is into its constituent dynamical building blocks. We note that the optimal cover may be difficult to calculate exactly due to the exponential number of possible module combinations. When networks and seed set sizes are too large to find an exact solution, we use a _greedy selection_ to estimate \(\Pi_{s}^{*}\).
The algorithm starts with an empty set \(\Sigma=\{\}\) and then iteratively adds the complex module \(\mathbf{M}_{i}\in I_{s}\) that has the highest independence score from \(\Sigma\), \(\max(\operatorname{ind}(\mathbf{M}_{i},\Sigma))\), until a cover is reached, such that \(S_{\Sigma}=\mathcal{S}\). We use this cover \(\Sigma\) to estimate the optimal cover and its mean dynamical modularity \(\overline{\operatorname{D}}(\Sigma)\) to estimate the optimal mean dynamical modularity of the network \(\overline{\mathsf{D}}(\Pi_{\mathsf{s}}^{*})\). This quasi-optimal solution provides a lower-bound on \(\overline{\mathsf{D}}(\Pi_{\mathsf{s}}^{*})\). In practice, multiple covers may have the same mean dynamical modularity, meaning that there may be multiple optimal solutions. The modules that compose a cover \(\Pi_{\mathsf{s}}\) have a distribution of seed set sizes that are bounded by \(s\). By definition, the mean dynamical modularity of \(\Pi_{\mathsf{s}}^{*}\) should increase or remain the same as \(s\) increases, because each increase in maximum seed set size allows for more modules to choose from when finding an optimal cover. The _characteristic seed number_\(s*\) occurs when increasing \(s\) no longer increases the mean dynamical modularity of \(\Pi_{\mathsf{s}}^{*}\). This represents the minimal seed set size that allows for a network to optimally be dynamically decoupled. In Fig. 4 panel f, the six complex modules shown comprise \(\Lambda_{1}\) and the associated optimal cover \(\Pi_{\mathsf{i}}^{*}\); these are \(\mathbf{M}_{i1-1}\), \(\mathbf{M}_{\mathsf{P}1-1}\), \(\mathbf{M}_{\mathsf{i}2-1}\), \(\mathbf{M}_{\mathsf{P}2-1}\), \(\mathbf{M}_{\mathsf{i}1-0}\), and \(\mathbf{M}_{\mathsf{i}2-0}\). The mean dynamical modularity of this cover is \(\overline{\mathsf{D}}(\Pi_{\mathsf{i}}^{*})=0.71\). Additionally, there are two complex modules (assuming pinning perturbation) with \(s=2\) (\(\mathbf{M}_{\mathsf{P}1-1,\mathsf{i}2-1}\) and \(\mathbf{M}_{\mathsf{i}1-1,\mathsf{i}2-0}\)). With pinning perturbation, this network can be covered by three modules: \(\Pi_{2}=\{\mathbf{M}_{\mathsf{i}1-1,\mathsf{i}2-0}\), \(\mathbf{M}_{\mathsf{P}1-1,\mathsf{i}2-1}\), \(\mathbf{M}_{\mathsf{i}1-0}\}\), with mean dynamical modularity \(d=0.6\). Note that this minimal cover is not optimal because \(\overline{\mathsf{D}}(\Pi_{\mathsf{i}}^{*})\) has a higher mean dynamical modularity. Rather, the optimal cover at \(s=2\) is \(\Pi_{2}^{*}=\{\mathbf{M}_{\mathsf{i}1-1},\mathbf{M}_{\mathsf{i}2-0}\), \(\mathbf{M}_{\mathsf{P}1-1,\mathsf{i}2-1}\), \(\mathbf{M}_{\mathsf{i}1-0}\}\); \(\overline{\mathsf{D}}(\Pi_{\mathsf{2}}^{*})=0.83\). The dynamical modularity does not increase with greater \(s\) so the characteristic seed number of this network is \(s*=2\). In summary, dynamical modularity allows for the comparison of dynamical organization between networks. Dynamically simple networks will be easier to decouple and will tend to have higher mean dynamical modularity and lower characteristic seed number. More complex networks will have more overlap dynamically and will tend to have lower mean dynamical modularity and higher characteristic seed number. We note that, because our dynamical cover partitions \(s\)-_units_ in the DCM, the same node will belong to different modules depending on what state it is in. This is a more granular but lengthier description of the dynamics of the system than considering the nodes themselves. 
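For concreteness, Eqs. (1) and (2) and the greedy selection just described translate into a few lines of Python. This continues our illustrative sketch (the function names are ours); to guarantee termination, the greedy loop skips candidate modules that add no new coverage, a detail left implicit in the description above.

```python
def ind(s_i: set, sigma: list) -> float:
    """Eq. (1): fraction of s-units in S_i not covered by the sets in Sigma."""
    covered = set().union(*sigma) if sigma else set()
    return len(s_i - covered) / len(s_i)

def mean_dyn_modularity(cover: list) -> float:
    """Eq. (2): mean independence of each module from the rest of its cover."""
    return sum(ind(s, cover[:k] + cover[k + 1:])
               for k, s in enumerate(cover)) / len(cover)

def greedy_cover(candidate_sets: list, all_s_units: set) -> list:
    """Greedy estimate of the optimal cover: repeatedly add the complex-module
    set with the highest independence from the running cover, until every
    s-unit in the DCM is covered (a lower bound on the optimum)."""
    cover, covered, pool = [], set(), list(candidate_sets)
    while covered != all_s_units and pool:
        best = max(pool, key=lambda s: ind(s, cover))
        pool.remove(best)
        if best - covered:  # only keep modules that add coverage
            cover.append(best)
            covered |= best
    return cover
```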
Computationally, our optimal measure of dynamical modularity is NP-hard, as it entails the set cover optimization problem [47, 48] (by comparing all possible sets of complex modules that cover the set \(\mathcal{S}\)). However, the number of modules to consider is greatly reduced by considering only complex modules rather than all pathway modules. For all real-world networks that we observed, \(|I_{s}|\ll|P_{s}|\) because many pathway modules are either subsumed into longer modules or have non-synergistic seeds. Thus, by only considering complex modules, we can greatly reduce the computational complexity of our dynamical modularity optimization problem.

## 3 Experimental Results

We demonstrate our approach by using the above formalism to analyze the dynamical modularity of experimentally validated models of biochemical regulation from systems biology. We focus on the single-cell and full-parasegment _drosophila_ SPN models [28] and compare the analysis to that of additional models such as the yeast cell-cycle [49], _Arabidopsis thaliana_ floral development [50], and T-LGL leukemia [51] networks.

Two versions of the SPN exist to model how a pathway of segment polarity genes (and their protein products) regulates segmentation of the body of the fruit fly during its development. The **single-cell** model is a regulatory network of \(n=17\) interacting genes and proteins represented as Boolean variables [52, 1]. It has 3 input nodes that are not regulated by other nodes and are usually assumed to be pinned ON or OFF: SLP (Sloppy-Paired proteins), nWG (neighboring Wingless protein), and nHH (neighboring Hedgehog protein). The remaining 14 (internal) nodes are regulated by other nodes in the network. A larger **parasegment** network of \(n=60\) gene and protein variables is built by connecting the single-cell model in a linear lattice of 4 cells with periodic boundary conditions. This network models intra- and inter-cellular regulatory pathways where each of the 4 cells has the same 14 internal nodes as the single-cell model, plus one SLP input per cell (which represents the output of upstream developmental pathways and is usually assumed to be pinned ON or OFF). Furthermore, each of the 4 cells in the parasegment regulates its two neighboring cells via its internal WG and HH nodes, which act as external, inter-cellular inputs to the two neighboring cells. Thus, in the parasegment SPN there are no input nWG or nHH nodes for each cell, which results in a network of \(n=60\) variables. This model has been extensively studied [53, 28, 54, 55, 56, 57, 1, 21] and its attractors are fully known for the single-cell case, which makes it a useful example to test our modularity methodology.

We compute all complex modules present in the dynamics of the single-cell and full-parasegment SPN models for seed set size \(s\leq 6\). This analysis allows us to calculate the dynamical modularity of the SPN and compare it to the other systems biology models we analyzed. In the present work we only consider the dynamics that ensues from pinning perturbation of the seed set. This form of perturbation is feasible for experimental work with the target biological models (e.g., gene silencing in _drosophila_ [58]), but our approach also allows for other forms of perturbation (e.g., pulse perturbations) that can be explored in the future.
### Complex modules in the Drosophila Single-Cell SPN Analysis of the _drosophila_ single-cell SPN highlights many useful features of our dynamical modularity methodology. It allows us to reduce the enormous complexity of this dynamical system's state space to only a few building blocks, characterize which dynamics are involved in transient configurations and attractors, understand the dynamical overlap between similar modules, and study the specific effects of individual nodes and sets of nodes (e.g., comparing network inputs to internal nodes). To understand the effect of perturbing a specific node in the single-cell SPN with a brute-force approach would require analysis of \(2^{17}\) configurations. Additionally, there are \(2^{s}\)\(\binom{17}{s}\) different ways to perturb \(s\) seeds, where each node x in the seed set of size s may be in the state x-o or x-1. However, our dynamical modularity methodology offers a much more direct approach without enumerating all network configurations and by reducing the total possible number of seed sets to only those that are complex modules. For seed sets of size \(s=1\), there are 34 possible pathway modules to consider (because there are 34 possible seeds; each node \(x\) of the \(n=17\) nodes may be in the state \(x\)-\(0\) or \(x\)-\(1\)), but only 14 complex modules. This indicates that the other 20 modules do not reveal any dynamical information that is not contained in the 14 complex modules. These complex modules cover the DCM, meaning that they include all 34 node states. For seed sets of size \(s=2\), there are 9 complex modules out of 544 pathway modules; for size \(s=3\), there are 19 complex modules out of 5440 pathway modules; for size \(s=4\), there are 14 complex modules out of 38080 pathway modules. For seed set sizes \(s>4\) there are no additional complex modules. Any higher-order interaction between variables (i.e., modules with larger seed sets) can be understood as a combination of lower-order interactions that may include overlap or contradictions between the respective modules but no synergy. In other words, no novel synergy emerges in the network by pinning more than 4 seeds. Thus, all of the macro-scale computation that the dynamical system is doing involves building blocks that are controllable by a small number of nodes. This results in a total of 56 complex modules. However, this number can be further reduced using the _maximal seed heuristic_ where modules are discarded if they contain seeds that do not initiate maximal pathway modules. That is, we consider _core_ complex modules to be those where each seed's pathway module is itself maximal, \(\mathbf{M}_{x}\in\Lambda_{1}\,,\forall x\in S^{0}\). This reduction removes modules that would have been subsumed if not for contradiction. For example, both \(\mathbf{M}_{\mathrm{ptc-1},n\mathrm{HH-0}}\) and \(\mathbf{M}_{\mathrm{PTC-1},n\mathrm{HH-0}}\) are complex modules in the single-cell SPN; however, the only reason that \(\mathbf{M}_{\mathrm{PTC-1},n\mathrm{HH-0}}\) is not subsumed by \(\mathbf{M}_{\mathrm{ptc-1},n\mathrm{HH-0}}\) is because the former includes the state ptc-\(0\), which contradicts the seed set of the latter. Moreover, \(\mathbf{M}_{\mathrm{PTC-1},n\mathrm{HH-0}}\) is not a core complex module because its submodule \(\mathbf{M}_{\mathrm{PTC-1}}\) is subsumed by \(\mathbf{M}_{\mathrm{ptc-1}}\). Using the maximal seed heuristic, there are 28 core complex modules for the single-cell SPN with maximal seed set size \(s=3\). 
These 28 are sufficient to describe all attractors in the network (with a slight caveat mentioned below) and are listed in the supplementary materials. Only one complex module directly results in an attractor, meaning that the states of all \(n=17\) variables are resolved: \(|S_{(nWG-1,\text{SLP-0},nHH-1)}|=17\), as shown in Fig. 5d. The other 9 attractors of the single-cell SPN can be recovered by combining two or three complex modules (see the supplementary materials for a full description of each attractor). Certain modules appear in multiple attractors. For example, \(|S_{(\text{SLP-1},nHH-1)}|=16\) (shown in Fig. 5c) resolves every node except the input \(nWG\). The module \(|S_{(nWG-0,nHH-1)}|=14\) is identical in its unfolding except that it does not resolve \(wg\) or \(WG\); therefore, this module must be combined with \(\mathbf{M}_{wg-0}\) or \(\mathbf{M}_{wg-1}\) and \(\mathbf{M}_{\text{SLP-0}}\) or \(\mathbf{M}_{\text{SLP-1}}\) to reach an attractor. The module \(|S_{(nWG-1,\text{SLP-0})}|=13\) (shown in Fig. 5c) resolves most nodes and is in the basin of two attractors when \(nHH\) is OFF (when \(nHH\) is ON, it is subsumed by \(\mathbf{M}_{\{nWG-1,\text{SLP-0},nHH-1)}}\) mentioned above). Finally, \(|S_{(\text{ptc-1},\text{SLP-1},nHH-0)}|=16\) (shown in Fig. 5d) and the similar module \(|S_{(\text{ptc-1},nWG-0,nHH-0)}|=16\) reach one of three attractors, depending on the state of the input nodes SLP and nWG5. Footnote 5: Here we see the caveat mentioned above: these three attractors are not quite reached if \(\text{ptc}\) is left pinned. The modules \(\mathbf{M}_{(\text{ptc-1},\text{SLP-1},nHH-0)}\) and \(\mathbf{M}_{(\text{ptc-1},nWG-0,nHH-0)}\) pin \(\text{ptc}\) in the ON state; however, if \(\text{ptc}\) is unpinned, these modules will turn \(\text{ptc}\) OFF (as seen in the respective attractors). This behavior is captured by the complex modules \(\mathbf{M}_{(\text{ptc-1},\text{SLP-1},nHH-0)}\) and \(\mathbf{M}_{(\text{ptc-1},nWG-0,nHH-0)}\), which do result in \(\text{ptc-0}\); however, these are not core complex modules because \(\mathbf{M}_{\text{ptc-1}}\) subsumes \(\mathbf{M}_{\text{PTc-1}}\). Even modules that do not result in a steady-state of the network give us important information about partial control of the network state. The size of the module indicates how many other nodes can be controlled. For example, if a researcher can only perturb one node in the single-cell SPN, then turning en ON has the most effect because \(|S_{\mathbf{en-1}}|=9\) is the longest module at \(s=1\) (note that this one seed alone controls half of the other network nodes). If perturbing two nodes, then \(|S_{(\text{SLP-1},nHH-1)}|=16\) gives the most control. In addition to quantifying how influential a node x is, our methodology also identifies which other node states are reached if x is perturbed. This is useful information when a researcher is interested in reaching certain states in the network. For example, for the T-LGL leukemia network, modules of interest may include those that result in Apoptosis-1 because perturbation of the respective seed set guarantees that the cell death state is reached. Our methodology also indicates how synergy occurs between seeds and the time step at which it occurs. 
For example, \(|S_{\mathrm{SLP-1}}|=7\) and \(|S_{\mathrm{PTC-1}}|=4\); however, combining the modules results in novel synergy, \(|S_{(\mathrm{SLP-1,PTC-1})}|=15\) (and thus resolves every node in the network except the two additional inputs \(nWG\) and \(nHH\)), with synergy between the two modules occurring at time steps \(t=5\) and \(t=6\). Knowing specific states of specific nodes allows for additional inferences when making further perturbations. For example, \(\mathbf{M}_{(\mathrm{SLP-1,nHH-1})}\) results in \(\mathsf{ptc-1}\), so if the state of \(nHH\) is flipped, then \(\mathbf{M}_{(\mathsf{ptc-1,SLP-1,nHH-0})}\) will unfold, resulting in a different attractor (see supplementary materials for a description of the modules). Complex modules also allow us to see how dynamics overlap between different modules. In some cases, this offers alternative control strategies. For example, the unfolding of \(\mathbf{M}_{\mathrm{en-1}}\) is identical to the unfolding of \(\mathbf{M}_{(\mathrm{SLP-0,nWG-1})}\), except that in the former the states of \(wg\) and \(WG\) remain unknown. Perhaps less obvious is that \(\mathsf{PTC-0}\) behaves similarly to \(nHH\)-1: they both fire \(\mathsf{SMO-1}\) and \(\mathsf{CIR-0}\) and contribute to firing \(\mathsf{CIA-1}\). This shared behavior means that \(S_{(\mathrm{SLP-1,nHH-1})}\) and \(S_{(\mathrm{SLP-1,PTC-0})}\) are identical except for the seed nodes and the state of the \(PH\) node. In other cases, the lack of overlap tells us that two modules are independent. For example, \(\mathbf{M}_{\mathrm{en-1}}\) and \(\mathbf{M}_{\mathrm{SLP-1}}\) have no overlap and all of their downstream products are in opposing states to one another. This indicates that only one of these modules will ever be active at a time in the dynamical system. Because of this independence, these modules and modules of greater seed set size that subsume them appear in the optimal covers of the DCM for seed set sizes \(s=[1,4]\). Finally, these complex modules help to elucidate the role of specific nodes, such as the inputs, in the single-cell SPN dynamics. It is already known that the input nodes control much of the network dynamics [1, 21]; however, our analysis shows when the inputs themselves may result in an attractor and when an additional node such as \(wg\) or \(\mathsf{PTC}\) is needed 6. Additionally, we see the effects that internal nodes, such as \(\mathsf{PTC}\) or \(\mathsf{CIR}\), have by themselves. This is especially useful in analyzing the parasegment SPN (\(n=60\)) because all nodes in this network are internal except for the four SLP inputs. For help in analyzing the parasegment SPN, we group modules that have nearly identical (possibly differing in the inclusion of wg or WG) downstream products together: let \(\Sigma_{1}=\{\mathbf{M}_{nWG-0},\mathbf{M}_{\mathrm{SLP-1}}\}\), \(\Sigma_{2}=\{\mathbf{M}_{en-t},\mathbf{M}_{nWG-1,\mathrm{SLP-0}}\}\), \(\Sigma_{3}=\{\mathbf{M}_{nWG-0,nHH-1},\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\}\), \(\Sigma_{4}=\{\mathbf{M}_{en-t,nHH-1},\mathbf{M}_{nWG-1,\mathrm{SLP-0,nHH-1}}\}\), and \(\Sigma_{5}=\{\mathbf{M}_{nWG-0,\mathrm{PTC-0}},\mathbf{M}_{\mathrm{SLP-1,PTC-0}}\}\). As discussed below, these groups are seen often in the parasegment's dynamics. Even though the complex modules discussed here are based on pinning perturbation, many of the results are generalizable to any type of perturbation because the input node states are self-sustaining and, therefore, all of their downstream products are guaranteed to fire at some future point. 
Other modules may be self-sustaining as well. For example, \(|S_{(\mathrm{PTC-1,nHH-0})}|=6\) maintains the state of its inputs (and, therefore, its downstream products) due to a positive feedback loop between PTC-1 and nHH-0.

Figure 5: **Complex modules in the _drosophila_ single-cell SPN.** (a) The DCM of the _drosophila_ single-cell SPN (input nodes have a double edge). Black represents variables in their active (ON) state, white represents variables in their inactive (OFF) state, and grey diamonds represent t-units. The DCM consists of 34 s-units. Note that several t-units have been removed or modified in the figure for clarity of the transition function. (b) Select complex modules (indicated by color) are shown for seed set size \(|S^{0}_{i}|=1\). These modules dynamically unfold from their respective seed sets (highlighted with a double edge) and include the s- and t-units that are guaranteed to fire given pinning perturbation. Modules \(\mathbf{M}_{\mathrm{en-1}}\) (blue), \(\mathbf{M}_{\mathrm{nHH-1}}\) (green), \(\mathbf{M}_{\mathrm{SLP-1}}\) (pink), and \(\mathbf{M}_{\mathrm{nHH-0}}\) (gold) are shown. The s-unit CIR-0 is in both \(\mathbf{M}_{\mathrm{en-1}}\) and \(\mathbf{M}_{\mathrm{nHH-1}}\). Together these four modules cover 59% of the s-units in the DCM. (c) The same as b, but for \(|S^{0}_{i}|=2\). Modules \(\mathbf{M}_{\mathrm{nWG-1,SLP-0}}\) (blue) and \(\mathbf{M}_{\mathrm{nHH-1,SLP-1}}\) (red) are shown. The s-unit CIR-0 is in both modules. Together both modules cover 82% of the s-units in the DCM. (d) The same as b, but for \(|S^{0}_{i}|=3\). Modules \(\mathbf{M}_{\mathrm{nHH-1,nWG-1,SLP-0}}\) (violet) and \(\mathbf{M}_{\mathrm{nHH-0,ptc-1,SLP-1}}\) (brown) are shown. The s-units CIA-0, PH-0, wg-0 and WG-0 are in both modules. Together both modules cover 85% of the s-units in the DCM. Additional modules are shown in supplementary materials.

### Complex modules in the Drosophila Parasegment SPN

We use our dynamical modularity methodology to aid the analysis of the _drosophila_ parasegment SPN. As in the single-cell case, this allows for a great reduction in the complexity of the dynamical system model and enables us to characterize dynamical trajectories within the system. Using complex modules, we are able to find a minimal seed set that reaches the wildtype attractor of _drosophila_ that is smaller than previous estimates seen in the literature. Furthermore, we are able to characterize the transient dynamics toward this attractor in terms of the intracellular modules we found in the single-cell SPN.

It is computationally restrictive to discover all complex modules for the _drosophila_ parasegment model, but as in the single-cell case, the number of complex modules found for low seed set sizes is substantially less than the number of possible pinned perturbations. We again use the maximal seed heuristic to speed up computation time. For \(s=1\), we find 40 complex modules out of 120 possible pathway modules; for \(s=2\), we find 40 core complex modules out of 7080 pathway modules; for \(s=3\), we find 152 core complex modules out of 273760; for \(s=6\), we find 1475 out of more than 3.2 billion. Despite this great reduction, there are still enough core complex modules to make analysis difficult; therefore, we focus only on the modules with the greatest size for each \(s\) value (the quoted counts of possible pathway modules follow directly from the \(2^{s}\binom{n}{s}\) combinatorics, as sketched below).
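A quick sanity check of these counts, for both the single-cell and parasegment models (illustrative only, not part of the analysis pipeline):

```python
from math import comb

def n_pinned_seed_sets(n: int, s: int) -> int:
    """2^s * C(n, s): choose s of n variables and pin each ON or OFF."""
    return 2**s * comb(n, s)

# Single-cell SPN (n = 17): 34, 544, 5440, 38080 for s = 1..4.
print([n_pinned_seed_sets(17, s) for s in range(1, 5)])
# Parasegment SPN (n = 60): 120 (s=1), 7080 (s=2), 273760 (s=3),
# and roughly 3.2 billion for s = 6.
print([n_pinned_seed_sets(60, s) for s in (1, 2, 3, 6)])
```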
We are aided also by the symmetry in the model (the index of each of the four cells does not matter because each one is identical to the next) and the fact that the intracellular dynamics of each cell in the parasegment is the exact same as in the single-cell case, except that they are influenced by inputs from neighboring cells (WG and HH) as well as the external input SLP. As in the single-cell case, the size of complex modules gives us information about partial control in the network. For \(s=1\), the largest module is similarly \(|S_{en-1}|=13\) in any of the four cells. For \(s=2\), we see behavior not seen in the single-cell case: the largest modules are \(|S_{(en-1_{i},en-1_{i\pm 1})}|=28\) and \(|S_{(CIR-1_{i},CIR-1_{i\pm 2})}|=28\), where \(i\) denotes the index of the cell. For \(s=3\), the largest modules are \(|S_{(SLP-1_{i},SLP-1_{i\pm 2},wg-1_{i\pm 1})}|=52\). We already see that three seeds alone can resolve 87% of the states of the \(n=60\) nodes in the network. For \(s=4\), the largest complex modules resolve 56 node states; for \(s=5\), the largest complex modules resolve 59 node states; for \(s=6\), the largest complex modules resolve all 60 node states, which indicates a network attractor (see supplementary materials). By sampling larger pathway modules, we find that 9 nodes are sufficient to reach the wildtype attractor of the parasegment SPN, which is lower than previous estimates seen in the literature [1, 44, 59] (see Fig. 6). The discovery of the complex modules allows us to describe the transient dynamics that occur in the parasegment model toward the wildtype attractor (see Fig. 6 and supplementary materials) and other attractors in the network. We find that inter-cellular interactions can be described by the intracellular complex modules found for the single-cell SPN, where the unfolding of one intracellular module \(\mathbf{M}_{i}\) may initiate another module \(\mathbf{M}_{j}\) by causing the seed of that module to fire (\(S_{j}^{0}\in S_{i}\)). We observe that these transient dynamics move back and forth across cell boundaries as the unfolding of one intracellular module affects its neighboring cells (see the example of the wildtype attractor in Fig. 6 and further examples in supplementary materials). For example, a \(\Sigma_{2}\) module will initiate \(\mathbf{M}_{n\textsc{HH}-1}\) in both adjacent cells (which will interact synergistically with any \(\Sigma_{1}\) or \(\Sigma_{2}\) modules already active in those cells) and \(\Sigma_{2}\) modules in non-adjacent cells will activate \(\Sigma_{3}\) modules in their neighboring cells (via neighboring WG-o and HH-1). Interestingly, some intracellular modules that were dynamically independent in the single-cell SPN work together across cell boundaries in the parasegment. For example, \(\mathbf{M}_{(\textsc{SLP}-1,n\textsc{HH}-1)}\) (a \(\Sigma_{3}\) module) will activate WG-1, which can interact with SLP-o in an adjacent cell to initiate \(\mathbf{M}_{n\textsc{WG}-1,\textsc{SLP}-0}\) (a \(\Sigma_{2}\) module) in that cell, even though \(\Sigma_{2}\) and \(\Sigma_{3}\) modules are dynamically independent (with the exception of CIR-0) within the same cell. By analyzing the largest complex modules, we find that internal nodes play a large role in control of the dynamics, particularly en, wg, PTC, and CIR. For example, the expression of \(\mathbf{M}_{\textsc{CIR}-1}\) in non-adjacent cells activates \(\mathbf{M}_{n\textsc{WG}-0}\) and \(\mathbf{M}_{n\textsc{HH}-0}\) in the other two cells. 
As in the single-cell case, we also find alternative control strategies. For example, either \(\mathbf{M}_{\mathrm{PTC-0,SLP-1}}\) or \(\mathbf{M}_{\mathrm{nHH-1,SLP-1}}\) activates \(\mathbf{M}_{\mathrm{nWG-1}}\) in the two neighboring cells.

We observe that the unfolding of pathway modules indicates a temporal order of events, such that one intracellular module may need to completely unfold to initiate another module in the same cell or an adjacent cell. Of course, during the natural dynamics of the network, multiple modules will be initiated simultaneously based on initial conditions, and this natural activation may speed up convergence to a final state. However, the unfolding of a pathway module \(\mathbf{M}_{i}\) guarantees that any downstream node states \(x\in S_{i}\) will occur (and gives an upper bound, via the module length, on the number of iterations it will take to do so), as long as the seed nodes are pinned for a long enough period. As an example, pinning the 9 seeds indicated in Fig. 6 (4 of which are the SLP inputs) is guaranteed to drive the network to the wildtype attractor in no more than 16 iterations. Given that this is a steady state, 16 iterations is also an upper bound on how long the seed nodes must be controlled, as the network will remain in the wildtype configuration once it reaches it.

As seen above, using the easily discoverable intracellular complex modules allows for a concise description and qualitative understanding of what is happening in the parasegment network, even though the network is too large for an exhaustive description. Because the _drosophila_ parasegment is divided into four identical and modular cells, we can understand the dynamics of the larger network by understanding the complex modules within the individual cells and how those modules interact across cell boundaries. This gives us a clearer picture of the multivariate dynamics in the network than we could get via simple attractor analysis, while at the same time (due to its Boolean nature) being easier to analyze than a more detailed model (e.g., parameter estimation of differential equations). By tracing how signals propagate between dynamically dependent modules, we simplify the macro-level description of the system and, thus, of how the network computes.

### Dynamical Modularity of the Drosophila SPN

We use the complex modules found in the _drosophila_ SPN to estimate the dynamical modularity of the single-cell and parasegment models by finding an optimal cover of the DCM, as described in section 2.3. For seed set size \(s=1\), the optimal cover is defined by the set of complex modules \(\Lambda_{1}\); however, the general problem of finding an optimal cover composed of modules with at most \(s\) seeds is NP-hard, based on the set-cover optimization problem [47, 48]. We can, however, restrict the number of modules in the cover to a maximum defined by parameter \(q\). For \(I\) complex modules there are, therefore, \(\binom{I}{q}\) options to consider for the optimal cover. In general, dynamical modularity scores increase with \(q\) because there is a greater number of module combinations to choose from. However, for larger networks like the parasegment SPN, it is computationally prohibitive to estimate covers with even a moderately high \(q\). In this case, we estimate the cover using a greedy algorithm (see section 2.3), where at each step the module with the highest independence score is added to the cover until \(q\) selections have been made (a brute-force sketch of the bounded search is given below).
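The brute-force version of the bounded search makes the \(\binom{I}{q}\) cost explicit. This sketch reuses `mean_dyn_modularity` from our earlier illustration and is exponential in \(q\).

```python
from itertools import combinations

def optimal_cover(complex_sets: list, all_s_units: set, q: int):
    """Exhaustive bounded search: test every combination of at most q
    complex-module sets (the C(I, q') candidates for each q' <= q) and
    return the cover maximizing mean dynamical modularity."""
    best_cover, best_d = None, -1.0
    for size in range(1, q + 1):
        for combo in combinations(complex_sets, size):
            if set().union(*combo) == all_s_units:  # it is a cover
                d = mean_dyn_modularity(list(combo))
                if d > best_d:
                    best_cover, best_d = list(combo), d
    return best_cover, best_d
```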
In both cases, we only consider core complex modules, using the maximal seed heuristic for inclusion in the cover. For the single-cell SPN, the optimal cover at \(s=1\) is composed of 14 complex modules and has mean dynamical modularity \(\overline{D}(\Pi_{1}^{*})=0.69\). For \(s=2\), we calculate the optimal cover \(\Pi_{2}^{*}\) for \(q\leq 11\); for \(s=3\), we calculate the optimal cover \(\Pi_{3}^{*}\) for \(q\leq 8\); for \(s\geq 4\) there are no more core complex modules, so the solution does not change. We find that the optimal covers have more than the minimal number of modules necessary to cover the DCM for \(s>1\) (see Fig. 7, a-b). For \(s=2\), the DCM can be covered with 5 modules, but the optimal cover is \(|\Pi_{2}^{*}|=10\), \(\overline{D}(\Pi_{2}^{*})=0.73\). For \(s=3\), the minimal cover is only three modules, but the optimal cover is \(|\Pi_{3}^{*}|=8\), \(\overline{D}(\Pi_{3}^{*})=0.81\) (\(s^{*}=3\) is also the characteristic seed number of the single-cell SPN). We also find that the DCM cannot be covered by only pinning the s-units representing the input nodes (SLP, nWG, nHH), because none of these modules include the s-units SMO-0 or CIR-1. Comparing _drosophila_ to other small GRNs, the yeast cell-cycle network [49] (\(|\Pi_{3}^{*}|=7\), \(\overline{D}(\Pi_{3}^{*})=1.0\)) and the _A. thaliana_ cell-fate specification network [50] (\(|\Pi_{3}^{*}|=11\), \(\overline{D}(\Pi_{3}^{*})=0.98\)) have higher modularity scores and characteristic seed numbers \(s^{*}=4\) and \(s^{*}=2\), respectively. Interestingly, in both these networks, we find covers composed of complex modules that are (nearly) completely independent. For the parasegment SPN, the optimal cover for seed set size \(s=1\) is \(|\Pi_{1}^{*}|=40\), \(\overline{D}(\Pi_{1}^{*})=0.83\).

Figure 6: **Dynamical unfolding of the wildtype attractor in the _drosophila_ parasegment.** The minimal steady-state perturbation that we found sufficient to fully resolve the wildtype attractor is shown. Cell boundaries are separated by a blue line; white indicates that a node is OFF in that time step, black indicates that a node is ON, and grey indicates that the node state is unknown. The color of the node label indicates an associated complex module. Select modules are also highlighted in the dynamical unfolding process by colored borders. Arrows indicate the initiation of a new module; arrows pointing to the side or upward indicate that the respective s-unit is initiating a module in a neighboring cell. For example, \(\mathbf{M}_{\mathrm{SLP-0}}\) is present in cell i based on initial conditions (blue arrow). The influence of WG-1 in cell i (blue arrow) initiates \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) in cell i (blue borders). This module turns ON _hh_ and HH, which initiates \(\mathbf{M}_{\mathrm{HH-1}}\) in cell i (green borders) and then turns OFF WG in cell i which, together with WG-0 in cell 3 (red arrows), initiates \(\mathbf{M}_{\mathrm{nWG-0}}\) in cell i; the synergy between \(\mathbf{M}_{\mathrm{HH-1}}\) and \(\mathbf{M}_{\mathrm{nWG-0}}\) results in \(\mathbf{M}_{\mathrm{HH-1,nWG-0}}\) in cell i (which subsumes both individual modules, shown by red borders). This unfolding demonstrates how developmental dynamics interact across cells in the parasegment. See supplementary materials for a full description of the unfolding of the wildtype attractor and the intracellular complex modules involved.
Using our greedy algorithm, we find similar or increased scores for higher \(s\) values: \(|\Pi_{2}^{*}|=35\), \(\overline{D}(\Pi_{2}^{*})=0.83\); \(|\Pi_{3}^{*}|=36\), \(\overline{D}(\Pi_{3}^{*})=0.83\); and \(|\Pi_{4}^{*}|=32\), \(\overline{D}(\Pi_{4}^{*})=0.87\). Due to the sub-optimality of the algorithm, the estimated score is somewhat lower at higher \(s\) values, with \(|\Pi_{6}^{*}|=24\), \(\overline{D}(\Pi_{6}^{*})=0.84\). Our results suggest that the _drosophila_ parasegment SPN is more modular than both the single-cell SPN and the similarly sized T-LGL leukemia network [51] (estimated \(|\Pi_{4}^{*}|=36\), \(\overline{D}(\Pi_{4}^{*})=0.61\); see Fig. 7, c-d).

We can also identify the modules that are included in the optimal cover. For the single-cell SPN, at \(s=2\), this includes \(|S_{(nHH-1,CIR-1)}|=10\) and \(|S_{(nHH-0,ptc-1)}|=6\), and single-seed modules such as \(|S_{en-1}|=9\). For \(s=3\), this includes \(|S_{(SLP-1,nHH-0,ptc-1)}|=16\) (shown in Fig. 5d), \(|S_{(nHH-1,en-1)}|=13\), and additional single-seed modules. In both cases, larger complex modules that are mostly independent cover much of the DCM, and single-seed modules fill in the missing node states. For the parasegment SPN, the majority of the modules in our estimated covers are also of seed set size \(s=1\), with only a few larger complex modules.

Additionally, we see in Fig. 7 that the estimated mean dynamical modularity is roughly constant for both the parasegment SPN and the T-LGL leukemia network as \(s\) is increased. This suggests that even though the optimal cover is impossible to calculate at higher \(s\) values, and larger complex modules that include useful synergies may be missed, the mean dynamical modularity at \(s=1\) is a useful lower bound that may estimate well the true value for higher seed set sizes and may be used for comparison to other networks.

Figure 7: **Comparison of dynamical modularity across GRNs.** (a) Results for the _drosophila_ single-cell SPN, the yeast cell-cycle network, and _Arabidopsis thaliana_. The relative cover size \(|\Pi_{s}|/2n\) is shown for each network's optimal cover \(\Pi^{*}\) (based on maximizing \(\overline{\mathrm{D}}(\Pi_{s})\), shown as full lines) and minimally-sized cover \(\Pi_{\min}\) (dashed lines) per maximum seed set size \(s\). Note that \(|\Pi_{s}|/2n\leq 1\) because there are \(2n\) s-units that together create a trivial cover (every s-unit acts as a module's seed). The intermediate line \(|\Pi_{s}|=n\) is indicated by grey dashes. We see that the optimal cover has more than the minimal number of modules for \(s>1\). (b) Same as panel a, but the dynamical modularity score \(d=\overline{\mathrm{D}}(\Pi_{s})\) is shown for each network's optimal cover \(\Pi^{*}\) (full lines) and minimally-sized cover \(\Pi_{\min}\) (dashed lines). (c) Results for the _drosophila_ parasegment SPN and the T-LGL leukemia network. We estimate the optimal cover \(\Pi_{s}^{*}\) by greedy selection of the maximum module independence, \(\max(\operatorname{ind}(M_{i},\Pi_{s}-M_{i}))\), as described in Section 2.3. The relative cover size \(|\Pi_{s}^{*}|/2n\) is shown for each network. Grey dashes indicate the line \(|\Pi_{s}|=n\). (d) Same as panel c, but the dynamical modularity score \(d=\overline{\mathrm{D}}(\Pi_{s}^{*})\) is shown for each network.
Due to the suboptimal nature of the greedy selection, \(d\) is sometimes lower for larger maximum seed set sizes. ## 4 Conclusion We have formally defined the pathway modules first discovered in [1] and offered an algorithm to discover them via a modified breadth-first search on a network's DCM. We have also formally analyzed interactions between pathway modules and defined complex modules, which are maximal pathway modules with synergy (dynamic dependence) between seed s-units. This enables us to define the optimal cover of a DCM (a partition of s-units based on complex modules) and the associated mean dynamical modularity. This modularity methodology is advantageous in that it greatly reduces the combinatoric number of pathway modules in Boolean GRNs for a given seed set size, \(2^{s}\binom{n}{s}\), and only considers those that are complex modules. It also allows a bottom-up approach to defining modularity based on a single parameter, the maximum seed set size s. We were thus able to analyze all complex modules that occur in the _drosophila_ single-cell SPN, and sample many that occur in the parasegment. This analysis of dynamical modularity allows an in-depth understanding of how dynamic signals propagate between cells in the parasegment and how dynamics unfold toward biologically-relevant attractors. It also provides a means to compare different GRNs based on the size and dynamical modularity of the optimal cover of their DCM for a given seed set size. This method can be applied to any automata network DCM, including those outside of the biological domain. Importantly, this analysis describes the emergent computation that takes place in the example models as single seeds synergistically interact, causing their combined dynamical influence to logically propagate downstream, resulting in either an attractor or a partial configuration that represents the (much reduced) possible network configurations remaining. These partial configurations are useful for control problems because the description of pathway modules elucidates exactly which downstream nodes are guaranteed to be affected and which state they will be changed to. This can be especially helpful for target control [60, 41], where reaching a certain variable state is desirable (such as an apoptotic state in a cancer growth model) or not desirable (such as a proliferative state). Pathway modules, and complex modules in particular, can also be used to find seeds that have a similar effect on downstream targets. Alternative control strategies such as these have potential uses in biological networks, such as designing optimal therapeutic targets. Pathway modules describe the network dynamics in a mechanistic, causal way. All s-units in a pathway module set are guaranteed to be reached given perturbation of the seed set, as long as there are no other interfering signals. This suggests that modules should be somewhat robust to the updating scheme chosen, although additional work is needed to explore module robustness in depth. Pathway module sets, furthermore, provide the logical domain of influence of the seed set [41]. This is a way to estimate which other variables are stabilized (guaranteed to be in a certain state) based on perturbation of a given seed set, regardless of the state of the other variables in the network. 
One drawback to this dynamical modularity methodology is that it is still NP-hard in general, and analysis of moderately-sized networks (such as the _drosophila_ parasegment or the T-LGL leukemia network) becomes difficult. However, it is possible to sample even very large networks with low seed set size and greedy selection criteria to estimate optimal covers. We leave additional analysis of efficient heuristics for finding optimal covers for future work as it is outside the scope of this paper. Despite the computational difficulty of the problem, our methodology is a step toward understanding how network components interact based on the micro-dynamics of node state updates, and how these dynamical building blocks give rise to the macro-dynamics of network behavior. ## Acknowledgements Luis M. Rocha was partially funded by: National Institutes of Health, National Library of Medicine grant 1Ro1LM012832; Fundacao para Ciencia e Tecnologia grant: 2022.09122.PTDC; National Science Foundation Research Traineeship "Interdisciplinary Training in Complex Networks and Systems" Grant 1735095. The funders had no role in study design, data collection and analysis, decision to publish, or any opinions, findings, conclusions or recommendations expressed in the manuscript. The authors would also like to thank Deborah Rocha for scientific copy editing. ## Data Availability Network data can be retrieved at [https://github.com/tjparmer/dynamical_modularity](https://github.com/tjparmer/dynamical_modularity) and [https://github.com/rionbr/CANA/tree/master/cana/datasets](https://github.com/rionbr/CANA/tree/master/cana/datasets)[61]. ## Code Availability The code developed for this paper is made available at [https://github.com/tjparmer/dynamical_modularity](https://github.com/tjparmer/dynamical_modularity). ## References * [1] Manuel Marques-Pita and Luis M Rocha. Canalization and control in automata networks: body segmentation in drosophila melanogaster. _PloS one_, 8(3):e55946, 2013. * [2] Leland H Hartwell, John J Hopfield, Stanislas Leibler, and Andrew W Murray. From molecular to modular cell biology. _Nature_, 402(6761):C47-C52, 1999. * [3] Esti Yeger-Lotem, Shmuel Sattath, Nadav Kashtan, Shalev Itzkovitz, Ron Milo, Ron Y Pinter, Uri Alon, and Hanah Margalit. Network motifs in integrated cellular networks of transcription-regulation and protein-protein interaction. _Proceedings of the National Academy of Sciences_, 101(16):5934-5939, 2004. * [4] Isabelle S Peter and Eric H Davidson. Modularity and design principles in the sea urchin embryo gene regulatory network. _FEBS letters_, 583(24):3948-3958, 2009. * [5] Eric H Davidson. Emerging properties of animal gene regulatory networks. _Nature_, 468(7326):911-920, 2010. * [6] Ariel Jaimovich, Ruty Rinott, Maya Schuldiner, Hanah Margalit, and Nir Friedman. Modularity and directionality in genetic interaction maps. _Bioinformatics_, 26(12):i228-i236, 2010. * [7] Marc Vidal, Michael E Cusick, and Albert-Laszlo Barabasi. Interactome networks and human disease. _Cell_, 144(6):986-998, 2011. * [8] Jorg Stelling, Uwe Sauer, Zoltan Szallasi, Francis J Doyle III, and John Doyle. Robustness of cellular functions. _Cell_, 118(6):675-685, 2004. * [9] Amos Tanay, Aviv Regev, and Ron Shamir. Conservation and evolvability in regulatory networks: the evolution of ribosomal regulation in yeast. _Proceedings of the National Academy of Sciences_, 102(20):7203-7208, 2005. * [10] Massimo Pigliucci. Is evolvability evolvable? _Nature Reviews Genetics_, 9(1):75, 2008. 
* [11] U Hernandez, L Posadas-Vidales, and Carlos Espinosa-Soto. On the effects of the modularity of gene regulatory networks on phenotypic variability and its association with robustness. _Biosystems_, 212:104586, 2022. * [12] Reka Albert and Rui-Sheng Wang. Discrete dynamic modeling of cellular signaling networks. _Methods in enzymology_, 467:281-306, 2009. * [13] Jorge GT Zanudo, Steven N Steinway, and Reka Albert. Discrete dynamic network modeling of oncogenic signaling: Mechanistic insights for personalized treatment of cancer. _Current Opinion in Systems Biology_, 9:1-10, 2018. * [14] Stuart A Kauffman. Metabolic stability and epigenesis in randomly constructed genetic nets. _Journal of theoretical biology_, 22(3):437-467, 1969. * [15] Rene Thomas. Boolean formalization of genetic control circuits. _Journal of theoretical biology_, 42(3):563-585, 1973. * [16] Reka Albert and Juilee Thakar. Boolean modeling: a logic-based dynamic approach for understanding signaling and regulatory networks and for making useful predictions. _Wiley Interdisciplinary Reviews: Systems Biology and Medicine_, 6(5):353-369, 2014. * [17] Aaron Clauset, Mark EJ Newman, and Cristopher Moore. Finding community structure in very large networks. _Physical review E_, 70(6):066111, 2004. * [18] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. _Journal of statistical mechanics: theory and experiment_, 2008(10):P10008, 2008. * [19] Roger P Alexander, Philip M Kim, Thierry Emonet, and Mark B Gerstein. Understanding modularity in molecular networks requires dynamics. _Science signaling_, 2(81):pe44-pe44, 2009. * [20] Berta Verd, Nicholas AM Monk, and Johannes Jaeger. Modularity, criticality, and evolvability of a developmental gene regulatory network. _Elife_, 8:e42832, 2019. * [21] Alexander J Gates and Luis M Rocha. Control of complex networks requires both structure and dynamics. _Scientific reports_, 6:24456, 2016. * [22] Alba Jimenez, James Cotterell, Andreea Munteanu, and James Sharpe. A spectrum of modularity in multi-functional gene circuits. _Molecular systems biology_, 13(4):925, 2017. * [23] David J Irons and Nicholas AM Monk. Identifying dynamical modules from genetic regulatory systems: applications to the segment polarity network. _BMC bioinformatics_, 8(1):413, 2007. * [24] A Kolchinsky and LM Rocha. Prediction and modularity in dynamical systems. In _Advances in artificial life. Proceedings of the Eleventh European conference on the synthesis and simulation of living systems (ECAL 2011)_, pages 423-430. MIT Press, 2011. * [25] Soumya Paul, Cui Su, Jun Pang, and Andrzej Mizera. A decomposition-based approach towards the control of boolean networks. In _Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics_, pages 11-20, 2018. * [26] Claus Kadelka, Reinhard Laubenbacher, David Murrugarra, Alan Veliz-Cuba, and Matthew Wheeler. Decomposition of boolean networks: An approach to modularity of biological systems. _arXiv preprint arXiv:2206.04217_, 2022. * [27] Jorge GT Zanudo and Reka Albert. Cell fate reprogramming by control of intracellular network dynamics. _PLoS computational biology_, 11(4):e1004193, 2015. * [28] Reka Albert and Hans G Othmer. The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in drosophila melanogaster. _Journal of theoretical biology_, 223(1):1-18, 2003. * [29] Carlos Gershenson. 
Introduction to random boolean networks. _arXiv preprint nlin/0408006_, 2004. * [30] Inman Harvey and Terry Bossomaier. Time out of joint: Attractors in asynchronous random boolean networks. In _Proceedings of the Fourth European Conference on Artificial Life_, pages 67-75. MIT Press, Cambridge, 1997. * [31] Conrad H Waddington. Canalization of development and the inheritance of acquired characters. _Nature_, 150(3811):563-565, 1942. * [32] Stuart A Kauffman. Emergent properties in random complex automata. _Physica D: Nonlinear Phenomena_, 10(1-2):145-156, 1984. * [33] Mark L Siegal and Aviv Bergman. Waddington's canalization revisited: developmental stability and evolution. _Proceedings of the National Academy of Sciences_, 99(16):10528-10532, 2002. * [34] Stuart Kauffman, Carsten Peterson, Bjorn Samuelsson, and Carl Troein. Genetic networks with canalyzing boolean rules are always stable. _Proceedings of the National Academy of Sciences_, 101(49):17102-17107, 2004. * [35] Santosh Venkatiah Sudharshan Manicka. The role of canalization in the spreading of perturbations in boolean networks. 2017. * [36] W V Quine. A Way to Simplify Truth Functions. _American Mathematical Monthly_, 62:627-631, 1955. * [37] Michael Conrad. The geometry of evolution. _BioSystems_, 24(1):61-81, 1990. * [38] Warren S McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. _The bulletin of mathematical biophysics_, 5(4):115-133, 1943. * [39] Steffen Klamt, Julio Saez-Rodriguez, Jonathan A Lindquist, Luca Simenoni, and Ernst D Gilles. A methodology for the structural and functional analysis of signaling and regulatory networks. _BMC bioinformatics_, 7(1):1-26, 2006. * [40] Rui-Sheng Wang and Reka Albert. Elementary signaling modes predict the essentiality of signal transduction network components. _BMC systems biology_, 5(1):1-14, 2011. * [41] Gang Yang, Jorge Gomez Tejeda Zanudo, and Reka Albert. Target control in logical models using the domain of influence of nodes. _Frontiers in physiology_, page 454, 2018. * [42] Bernold Fiedler, Atsushi Mochizuki, Gen Kurosawa, and Daisuke Saito. Dynamics and control at feedback vertex sets. i: Informative and determining nodes in regulatory networks. _Journal of Dynamics and Differential Equations_, 25(3):563-604, 2013. * [43] Atsushi Mochizuki, Bernold Fiedler, Gen Kurosawa, and Daisuke Saito. Dynamics and control at feedback vertex sets. ii: A faithful monitor to determine the diversity of molecular activities in regulatory networks. _Journal of theoretical biology_, 335:130-146, 2013. * [44] Jorge Gomez Tejeda Zanudo, Gang Yang, and Reka Albert. Structure-based control of complex networks with nonlinear dynamics. _Proceedings of the National Academy of Sciences_, 114(28):7234-7239, 2017. * [45] Herbert A Simon. The architecture of complexity. _Proceedings of the American Philosophical Society_, 106, 1962. * [46] Artemy Kolchinsky, Alexander J Gates, and Luis M Rocha. Modularity and the spread of perturbations in complex dynamical systems. _Physical Review E_, 92(6):060801, 2015. * [47] Richard M Karp. Reducibility among combinatorial problems. In _Complexity of computer computations_, pages 85-103. Springer, 1972. * [48] Bernhard Korte, Jens Vygen, B Korte, and J Vygen. _Combinatorial optimization_, volume 2. Springer, 2012. * [49] Fangting Li, Tao Long, Ying Lu, Qi Ouyang, and Chao Tang. The yeast cell-cycle network is robustly designed. _Proceedings of the National Academy of Sciences_, 101(14):4781-4786, 2004. 
* [50] Alvaro Chaos, Max Aldana, Carlos Espinosa-Soto, Berenice Garcia Ponce de Leon, Adriana Garay Arroyo, and Elena R Alvarez-Buylla. From genes to flower patterns and evolution: dynamic models of gene regulatory networks. _Journal of Plant Growth Regulation_, 25(4):278-289, 2006.
* [51] Ranran Zhang, Mithun Vinod Shah, Jun Yang, Susan B Nyland, Xin Liu, Jong K Yun, Reka Albert, and Thomas P Loughran. Network model of survival signaling in large granular lymphocyte leukemia. _Proceedings of the National Academy of Sciences_, 2008.
* [52] Kai Willadsen and Janet Wiles. Robustness and state-space structure of boolean gene regulatory models. _Journal of theoretical biology_, 249(4):749-765, 2007.
* [53] George Von Dassow, Eli Meir, Edwin M Munro, and Garrett M Odell. The segment polarity network is a robust developmental module. _Nature_, 406(6792):188, 2000.
* [54] Nicholas T Ingolia. Topology and robustness in the drosophila segment polarity network. _PLoS biology_, 2(6):e123, 2004.
* [55] Madalena Chaves, Reka Albert, and Eduardo D Sontag. Robustness and fragility of boolean models for genetic regulatory networks. _Journal of theoretical biology_, 235(3):431-449, 2005.
* [56] Madalena Chaves, Eduardo D Sontag, and Reka Albert. Methods of robustness analysis for boolean models of gene control networks. _arXiv preprint q-bio/0605004_, 2006.
* [57] Madalena Chaves and Reka Albert. Studying the effect of cell division on expression patterns of the segment polarity genes. _Journal of The Royal Society Interface_, 5(Suppl 1):S71-S84, 2008.
* [58] Rion Brattig Correia, Joana M. Almeida, Margot J. Wyrwoll, Irene Julca, Daniel Sobral, Chandra Shekhar Misra, Leonardo G. Guilgur, Hans-Christian Schuppe, Neide Silva, Pedro Prudencio, Ana Novoa, Ana S. Leocadio, Joana Bom, Moises Mallo, Sabine Kliesch, Marek Mutwil, Luis M. Rocha, Frank Tuttelmann, Jorg D. Becker, and Paulo Navarro-Costa. The conserved transcriptional program of metazoan male germ cells uncovers ancient origins of human infertility. _bioRxiv_, 2022.
* [59] Thomas Parmer, Luis M Rocha, and Filippo Radicchi. Influence maximization in boolean networks. _Nature communications_, 13(1):1-11, 2022.
* [60] Jianxi Gao, Yang-Yu Liu, Raissa M D'souza, and Albert-Laszlo Barabasi. Target control of complex networks. _Nature communications_, 5(1):1-8, 2014.
* [61] Rion Brattig Correia, Alexander J Gates, Xuan Wang, and Luis M Rocha. Cana: A python package for quantifying control and canalization in boolean networks. _arXiv preprint arXiv:1803.04774_, 2018.

# Supplementary Materials for Dynamical Modularity in Automata Models of Biochemical Networks

April 19, 2023

## 1 _drosophila_ model

The update rules of the single-cell segment polarity network (SPN) are listed here.
_Inputs:_

\(\mathrm{SLP}_{t+1}=\mathrm{SLP}_{t}\)
\(\mathrm{nWG}_{t+1}=\mathrm{nWG}_{t}\)
\(\mathrm{nHH}_{t+1}=\mathrm{nHH}_{t}\)

_Internal nodes:_

\(wg_{t+1}=(\mathrm{CIA}_{t}\wedge\mathrm{SLP}_{t}\wedge\neg\mathrm{CIR}_{t})\vee(wg_{t}\wedge(\mathrm{CIA}_{t}\vee\mathrm{SLP}_{t})\wedge\neg\mathrm{CIR}_{t})\)
\(\mathrm{WG}_{t+1}=wg_{t}\)
\(en_{t+1}=\mathrm{nWG}_{t}\wedge\neg\mathrm{SLP}_{t}\)
\(\mathrm{EN}_{t+1}=en_{t}\)
\(hh_{t+1}=\mathrm{EN}_{t}\wedge\neg\mathrm{CIR}_{t}\)
\(\mathrm{HH}_{t+1}=hh_{t}\)
\(ptc_{t+1}=\mathrm{CIA}_{t}\wedge\neg\mathrm{EN}_{t}\wedge\neg\mathrm{CIR}_{t}\)
\(\mathrm{PTC}_{t+1}=ptc_{t}\vee(\mathrm{PTC}_{t}\wedge\neg\mathrm{nHH}_{t})\)
\(\mathrm{PH}_{t+1}=\mathrm{PTC}_{t}\wedge\mathrm{nHH}_{t}\)
\(\mathrm{SMO}_{t+1}=\neg\mathrm{PTC}_{t}\vee\mathrm{nHH}_{t}\)
\(ci_{t+1}=\neg\mathrm{EN}_{t}\)
\(\mathrm{CI}_{t+1}=ci_{t}\)
\(\mathrm{CIA}_{t+1}=\mathrm{CI}_{t}\wedge(\neg\mathrm{PTC}_{t}\vee\mathrm{nHH}_{t})\)
\(\mathrm{CIR}_{t+1}=\mathrm{CI}_{t}\wedge\mathrm{PTC}_{t}\wedge\neg\mathrm{nHH}_{t}\)

The parasegment SPN (size \(n=60\)) extends the single-cell model by including all 14 internal nodes in each of four cells, where WG and HH influence both of the cell's neighbors (periodic boundary conditions assumed), and an additional node, SLP, acts as an input to each cell (see [1] and [2] for a full description).

## 2 Algorithms

Pathway modules can be discovered by a breadth-first search (BFS) on the DCM (_algorithm 1_, see Fig. 1), given the caveat of accounting for the temporal dynamics of state updates. Input to the algorithm is a DCM, a seed set \(S^{0}\), and a perturbation type (pinning or pulse). The algorithm tracks which s-units and t-units fire in each time step. As with regular BFS, units are visited via a first-in first-out (FIFO) queue, but a separate queue is created for each time step. The time counter is initialized at 0 and increases once all of the units in the associated queue have been visited. T-units are added to the queue associated with the current time step \(t\) if their threshold is met, while s-units are added to the queue associated with time step \(t+1\). If the perturbation type is pinning, any given unit is only visited once (as they are afterwards assumed to fire every time step). If the perturbation type is pulse, by contrast, units that previously fired are allowed to be visited again in future time steps. The algorithm halts when no new s-units or t-units can be added to a queue (logically, no more units can fire) or the set of s-units and t-units that fire at time \(t=i\) is equivalent to the set of s-units and t-units that fired at a previous iteration \(t=j<i\) (a cycle is reached)1.

Footnote 1: This only applies to pulse perturbation, where the set of units that fire is reset after each time step; for pinning perturbation, this condition is equivalent to the previous one (i.e., no new units are added to the queue) since all units that fire at time \(t\) also fire at time \(t+1\).

The algorithm returns a hash table keyed by time step, where each value is the set of s-units and t-units that fire in the associated time step.
The total set of s-units that fire, \(S\), can then be easily derived by finding the superset of s-units from each time step (and ignoring all t-units). This algorithm can be extended to the general case where each seed s-unit \(x\) is associated with a firing schedule

\[f(x,t)=\begin{cases}1&\text{if }x\text{ is forced to fire at time }t\\ 0&\text{if }x\text{ is not forced to fire at time }t\end{cases},\]

rather than only allowing pinning or pulse perturbation. In this case, the external perturbations dictated by the firing schedule take precedence over any units already in the queue. Additionally, the delay associated with units can be generalized; here all s-units take one time step to fire (i.e., they have delay \(dt=1\)); however, s-units can be allowed a higher delay \(dt>1\) such that it takes several (relative) time steps for a signal to reach an s-unit (for example, this may represent cellular processes that operate on a longer time scale than transcription or translation and could be useful for non-synchronous updating schemes [3, 4]).

The set of pathway modules discovered by _algorithm 1_ can then be reduced to a set of complex modules (_algorithm 2_, see Fig. 2). Input to the algorithm is a DCM, a maximum seed set size _max_s_, a set of s-units _seeds_, and a perturbation type (pinning or pulse). Algorithm 2 initially finds all pathway modules with seed set size \(s=1\) whose seed is in the set _seeds_ using _algorithm 1_ and then removes any subsumed modules to create an initial set of complex modules \(I_{1}\). It then iteratively finds the set of complex modules \(I_{s>1}\) with increasing seed set size \(s=2,3,4,\ldots,\)_max_s_. For each size \(s\), pathway modules are found for all possible seed sets, where seeds are obtained from the set _seeds_. This set of potential modules is then reduced to a set of complex modules by removing all modules that are not maximal or that contain non-synergistic submodules. The algorithm halts when the maximum seed set size _max_s_ has been reached and returns the set of complex modules found thus far. If _seeds_ is the set of all s-units in the network, then all pathway modules (and therefore all complex modules) will be found; if, however, _seeds_ is restricted to the maximal seeds, then only complex modules with maximal seeds will be found2. This is the maximal seed heuristic, which finds all core complex modules in the network. All analysis and code for this paper, including _algorithm 1_ and _algorithm 2_, was written in Python 2.7 with the assistance of the CANA package for finding Boolean network DCMs [5]. The code is publicly available at [https://github.com/tjparmer/dynamical_modularity](https://github.com/tjparmer/dynamical_modularity).
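Before the pseudo-code figures below, here is a minimal runnable sketch (ours, not the released implementation) that encodes the single-cell update rules from Section 1 and computes a pinned unfolding by three-valued (Kleene) iteration. It is a simplified stand-in for _algorithm 1_, which properly operates on the DCM with t-unit thresholds; this version can in principle under-resolve rules whose determinedness is only visible at the prime-implicant level, but it reproduces the small-module examples discussed in this supplement.

```python
# Kleene (three-valued) logic: True, False, or None (= unknown).
def AND(*xs):
    if any(x is False for x in xs):
        return False
    return True if all(x is True for x in xs) else None

def OR(*xs):
    if any(x is True for x in xs):
        return True
    return False if all(x is False for x in xs) else None

def NOT(x):
    return None if x is None else (not x)

# Single-cell SPN update rules from Section 1, one function per node.
RULES = {
    "SLP": lambda s: s["SLP"], "nWG": lambda s: s["nWG"], "nHH": lambda s: s["nHH"],
    "wg":  lambda s: OR(AND(s["CIA"], s["SLP"], NOT(s["CIR"])),
                        AND(s["wg"], OR(s["CIA"], s["SLP"]), NOT(s["CIR"]))),
    "WG":  lambda s: s["wg"],
    "en":  lambda s: AND(s["nWG"], NOT(s["SLP"])),
    "EN":  lambda s: s["en"],
    "hh":  lambda s: AND(s["EN"], NOT(s["CIR"])),
    "HH":  lambda s: s["hh"],
    "ptc": lambda s: AND(s["CIA"], NOT(s["EN"]), NOT(s["CIR"])),
    "PTC": lambda s: OR(s["ptc"], AND(s["PTC"], NOT(s["nHH"]))),
    "PH":  lambda s: AND(s["PTC"], s["nHH"]),
    "SMO": lambda s: OR(NOT(s["PTC"]), s["nHH"]),
    "ci":  lambda s: NOT(s["EN"]),
    "CI":  lambda s: s["ci"],
    "CIA": lambda s: AND(s["CI"], OR(NOT(s["PTC"]), s["nHH"])),
    "CIR": lambda s: AND(s["CI"], s["PTC"], NOT(s["nHH"])),
}

def pinned_unfolding(seeds, max_steps=50):
    """Iterate synchronously with `seeds` pinned; return the variable states
    that become determined (roughly, the s-units of the pathway module)."""
    state = {v: None for v in RULES}
    state.update(seeds)
    seen = [dict(state)]
    for _ in range(max_steps):
        state = {v: f(state) for v, f in RULES.items()}  # synchronous update
        state.update(seeds)                              # pinning perturbation
        if state in seen:                                # cycle reached
            break
        seen.append(dict(state))
    return {v: x for v, x in state.items() if x is not None}

# Example: M_{SLP-1,nHH-1} resolves everything except the input nWG.
print(pinned_unfolding({"SLP": True, "nHH": True}))
```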
```
function thresholded_BFS(DCM, S^0, perturbation_type):
    time t = 0
    time_steps = {t: queue(S^0)}
    unfolding = {t: {}}
    active_step = {S^0}
    visited = {}
    while t <= max(time_steps):
        visited <- active_step
        if active_step == unfolding[i] for any i < t:
            return unfolding
        if perturbation_type == 'pulse':
            active_step = {}
            reset t-unit thresholds
        while time_steps[t]:
            pop first unit x from queue at time_steps[t]
            unfolding[t] <- x
            for each DCM neighbor (s-unit/t-unit) n not in active_step:
                if n is not a contradiction and threshold tau of n is met:
                    dt = the delay of n
                    time_steps[t + dt] <- n
                    active_step <- n
        t = t + 1
    return unfolding
```
**Algorithm 1** Pathway module breadth-first search

Figure 1: **Pseudo-code for algorithm 1.** Input to the function is a DCM, a seed set of s-units \(S^{0}\), and a perturbation type ('pinning' or 'pulse'). The algorithm is similar to breadth-first search (BFS) and explores what part of the DCM is reachable from the seed set, factoring in the perturbation type, relative time steps, logical contradiction, and t-unit thresholds. FIFO queues hashed by relative time step are stored in _time_steps_, while sets of visited units hashed by relative time step are stored in _unfolding_. Units (s or t) are only added to the queue associated with the respective time step if they meet the requisite conditions. S-units are given a default threshold \(\tau=1\) and delay \(dt=1\), while t-units have delay \(dt=0\). Units that are visited are added to the sets _visited_ and _active_step_. Each t-unit must track the s-units that contribute towards its threshold; with pulse perturbation, unit thresholds are reset after each time step. The algorithm halts when a repeat set of s-units and t-units is found for a given time step or when no new units can be reached based on their logical conditions. The function returns the pathway module \(\mathbf{M}_{S^{0}}\) initiated by the seed set, i.e., the sets of units visited at each relative time step. In the above pseudo-code, {key: value} represents a hash table and {} a set.

## 3 Complex modules and dynamical unfolding

Figure 2: **Pseudo-code for algorithm 2.** Input to the function is a DCM, the maximum seed set size _max_s_, a set of s-units _seeds_, and a perturbation type ('pinning' or 'pulse'). At each iteration, new seed sets of size \(s=k\) are generated by combining seed sets of current pathway modules of size \(s=k-1\) with the members of _seeds_ (size \(s=1\)). The pathway modules for these new seed sets are found using algorithm 1; this set of modules is then condensed to a set of complex modules by removing those that are subsumed by other modules or that fail a synergy check. For pinning perturbation, the former condition filters any module \(p\) where \(S_{p}\subset S_{m}\) for any other module \(m\in P\), whereas for pulse perturbation the temporal order of when units fire must be checked as well. The function returns the set of complex modules \(I\). In the above pseudo-code, \(\{\}\) represents a set.

Figure 3: **Example interactions between pathway modules.** (a) Module \(\mathbf{M}_{\mathrm{A}}\) subsumes \(\mathbf{M}_{\mathrm{B}}\), as the unfolding of \(\mathbf{M}_{\mathrm{B}}\) is completely contained in \(\mathbf{M}_{\mathrm{A}}\). (b) \(\mathbf{M}_{\mathrm{A,B-0}}\) logically obstructs \(\mathbf{M}_{\mathrm{A}}\) because of the contradiction between B-0 and B-1, which prevents the rest of \(\mathbf{M}_{\mathrm{A}}\) from unfolding.
(c) \(\mathbf{M}_{\mathrm{A}}\) and \(\mathbf{M}_{\mathrm{D}}\) are decoupled (dynamically independent); even though they share a threshold, \(\mathbf{M}_{\mathrm{A,D}}\) does not fire the t-unit and thus cannot cause any downstream effects. (d) \(\mathbf{M}_{\mathrm{A}}\) and \(\mathbf{M}_{\mathrm{F}}\) are synergistic because \(\mathbf{M}_{\mathrm{A,F}}\) fires a t-unit that neither module can fire by itself, thereby reaching the node H. (e) There is both synergy and logical obstruction present between \(\mathbf{M}_{\mathrm{A}}\) and \(\mathbf{M}_{\mathrm{C-1}}\), as a logical contradiction with C-1 blocks the complete unfolding of \(\mathbf{M}_{\mathrm{A}}\); however, the synergy present also allows \(\mathbf{M}_{\mathrm{A,C-1}}\) to include I and J. (f) An example of synergy between non-maximal seeds. \(\mathbf{M}_{\mathrm{C}}\) is subsumed by \(\mathbf{M}_{\mathrm{A}}\) and \(\mathbf{M}_{\mathrm{K}}\) is subsumed by \(\mathbf{M}_{\mathrm{B-0}}\). Because of the contradiction of B-0 and B-1, \(|\mathbf{S}_{\mathrm{A,B-0}}|=3\) (shown in green). However, if C and K are used as seeds, then a larger module is seen which includes the extra nodes L and M, \(|\mathbf{S}_{\mathrm{C,K}}|=4\) (shown in brown). This synergy is found in two complex modules, \(|\mathbf{S}_{\mathrm{C,B-0}}|=5\) and \(|\mathbf{S}_{\mathrm{A,K}}|=6\); however, neither of these are core complex modules because \(\mathbf{M}_{\mathrm{C}}\) and \(\mathbf{M}_{\mathrm{K}}\) are not maximal.

\begin{table}
\begin{tabular}{l l c}
\hline \hline
\(\mathbf{M}_{\mathbf{i}}\) & dynamical unfolding & \(|\mathbf{S}_{\mathbf{i}}|\) \\ \hline
\(\mathbf{M}_{\text{en-1}}\) (\(\Sigma_{2}\)) & EN-1, _ci-0_, _ptc-0_, CI-0, CIR-0, CIA-0, _hh-1_, HH-1 & 9 \\
\(\mathbf{M}_{\text{nWG-0}}\) (\(\Sigma_{1}\)) & _en-0_, EN-0, _ci-1_, _hh-0_, CI-1, HH-0 & 7 \\
\(\mathbf{M}_{\text{SLP-1}}\) (\(\Sigma_{1}\)) & _en-0_, EN-0, _ci-1_, _hh-0_, CI-1, HH-0 & 7 \\
\(\mathbf{M}_{\text{CIR-1}}\) & _hh-0_, _wg-0_, _ptc-0_, HH-0, WG-0 & 6 \\
\(\mathbf{M}_{\text{PTC-0}}\) & SMO-1, PH-0, CIR-0 & 4 \\
\(\mathbf{M}_{\text{nHH-1}}\) & SMO-1, CIR-0 & 3 \\
\(\mathbf{M}_{\text{nHH-0}}\) & PH-0 & 2 \\
\(\mathbf{M}_{\text{wg-1}}\) & WG-1 & 2 \\
\(\mathbf{M}_{\text{ptc-1}}\) & PTC-1 & 2 \\
\(\mathbf{M}_{\text{nWG-1}}\) & – & 1 \\
\(\mathbf{M}_{\text{SLP-0}}\) & – & 1 \\
\(\mathbf{M}_{\text{PH-1}}\) & – & 1 \\
\(\mathbf{M}_{\text{SMO-0}}\) & – & 1 \\
\(\mathbf{M}_{\text{CIA-1}}\) & – & 1 \\ \hline
\(\mathbf{M}_{\text{SLP-1,nHH-1}}\) (\(\Sigma_{3}\)) & SMO-1, CIR-0, _en-0_, EN-0, _ci-1_, _hh-0_, CI-1, HH-0, _wg-1_, _ptc-1_, WG-1, PTC-1, PH-1 & \\
\(\mathbf{M}_{\text{SLP-1,PTC-0}}\) (\(\Sigma_{5}\)) & SMO-1, PH-0, CIR-0, _en-0_, EN-0, _ci-1_, _hh-0_, CI-1, HH-0, _wg-1_, _ptc-1_ & \\
\(\mathbf{M}_{\text{nWG-0,PTC-0}}\) (\(\Sigma_{5}\)) & SMO-1, PH-0, CIR-0, _en-0_, EN-0, _ci-1_, _hh-0_, CI-1, HH-0, _wg-1_, _ptc-1_ & \\
\(\mathbf{M}_{\text{nWG-0,PTC-0}}\) (\(\Sigma_{4}\)) & CIA-1, _ptc-1_, _hh-1_, _ci-0_, _ptc-0_, HH-1, CI-0, PTC-0, _wg-1_, _ptc-1_, _wg-0_, &
\end{tabular}
\end{table}

Figure 4: **Complex modules in the _drosophila_ single-cell SPN, seed set size \(s=1\).** Select complex modules (indicated by color) are shown: module \(\mathbf{M}_{nWG-0}\) (a), module \(\mathbf{M}_{\mathrm{CIR-1}}\) (b), module \(\mathbf{M}_{\mathrm{PTC-0}}\) (c), and all 14 complex modules with seed set size \(s=1\) (d).
These modules dynamically unfold from their seed set of size \(s=1\) (highlighted with a double edge) to include the s-units and t-units that are guaranteed to fire given pinning perturbation. In panel d, the 14 modules indicated compose \(\Lambda_{1}\) and cover the DCM. S-units and t-units are colored by the largest module that they are a part of; modules that only contain the seed (\(|\mathrm{S}_{i}|=1\)) are in purple. Note that several t-units have been removed or modified in the figure for clarity of the transition function.

Figure 5: **Complex modules in the _drosophila_ single-cell SPN, seed set size \(s=2\).** Select complex modules (indicated by color) are shown: module \(\mathbf{M}_{\mathrm{SLP-1,PTC-0}}\) (a), module \(\mathbf{M}_{\mathrm{nHH-1,en-1}}\) (b), module \(\mathbf{M}_{\mathrm{nHH-1,CIR-1}}\) (c), and module \(\mathbf{M}_{\mathrm{nHH-0,ptc-1}}\) (d). These modules dynamically unfold from their seed set of size \(s=2\) (highlighted with a double edge) to include the s-units and t-units that are guaranteed to fire given pinning perturbation. In panel d, note that s-unit _ptc-0_ does not fire with pinning perturbation due to contradiction with _ptc-1_. Note that several t-units have been removed or modified in the figure for clarity of the transition function.

Figure 6: **Complex modules in the _drosophila_ single-cell SPN, seed set size \(s=3\).** Select complex modules (indicated by color) are shown: module \(\mathbf{M}_{\mathrm{SLP-0,nWG-1,nHH-1}}\) (a), module \(\mathbf{M}_{\mathrm{nWG-0,nHH-1,CIR-1}}\) (b), module \(\mathbf{M}_{\mathrm{nWG-0,nHH-0,ptc-1}}\) (c), and module \(\mathbf{M}_{\mathrm{SLP-0,nHH-0,ptc-1}}\) (d). These modules dynamically unfold from their seed set of size \(s=3\) (highlighted with a double edge) to include the s-units and t-units that are guaranteed to fire given pinning perturbation. In panel a, the indicated module resolves all 17 variable states and results in one of the network's 10 attractors. In panel b, the indicated module is identical to \(\mathbf{M}_{\mathrm{SLP-1,nHH-1,CIR-1}}\) but differs only in the seed. In panel c, the indicated module is identical to \(\mathbf{M}_{\mathrm{SLP-1,nHH-0,ptc-1}}\) but differs only in the seed. Note that several t-units have been removed or modified in the figure for clarity of the transition function.

Figure 7: **Dynamical unfolding in the _drosophila_ parasegment SPN.** Cell boundaries are separated by a blue line; white indicates that a node is OFF in that time step, black indicates that a node is ON, and grey indicates that the node's state is unknown. The color of the node label indicates the associated complex module. (a) The dynamical unfolding of \(\mathrm{SLP}_{1}=\mathrm{SLP}_{3}=0\), \(wg_{2}=1\). The presence of WG-1 in cell 2 allows \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) (labeled as blue) to unfold in cells 1 and 3 by acting as an additional input along with SLP-0. These modules turn off WG and turn on HH in cells 1 and 3, which initiates \(\mathbf{M}_{nWG-0,nHH-1}\) in cells 2 and 4 (labeled as red). Finally, \(\mathbf{M}_{nWG-0,nHH-1}\) in cells 2 and 4 turns off HH, which initiates the (small) \(\mathbf{M}_{nHH-0}\) module in cells 1 and 3 (labeled as yellow). This resolves almost the entire network (52 variables) with only three inputs.
(b) The dynamical unfolding of \(\mathrm{SLP}_{1}=\mathrm{SLP}_{3}=\mathrm{PTC}_{2}=0\), \(\mathrm{SLP}_{2}=1\) is almost identical, except that instead of _wg_-1, SLP-1 and PTC-0 are pinned in cell 2. \(\mathbf{M}_{\mathrm{SLP-1,PTC-0}}\) (labeled as orange) similarly turns on WG, initiating the \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) module in cells 1 and 3. However, the time differences are more pronounced. The unfolding can be seen in three distinct epochs where first \(\mathbf{M}_{\mathrm{SLP-1,PTC-0}}\) must unfold in cell 2, then the \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) modules are initiated in cells 1 and 3, and then \(\mathbf{M}_{nWG-0,nHH-1}\) is initiated in cell 4. This demonstrates how important timing is in the dynamic interplay between cells.

\begin{table}
\begin{tabular}{l l c l}
\hline \hline
\(|S^{0}_{i}|\) & \(S^{0}_{i}\) & \(|S_{i}|\) & associated modules \\ \hline
\(s=1\) & \(en\)-\(1_{i}\) & 13 & \(M_{2}/M_{3}\) \\ \hline
\(s=2\) & (\(en\)-\(1_{i}\), \(en\)-\(1_{i+1}\)), (CIR-\(1_{i}\), CIR-\(1_{i+2}\)) & 28 & \(M_{8}/M_{3}\) \\ \hline
\(s=3\) & \(\{\) (SLP-\(0_{i}\), SLP-\(0_{i+2}\), \(wg\)-\(1_{i+1}\)) \(\}=X\) & 52 & \(M_{7}/M_{6}/M_{4}\) \\ \hline
\(s=4\) & \(\{\) (X + \(en\)-\(1_{i+1}\)), (X + \(en\)-\(1_{i+3}\)) \(\}=Y\) & 56 & \(M_{8}/M_{10}/M_{6}\) \\ \hline
\(s=5\) & \(\{\) (SLP-\(1_{i}\), SLP-\(1_{i+1}\), SLP-\(0_{i+2}\), \(wg\)-\(1_{i+3}\), \(en\)-\(1_{i+3}\)), (SLP-\(1_{i}\), SLP-\(1_{i+1}\), SLP-\(0_{i+3}\), \(wg\)-\(1_{i+2}\), \(en\)-\(1_{i+2}\)), (Y + SLP-\(1_{i+3}\)), (Y + SLP-\(0_{i+3}\)), (SLP-\(0_{i}\), SLP-\(1_{i+2}\), \(wg\)-\(1_{i+3}\), \(en\)-\(1_{i+3}\)), (SLP-\(0_{i}\), SLP-\(0_{i+1}\), SLP-\(1_{i+3}\), \(wg\)-\(1_{i+2}\), \(en\)-\(1_{i+2}\)) \(\}=Z\) & & \(M_{8}/M_{10}/M_{6}\) \\ \hline
\(s=6\) & \(\{\) (Z + SLP-\(0_{i}\)), (Z + SLP-\(1_{i}\)), (\(en\)-\(1_{i}\), PTC-\(0_{i+2}\), SLP-\(0_{i}\), SLP-\(0_{i+1}\), SLP-\(1_{i+2}\), SLP-\(1_{i+3}\)), (\(en\)-\(1_{i}\), PTC-\(0_{i+2}\), SLP-\(0_{i}\), SLP-\(1_{i+1}\), SLP-\(1_{i+2}\), SLP-\(0_{i+3}\)), (\(en\)-\(1_{i}\), PTC-\(0_{i+2}\), SLP-\(0_{i}\), SLP-\(0_{i+1}\), SLP-\(1_{i+2}\), SLP-\(0_{i+3}\)) \(\}\) & 60 & \(M_{8}/M_{10}/M_{6}\) (first two groupings); \(M_{10}/M_{6}/M_{9}\) (latter three) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Maximal modules in the _drosophila_ parasegment SPN.** The seed set \(S^{0}_{i}\) and size \(|S_{i}|\) of the maximal pathway module per seed set size \(s=|S^{0}_{i}|\) are shown, along with the associated intracellular complex modules found within the maximal module. Here, \(M_{1}\) indicates \(M_{nWG-0}\); \(M_{2}\) indicates \(M_{en-1}\); \(M_{3}\) indicates \(M_{nHH-1}\); \(M_{4}\) indicates \(M_{nHH-0}\); \(M_{5}\) indicates \(M_{CIR-1}\); \(M_{6}\) indicates \(M_{SLP-1,nHH-1}\); \(M_{7}\) indicates \(M_{SLP-0,nWG-1}\); \(M_{8}\) indicates \(M_{nHH-1,en-1}\); \(M_{9}\) indicates \(M_{PTC-0}\); finally, \(M_{10}\) indicates \(M_{SLP-0,nWG-1,nHH-1}\). For \(s=5\), all maximal pathway modules have the same associated intracellular modules; for \(s=6\), the first two groupings of seed sets have associated modules \(M_{8}\), \(M_{10}\), and \(M_{6}\), while the latter three groupings have associated modules \(M_{10}\), \(M_{6}\), and \(M_{9}\).
The subscript on the s-unit labels denotes the cell of the parasegment, where \(i\) and \(j\) are arbitrary cells chosen such that there is no logical contradiction between variable states and cellular addition uses modular arithmetic. As maximal modules with larger seed sets tend to build on the same set of modules with smaller seed sets, particular seed sets are condensed to variables for readability. Given the periodic boundaries of the parasegment and the similarity of complex modules, a significant degree of redundancy in regards to cellular indices can be seen in the maximal module sets. A seed set of size \(s\geqslant 6\) is needed to resolve every variable in the network.

Figure 8: **Minimal schemata for attractor control in the _drosophila_ parasegment SPN.** (a) One of the minimal schemata found to reach an attractor, representing the minimal seeds sufficient to drive the dynamics to that attractor with pinning control. Each row represents a cell in the _drosophila_ parasegment and each column the state of the respective node: black represents ON, white represents OFF, and grey represents a wildcard value (i.e., the node state is redundant for convergence to the attractor under pinning perturbation of the seeds). The color of the row label represents the main module group active in that cell during unfolding (pink represents \(\Sigma_{1}\), red represents \(\Sigma_{3}\), and violet represents \(\Sigma_{4}\), see Table 1). (b) Same as panel a, but for the minimal schema sufficient to reach the wildtype attractor. The minimal configurations found are much smaller than the previous estimates without pinning perturbation found in [2].

Figure 9: **Dynamical unfolding of a minimal seed set to reach an attractor in the _drosophila_ parasegment SPN.** Cell boundaries are separated by a blue line; white indicates that a node is OFF in that time step, black indicates that a node is ON, and grey indicates that the node's state is unknown. The color of the node label indicates the associated complex module. All four SLP inputs are pinned; additionally, _wg_ and _en_ are ON in the first cell. \(\mathbf{M}_{\mathrm{en-1}}\) unfolds in cell 1, and _wg_-1 initiates \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) in cell 2 by acting as an input along with SLP-0. This turns on HH in both cells, which extends the unfolding in each cell via additional synergy (modules \(\mathbf{M}_{\mathrm{en-1,nHH-1}}\) and \(\mathbf{M}_{\mathrm{SLP-0,nWG-1,nHH-1}}\), labeled as violet). It also extends the unfolding of \(\mathbf{M}_{\mathrm{SLP-1}}\) in cells 3 and 4 (module \(\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\), labeled as red). This final configuration is in the basin of attraction of the broad stripes attractor [1], and the network will reach this attractor once the seeds _wg_ and _en_ in cell 1 are no longer pinned.

Figure 10: **Dynamical unfolding of the wildtype attractor in the _drosophila_ parasegment SPN.** This figure is the same as Fig. 6 in the main text except that the full unfolding of the minimal seed set to the wildtype attractor is shown in terms of intracellular complex modules. \(\mathbf{M}_{\mathrm{SLP-0}}\) is present in cell 1 based on initial conditions (blue arrow). The influence of WG-1 in cell 4 (blue arrow) initiates \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) in cell 1 (shown by blue borders).
This module turns on _hh_ and HH, which initiates \(\mathbf{M}_{\mathrm{nHH-1}}\) in cell 2 (shown in green), and then turns off WG in cell 1, which, together with WG-0 in cell 3 (indicated by red arrows), initiates \(\mathbf{M}_{nWG-0}\) in cell 2; the synergy between \(\mathbf{M}_{\mathrm{nHH-1}}\) and \(\mathbf{M}_{nWG-0}\) results in \(\mathbf{M}_{\mathrm{nHH-1},nWG-0}\) in cell 2 (which subsumes both modules and is shown by red borders). Based on initial conditions, \(\mathbf{M}_{\mathrm{PTC-0}}\) is also present in cell 1, \(\mathbf{M}_{\mathrm{SLP-0}}\) and \(\mathbf{M}_{\mathrm{wg-0}}\) are present in cell 2, \(\mathbf{M}_{\mathrm{SLP-1}}\) and \(\mathbf{M}_{\mathrm{CIR-1}}\) are present in cell 3, and \(\mathbf{M}_{\mathrm{SLP-1}}\) and \(\mathbf{M}_{\mathrm{wg-1}}\) are present in cell 4. \(\mathbf{M}_{\mathrm{SLP-1}}\) in cell 4 (shown by pink borders) turns off _hh_ and HH, which, together with _hh_-0 and HH-0 in cell 2 (indicated by brown arrows), initiates \(\mathbf{M}_{\mathrm{nHH-0}}\) in cell 3, which is subsumed by \(\mathbf{M}_{\mathrm{nHH-0,PTC-1}}\) (shown by brown borders). Finally, the presence of _hh_-1 and HH-1 in cell 1 initiates \(\mathbf{M}_{\mathrm{nHH-1}}\) in cell 4, which is subsumed by \(\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\) (shown in red).

## 4 _drosophila_ observability and control

The single-cell SPN is small enough that the attractors of the network can be fully enumerated. Analysis of the dynamical activity within each of the 10 attractors corresponds well with the analysis of complex modules, as each attractor can be associated with one or more modules (Table 3). The \(\Sigma_{4}\) module \(\mathbf{M}_{\mathrm{SLP-0,nWG-1,nHH-1}}\) is seen in only one attractor, as it requires an AND condition between all three inputs, resulting in the driver set (SLP=0, nWG=1, nHH=1). The \(\Sigma_{3}\) modules, \(\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\) and \(\mathbf{M}_{nWG-0,nHH-1}\), are seen in four attractors: the former module resolves every variable except for the input nWG and therefore leads to two attractors depending on the state of this input, with driver sets (SLP=1, nWG=1, nHH=1) and (SLP=1, nWG=0, nHH=1); the latter module resolves every variable except for SLP, _wg_, and WG and can lead to three attractors, with driver sets (SLP=1, nWG=0, nHH=1) (seen above), (SLP=0, nWG=0, nHH=1, _wg_=0), and (SLP=0, nWG=0, nHH=1, _wg_=1). The modules \(\mathbf{M}_{\mathrm{SLP-1,nHH-0,PTC-1}}\) and \(\mathbf{M}_{nWG-0,nHH-0,PTC-1}\) are identical except for their seeds and lead to three attractors, as both modules resolve every variable state except for the missing input (which may be in either state)3. The driver sets for the three attractors are therefore (SLP=1, nWG=0, nHH=0, PTC=1), (SLP=1, nWG=1, nHH=0, PTC=1), and (SLP=0, nWG=0, nHH=0, PTC=1). The final two attractors require \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\) in combination with other modules; one requires \(\mathbf{M}_{nHH-0}\) and \(\mathbf{M}_{\mathrm{PTC-0}}\), while the other requires \(\mathbf{M}_{nHH-0,PTC-1}\) (note that there is synergy between nHH-0 and PTC-1 and that this module is also a submodule of \(\mathbf{M}_{\mathrm{SLP-1,nHH-0,PTC-1}}\) and \(\mathbf{M}_{nWG-0,nHH-0,PTC-1}\), meaning that it appears in four attractors). The driver sets for these two attractors are therefore (SLP=0, nWG=1, nHH=0, PTC=0) and (SLP=0, nWG=1, nHH=0, PTC=1).
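As a quick illustration (reusing the hypothetical `pinned_unfolding` sketch from the Algorithms section, with the same synchronous-pinning and Kleene-logic assumptions), a driver set from this discussion can be checked by verifying that pinning it determines every remaining variable:

```python
# Driver set (SLP=1, nWG=0, nHH=1): M_{SLP-1,nHH-1} resolves all but nWG,
# so fixing nWG as well should determine all 17 variable states.
driver = {"SLP": True, "nWG": False, "nHH": True}
resolved = pinned_unfolding(driver)
assert len(resolved) == len(RULES)   # all 17 variables determined
print(sorted(resolved.items()))
```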
Interestingly, the superset of the nodes associated with the minimal driver sets found via this methodology is the same as the driver node set predicted by feedback vertex set theory (inputs SLP, nWG, nHH and internal nodes PTC and _wg_).

Footnote 3: These are complex modules but not core complex modules, as PTC is not a maximal seed (because \(\mathbf{M}_{ptc-1}\) subsumes \(\mathbf{M}_{\mathrm{PTC-1}}\)).

The characterization of dynamical modules here highlights the difference between the driver node set with pinning and pulse perturbation control. If a driver node's state must be constant, as with pinning perturbations, PTC and _wg_ are needed to fully control the network. However, if one is allowed to flip the states of the input variables, then no internal nodes need to be pinned. For example, to reach the attractor that requires \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\), \(\mathbf{M}_{nHH-0}\), and \(\mathbf{M}_{\mathrm{PTC-0}}\), the module \(\mathbf{M}_{\mathrm{SLP-0,nWG-1,nHH-1}}\) can be used, which turns off PTC (and maintains PTC in the OFF state), and nHH can then be switched OFF to reach the attractor. Furthermore, modules \(\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\) and \(\mathbf{M}_{nWG-0,nHH-1}\) turn off _ptc_; if the state of nHH is then flipped, the module \(\mathbf{M}_{nHH-0,PTC-1}\) will unfold, as is seen in four attractors. In general, perturbing nHH along with the other input nodes replaces the need to control PTC, as nHH is an input to PTC. Perturbing the internal node _wg_, meanwhile, allows one to turn _wg_ and WG on or off when SLP=0, nWG=0, and nHH=1. However, SLP can also be turned ON after \(\mathbf{M}_{nWG-0,nHH-1}\) unfolds to turn on _wg_ and WG, and then turned OFF. Alternatively, SLP can be turned OFF while nWG and nHH are ON to initiate \(\mathbf{M}_{\mathrm{SLP-0,nWG-1,nHH-1}}\), which turns off _wg_ and WG; then nWG can be flipped to initiate \(\mathbf{M}_{nWG-0,nHH-1}\). Therefore, perturbing SLP and nWG replaces the need to control _wg_. In a sense, pinning the internal nodes is then just a shortcut that replaces the need for input node perturbations. This highlights the usefulness of complex modules in developing a qualitative understanding of control in a system.

Perhaps unsurprisingly, the same intracellular modules that lead to attractors in the single-cell SPN lead to attractors in the parasegment SPN, when considering six biologically-relevant attractors from [1]. In particular, \(\mathbf{M}_{\mathrm{SLP-0,nWG-1}}\), \(\mathbf{M}_{\mathrm{nWG-0,nHH-1}}\), \(\mathbf{M}_{\mathrm{SLP-1,nHH-1}}\), and \(\mathbf{M}_{\mathrm{SLP-1,nHH-0,PTC-1}}\) each play a significant role, appearing in at least four of the attractors (see Table 4).

The logical inferences made during the dynamical unfolding process are also useful in the problem of observability. If a network has reached a fixed point, then each variable state is unchanging (i.e., pinned); thus, observing a set of sensor nodes with corresponding s-units \(\mathrm{S}_{i}^{0}\) guarantees that the module \(\mathbf{M}_{\mathrm{S}_{i}^{0}}\) has fully unfolded and all the variable states contained in \(\mathrm{S}_{i}\) are present in the current state of the network. Observing only a small subset of such states (2-3 in the case of the _drosophila_ single-cell SPN, equivalent to less than 18% of the network size) is sufficient to figure out which attractor the network is in. The size of this minimal observability set is naturally upper-bounded by the size of the minimal driver set.
For the _drosophila_ parasegment SPN, the minimal observability sets require only two s-units (3% of the network size) when considering only the six biologically-relevant attractors in Table 4.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
Attractor & Minimal observability sets & Minimal control set & Associated modules \\ \hline
11100001111111010 & \{ (nHH-1, SLP-1, nWG-0), (SLP-1, _ptc_-1, nWG-0), (_wg_-1, SLP-1, nWG-0), (SLP-1, nWG-0, WG-1), (CIR-0, SLP-1, nWG-0), (SLP-1, nWG-0, CIA-1), (SLP-1, nWG-0, PH-1), (SLP-1, SMO-1, nWG-0) \} & (nHH-1, SLP-1, nWG-0) & \(\mathbf{M_{6}}\), \(\mathbf{M_{11}}\) \\ \hline
0110000111111010 & \{ (_wg_-1, SLP-0), (SLP-0, WG-1) \} & (nHH-1, nWG-0, _wg_-1, SLP-0) & \(\mathbf{M_{11}}\), \(\mathbf{M_{wg-1}}\), \(\mathbf{M_{SLP-0}}\) \\ \hline
111000011111101101 & \{ (nWG-1, _wg_-1), (nWG-1, _ptc_-1), (nWG-1, PH-1), (nWG-1, CIA-1), (nWG-1, WG-1) \} & & \\ \hline
0001110001000011 & \{ (nHH-1, PTC-0), (nHH-1, _en_-1), (EN-1, nHH-1), (nHH-1, _ci_-0), (CI-0, nHH-1), (nHH-1, CIA-0), (nHH-1, _ptc_-0), (nHH-1, _hh_-1), (HH-1, nHH-1), (nHH-1, PH-0) \} & & \\ \hline
100000001001101 & \{ (CIR-1, nWG-1) \} & (SLP-1, nHH-0, PTC-1, nWG-1) & \(\mathbf{M_{13}}\), \(\mathbf{M_{nWG-1}}\) \\ \hline
00011100010000001 & \{ (nHH-0, PTC-0), (SMO-1, nHH-0) \} & (nWG-1, PTC-0, nHH-0, SLP-0) & \(\mathbf{M_{7}}\), \(\mathbf{M_{9}}\), \(\mathbf{M_{4}}\) \\ \hline
000000001111111010 & \{ (_wg_-0, _ptc_-1), (_ptc_-1, WG-0), (_wg_-0, PH-1), (_wg_-0, CIA-1), (WG-0, PH-1), (WG-0, CIA-1) \} & (nWG-0, nHH-1, SLP-0, _wg_-0) & \(\mathbf{M_{11}}\), \(\mathbf{M_{wg-0}}\), \(\mathbf{M_{SLP-0}}\) \\ \hline
0001110100000001 & \{ (PTC-1, _en_-1), (SMO-0, _en_-1), (SMO-0, EN-1), (PTC-1, EN-1), (PTC-1, _ci_-0), (SMO-0, _ci_-0), (CI-0, SMO-0), (PTC-1, CI-0), (PTC-1, _hh_-1), (SMO-0, _hh_-1), (PTC-1, HH-1), (CIR-0, SMO-0), (HH-1, SMO-0) \} & (nWG-1, PTC-1, SLP-0, nHH-0) & \(\mathbf{M_{7}}\), \(\mathbf{M_{12}}\) \\ \hline
00000001011010 & \{ (CIR-1, SLP-0) \} & (nWG-0, nHH-0, PTC-1, SLP-0) & \(\mathbf{M_{14}}\), \(\mathbf{M_{SLP-0}}\) \\ \hline
 & \{ (nWG-0, SLP-1, _wg_-0), (nWG-0, nHH-0, SLP-1), (CIR-1, SLP-1, nWG-0), (SLP-1, nWG-0, CIA-0), (SLP-1, SMO-0, nWG-0), (SLP-1, nWG-0, PH-0), (SLP-1, _ptc_-0, nWG-0) \} & & \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Minimal observability and control sets for the _drosophila_ single-cell SPN.** The attractors label the node states in order of SLP, _wg_, WG, _en_, EN, _hh_, HH, _ptc_, PTC, PH, SMO, _ci_, CI, CIA, CIR, nHH, nWG. The associated complex modules are colored to correspond to Table 1. Here, \(\mathbf{M_{4}}\) indicates \(\mathbf{M_{nHH-0}}\); \(\mathbf{M_{6}}\) indicates \(\mathbf{M_{SLP-1,nHH-1}}\); \(\mathbf{M_{7}}\) indicates \(\mathbf{M_{SLP-0,nWG-1}}\); \(\mathbf{M_{9}}\) indicates \(\mathbf{M_{PTC-0}}\); \(\mathbf{M_{10}}\) indicates \(\mathbf{M_{SLP-0,nWG-1,nHH-1}}\); \(\mathbf{M_{11}}\) indicates \(\mathbf{M_{nWG-0,nHH-1}}\); \(\mathbf{M_{12}}\) indicates \(\mathbf{M_{nHH-0,PTC-1}}\); \(\mathbf{M_{13}}\) indicates \(\mathbf{M_{SLP-1,nHH-0,PTC-1}}\); finally, \(\mathbf{M_{14}}\) indicates \(\mathbf{M_{nWG-0,nHH-0,PTC-1}}\).
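To show how sets like those in Table 3 can be searched for mechanically, here is a small brute-force sketch (ours; `attractors` is assumed to be a list of dictionaries mapping each node name to its Boolean state in that attractor, and the helper name `minimal_observability_sets` is hypothetical):

```python
from itertools import combinations

def minimal_observability_sets(attractors, nodes):
    """Smallest sensor sets whose observed states differ across all attractors."""
    for k in range(1, len(nodes) + 1):
        found = [sensors for sensors in combinations(nodes, k)
                 if len({tuple(a[n] for n in sensors) for a in attractors})
                 == len(attractors)]
        if found:
            return found   # every returned set has the minimal size k
    return []
```

Note that Table 3 answers the slightly different per-attractor question of which observations certify one particular attractor, but the search structure is the same.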
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & Average driver set size & Min driver set size & Max driver set size & Driver superset size & FVS size \\ \hline
Example GRN & 2 & 2 & 2 & 2 & 3 \\
_Drosophila_ single-cell & 3.7 & 3 & 4 & 5 & 5 \\
_Thaliana_ & 6.1 & 5 & 7 & 9 & 9 \\
Yeast & 6.45 & 5 & 7 & 7 & 8 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: **Comparison of attractor controllability between genetic regulatory networks.** Statistics for the driver sets sufficient to fully resolve each fixed point attractor in the network, assuming pinning perturbation, are shown. Driver sets are equivalent to the seed sets of the minimal pathway modules whose unfolding includes all node states found in the respective attractor. The superset of driver nodes needed (based on the set of s-units sufficient to control to each attractor) is compared to that predicted by feedback vertex set theory [9, 10]. See the caption of Fig. 5 for a description of the networks used here.

\begin{table}
\begin{tabular}{l c c c c c c c}
\hline \hline
 & Nodes & S-units & Attractors & \(\overline{\mathrm{D}}(\Pi^{*})\) & \(|\Pi^{*}|\) & avg. \(|S_{i}|\in\Pi^{*}\) & max \(|S_{i}|\in\Pi^{*}\) \\ \hline
Example GRN & 6 & 12 & 5 & 0.83 & 4 & 3.25 & 5 (83\%) \\
_Drosophila_ single-cell & 17 & 34 & 10 & 0.81 & 8 & 5.25 & 16 (94\%) \\
_Thaliana_ & 15 & 30 & 10 & 0.98 & 11 & 2.82 & 11 (73\%) \\
Yeast & 12 & 24 & 11 & 1.0 & 7 & 3.43 & 12 (100\%) \\ \hline
\begin{tabular}{l} _Drosophila_ parasegment \\ Leukemia \end{tabular} & 60 & 120 & ? & 0.87 & 32 & 4.44 & 33 (55\%) \\ \hline \hline
\end{tabular}
\end{table}
Table 5: **Comparison of dynamical modularity between genetic regulatory networks.** The optimal cover \(\Pi^{*}\) is estimated for each network by maximizing the mean dynamical modularity among covers across seed set sizes, assuming pinning perturbation. Six networks are considered: the example GRN from Fig. 2 in the main text, the _drosophila_ single-cell and parasegment SPNs [1], the _Arabidopsis thaliana_ cell-fate specification network [6], the yeast cell-cycle network [7], and the T-LGL leukemia network [8]. For the first grouping of networks, \(\Pi^{*}\) was calculated considering all possible covers of \(q\leqslant 6\) core complex modules with seed set size \(s\leqslant 5\), with higher \(q\) values used when computationally feasible. By contrast, \(\Pi^{*}\) for the _drosophila_ parasegment and leukemia networks was estimated using the greedy heuristic mentioned in Section 2.3. For each network the mean dynamical modularity of the optimal cover, \(\overline{\mathrm{D}}(\Pi^{*})\), the size of the estimated optimal cover, \(|\Pi^{*}|\), the average module size within the optimal cover, avg. \(|S_{i}|\in\Pi^{*}\), and the maximum module size within the optimal cover, max \(|S_{i}|\in\Pi^{*}\), is shown. Percentages are calculated based on the number of nodes in the network.

\begin{table}
\begin{tabular}{l l l l|l l}
\hline \hline
 & Yeast & _Thaliana_ & _Drosophila_ single-cell & _Drosophila_ parasegment & Leukemia \\ \hline
Network size & 12 & 15 & 17 & 60 & 60 \\ \hline
Number of attractors & 11 & 10 & 10 & ? & ? \\ \hline
\(s=1\): number of modules, maximum module size (\% of network) & 17, 4 (33\%) & 16, 8 (53\%) & 14, 9 (53\%) & 40, 13 (22\%) & 48, 24 (40\%) \\ \hline
\(s=2\): number of modules, maximum module size (\% of network) & 12, 10 (83\%) & 7, 12 (80\%) & 8, 16 (94\%) & 40, 28 (47\%) & 64, 41 (68\%) \\ \hline
\(s=3\): number of modules, maximum module size (\% of network) & 12, 12 (100\%) & 3, 14 (93\%) & 6, 17 (100\%) & 152, 52 (87\%) & 223, 53 (88\%) \\ \hline
\(s=4\): number of modules, maximum module size (\% of network) & 4, 12 (100\%) & 0 & 0 & 472, 56 (93\%) & 646, 54 (90\%) \\ \hline
\(s=5\): number of modules, maximum module size (\% of network) & 2, 12 (100\%) & 0 & 0 & 1073, 59 (98\%) & 1446, 56 (93\%) \\ \hline
\(s=6\): number of modules, maximum module size (\% of network) & 0 & 0 & 0 & 1475, 60 (100\%) & 2353, 57 (95\%) \\ \hline
Min fixed point seed set size (\% of network) & 3 (25\%) & 4 (27\%) & 3 (18\%) & 6 (10\%) & 8 (13\%) \\ \hline \hline
\end{tabular}
\end{table}
Table 7: **Comparison of core complex modules between genetic regulatory networks.** The number of core complex modules, along with the maximum module size, is found per network per seed set size \(s\). Additionally, the minimal seed set size found to fully resolve every variable state (min fixed point seed set size) is shown; note that this fixed point is not necessarily an attractor of the original network, due to the pinning of the seeds involved. It is apparent that variable states in each network are quickly resolved as \(s\) is increased when considering the largest complex module. In comparing the larger networks, the leukemia network has more core complex modules than the _drosophila_ parasegment per \(s\) value, suggesting more complicated dynamics. Percentages are calculated based on the number of variables (nodes) in the network. See the caption of Fig. 5 for a description of the networks used here.
2305.12289
Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond
Is the output softmax layer, which is adopted by most language models (LMs), always the best way to compute the next word probability? Given so many attention layers in a modern transformer-based LM, are the pointer networks redundant nowadays? In this study, we discover that the answers to both questions are no. This is because the softmax bottleneck sometimes prevents the LMs from predicting the desired distribution and the pointer networks can be used to break the bottleneck efficiently. Based on the finding, we propose several softmax alternatives by simplifying the pointer networks and accelerating the word-by-word rerankers. In GPT-2, our proposals are significantly better and more efficient than mixture of softmax, a state-of-the-art softmax alternative. In summarization experiments, without significantly decreasing its training/testing speed, our best method based on T5-Small improves factCC score by 2 points in CNN/DM and XSUM dataset, and improves MAUVE scores by 30% in BookSum paragraph-level dataset.
Haw-Shiuan Chang, Zonghai Yao, Alolika Gon, Hong Yu, Andrew McCallum
2023-05-20T21:52:24Z
http://arxiv.org/abs/2305.12289v1
# Revisiting the Architectures like Pointer Networks to Efficiently Improve the Next Word Distribution, Summarization Factuality, and Beyond

###### Abstract

Is the output softmax layer, which is adopted by most language models (LMs), always the best way to compute the next word probability? Given so many attention layers in a modern transformer-based LM, are the pointer networks redundant nowadays? In this study, we discover that the answers to both questions are no. This is because the softmax bottleneck sometimes prevents the LMs from predicting the desired distribution and the pointer networks can be used to break the bottleneck efficiently. Based on the finding, we propose several softmax alternatives by simplifying the pointer networks and accelerating the word-by-word rerankers. In GPT-2, our proposals are significantly better and more efficient than mixture of softmax, a state-of-the-art softmax alternative. In summarization experiments, without significantly decreasing its training/testing speed, our best method based on T5-Small improves the factCC score by 2 points on the CNN/DM and XSUM datasets, and improves MAUVE scores by 30% on the BookSum paragraph-level dataset.

## 1 Introduction

When recurrent neural networks such as LSTM Hochreiter and Schmidhuber (1997) were the mainstream language model (LM) architecture, pointer networks, or so-called copy mechanisms Gu et al. (2016), were shown to improve the state-of-the-art LMs for next word prediction Merity et al. (2017) and summarization See et al. (2017) by a large margin. However, after the transformer Vaswani et al. (2017) became the dominant LM architecture, pointer networks have rarely been used in state-of-the-art pretrained LMs. One major reason is that the attention mechanism in every transformer layer can learn to copy the words from the context, so it seems to be redundant to add a copying mechanism on top of the transformer.

In this paper, we demonstrate that architectures like pointer networks can still substantially improve state-of-the-art transformer LM architectures such as GPT-2 Radford et al. (2019) and T5 Raffel et al. (2020), mainly by breaking the bottleneck of their final softmax layer Yang et al. (2018); Chang and McCallum (2022). In Figure 1, we illustrate a simple example from Chang and McCallum (2022) to explain the softmax bottleneck and why pointer networks could alleviate the problem.

Figure 1: Illustration of the softmax bottleneck and pointer network using an example from Chang and McCallum (2022). GPT-2 cannot output both _king_ and _woman_ as the possible next word due to the parallelogram structure in the output word embedding space, while the pointer network could solve this by directly copying words from the context. The standard softmax estimates the probabilities of outputting _king_ and _woman_ by the dot products between the hidden state \(\mathbf{h}_{c_{t},V}\) and their global word embeddings. By contrast, the pointer networks compute the dot products between the projected current hidden state \(\mathbf{h}_{c_{t},S}\) and the projected hidden states (\(\mathbf{h}_{e,k}\) and \(\mathbf{h}_{e,w}\)) for _king_ and _woman_ to estimate their probabilities.

When predicting the next word, most LMs would try to output a hidden state \(\mathbf{h}_{c_{t},V}\) that is close to all the next word possibilities. For example, when the next word should be either _king_ or _woman_ with similar probabilities, the ideal hidden state is supposed to be the average of the global output word embeddings of _king_ and _woman_.
However, there might be other interfering words (_queen_ and _man_ in this case) between the ideal next word candidates, which force the LM to output the wrong distribution. To solve this problem, we can let the LMs predict the probability of copying the words in the context separately, by paying attention to the previous hidden states Gu et al. (2016); we call this kind of architecture a pointer network in this paper. That is, we can compute the dot products with the hidden state of _king_, \(\mathbf{h}_{e,k}\), and the hidden state of _woman_, \(\mathbf{h}_{e,w}\), rather than with their global output word embeddings, in order to estimate the probabilities of copying these two words in the context.

Our experiments show that pointer networks consistently improve the performance of GPT-2 in next word prediction and the quality of summarization from T5 and BART. Contrary to the mainstream explanation in previous pointer network literature, we discover that most of the improvements in our experiments do not come from the attention mechanism. To study these improvements, we propose a very simple pointer network variant that does not use any previous hidden states, and we show that the proposed method can achieve similar improvements. As shown in Figure 2, we simply project the last hidden state into two embeddings. One embedding, \(\mathbf{h}_{c_{t},S}\), is used to compute the dot products with the context words, and \(\mathbf{h}_{c_{t},V}\) is used for the dot products with the other words. Then, GPT-2 can output the hidden state for context words \(\mathbf{h}_{c_{t},S}\) as the average embedding of _king_ and _woman_ without being interfered with by the words _man_ and _queen_, which are handled by \(\mathbf{h}_{c_{t},V}\). We call this method context partition.

In addition to words in the context, we can also use another embedding for the top-k likely next words. This can be viewed as a very simple and efficient alternative to a reranker, so we call it reranker partition. In our experiments, we show that the context partition performs similarly to pointer networks, while combining a pointer network, context partition, and reranker partition significantly outperforms each individual method. Compared to state-of-the-art solutions for alleviating the softmax bottleneck such as mixture of softmax Yang et al. (2018); Chang and McCallum (2022), our proposed method is more efficient while achieving lower perplexity on GPT-2. Furthermore, we find that adding a very expensive word-by-word reranker only improves our method slightly, which suggests the difficulty of further improving the final softmax layer beyond the proposed alternatives.

In the text completion task using GPT-2, we find that the proposed softmax alternatives reduce hallucination by copying more proper nouns from the context, even though we did not provide any part-of-speech information during training. In summarization, our methods and pointer networks output a more specific summary, increase the factuality, and consistently improve 9 metrics, especially in the smaller language models. Finally, we show that the softmax bottleneck problem is not completely solved in GPT-3.5 in the limitation section.
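For readers who want the copy mechanics spelled out, here is a minimal PyTorch sketch of the distribution a pointer network produces over context tokens (the tensor and function names are ours; this is an illustrative simplification, not the exact parameterization used in this paper):

```python
import torch
import torch.nn.functional as F

def copy_distribution(h_t, context_states, context_ids, vocab_size):
    """h_t: (d,) current hidden state; context_states: (T, d) previous hidden
    states; context_ids: (T,) vocabulary ids of the context words.
    Returns a (vocab_size,) distribution with mass only on context words."""
    scores = context_states @ h_t            # dot-product attention over context
    attn = F.softmax(scores, dim=-1)         # (T,)
    dist = torch.zeros(vocab_size)
    dist.index_add_(0, context_ids, attn)    # repeated words accumulate mass
    return dist

# Toy usage: 5 context tokens, vocabulary of size 10.
d, T, V = 8, 5, 10
dist = copy_distribution(torch.randn(d), torch.randn(T, d),
                         torch.tensor([1, 4, 4, 7, 2]), V)
print(dist.sum())  # ~1.0
```

A full pointer network then mixes this copy distribution with the ordinary softmax distribution via a learned gate, as in the pointer-sentinel and pointer-generator designs cited above.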
### Main Contributions

* We propose a series of efficient softmax alternatives that unify the ideas of pointer network, reranker, multiple embeddings, and vocabulary partitioning.1 Footnote 1: Our codes are released at [https://github.com/iesl/Softmax-CPR](https://github.com/iesl/Softmax-CPR)
* We evaluate the proposed softmax alternatives in text completion tasks and summarization tasks using various metrics to identify where our methods improve the most.
* Our experiments indicate that pointer networks and our proposed alternatives can still improve modern transformer-based LMs. By breaking the softmax bottleneck, our methods learn to sometimes copy the context words to reduce generation hallucination and sometimes exclude the context words to reduce repetition. Besides, we find that the softmax bottleneck problem is not completely solved even by the huge model size of GPT-3.5.

Figure 2: We simplify the pointer network / reranker by using another embedding \(\mathbf{h}_{c_{t},S}\) for the words in the context / the top-k likely words.

## 2 Background

Before introducing our method, we first briefly review the problem we are solving and its state-of-the-art solutions.

### Softmax Bottleneck Problem

Most LMs use a softmax layer to compute the final probability of predicting the word \(x\): \[P_{M}(x|c_{t})=\frac{\text{exp}(\text{Logit}(x,c_{t}))}{\sum_{x^{\prime}}\text{exp}(\text{Logit}(x^{\prime},c_{t}))}, \tag{1}\] where \(c_{t}\) denotes the context words. Typically, the logit \(\text{Logit}(x,c_{t})=(\mathbf{h}_{c_{t}}^{M})^{T}\mathbf{w}_{x}\), where \(\mathbf{h}_{c_{t}}^{M}\) is the \(M\)th-layer hidden state given the input context \(c_{t}\) and \(\mathbf{w}_{x}\) is the output word embedding of \(x\). One problem is that the output word embeddings \(\mathbf{w}_{x}\) are global and independent of the context. After pretraining, similar words have similar output word embeddings. However, the similarity structure in the word embedding space might prevent LMs from outputting the desired distribution. The parallelogram structure among the embeddings of _king_, _queen_, _woman_, and _man_ is a simple example. Chang and McCallum (2022) generalize this observation and show that some words in a small subspace would create some multi-mode distributions that an LM cannot output using a single hidden state \(\mathbf{h}_{c_{t}}\) in the softmax layer.

### Mixture of Softmax Method

To overcome the bottleneck, one natural solution is to have multiple hidden states, each corresponding to a group of possible words Yang et al. (2018). For example, we can have one hidden state for _king_ and another hidden state for _woman_. One major concern of this mixture of softmax (MoS) approach is the computational overhead. MoS needs to compute the final softmax multiple times and merge the resulting distributions. That is, we need to compute the dot products between every hidden state and all the words in the vocabulary, which is expensive especially when the vocabulary size is large.

### Multiple Input State Enhancement

In MoS, the multiple hidden states come from linear projections of the last hidden state. Chang and McCallum (2022) point out that the total degree of freedom among the multiple hidden states is limited by the dimensionality of the hidden state.
To allow LMs to move multiple hidden states more freely, Chang and McCallum (2022) propose to concatenate the projection of a block of hidden states with the last hidden state \(\mathbf{h}_{c_{t}}^{M}\) so as to increase its dimensionality: \[\mathbf{q}_{c_{t}}=\mathbf{h}_{c_{t}}^{M}\oplus GELU\left(L^{h}(\oplus_{i,m}\mathbf{h}_{c_{t-i}}^{M-m})\right), \tag{2}\] where \(GELU\) is the non-linear transformation used in GPT-2 and \(L^{h}\) is a linear transformation that allows us to consider more hidden states without significantly increasing the model size. \(\oplus_{i,m}\mathbf{h}_{c_{t-i}}^{M-m}\) is the concatenation of a block of hidden states. We set the block size to 3x3 in our GPT-2 experiments and 1x3 in our summarization experiments (i.e., considering the last 3 hidden states in the last layer as shown in Figure 3).

## 3 Methods

To break the softmax bottleneck more efficiently than MoS, our overall strategy is simple. If we can identify a small partition of words that are very likely to become the next word, we can compute the dot products between a hidden state and the embeddings of only these likely words instead of all the words as in MoS. For example, if we can identify that _king_ and _woman_ are much more likely to appear than _queen_ and _man_, we can compute the dot product between a hidden state and the embeddings of only _king_ and _woman_ without being interfered by other words. Specifically, when we compute the next word probability in Equation 1, the logit of the word \(x\) given the context \(c_{t}\) is \[\text{Logit}(x,c_{t})=\begin{cases}\mathbf{f}_{c_{t},S}^{T}\mathbf{e}_{x}&\text{if }x\in S\\ \mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x}&\text{O/W}\end{cases}, \tag{3}\]

\begin{table} \begin{tabular}{l l|l l} Method & Abbr. & Partition (\(S\)) & Word Emb (\(\mathbf{e}_{x}\)) \\ \hline Context Partition & C & Decoder context & Global word emb \\ Encoder Partition & E & Encoder input & Global word emb \\ PS (LD) Merity et al. (2017) & P & Decoder context & Decoder state \\ PG (LE) See et al. (2017) & P & Encoder input & Encoder state \\ Reranker Partition & R & Top k & Global word emb \\ \end{tabular} \end{table} Table 1: Comparison of different softmax alternatives and their abbreviation (Abbr.) using Equation 3. PS: Pointer Sentinel. PG: Pointer Generator. LD: local decoder embedding. LE: local encoder embedding.

where \(\mathbf{f}_{c_{t},S}=L_{S}^{f}(\mathbf{q}_{c_{t}})\) and \(\mathbf{f}_{c_{t},V}=L_{V}^{f}(\mathbf{q}_{c_{t}})\) are linear projections of the hidden state concatenation \(\mathbf{q}_{c_{t}}\) in Equation 2. As shown in Table 1, different softmax alternatives have different ways of constructing the set \(S\) and use different word embeddings \(\mathbf{e}_{x}\). To simplify our explanation, we will focus on the decoder-only LM (i.e., GPT-2) first and then extend our method to encoder-decoder LMs (i.e., T5 and BART).

### GPT-2

We will explain each softmax alternative individually and their connections to previous work such as pointer networks or rerankers.
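As a minimal sketch of the partitioned logit in Equation 3 (our own illustration with hypothetical names and shapes; see the released repository for the actual implementation):

```python
import torch

def partitioned_logits(f_S, f_V, W, S_ids, e_x=None):
    """Sketch of Equation (3): words in the partition S are scored against
    f_{c_t,S} (optionally with their own embeddings e_x), everything else
    against f_{c_t,V}. W: (|V|, d) global embeddings; S_ids: long tensor."""
    logits = W @ f_V                        # default: f_{c_t,V}^T w_x
    e_S = W[S_ids] if e_x is None else e_x  # e_x = w_x for the context partition
    logits[S_ids] = e_S @ f_S               # overwrite: f_{c_t,S}^T e_x
    return logits

# Tiny usage: a context partition over word ids {2, 5} of a 10-word vocabulary.
d, V = 4, 10
W = torch.randn(V, d)
q = torch.randn(2 * d)                      # stand-in for q_{c_t} of Equation 2
L_S, L_V = torch.nn.Linear(2 * d, d), torch.nn.Linear(2 * d, d)  # L_S^f, L_V^f
p = torch.softmax(partitioned_logits(L_S(q), L_V(q), W, torch.tensor([2, 5])), -1)
```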
#### 3.1.1 Pointer Network (P) as Local Word Embedding

Similar to Pointer Sentinel (PS) (Merity et al., 2017), we treat the words in the context differently (\(S=\{x|x\in c_{t}\}\)) and let their word embeddings \(\mathbf{e}_{x}\) come from the previous hidden states: \[\mathbf{e}_{x}=\mathbf{f}_{x,c_{t},LD}=\frac{\sum_{i=1}^{t}\mathbb{1}_{c_{t}^{i}=x}L_{LD}^{f}(\mathbf{q}_{c_{t}^{i}})}{\sum_{i=1}^{t}\mathbb{1}_{c_{t}^{i}=x}}, \tag{4}\] where \(c_{t}^{i}\) is the \(i\)th input word in the context \(c_{t}\), \(L_{LD}^{f}\) is a linear layer, and \(\mathbb{1}_{c_{t}^{i}=x}=1\) if \(c_{t}^{i}=x\). As a result, we can use the GPT-2 model not only to predict the hidden states \(\mathbf{f}_{c_{t},S}=\mathbf{f}_{c_{t},PD}=L_{PD}^{f}(\mathbf{q}_{c_{t}})\) and \(\mathbf{f}_{c_{t},V}\) but also to predict the word embeddings of the context words \(\mathbf{e}_{x}\). Unlike the global word embedding \(\mathbf{w}_{x}\), the local word embedding \(\mathbf{e}_{x}\) is context-dependent, so the LM can break the softmax bottleneck by adjusting the similarity of words based on the context. For example, GPT-2 could increase the similarity between \(\mathbf{e}_{\text{king}}\) and \(\mathbf{e}_{\text{woman}}\) to easily output high probabilities for both words. We call this version of pointer network local decoder (LD) embedding; it has some minor differences compared to PS (Merity et al., 2017) and other variants. For example, we merge their logits while PS merges their probabilities, and PS does not normalize when computing \(\mathbf{e}_{x}\). In our experiments, we show that these pointer network variants all yield very similar improvements in modern LMs.

#### 3.1.2 Context Partition (C)

To understand the source of the improvements from pointer networks, we simplify their architectures by setting the word embedding \(\mathbf{e}_{x}=\mathbf{w}_{x}\), while the partition \(S\) is still the set of context words. Although much simpler, the LM with this context partition method can still break the softmax bottleneck by properly coordinating the hidden state dedicated to the context words, \(\mathbf{f}_{c_{t},S}=\mathbf{f}_{c_{t},C}=L_{C}^{f}(\mathbf{q}_{c_{t}})\), and the hidden state for the other words, \(\mathbf{f}_{c_{t},V}\). Compared to the pointer network, one advantage of the context partition is that the LM can still leverage the learned global word similarity when estimating the probabilities of context words.

Figure 3: Architectures of our method for T5/BART that compute Logit\({}_{CEPR}\) in Equation 6. In GPT-2, we use the same architecture except that we take the 3x3 input hidden state block rather than the 1x3 block and there are no encoder-related components, which are marked by dotted lines.

#### 3.1.3 Reranker Partition (R)

In some cases, the possible next words might not be mentioned in the context. For example, in the context _My favorite actor is Ryan [MASK]_, the next word could be _Reynolds_, _Gosling_, or the last name of another person named _Ryan_. Hence, using only the context partition does not completely solve the multimodal distribution problem. Inspired by the idea of the reranker, we set \(S\) to be the top \(k\) words with the highest logits \(\mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x}\). In practice, finding an ideal \(k\) could be difficult. When \(k\) is small, the reranker partition might not include the very likely next word. When \(k\) is large, the reranker partition might not be able to separate the output candidates from the interfering words.
To alleviate the problem, we can have multiple reranker partitions and use different hidden state embeddings (e.g., \(\mathbf{f}_{c_{t},R1}\) and \(\mathbf{f}_{c_{t},R2}\)) for different partitions.

#### 3.1.4 Hybrid Approach (CPR)

Local embeddings in the pointer networks and global embeddings in the context partition are complementary. Using local embeddings is representationally powerful, while using global embeddings can leverage the global similarity of words. Hence, we can combine the two methods by summing their dot products. For the methods that use different \(S\), we can simply determine an order of computing the dot products and let the later dot products overwrite the existing values. In our experiments, we always use the order illustrated in Figure 3. That is, we compute the logits (\(\text{Logit}_{CPR}(x,c_{t})\)) by \[\begin{cases}\mathbf{f}_{c_{t},C}^{T}\mathbf{w}_{x}+\mathbf{f}_{c_{t},PD}^{T}\mathbf{f}_{x,c_{t},LD}\ \ \text{if}\ x\in c_{t}\\ \mathbf{f}_{c_{t},R1}^{T}\mathbf{w}_{x}\ \ \text{if}\ x\in W(k_{1})-c_{t}\\ \mathbf{f}_{c_{t},R2}^{T}\mathbf{w}_{x}\ \ \text{if}\ x\in W(k_{2})-W(k_{1})-c_{t}\\ \mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x}\ \ \text{O/W}\end{cases}, \tag{5}\] where \(W(k_{2})\) is the set of top \(k_{2}\) words with the highest \(\mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x}\) and \(W(k_{1})\) is the set of top \(k_{1}\) words with the highest \(\max(\mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x},\mathbf{f}_{c_{t},R2}^{T}\mathbf{w}_{x})\).

### T5 and BART

In the encoder-decoder architectures, our local decoder embedding, context partition, and reranker partitions are still applicable. Besides, we can leverage the words in the encoder input to further improve the performance.

#### 3.2.1 Encoder Partition (E) and Local Encoder Embedding (P)

Similar to the context partition, the encoder partition handles the words in the encoder input \(I\) differently by setting \(S=\{x|x\in I\}\) and using the global word embedding \(\mathbf{e}_{x}=\mathbf{w}_{x}\). As in Equation 4, we can also let the hidden states in the last layer pass through another linear layer \(L_{LE}^{f}()\) to predict the embeddings of the words in the encoder input. This method is called local encoder (LE) embedding.

#### 3.2.2 Hybrid Approach (CEPR)

Similar to GPT-2, we combine the local encoder embedding and the encoder partition for computing the probabilities of the words that are in the encoder context but not in the decoder context. As shown in Figure 3, we compute \(\text{Logit}_{CEPR}(x,c_{t})\) by \[\begin{cases}\mathbf{f}_{c_{t},C}^{T}\mathbf{w}_{x}+\mathbf{f}_{c_{t},PD}^{T}\mathbf{f}_{x,c_{t},LD}\ \ \text{if}\ x\in c_{t}\\ \mathbf{f}_{c_{t},E}^{T}\mathbf{w}_{x}+\mathbf{f}_{c_{t},PE}^{T}\mathbf{f}_{x,I,LE}\ \ \text{if}\ x\in I-c_{t}\\ \mathbf{f}_{c_{t},R1}^{T}\mathbf{w}_{x}\ \ \text{if}\ x\in W(k_{1})-c_{t}-I\\ \mathbf{f}_{c_{t},V}^{T}\mathbf{w}_{x}\ \ \ \text{O/W}\end{cases}, \tag{6}\] which is the same as Equation 5 except that we add the encoder partition and the local encoder embedding, and we remove the second reranker partition.

## 4 Experiments

The pointer network was a popular technique in language modeling (Merity et al., 2017) and summarization (See et al., 2017). Thus, we also focus on these two fundamental applications.

### GPT-2

We follow the setup in Chang and McCallum (2022) to continue training GPT-2 on Wikipedia 2021 and OpenWebText (Radford et al., 2019).
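Before turning to the results, the following sketch makes the overwrite order of Equation 5 concrete (again our own illustration with assumed names and shapes, not the released implementation; Equation 6 would additionally insert the encoder partition between the context and reranker steps):

```python
import torch

def cpr_logits(q, W, ctx_ids, ctx_ld_emb, heads, k1=20, k2=100):
    """Sketch of Logit_CPR in Equation (5). `heads` maps q_{c_t} to the five
    projected states (f_V, f_R2, f_R1, f_C, f_PD); names are our shorthand.
    ctx_ld_emb holds the averaged local decoder embeddings f_{x,c_t,LD} of
    Equation (4), one row per context word id in ctx_ids."""
    f_V, f_R2, f_R1, f_C, f_PD = heads(q)
    logits = W @ f_V                            # default: f_{c_t,V}^T w_x
    cand2 = torch.topk(logits, k2).indices      # W(k2): top-k2 under f_V
    r2 = W[cand2] @ f_R2
    sel = logits.clone()
    sel[cand2] = torch.maximum(sel[cand2], r2)  # max(f_V^T w, f_R2^T w)
    cand1 = torch.topk(sel, k1).indices         # W(k1)
    logits[cand2] = r2                          # W(k2) partition -> f_R2
    logits[cand1] = W[cand1] @ f_R1             # W(k1) overwrites -> f_R1
    logits[ctx_ids] = W[ctx_ids] @ f_C + ctx_ld_emb @ f_PD  # context last
    return logits

# Tiny usage with five hypothetical linear heads packed into one projection.
d, V = 8, 1000
proj = torch.nn.Linear(d, 5 * d)
heads = lambda q: proj(q).chunk(5)
W, q, ctx_ids = torch.randn(V, d), torch.randn(d), torch.tensor([3, 17])
out = cpr_logits(q, W, ctx_ids, torch.randn(2, d), heads, k1=5, k2=20)
```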
#### 4.1.1 Perplexity Comparison

In Table 2, we first compare the methods' predictions of the next word distribution using the testing data perplexity, which is a standard metric in LM architecture studies. In the table, Mi refers to the multiple input state enhancement, which is proposed to break the softmax bottleneck more effectively (please see the details in Section 2.3 and Chang and McCallum (2022)). As we can see, **Softmax + CPR:20,100 + Mi**, which combines all the efficient approaches (i.e., context partition, reranker partition, and local decoder embedding), results in better performance and faster inference speed than the mixture of softmax (**MoS**) (Yang et al., 2018; Chang and McCallum, 2022). The inference speed is measured with our pure PyTorch implementation, which we believe could be further accelerated by implementing some new PyTorch operations in CUDA code. If only one method is used, the context partition (**Softmax + C + Mi**) is better than the reranker partitions (**Softmax + R:20,100 + Mi**) while performing similarly to the local decoder word embedding (**Softmax + P + Mi**), Pointer Generator (**PG + Mi**) (See et al., 2017), and Pointer Sentinel (**PS + Mi**) (Merity et al., 2017).2 Their similar performance indicates that the improvement of pointer networks comes from breaking the softmax bottleneck. The significantly better performance of **PS + Mi** compared to **PS** further supports this finding. Footnote 2: Notice that the pointer networks from the previous work were originally designed for RNNs. To add them on top of the transformer-based LMs and make them more comparable to our methods, we simplify their architectures a little. Please see Appendix C.2 for more details. To know how well our method breaks the softmax bottleneck, we implement a word-by-word reranker model on GPT-2, which appends the most likely 100 words to the context when predicting each next word (see Appendix C.3 for more details). In Table 3, we show that our efficient softmax alternative **Softmax + CPR:20,100 + Mi** achieves significantly lower perplexity. Furthermore, the word-by-word reranker is at least 10 times slower during training. Combining the word-by-word reranker with our method only improves the perplexity very slightly, which suggests the challenge of further improving LMs by breaking the softmax bottleneck.

#### 4.1.2 Generated Text Comparison

Next, we would like to understand how the distribution improvement affects text generation. We sample some contexts from the test set of Wikipedia 2021 and compare the quality of the text generated by the different models given the contexts. The quality is measured by the ROUGE-1 F1 scores between the generated text and the actual continuation. To know how much the different models copy from the context, we also report the ROUGE-1 scores between the generation and the contexts. The results in Table 4 show that the different methods have very similar overall ROUGE-1 scores.

\begin{table} \begin{tabular}{l|c c|c c} & \multicolumn{2}{c|}{All} & \multicolumn{2}{c}{Proper Noun} \\ Model Name & Ref & Context & Ref & Context \\ \hline Softmax + Mi & 22.90 & 24.04 & 7.49 & 14.84 \\ MoS + Mi & 22.88 & 23.98 & 7.70 & 15.49 \\ PS + Mi & 22.85 & 25.01 & **8.16** & 18.21 \\ Softmax + CPR:20,100 + Mi & **23.05** & 25.36 & **8.16** & 17.92 \\ \hline \end{tabular} \end{table} Table 4: ROUGE-1 F1 (%) of different methods on GPT-2. We compare the scores between the generated text and the reference (i.e., continuation), and between the generation and the context. More methods and metrics are reported in Table 8.
\begin{table} \begin{tabular}{l|c c c c|c c c c} & \multicolumn{4}{c}{GPT-2 Small} & \multicolumn{4}{c}{GPT-2 Medium} \\ Model Name & Size & Time (ms) & OWT (\(\downarrow\)) & Wiki (\(\downarrow\)) & Size & Time (ms) & OWT (\(\downarrow\)) & Wiki (\(\downarrow\)) \\ \hline Softmax (GPT-2) & 125.0M & 82.9 & 18.96 & 24.28 & 355.9M & 207.8 & 15.81 & 20.12 \\ Softmax + Mi & 130.9M & 85.6 & 18.74 & 24.08 & 366.4M & 213.8 & 15.71 & 20.07 \\ Mixture of Softmax (MoS) (Yang et al., 2018) & 126.2M & 130.2 & 18.97 & 24.10 & 358.0M & 262.9 & 15.71 & 19.95 \\ MoS + Mi (Chang and McCallum, 2022) & 133.3M & 133.2 & 18.68 & 23.82 & 370.6M & 268.2 & 15.61 & 19.86 \\ Pointer Generator (PG) (See et al., 2017) & 126.2M & 106.0 & 18.67 & 23.70 & 358.0M & 237.8 & 15.72 & 19.95 \\ Pointer Sentinel (PS) (Merity et al., 2017) & 126.2M & 94.1 & 18.70 & 23.79 & 358.0M & 218.3 & 15.72 & 19.95 \\ Softmax + R:20 + Mi & 132.1M & 90.4 & 18.67 & 24.03 & 368.5M & 203.6 & 15.64 & 19.94 \\ Softmax + R:20,100 + Mi & 133.3M & 101.1 & 18.69 & 23.93 & 370.6M & 228.5 & 15.61 & 19.89 \\ Softmax + C + Mi & 132.1M & 94.8 & 18.48 & 23.56 & 368.5M & 222.7 & 15.60 & 19.83 \\ Softmax + P + Mi & 133.3M & 99.1 & 18.58 & 23.66 & 370.6M & 214.7 & 15.63 & 19.90 \\ PG + Mi & 133.3M & 111.2 & 18.43 & 23.43 & 370.6M & 242.5 & 15.60 & 19.89 \\ PS + Mi & 133.3M & 98.0 & 18.48 & 23.53 & 370.6M & 224.6 & 15.60 & 19.87 \\ Softmax + CR:20,100 + Mi & 134.5M & 113.3 & 18.46 & 23.48 & 372.7M & 234.5 & 15.54 & 19.75 \\ Softmax + CPR:20,100 + Mi & 136.8M & 119.9 & 18.43 & 23.42 & 376.9M & 249.9 & 15.53 & 19.71 \\ MoS + CPR:20,100 + Mi & 139.2M & 165.1 & **18.39** & **23.29** & 381.1M & 300.6 & **15.44** & **19.57** \\ \end{tabular} \end{table} Table 2: Comparison of different methods on top of GPT-2. Wiki and OWT refer to the testing perplexity of Wikipedia 2021 and OpenWebText, respectively. Lower perplexity is better. Time is the inference time of a batch; Mi is the multiple input hidden state enhancement; C is the context partition; R:20,100 is the reranker partition with \(k_{1}=20\) and \(k_{2}=100\); P is the pointer network (i.e., local decoder embedding). Please see Equation 5 for the details of CPR. The best scores are highlighted.

\begin{table} \begin{tabular}{l c|l c} \hline Softmax + Mi & 29.33 & Softmax + wbwR:100 + Mi & 28.89 \\ \hline Softmax + CPR:20,100 + Mi & & Softmax + CPR:20,100 + wbwR:100 + Mi & 28.40 \\ \hline \end{tabular} \end{table} Table 3: Comparison between our method and the word-by-word reranker for the most likely 100 words (wbwR:100). The numbers are the validation perplexities on Wikipedia 2021 after training for 0.15 epochs.

Nevertheless, compared to **Softmax + Mi**, **Softmax + CPR:20,100 + Mi** is 21% more likely to copy the proper nouns (i.e., entity names) from the context and 9% more likely to generate the proper nouns in the actual continuation. This suggests that our method could alleviate the common incoherence problem of entities in generated text (Shuster et al., 2022; Papalampidi et al., 2022; Zhang et al., 2022; Guan et al., 2022; Goyal et al., 2022). In Table 8, we compare methods using more metrics to further support the conclusion.

#### 4.1.3 Qualitative Analysis

In Table 5, we visualize some distributions to explain our improvements. The softmax layer of GPT-2 is unable to properly learn to copy or exclude the words from the input context.
For example, **Softmax + Mi** and **MoS + Mi** might output "_There are plates, keys, scissors, toys, and balloons in front of me, and I pick up the phone_", which causes a hallucination problem, while **Softmax + CPR:20,100 + Mi** and **Pointer Sentinel (PS) + Mi** can output the mentioned options with similar probability by copying the words in the context. In addition, **GPT-2**, **MoS**, and **PS + Mi** are very likely to output "_I like tennis, baseball, golf, basketball, and tennis_". This repetition problem happens because the next word should be a word similar to the listed sports names, except for the sports that have already been mentioned, and the softmax layer has difficulty outputting a donut-shaped next word distribution in the embedding space. In contrast, **Softmax + CPR:20,100 + Mi** can learn to exclude the listed sports by putting very negative logits on the context words, which yields the desired donut-shaped distribution.

### T5 and BART in Summarization

We test our methods on two popular encoder-decoder LMs, T5 (Raffel et al., 2020) and BART (Lewis et al., 2020). We fine-tune the pretrained LMs with different softmax alternatives on two news summarization datasets: CNN/DM (See et al., 2017) and XSUM (Narayan et al., 2018); one narrative summarization dataset: BookSum at the paragraph level (Kryscinski et al., 2021); and one dialogue summarization dataset: SAMSUM (Gliwa et al., 2019). In the main paper, we evaluate the quality of summaries using four metrics. ROUGE-1 F1 (Lin, 2004) measures the unigram overlap between the generated summary and the ground truth summary; CIDEr (Vedantam et al., 2015) adds a tf-idf weighting on the n-gram overlap score to emphasize correct prediction of rare phrases; factCC (Kryscinski et al., 2020) evaluates the factuality of the summary; MAUVE (Pillutla et al., 2021) compares the word distributions of the summary and the ground truth in a quantized embedding space. To further support our conclusions, we also compare the quality measured by several other metrics, along with the model sizes, in Table 9 and Table 10. The results are reported in Table 6. Similar to the GPT-2 experiments, the results generally improve as we combine more partitions and local embedding approaches. This demonstrates that we can directly fine-tune the LMs with our softmax alternatives without expensive pretraining. Unlike the GPT-2 experiments, the multiple input hidden state enhancement (Mi) is not very effective, so we mainly compare the methods without Mi (i.e., \(\mathbf{q}_{c_{t}}=\mathbf{h}_{c_{t}}^{M}\), unlike Equation 2). We hypothesize that one possible reason is that we have not pretrained T5 and BART with our softmax alternatives. Our improvements are larger in smaller models. This is probably because, in a smaller word embedding space, there are more likely to be interfering words between the desired next word possibilities. Compared to our methods, the pointer networks perform well in BART-base but usually perform worse in other LMs. Further investigation is needed to explore the reasons. Compared to the ROUGE-1 score, the improvement percentage of CIDEr is higher overall. One major problem of summarization LMs is that the generated summary contains too many commonly used phrases (King et al., 2022), and our considerably higher CIDEr scores indicate an alleviation of this problem. Our improvement on factCC is also significant (Cao and Wang, 2021). Finally, our MAUVE improvement percentage on the BookSum Paragraph dataset reaches around 30% in T5-Small.
We hypothesize this is because news articles often mention globally known entity names (e.g., Obama), while the meaning of names in stories (e.g., John) is often defined by the context.

\begin{table} \end{table} Table 6: Comparison of the softmax alternatives (Softmax, CopyNet, PG, PS, and our partition methods) built on T5-Small, T5-Base, and BART-Base over the CNN/DM, XSUM, BookSum Paragraph, and SAMSUM datasets, reporting ROUGE-1 (R1), CIDEr, factCC, and MAUVE. [Numeric cells unrecoverable from the extraction.]

## 5 Related Work

Repetition and hallucination are two common problems in language generation tasks. One common solution for repetition is to avoid outputting the words in the context, which is often called unlikelihood training (Welleck et al., 2020; Jiang et al., 2022; Su et al., 2022). However, when the LM should mention some names in the context, this might exacerbate the hallucination problem. In contrast, our method can learn both to copy and to exclude the words in the context, as shown in Table 5. To alleviate the hallucination problem or satisfy some constraints, many recent generation models rerank the generated text (Deng et al., 2020; Gabriel et al., 2021; Cobbe et al., 2021; Ravaut et al., 2022; Krishna et al., 2022; Glass et al., 2022; An et al., 2022; Arora et al., 2022; Adolphs et al., 2022; Meng et al., 2022; Mireshghallah et al., 2022; Kumar et al., 2022; Wan and Bansal, 2022; Jiang et al., 2022a). Although effective, the rerankers usually significantly slow down the training and/or inference speed (as does our word-by-word reranker baseline) and might occupy extra memory resources. Our analyses demonstrate that parts of the hallucination and repetition problems come from the softmax bottleneck. The findings provide an explanation for the effectiveness of prior studies such as the above reranker approaches and pointer networks (Li et al., 2021; Zhong et al., 2022; Ma et al., 2023). Another example is encouraging the word embeddings to be isotropic (Wang et al., 2020; Su et al., 2022). Their improvement might also come from reducing the linear dependency of the candidate word embeddings. Nevertheless, their side effect of breaking the similarity structure in the word embedding space might hurt the generation quality in some cases. Concurrently with our work, Wan et al. (2023) also use the softmax bottleneck theory of Chang and McCallum (2022) to explain the improvement of a pointer network. Their empirical results also support our conclusion that the softmax bottleneck is a major cause of the factuality problem of LMs. Our work is motivated and inspired by Chang and McCallum (2022). In their work, they also propose to use different hidden states for different vocabulary partitions, but their partitioning is global and needs to be combined with the mixture of softmax (MoS) approach, which adds a significant overhead compared to the standard softmax layer. Our dynamic partitioning methods not only perform better but also greatly reduce the overhead by removing the reliance on MoS.

## 6 Conclusion

Since the transformer became the mainstream encoder and decoder for LMs, the output softmax layer has seemed to be the only reasonable option for computing the word probability distribution. Although simple and efficient, the softmax layer is inherently limited, while the existing solutions are relatively slow (Chang and McCallum, 2022). This work proposes a series of softmax alternatives that can improve text generation models without significantly increasing the computational costs. Our experiments suggest that the main improvement of the pointer network on top of a transformer comes from breaking the softmax bottleneck. Our results also indicate that the alternatives could alleviate some problems of hallucination, repetition, and overly generic generation.
Furthermore, all of the proposed alternatives can be applied to LMs that have already been pretrained using softmax, without requiring retraining from scratch. For the practitioner, we recommend using all the partitioning methods together to get the best performance, or using only the simple context partition to keep the architecture simple while getting the majority of the gain.

## 7 Acknowledgement

We thank Nader Akoury and the anonymous reviewers for their constructive feedback. This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, in part by IBM Research AI through the AI Horizons Network, in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative, and in part by the National Science Foundation (NSF) grant numbers IIS-1922090 and IIS-1763618. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.

## 8 Limitations

In our experiments, we find that the improvement of our methods tends to be larger in relatively smaller language models. Due to our limited access to computational resources, we are not able to try our methods on larger LMs. To know whether a larger LM still suffers from the softmax bottleneck problem, we input the examples we used in Table 5 to GPT-3.5 and report the results in Figure 4. We find that although GPT-3.5 greatly reduces the chance of hallucination compared to GPT-2, the next word distribution is still not ideal. For example, in Figure 4(a), although the incorrect answer _queen_ receives only a small probability, GPT-3.5 puts around 67% probability on _woman_. Similarly, even though GPT-3.5 is unlikely to hallucinate the sentence _There are plates, keys, scissors, toys, and balloons in front of me, and I pick up the phone_ as GPT-2 does, Figure 4(b) and Figure 4(d) show that the output distribution is still heavily biased toward one of the options, and the most likely next word could change if the order of the options in the context changes. These results suggest that increasing model size indeed alleviates the softmax bottleneck problem, but the problem is not completely solved even with a huge hidden state size (12k) and model size (175B) Brown et al. (2020). We expect that adding our methods to the large LMs could rectify the biased distributions, as shown in our experiments on smaller LMs (Table 5). Therefore, although improving smaller LMs already has wide applications in practice, trying our methods on a larger LM is a promising next step, which we have not been able to do. The current implementation of our methods also has some room for improvement. Our code currently contains some unnecessary computation to circumvent the restrictions of the PyTorch library, so we should be able to further accelerate it by writing CUDA code. Furthermore, our code does not yet support the pretraining of BART or T5. We expect that completing this future work could make our method faster and better. Since the focus of this paper is improving the architecture of the general transformer decoder, our evaluation of each application is not as comprehensive as studies dedicated to a particular application.
For example, although we test our methods using many metrics and the metrics show a consistent trend, there are many other factuality metrics we have not tried Li et al. (2022). We also have not conducted a human evaluation to further verify our conclusions, because conducting human evaluation properly is challenging Karpinska et al. (2021) and time-consuming. In addition, if we include more words in a context partition, the performance might be better at the cost of extra computational overhead. We leave the analysis of this tradeoff as future work.

## 9 Ethics Statement

In our experiments, we find that our methods usually copy more words from the context or encoder input. This tendency might have some potential issues. For example, our improvements might be reduced on languages with richer morphology. Furthermore, in some summarization applications, increasing the factuality by increasing the extractiveness might not be ideal Ladhak et al. (2022); Goyal et al. (2022). As described in Section 2.1, one major limitation of the popular softmax layer is its global word embeddings. The problem would become more serious when there are more tokens whose meanings are locally defined (e.g., names in the BookSum dataset). Our methods would be more useful in those circumstances and might alleviate some biases described in Shwartz et al. (2020) and Ladhak et al. (2023). Moreover, the meanings of tokens are also locally defined in many other applications, such as variables in code or math problems, new terminologies in a scientific paper, or products in a sequential recommendation problem. We believe that our methods could become an efficient alternative to rerankers Cobbe et al. (2021); Welleck et al. (2022) and create impact in those areas. Finally, our results show that when there is uncertainty in the next word (e.g., it could be _king_ or _woman_), existing LMs can have difficulties copying the words from the context, and our methods alleviate the problem. Thus, our methods should also be able to improve lexically controllable language generation models that put the desired keywords into the context, such as Goldfarb-Tarrant et al. (2019) and Lu et al. (2021).
2309.03051
Feasibility of Local Trajectory Planning for Level-2+ Semi-autonomous Driving without Absolute Localization
Autonomous driving has long grappled with the need for precise absolute localization, making full autonomy elusive and raising the capital entry barriers for startups. This study delves into the feasibility of local trajectory planning for level-2+ (L2+) semi-autonomous vehicles without dependence on accurate absolute localization. Instead, we emphasize the estimation of the pose change between consecutive planning frames from motion sensors and the integration of the relative locations of traffic objects into the local planning problem under the ego car's local coordinate system, thereby eliminating the need for absolute localization. Without the availability of absolute localization for correction, the measurement errors of speed and yaw rate greatly affect the estimation accuracy of the relative pose change between frames. We prove that the feasibility/stability of the continuous planning problem under such motion sensor errors can be guaranteed under certain defined conditions. This is achieved by formulating it as a Lyapunov-stability analysis problem. Moreover, a simulation pipeline was developed to further validate the proposed local planning method. Simulations were conducted in two traffic scenes with different error settings for the speed and yaw rate measurements. The results substantiate the proposed framework's functionality even under relatively inferior sensor errors. We also test the stability limits of the planned results under abnormally large motion sensor errors. The results match the preceding theoretical analysis well. Our findings suggest that precise absolute localization may not be the sole path to achieving reliable trajectory planning, eliminating the necessity for high-accuracy dual-antenna GPS as well as high-fidelity maps for SLAM localization.
Sheng Zhu, Jiawei Wang, Yu Yang, Bilin Aksun-Guvenc
2023-09-06T14:44:58Z
http://arxiv.org/abs/2309.03051v1
# Feasibility of Local Trajectory Planning for Level-2+ Semi-autonomous Driving without Absolute Localization

###### Abstract

Autonomous driving has long grappled with the need for precise absolute localization, making full autonomy elusive and raising the capital entry barriers for startups. This study delves into the feasibility of local trajectory planning for level-2+ (L2+) semi-autonomous vehicles without dependence on accurate absolute localization. Instead, we emphasize the estimation of the pose change between consecutive planning frames from motion sensors and the integration of the relative locations of traffic objects into the local planning problem under the ego car's local coordinate system, thereby eliminating the need for absolute localization. Without the availability of absolute localization for correction, the measurement errors of speed and yaw rate greatly affect the estimation accuracy of the relative pose change between frames. We prove that the feasibility/stability of the continuous planning problem under such motion sensor errors can be guaranteed under certain defined conditions. This is achieved by formulating it as a Lyapunov-stability analysis problem. Moreover, a simulation pipeline was developed to further validate the proposed local planning method. Simulations were conducted in two traffic scenes with different error settings for the speed and yaw rate measurements. The results substantiate the proposed framework's functionality even under relatively inferior sensor errors. We also test the stability limits of the planned results under abnormally large motion sensor errors. The results match the preceding theoretical analysis well. Our findings suggest that precise absolute localization may not be the sole path to achieving reliable trajectory planning, eliminating the necessity for high-accuracy dual-antenna GPS as well as high-fidelity maps for SLAM localization.

Semi-autonomous Driving, Local Trajectory Planning, Relative Localization.

## I Introduction

Over the years, fully autonomous driving has struggled to achieve reliability and large-scale implementation. Even today, leading representatives in autonomous driving, such as Waymo and Cruise, face challenges navigating urban environments like San Francisco, with their driverless vehicles occasionally clogging traffic in the middle of the road [1, 2]. Despite more than a decade of industry R&D, driverless autonomous solutions appear far from successful business applications based on current performance. Meanwhile, autonomous driving startups are rapidly consuming investments. Uber alone reportedly spent an annual $457 million on self-driving R&D before selling its unit to Aurora [3]. Over time, investors have become less attracted to driverless technology and increasingly hesitant about the technology's potential business yields in the near future. The landmark shutdown of the star unicorn startup Argo AI [4] and the closure of the first public self-driving truck company, Embark Technology [5], both reflect the struggles of Level-4 (L4) startups to secure capital funding. The heightened global economic uncertainty, along with a potential recession, exacerbates this struggle. L4 providers like Waymo and TuSimple have reportedly laid off large numbers of employees to cut operating costs [6, 7]. The valuations of autonomous driving startups have plummeted drastically in recent times. Waymo's valuation dropped 80% from $200 billion to $30 billion in just 18 months.
TuSimple's stock, which once peaked above $60, now stands at slightly above $1 as of April 2023. In contrast, Tesla's Autopilot driver assistance system, first deployed in 2014, has been a significant feature that greatly promotes the sale of Tesla vehicles. Its so-called Full Self-Driving (FSD) add-on package, despite its debatable name, directly contributed $324 million in revenue for the fourth quarter of 2022 alone [8]. The FSD package is so popular that 19% of Tesla owners opted in despite the price hike from an initial $5,000 in 2019 to $15,000 in 2022 [9]. The debate over the autonomous driving development strategy between Tesla and Waymo has persisted for years [10]. Tesla insisted on a progressive evolution path with a vision-based solution that emphasizes cost reduction and mass-production application, while Waymo aimed for an all-in-one driverless solution from day one, incorporating advanced and expensive sensors like Lidar and relying heavily on detailed prerequisite information, such as high-definition (HD) maps. As of now, it seems that the Tesla approach prevails in the industry, generating continuous cash flow to support its own progression. The autonomous vehicle industry has quietly but largely shifted interest towards cost-efficient Level-2+ (L2+) semi-autonomous driving solutions. OEMs are especially interested in the potential urban Navigate on Autopilot (NOA) feature [11], an L2+ feature that traditional Tier-1 suppliers are unable to provide, to assist drivers navigating complex urban scenarios. L2+ solutions do not guarantee driving safety and demand human attention and intervention during driving. In such applications, "The driver is still responsible for, and ultimately in control of, the car," as Tesla stated [12]. This easing of liability and the compromise on full self-driving provide L4 startups a midway transition to package their solutions into a viable product. However, this transition is not merely a hardware downgrade and algorithm transplant. The lite version of hardware may imply fewer available resources, such as weaker computing power, unavailability of sensor data, or less accurate measurement data, among others. As a result, transplanting L4 algorithms, which are designed for data-abundant hardware platforms, to fit L2+ applications is not as simple as it may seem [13]. One challenge to deal with is the lack of highly accurate global/absolute localization. Centimeter-level accuracy of absolute localization is essential for L4 driving to request detailed lane information from the HD map. This high localization accuracy is usually achieved by the fusion of GNSS/INS (Inertial Navigation System), or by simultaneous localization and mapping (SLAM) in areas with weak GPS signals, matching Lidar data against the normal distribution transform (NDT) map. However, in cost-sensitive mass production, L2+ vehicles do not come equipped with expensive dual-antenna GPS. NDT maps are also not available to cover every traffic route to work with SLAM. Therefore, for L2+ applications, centimeter-level accuracy of absolute localization is not available. Now the question arises: without this critical absolute localization information, is semi-autonomous driving still technically feasible? Consider the case of human driving: we do not need to be aware of our absolute localization to centimeter accuracy. One also does not have the detailed lane-to-lane transition routes at intersections that HD maps provide.
We are more aware of our surroundings, such as the distance from other traffic objects or whether we are in the correct lane. This analogy to human driving may seem simplistic, but it implies that L2+ semi-autonomous driving may be feasible without accurate absolute localization, relying instead on relative localization. Relative localization refers to the process of determining the relative position and orientation of the ego vehicle with respect to the surrounding environment, including other vehicles, pedestrians, and obstacles. Relative localization is typically achieved by the perception system using a combination of sensors such as cameras, radars, and possibly Lidars, which are available on L2+-ready vehicles. But how does local trajectory planning work with relative localization? During planning, one consideration is trajectory consistency, meaning the trajectories planned in consecutive frames must maintain or approximate spatial and temporal continuity. This is achievable with absolute localization since the ego vehicle localization, planned trajectories, and traffic object movements are all expressed in the same global coordinate system. With relative localization, however, there is no way to accurately represent all this information in a global coordinate system. Little research has addressed this practical problem. To ensure the consistency of the planned trajectory, we emphasize that the relative localization with respect to the surrounding traffic must be associated between adjacent frames. The relative motion of the ego vehicle between adjacent frames can be estimated by integrating acceleration and yaw rate data from the inertial measurement unit (IMU) sensor. This integration inevitably introduces error into the estimation of position and posture changes. The INS is based on exactly this idea to estimate absolute localization, but it needs periodic correction/fusion with GNSS data to avoid the build-up of integration errors. In the L2+ semi-autonomous driving case, there is no accurate source to correct the INS estimate of absolute localization. Therefore, instead of choosing global coordinates, trajectory planning for L2+ semi-autonomous driving is done under the local vehicle coordinate system. The trajectory from the last frame is projected to the local vehicle coordinate system at the current frame to ensure planning consistency. In this way, the localization error is limited between frames and does not accumulate. Although the trajectory planning topic is not new, with different approaches based on splines [14, 15], potential fields [16], sampling methods [17, 18], graph search [19], optimization [20], and so on, few have evaluated the dynamic stability of the continuously changing planned trajectories. Trajectory planning usually runs continuously in cycles and may be subject to change in response to perturbations, including changes in the environment, sensor noise, execution errors from control, and model uncertainties from vehicle dynamics and the environment. It is important to ensure the planned trajectory is stable and feasible under such perturbations. [21] presented a learning-based motion planner with stability guaranteed by designing a differential Lyapunov function using contraction theory. In [22], a motion planning framework was designed to maximize a marine vehicle's stability margins against ocean disturbances. But few comparable analyses have been reported for trajectory planning in the autonomous driving field.
[23] demonstrated the stability of a cost-based lattice controller in the event of dynamic environment changes, but the demonstration was limited to simulation without rigorous theoretical proof. In the context of trajectory planning without absolute localization, the drift and offset errors from the IMU sensor can build up the estimated relative localization error and affect the stability of continuous planning. Hence, this paper specifically considers perturbations from relative localization and proves the conditions under which the trajectory planning can maintain dynamic stability. One major contribution of this paper is the proposal of a local trajectory planning framework that works without the availability of absolute localization, which is challenging for L2+ semi-autonomous driving applications with limited hardware. Another contribution is the proof that the stability of dynamic trajectory planning subject to motion sensor errors can be ensured under certain conditions that are easy to meet. The stability of the proposed local trajectory planning framework is further validated under different simulation scenarios given drifting and offset noise from the IMU sensor. The paper offers insights into, and validates the feasibility of, L2+ semi-autonomous application development with limited hardware equipment. The paper is organized as follows: In Section II, the methodology of L2+ local trajectory planning without absolute localization is introduced. This is followed by a proof and discussion of the effects of the relative localization error on the trajectory planning stability in Section III. A simulation pipeline was built based on the proposed L2+ trajectory planning framework. The effects of drifting and offset noise from the IMU sensors on the continuous trajectory planning results are shown in simulation and discussed in Section IV. Based on the theoretical analysis as well as the simulation results under measurement errors of speed and yaw rate, we conclude that local trajectory planning for semi-autonomous driving without absolute localization is feasible.

## II Methodology

### _Trajectory Planning Framework_

Without loss of generality, assume that the vehicle state \(x\) can be represented by the projection of its position and heading onto the XY plane for simplicity, \(x\in\mathbb{R}^{3}\). The trajectory planning problem for the semi-autonomous driving vehicle is to determine its trajectory as a function of time so as to avoid collisions with obstacles or intrusions into areas that are untraversable under traffic rules. Let \(T=[t_{0},t_{f}]\) represent the time interval over which the trajectory of the semi-autonomous vehicle is to be planned, where \(t_{0}\) and \(t_{f}\) denote the planning initial time and terminal time, respectively. The vehicle position is represented as \(x\in\mathbb{R}^{3}\), with \(x_{0}\) and \(x_{f}\) indicating the position at times \(t_{0}\) and \(t_{f}\), respectively. Let \(O\) denote the set of road objects (obstacles, other road users, illegal traffic areas) that are not traversable. \(O(t)=\{O_{1}(t)\cup O_{2}(t)\cup\cdots\cup O_{n}(t)\}\subset\mathbb{R}^{3}\) is the space occupied by all road objects at time \(t\). The vehicle trajectory can be interpreted as a continuous mapping \(\mathcal{T}\) from \(T\) to \(\mathbb{R}^{3}\) that does not overlap with \(O(t)\). Note that the map \(\mathcal{T}\) has to be continuous to be physically realizable.
The trajectory planning problem can be formally stated as [24]: Find a mapping \(\mathcal{T}:T\rightarrow\mathbb{R}^{3}\) with \(x(t_{0})=x_{0}\), \(x(t_{f})=x_{f}\), such that \(\forall t\in T\), \(x(t)|_{\mathcal{T}}\notin O(t)\). During driving, the complex and dynamically changing traffic environment demands that the planner continuously update the trajectory. To ensure the continuity of the planned trajectories between frames, planning at the current time step/frame usually sets the initial start point \(x_{0}\) from the last planning result. This also has the benefit of decoupling the planning process from the execution result of its downstream control module. The diagram in Figure 1 shows the implementation of the trajectory planning proposed for level-2+ semi-autonomous driving. As shown in Figure 1, at frame/timestep \(t_{k}\) (note that frame and time step may be used interchangeably in this paper), the new trajectory \(\mathcal{T}_{k}\) is planned under the local vehicle coordinate system \(P_{k}\) from a planning initial state, given the obstacle set \(O(t)\) from the perception results. The perception result \(O(t)\) itself is expressed in the local vehicle coordinate system \(P_{k}\) and does not need coordinate transformation. The planning initial state \(x_{k|\mathcal{T}_{k-1}}\) represents the planned-ahead state for time \(t_{k}\) by the last trajectory \(\mathcal{T}_{k-1}\). Its projection to the current local coordinate system \(P_{k}\), however, requires coordinate transformation: \[{}^{P_{k}}\mathcal{T}_{k}={}^{P_{k-1}}\mathcal{T}_{k-1}-{}^{P_{k-1}}P_{k} \tag{1}\] \[\approx{}^{P_{k-1}}\mathcal{T}_{k-1}-\Delta\hat{P}_{k}, \tag{2}\] where \(\Delta\hat{P}_{k}\) is the estimate of the state change \({}^{P_{k-1}}P_{k}\). \(\Delta\hat{P}_{k}\) can be derived from the onboard IMU sensor and/or wheel speed sensors: \[\Delta\hat{P}_{k}=\begin{pmatrix}\Delta\hat{x}\\ \Delta\hat{y}\\ \Delta\hat{\theta}\end{pmatrix}=\begin{pmatrix}\int_{t_{k-1}}^{t_{k}}\hat{v}(t)\cos\hat{\theta}(t)\,dt\\ \int_{t_{k-1}}^{t_{k}}\hat{v}(t)\sin\hat{\theta}(t)\,dt\\ \int_{t_{k-1}}^{t_{k}}\hat{\omega}(t)\,dt\end{pmatrix}, \tag{3}\] with the in-frame heading \[\hat{\theta}(t)=\int_{t_{k-1}}^{t}\hat{\omega}(\tau)\,d\tau, \tag{4}\] where \(\hat{v}(t)\) and \(\hat{\omega}(t)\) are the measured vehicle speed and yaw rate.

## III Feasibility/Stability Analysis

### _Problem Description_

Equations (3) and (4) give an approximation of the relative motion change between consecutive planning frames. However, due to the integrals in the equations, the errors in the estimated vehicle position and posture changes since the first frame build up as time goes on. The accumulated error could have adverse effects on the proposed local trajectory planning method, making it infeasible to reach the desired destination. To simplify the analysis, the terminal destination \(x_{f}\) of the trajectory planning for consecutive frames is assumed unchanged in order to show the stability concept for continuous planning. This simplification is legitimate for consecutive frames in scenarios like traffic stops, lane changes, etc.
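For concreteness, a minimal numeric sketch of the pose-change estimate in Equations (3)-(4) and the re-projection in Equation (2) follows (this is our own illustration; the function names, sampling scheme, and error values are hypothetical and not taken from the paper's simulation pipeline):

```python
import numpy as np

def pose_change(v, w, dt):
    """Dead-reckoned pose change over one planning frame per Equations (3)-(4):
    v[i], w[i] are speed and yaw-rate samples spaced dt seconds apart."""
    th = np.concatenate(([0.0], np.cumsum(w[:-1]) * dt))  # heading theta_hat(t)
    return np.array([np.sum(v * np.cos(th)) * dt,         # delta x_hat
                     np.sum(v * np.sin(th)) * dt,         # delta y_hat
                     np.sum(w) * dt])                     # delta theta_hat

def reproject(points, dP):
    """Equation (2): express last frame's planned XY points in the current
    frame P_k, located at (dx, dy, dtheta) relative to P_{k-1}."""
    dx, dy, dth = dP
    c, s = np.cos(dth), np.sin(dth)
    R = np.array([[c, -s], [s, c]])             # rotation of P_k w.r.t. P_{k-1}
    return (np.asarray(points) - [dx, dy]) @ R  # row form of R^T (p - t)

# A 0.1 s planning cycle at 10 m/s with a (hypothetical) 2% speed bias shifts
# the reprojected start point by about 2 cm; the error stays bounded per frame
# because each frame re-anchors in its own local coordinate system.
dP = pose_change(v=np.full(10, 10.0 * 1.02), w=np.zeros(10), dt=0.01)
print(dP, reproject([[1.0, 0.0]], dP))          # [1.02 0 0], [[-0.02 0.]]
```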
The diagram in Figure 2 illustrates the potential effect of the accumulated estimation error on the continuous trajectory planning results at timestep \(t_{k-1}\) and the next timestep \(t_{k}\). To clearly show the drift of the trajectory due to the estimation error, the trajectories \(\mathcal{T}_{k-1}\) and \(\mathcal{T}_{k}\), each planned under its own local coordinate system, are placed in the same global coordinate system, in which the terminal destination is set to \(\mathbf{0}\) without loss of generality. Figure 2(a) shows that, without estimation error of the state change, trajectory \(\mathcal{T}_{k}\)'s start point \(x_{k|\mathcal{T}_{k}}\) has the same state as the planned next-time-step state \(x_{k|\mathcal{T}_{k-1}}\) of the last trajectory \(\mathcal{T}_{k-1}\), which is the prerequisite for realizing consistency of planning between frames. This can be proved as follows. From Equation (1), we have: \[{}^{P_{k}}x_{k|\mathcal{T}_{k-1}}={}^{P_{k-1}}x_{k|\mathcal{T}_{k-1}}-{}^{P_{k-1}}P_{k}, \tag{7}\] \[{}^{P_{k}}\hat{x}_{k|\mathcal{T}_{k-1}}={}^{P_{k-1}}x_{k|\mathcal{T}_{k-1}}-\Delta\hat{P}_{k}. \tag{8}\] The estimation error of the state change between neighboring frames thus comes from: \[\varepsilon={}^{P_{k-1}}P_{k}-\Delta\hat{P}_{k}. \tag{9}\] In the case of Figure 2(a), when the estimation error \(\varepsilon=0\), \(\Delta\hat{P}_{k}={}^{P_{k-1}}P_{k}\). In this case, \(\hat{x}_{k|\mathcal{T}_{k-1}}\), used as the initial planning start state \(x_{k|\mathcal{T}_{k}}\) at time \(t_{k}\), satisfies: \[x_{k|\mathcal{T}_{k}}=\hat{x}_{k|\mathcal{T}_{k-1}}=x_{k|\mathcal{T}_{k-1}}. \tag{10}\] In the case \(\varepsilon\neq 0\) in (9), subtracting Equation (7) from (8), we have: \[{}^{P_{k}}\hat{x}_{k|\mathcal{T}_{k-1}}={}^{P_{k}}x_{k|\mathcal{T}_{k-1}}+({}^{P_{k-1}}P_{k}-\Delta\hat{P}_{k}) \tag{11}\] \[={}^{P_{k}}x_{k|\mathcal{T}_{k-1}}+\varepsilon, \tag{12}\] and therefore \[\hat{x}_{k|\mathcal{T}_{k-1}}=x_{k|\mathcal{T}_{k-1}}+\varepsilon \tag{13}\] under the global coordinate system. In this case, shown in Figure 2(b), the start state \(x_{k|\mathcal{T}_{k}}\) for \(\mathcal{T}_{k}\) deviates from the planned next-time-step state \(x_{k|\mathcal{T}_{k-1}}\) of \(\mathcal{T}_{k-1}\). The deviation between \(x_{k|\mathcal{T}_{k}}\) and \(x_{k|\mathcal{T}_{k-1}}\) is exactly the estimation error \(\varepsilon\) of the state change. Also note that both trajectories lead to the same terminal state \(x_{f}\) despite the existence of the error term \(\varepsilon\). This is because the terminal state is assumed fixed for stability analysis purposes, as mentioned before. Although different local vehicle coordinate systems are used to represent the terminal state at each frame during planning, i.e. \({}^{P_{k-1}}x_{f}\) vs \({}^{P_{k}}x_{f}\), the coordinate transformation itself does not change any global object's state, including \(x_{f}\). Hence \(x_{f}\) is not affected by the estimation error \(\varepsilon\) of the state change, and every trajectory at each time step attempts to reach this destination \(x_{f}\). Under the case of Figure 2(b), if the error term \(\varepsilon\) is large enough, the continuously planned trajectory may never converge to the terminal state \(x_{f}\), as \(\varepsilon\) may drag the planning start point further and further away from \(x_{f}\). Such an inference is intuitive but lacks the support of theoretical derivation. We are interested in the following questions: Is the terminal state reachable during continuous planning given the estimation error \(\varepsilon\) of the state change?
Fig. 1: Proposed local trajectory planning for level-2+ semi-autonomous driving without absolute localization.

### _Stability Analysis_

In the presence of the estimation error \(\varepsilon\), we proved in the last subsection that: \[x_{k|\mathcal{T}_{k}} =\hat{x}_{k|\mathcal{T}_{k-1}}=x_{k|\mathcal{T}_{k-1}}+\varepsilon \tag{14}\] \[=x_{k-1|\mathcal{T}_{k-1}}+\underbrace{\Delta x_{k|\mathcal{T}_{k-1}}}_{\text{planned state change}}+\underbrace{\varepsilon}_{\text{estimation error}}, \tag{15}\] as illustrated in Figure 2(b). Denoting \(x_{k|\mathcal{T}_{k}}\) by \(x_{k}\) in the above equation, we obtain the following discrete-time system: \[x_{k}=f(x_{k-1}) =x_{k-1}+\underbrace{\Delta x_{k|\mathcal{T}_{k-1}}+\varepsilon}_{\rho_{k-1}} \tag{16}\] \[=x_{k-1}+\rho_{k-1}, \tag{17}\] where \(f:D\to\mathbb{R}^{3}\) is locally Lipschitz in \(D\subset\mathbb{R}^{3}\), with \(D\) an open set containing the origin \(\mathbf{0}\in D\). The feasibility problem of the continuous trajectory planning then becomes the stability analysis of the discrete-time orbit, i.e., the sequence of states \(x_{k}\) starting from an initial state \(x_{0}\). Suppose \(f\) has an equilibrium at \(x_{f}=\mathbf{0}\); then the equilibrium \(\mathbf{0}\) is said to be locally **Lyapunov stable** if: for every \(r>0\), there exists a \(\delta>0\) such that, if \(\|x_{0}-\mathbf{0}\|<\delta\), then \(\|x_{k}-\mathbf{0}\|<r\) for every \(k\geq 0\). Figure 3 shows an exemplary sequence of discrete states \(x(\cdot)\) confined in the open ball of radius \(r\), \(B_{r}=\{x\in\mathbb{R}^{3}\mid\|x\|<r\}\), projected onto the 2D plane.

Fig. 3: Stability in the sense of Lyapunov for a discrete-time system, projected onto the 2D plane.

Define a Lyapunov-like function \(V:D\to\mathbb{R}\), locally Lipschitz in \(D\), of the form: \[V(x)=x^{T}x,\quad x\in D, \tag{18}\] which satisfies the properties: \[V(\mathbf{0})=0,\text{ and }V(x)>0,\ \forall x\in D\setminus\{\mathbf{0}\}. \tag{19}\] Given the \((k-1)\)-th state \(x_{k-1}\), the value change of the function \(V:D\to\mathbb{R}\) from \(x_{k-1}\) to \(x_{k}\) is: \[\Delta V(x_{k-1}) =V(f(x_{k-1}))-V(x_{k-1}) \tag{20}\] \[=\left(x_{k-1}+\rho_{k-1}\right)^{T}\left(x_{k-1}+\rho_{k-1}\right)-x_{k-1}^{T}x_{k-1} \tag{21}\] \[=\left(2x_{k-1}+\rho_{k-1}\right)^{T}\rho_{k-1}. \tag{22}\] We assume that \(\Delta V(x_{k-1})\) satisfies the following prerequisite:

**Prerequisite 1**.: \(\exists\eta>0\), _such that \(\forall x_{k-1}\in\{x\in D\mid\|x\|>\eta\}\), \(\Delta V(x_{k-1})\leq 0\) is always satisfied, given the Lipschitz-continuous function \(V:D\to\mathbb{R}\) defined in Equations (18) and (19)._

**Remark 1**.: _Prerequisite 1 is not stringent in the context of the continuous trajectory planning problem. We will show in the following that under conditions that are easy to meet, the assumed prerequisite can be guaranteed._

Fig. 2: Illustration of the effects of the estimation error of state change \(\varepsilon\) on continuous planning results at time steps \(t_{k-1}\) and \(t_{k}\).

To satisfy \(\Delta V(x)\leq 0\), by Equation (22) the inner product of \((2x+\rho)\) and \(\rho\) has to be no greater than 0. Figure 4 shows the physical meaning of this in the Euclidean plane: it depicts different possibilities for \(\rho\) and how each affects \(2x+\rho\) and, correspondingly, their inner product. Of the three examples given in the figure, either an overly long \(\rho\) or a wrong direction leads to \(\overrightarrow{2x+\rho}\cdot\overrightarrow{\rho}>0\).
It is straightforward that two conditions have to be met to ensure a non-positive inner product in the Euclidean plane: (1) \(\overrightarrow{\rho}\cdot\overrightarrow{x}\leq 0\); and (2) \(|\overrightarrow{\rho}|\) should be less than the projection of \(2\overrightarrow{x}\) onto \(\overrightarrow{\rho}\). For the vector space \(\mathbb{R}^{3}\), we similarly conclude that the following conditions have to be satisfied to ensure \(\Delta V(x)\leq 0\): \[x^{T}\rho\leq 0,\text{ and }\rho^{T}\rho<-2x^{T}\rho. \tag{23}\]

Fig. 4: Inner product of \(\rho\) and \(2x+\rho\) for different combinations in the Euclidean plane.

We now show that condition (23) can be met. The term \(\rho\) contains the estimation error of the state change, \(\varepsilon\), as seen from Equations (15) to (17). The error \(\varepsilon\) follows a probabilistic distribution due to noise and error from the motion sensors. Sensor noise and error are usually small, and hence it is safe to assume an upper bound: \(\exists\bar{\varepsilon}\) such that \(\varepsilon\) is bounded, \(\|\varepsilon\|<\bar{\varepsilon}\). The other term contained in \(\rho\) is \(\Delta x_{k|\mathcal{T}_{k-1}}\), the planned next-step state change at \(\mathcal{T}_{k-1}\). Figure 5 shows a possible vector \(\Delta x_{k|\mathcal{T}_{k-1}}\) given the \((k-1)\)-th step's planning initial point \(x_{k-1}\), using the Euclidean plane as an example. Due to the nature of trajectory planning, the planned trajectory steps towards the terminal state \(\mathbf{0}\). Therefore, we can assume the following relation: \[\Delta x_{k|\mathcal{T}_{k-1}}^{T}x_{k-1}<0. \tag{24}\] In Figure 5, the open ball of radius \(\bar{\varepsilon}\) is shown as the grey shaded area atop the tip of \(\Delta x_{k|\mathcal{T}_{k-1}}\). The vector \(\rho\), i.e., the sum of \(\Delta x_{k|\mathcal{T}_{k-1}}\) and \(\varepsilon\), is then known to be confined to this shaded area. If condition (23) is satisfied for any \(\varepsilon\) bounded by \(\|\varepsilon\|<\bar{\varepsilon}\), then \(\Delta V(x)\leq 0\) is guaranteed. This is very likely to hold when the error bound \(\bar{\varepsilon}\) is much smaller than the norm \(\|x_{k-1}\|\) or \(\|\Delta x_{k|\mathcal{T}_{k-1}}\|\), provided relation (24) is met. Therefore, we can make the assumption that \(\exists\eta>0\) such that for any \(x_{k-1}\in D\) satisfying \(\|x_{k-1}\|>\eta\), condition (23) is always met, and hence \(\Delta V(x_{k-1})\leq 0\). The above discussion motivates Prerequisite 1.

**Prerequisite 2**.: _In the discrete system (17), \(\rho\) is bounded within an open ball of radius \(\bar{\rho}\), i.e., \(\|\rho\|<\bar{\rho}\)._

Proof.: During planning, the vehicle's own physical motion capabilities are considered, limiting the planned state change \(\Delta x_{k|\mathcal{T}_{k-1}}\), i.e., \[\left\|\Delta x_{k|\mathcal{T}_{k-1}}\right\|<\Delta\bar{x}. \tag{25}\] Hence, \[\|\rho\|\leq\left\|\Delta x_{k|\mathcal{T}_{k-1}}\right\|+\|\varepsilon\|<\Delta\bar{x}+\bar{\varepsilon}. \tag{26}\] Therefore, there must exist a \(\bar{\rho}\) such that \(\|\rho\|<\bar{\rho}\) and \(\bar{\rho}\leq\Delta\bar{x}+\bar{\varepsilon}\).

**Remark 2**.: _For the trajectory planning problem, the terminal state \(x_{f}=\mathbf{0}\) may not be an equilibrium point, since \(f(\mathbf{0})=\mathbf{0}\) is not guaranteed due to the term \(\rho\) in Equation (17). However, from Prerequisite 2, we know that the next state from \(\mathbf{0}\) is nearby, i.e., \(\|f(\mathbf{0})\|\leq\bar{\rho}\)._
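As a quick numerical sanity check of Equations (22)-(23), the short sketch below verifies that whenever condition (23) holds, the increment \(\Delta V\) computed directly from the definition is indeed non-positive. The sampling ranges and function names are illustrative assumptions of ours.

```python
import numpy as np

def delta_V(x, rho):
    """Increment of V(x) = x^T x along x_next = x + rho, Eq. (22)."""
    return (2.0 * x + rho) @ rho

def condition_23(x, rho):
    """Sufficient condition for delta_V(x, rho) <= 0, Eq. (23)."""
    return x @ rho <= 0.0 and rho @ rho < -2.0 * (x @ rho)

rng = np.random.default_rng(1)
for _ in range(10_000):
    x = rng.uniform(-1.0, 1.0, 3)
    rho = rng.uniform(-1.0, 1.0, 3)
    if condition_23(x, rho):
        assert delta_V(x, rho) <= 0.0   # Eq. (23) implies Delta V <= 0
```

The check is immediate algebraically as well: \(\Delta V=(2x+\rho)^{T}\rho=2x^{T}\rho+\rho^{T}\rho\), so \(\rho^{T}\rho<-2x^{T}\rho\) forces \(\Delta V<0\).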
**Claim 1**.: _For the discrete system described in Equation (17), under Prerequisites 1 and 2, Lyapunov stability cannot be achieved; but a weaker conclusion can be made: there exists a \(\delta>0\) such that if \(\|x_{0}-\mathbf{0}\|<\delta\), then \(x_{k}\) is bounded in the sense that \(\|x_{k}-\mathbf{0}\|<r\) for every \(k\geq 0\), for some \(r\)._

Proof.: Choose \(s=\max\left\{\eta,\bar{\rho}\right\}\), where \(\eta\) and \(\bar{\rho}\) are declared in Prerequisites 1 and 2, such that the open ball \(B_{s}=\left\{x\in\mathbb{R}^{3}\mid\|x\|<s\right\}\subset D\). Then choose \(r>s+\bar{\rho}>0\) such that \(B_{r}=\left\{x\in\mathbb{R}^{3}\mid\|x\|<r\right\}\subset D\). Let \(\alpha=\min_{\|x\|=r}V(x)\); then \(\alpha>0\) due to (19). Taking \(\beta\in(0,\alpha)\), the set \(\Omega_{\beta}=B_{r}\cap V^{-1}([0,\beta])\subset B_{r}\) may have several connected components, as shaded in Figure 6. Consider \(C_{\beta}\subset\Omega_{\beta}\), the connected component that contains \(B_{s}\), i.e., \(B_{s}\subset C_{\beta}\). Since the function has the form in (19), this can be ensured by choosing \(\beta\geq(s+\bar{\rho})^{2}\).

In the following, we prove that \(f^{n}(C_{\beta})\subset C_{\beta}\) for every \(n\geq 0\). First we show that the next discrete state from \(\mathbf{0}\) still belongs to \(C_{\beta}\), i.e., \(f(\mathbf{0})\in C_{\beta}\). From Prerequisite 2, we have \(\|f(\mathbf{0})\|<\bar{\rho}\leq s\). Therefore, \(f(\mathbf{0})\in B_{s}\subset C_{\beta}\). Next we prove that \(f(C_{\beta})\subset B_{r}\cap V^{-1}([0,\beta])\). This has to be discussed for \(B_{s}\) and \(C_{\beta}\backslash B_{s}\) separately. For \(x\in B_{s}\), \[\|f(x)\|\leq\|x\|+\|\rho\|<s+\bar{\rho}<r, \tag{27}\] and hence, \[f^{T}(x)f(x)<(s+\bar{\rho})^{2}\leq\beta. \tag{28}\] Therefore, \(f(B_{s})\subset B_{r}\cap V^{-1}([0,\beta])\). Since \(f:D\rightarrow\mathbb{R}^{3}\) is Lipschitz in \(D\), \(f(C_{\beta})\) is also connected, and \(f(B_{s})\subset f(C_{\beta})\). This means that at least part of \(f(C_{\beta}\backslash B_{s})\) is a subset of \(B_{r}\) too. We can further conclude that \(f(C_{\beta}\backslash B_{s})\subset B_{r}\). If this were not true, then, by connectedness, \(f(C_{\beta}\backslash B_{s})\) would intersect the boundary of \(B_{r}\): there would be a point \(x\in C_{\beta}\backslash B_{s}\) such that \(\|f(x)\|=r\). Then the Lyapunov-like function would satisfy \[V(f(x))\geq\alpha>\beta\geq V(x). \tag{29}\] This contradicts the non-increasing property of \(V\) from Prerequisite 1. Thus \(f(C_{\beta})\) is connected and a subset of \(B_{r}\cap V^{-1}([0,\beta])\). Meanwhile, \(f(\mathbf{0})\in f(C_{\beta})\) and \(f(\mathbf{0})\in C_{\beta}\). This implies that \(f(C_{\beta})\subset C_{\beta}\). We can then conclude that \(f^{n}(C_{\beta})\subset C_{\beta}\) for any \(n\in\mathbb{N}\). Hence we can choose a \(\delta>0\) satisfying \(\{x\in D\mid\|x\|<\delta\}\subset C_{\beta}\); if \(\|x_{0}\|<\delta\), then \(\|f^{n}(x_{0})\|<r\) for every \(n\geq 0\).
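The invariance argument can also be illustrated numerically. The following sketch iterates the discrete system (17) with a planned step that always points toward the origin (so that relation (24) holds) plus a bounded random error; the orbit stays bounded, as Claim 1 asserts, and in fact settles near the origin, anticipating Claim 2 below. The step sizes, error bound, and initial state are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dx_max, eps_max = 0.5, 0.05        # planned-step and error bounds (rho_bar ~ 0.55)
x = np.array([10.0, -8.0, 0.3])    # initial planning start state
radii = []
for k in range(500):
    # Planned step toward the terminal state 0, so Eq. (24) holds.
    step = -dx_max * x / max(np.linalg.norm(x), dx_max)
    # Bounded random estimation error, ||eps|| < eps_max (cf. Prerequisite 2).
    eps = rng.normal(size=3)
    eps *= eps_max * rng.uniform() / np.linalg.norm(eps)
    x = x + step + eps             # Eq. (17): x_k = x_{k-1} + rho_{k-1}
    radii.append(np.linalg.norm(x))
print(f"max ||x_k|| = {max(radii):.2f}, final ||x_k|| = {radii[-1]:.3f}")
# The orbit never leaves a ball around the start and ends trapped near 0.
```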
**Claim 2**.: _If \(\Delta V(x)\) is strictly decreasing in Prerequisite 1, then for the system (17), together with Prerequisite 2, there exists \(\delta>0\) such that if \(\|x_{0}-\mathbf{0}\|<\delta\), then \(\lim_{n\rightarrow\infty}\|x_{n}-\mathbf{0}\|<\eta+\bar{\rho}\). This means that the state \(x_{k}\) will at some point be contained in \(B_{\eta+\bar{\rho}}=\{x\in D\mid\|x\|<\eta+\bar{\rho}\}\), and its orbit will remain inside thereafter._

Proof.: From the proof of Claim 1, choose \(\delta\) satisfying \(\{x\in D\mid\|x\|<\delta\}\subset C_{\beta}\); if \(\|x\|<\delta\), then \(f^{n}(x)\in C_{\beta}\) for every \(n\geq 0\). First we show that for \(x\in C_{\beta}\backslash B_{\eta}\), at some point \(k\), \(f^{k}(x)\in B_{\eta}\), where \(B_{\eta}=\{x\in D\mid\|x\|<\eta\}\). For the sake of contradiction, suppose this is not the case; then for all \(k\geq 0\) we have \[f^{k}(x)\in C_{\beta}\backslash B_{\eta}. \tag{30}\] Since \(C_{\beta}\backslash B_{\eta}\) is compact, and \(\Delta V\) is continuous with \(\Delta V(x)<0\) for \(x\in C_{\beta}\backslash B_{\eta}\), we know from the Weierstrass extreme value theorem that \(\Delta V\) attains a negative maximum \(-\mu\), i.e., \[\Delta V(x)\leq-\mu<0,\text{ if }x\in C_{\beta}\backslash B_{\eta}, \tag{31}\] and hence, \[V(f^{n}(x)) =V(f^{n-1}(x))+\Delta V(f^{n-1}(x)) \tag{32}\] \[=V(f^{n-2}(x))+\Delta V(f^{n-2}(x))+\Delta V(f^{n-1}(x)) \tag{33}\] \[\cdots \tag{34}\] \[=V(x)+\sum_{k=0}^{n-1}\Delta V\left(f^{k}(x)\right) \tag{35}\] \[\leq V(x)-n\mu. \tag{36}\] Letting \(n\rightarrow+\infty\), the right-hand side tends to \(-\infty\). This contradicts \(V(x)>0\) for \(x\in D\setminus\{\mathbf{0}\}\). Therefore, for \(x\in C_{\beta}\backslash B_{\eta}\), there must exist some \(k\) such that \[f^{k}(x)\in B_{\eta}. \tag{37}\] For \(x\in B_{\eta}\), \[\|f(x)\| \leq\|x\|+\|\rho\| \tag{38}\] \[<\|x\|+\bar{\rho}<\eta+\bar{\rho}. \tag{39}\] This shows that \(f(x)\in B_{\eta+\bar{\rho}}\) if \(x\in B_{\eta}\). Together with (37), Claim 2 is proved.

**Remark 3**.: _Claim 2 extends the boundedness conclusion of Claim 1 and shows that for the trajectory planning problem, as \(k\rightarrow+\infty\), the planned state will eventually be contained in \(B_{\eta+\bar{\rho}}\), despite the state change estimation error. \(B_{\eta+\bar{\rho}}\) is a relatively small area and should satisfy the needs of the trajectory planning problem._

## IV Results and Discussions

Two scenarios are designed to test the local planning method without global positioning information. Simulations of two scenes were conducted with the validation pipeline developed in subsection II-B.¹

Footnote 1: Code for this work is available at [https://github.com/codexos09/I2_frenet_planner.git](https://github.com/codexos09/I2_frenet_planner.git)

To test the effects of sensor drift and noise on the planning results, the offset and standard deviation of the speed and yaw rate sensor readings in (5) and (6) are set to: \[v_{\text{offset}} =-0.1\text{ m/s},\quad\sigma_{v}=0.1\text{ m/s}, \tag{40}\] \[\dot{\theta}_{\text{offset}} =0.57\text{ deg/s},\quad\sigma_{\dot{\theta}}=1.72\text{ deg/s}. \tag{41}\]

Fig. 6: Illustration of the sets, projected onto the 2D plane.

These errors are intentionally set to large values to validate the feasibility of the proposed local planning methodology. Normally, the speed and yaw rate sensors would achieve more accurate readings with respect to the ground truth.

### _Moving Traffic Scene_

In this scene, the ego car is set to run on a double-lane road with other moving vehicles. Figure 7 shows the continuous local planning results under the measurement error settings (40) and (41). The moving vehicles are shown in blue bounding boxes, which are enlarged to ensure a safe distance to the ego car.
The dashed-line boxes represent the predicted motions of these vehicles. The ego vehicle is represented by a green box in this figure. The candidate trajectory planned for each lane (left lane in blue, right lane in orange) with a planning horizon of 5.0 seconds is also displayed, extending from the tail of the ego vehicle. The selected trajectory is highlighted in red points, with each point representing an increment of 0.1 second per time step. This helps visualize the speed change by observing the density and spread of the selected trajectory.

In Figure 7, the ego car performs a right lane change first, and then follows the front car until the gap in the left lane is safe enough for it to make another lane change back to the left lane. The safety distances in both the longitudinal and lateral directions of the ego car are well maintained. The ego car centers well within the lane bounds when not performing lane changes. This demonstrates that despite the unavailability of global localization information and the inaccurate speed and yaw rate readings used to estimate the relative motion, the continuous planning remains highly feasible under the proposed methodology in Figure 1.

Figure 8 shows the change of speed and yaw rate of the ego car during the moving traffic scene, as well as the measurement readings in the blue-cross line. Time plots of the measurement errors are also displayed in the right subplots to show the deviation and noise of the sensor readings. We also examined how the measurement errors of speed and yaw rate affect the relative motion estimate between planning frames. In Figure 9, the top two plots show box plots as well as the scattering of the measurement errors. The bottom plots show how these measurement errors result in the estimation errors \(\varepsilon\) in Equation (9) for the pose change between planning frames, i.e., \(\Delta x\), \(\Delta y\), and \(\Delta\theta\), under the local coordinate system.

Figure 10 shows that traffic in both lanes is blocked by a long semi-truck and a sedan in front. The ego car decelerates to a full stop. The bottom two plots demonstrate that the ego car is capable of maintaining the stop pose despite the estimation error of the relative motion change between frames caused by the sensor errors. Interested readers can refer to the animation⁴ to see how the planning compensates for the estimation error and to get a sense of how condition (23) is met under this error setting.

Footnote 4: The animation gif is available at [https://bit.ly/47XMcsg](https://bit.ly/47XMcsg).

Footnote 5: The animation gif is available at [https://bit.ly/3E6Ulhk](https://bit.ly/3E6Ulhk).

Note that the planned speed is negative in both bottom plots in Figure 10 when the vehicle is stopped. This is because the negative speed error offset \(v_{\text{offset}}=-0.1\) m/s leads to a positive estimation error offset of \(\Delta x\) along the longitudinal direction, as can be seen in Figure 9. From Equation (13), the planning start point for the next frame will be ahead due to the positive estimation error of the pose change, thus making the plan move backward under the local coordinate system.

### _Stability Limits at Sensor Offset Errors_

As discussed in Section III, Prerequisite 1 and Prerequisite 2 both have to be satisfied to ensure the stability of the local planning problem as stated in Claim 2. We tested larger sensor offset errors to check the stability limits for the two scenes.
Figure 11 shows screenshots of the two scenes at a much larger speed measurement offset error, \(v_{\text{offset}}=-1.0\) m/s. Figure 11(a) shows that at timestamp 11.5 seconds, the ego car reaches a position further along compared to Figure 7 and starts a third lane change to pass the vehicle in the left lane. Figure 11(b), on the other hand, shows that the ego car keeps creeping forward until it crashes into the front car. This is because the estimation error of the pose change between frames is too large for the planned motion to compensate; it then gradually accumulates into larger deviations from the target pose, essentially drifting away from it. Claim 2 is not valid in such cases, as Prerequisite 2 is not satisfied.

Figure 12 and Figure 13 are screenshots of the two scenes under yaw rate offsets \(\dot{\theta}_{\text{offset}}\) of 2.29 deg/s and 5.73 deg/s, respectively.

Fig. 9: Box plots of the measurement errors (upper plots) and the resulting estimation errors of relative motion between planning frames (bottom plots).

Fig. 10: Local planning in a stop scene.

Fig. 11: Effects of the speed offset error: \(v_{\text{offset}}=-1.0\) m/s.

At \(\dot{\theta}_{\text{offset}}=2.29\) deg/s in Figure 12, the ego car is not capable of centering itself in the middle of the lane and almost rides on the right lane bound compared to Figure 10. Considering that a 2.29 deg/s yaw rate offset is already abnormally large, this shows that the proposed local planning framework is very robust in maintaining the stability of continuous planning. Under an even larger error, \(\dot{\theta}_{\text{offset}}=5.73\) deg/s, in Figure 13, the ego car drifts right so much that it eventually crosses over the lane bound, leading to a potential crash or failure in both scenarios.

## V Conclusions

Overcoming the challenge of level-2+ semi-autonomous driving without relying on accurate absolute localization has been the focal point of this study. This paper has explored the viability of local trajectory planning without the need for absolute localization, focusing on a local vehicle coordinate system. The proposed local trajectory planning methodology for level-2+ semi-autonomous driving relies on the pose change estimation between planning frames from motion sensors, as well as on the relative locations of traffic objects and lane lines with respect to the ego vehicle under the local vehicle coordinate system. The proposed planning methodology under motion sensor errors was formulated as a Lyapunov stability problem, with its feasibility proven under certain conditions in Section III. Finally, a validation pipeline was built, with two scenes chosen to test the proposed planning framework under different error settings of the speed and yaw rate measurements. The simulation results strongly support the feasibility analysis and demonstrate that the continuous planning can function properly even under the relatively large sensor errors in (40) and (41).

## Acknowledgments

The authors thank the Automated Driving Lab at the Ohio State University.
2302.01534
Optimized time-lapse acquisition design via spectral gap ratio minimization
Modern-day reservoir management and monitoring of geological carbon storage increasingly call for costly time-lapse seismic data collection. In this letter, we show how techniques from graph theory can be used to optimize acquisition geometries for low-cost sparse 4D seismic. Based on midpoint-offset domain connectivity arguments, the proposed algorithm automatically produces sparse non-replicated time-lapse acquisition geometries that favor wavefield recovery.
Yijun Zhang, Ziyi Yin, Oscar Lopez, Ali Siahkoohi, Mathias Louboutin, Rajiv Kumar, Felix J. Herrmann
2023-02-03T04:17:10Z
http://arxiv.org/abs/2302.01534v1
# Optimized time-lapse acquisition design via spectral gap ratio minimization ###### Abstract Modern-day reservoir management and monitoring of geological carbon storage increasingly call for costly time-lapse seismic data collection. In this letter, we show how techniques from graph theory can be used to optimize acquisition geometries for low-cost sparse 4D seismic. Based on midpoint-offset domain connectivity arguments, the proposed algorithm automatically produces sparse non-replicated time-lapse acquisition geometries that favor wavefield recovery. ## Introduction Time-lapse seismic data acquisition is a costly but crucial endeavor for reservoir management and monitoring of geological carbon storage. While sparse randomized collection of seismic data can lead to major improvements in acquisition productivity (Herrmann and Hennenfent, 2008; Hennenfent and Herrmann, 2008; Herrmann, 2010; Mosher et al., 2014), systematic approaches to performance prediction, other than computationally expensive simulation-based studies, are mostly lacking. Besides, acquisition optimization approaches, such as minimizing the mutual coherence (Tang et al., 2008; Mosher et al., 2014; Obermeier and Martinez-Lorenzo, 2017) or minimizing the spectral gap ratio (SGR, Lopez et al., 2022; Zhang et al., 2022), do not handle the unique challenges of time-lapse seismic data acquisition. To meet these challenges, inversion with the joint recovery model (JRM, Oghenekohwo et al., 2017; Wason et al., 2017) will be combined with automatic binary sampling mask generation driven by SGR minimization (Zhang et al., 2022). We opt for the JRM because it inverts baseline and monitor surveys jointly for the common component, which contains information shared between the surveys, and innovations with respect to the common component. Since the fictitious common component is observed by all surveys, its recovery improves when the time-lapse surveys contain complementary information. This is the case when sparse surveys are not replicated (Oghenekohwo et al., 2017; Wason et al., 2017) or when the time-lapse datasets contain independent noise terms (Tian et al., 2018). In either case, the JRM leads without insisting on replication of the surveys to high degrees of time-lapse repeatability both in the data (Oghenekohwo et al., 2017; Wason et al., 2017) and image space (Yin et al., 2023). It also yields better interpretability of time-lapse field data (Wei et al., 2018). As demonstrated by this letter, including the common component offers additional advantages when optimizing time-lapse acquisition via SGR minimization. To demonstrate this, we first explain the relationship between the SGR and connectivity within graphs associated with binary sampling masks. Next, we describe how this connectivity, which favors wavefield reconstruction, can be improved by minimizing the SGR via optimization. To enhance inversion of time-lapse data with the JRM, a new optimization objective will be introduced that contains SGRs of the common component and of the baseline/monitor surveys. After a brief discussion on minimizing this objective with simulated annealing, the proposed methodology for automatic time-lapse binary mask generation is numerically validated on realistic synthetic 2D data. ## Optimized time-lapse acquisition While the SGR has been used successfully to predict and improve the performance of wavefield reconstruction, it has not yet been used to optimize time-lapse acquisition. 
After briefly discussing the SGR and JRM, we introduce our methodology to optimize time-lapse data acquisition.

### _The spectral gap ratio_

As shown by Lopez et al. (2022), the success of seismic wavefield reconstruction via universal matrix completion (Bhojanapalli and Jain, 2014) can be predicted by the ratio of the first two singular values of binary sampling masks, \(\sigma_{2}(\mathbf{M})/\sigma_{1}(\mathbf{M})\in[0,\,1]\), where \(\mathbf{M}\) is a binary matrix with 1's where data is sampled and 0's otherwise. This ratio is known as the spectral gap ratio (SGR) and provides a cheap-to-compute quantitative measure to predict recovery quality. The smaller the SGR, the better the connectivity within graphs spanned by binary sampling masks. Improved connectivity leads to improved wavefield recovery (Lopez et al., 2022). While useful, the SGR itself is not constructive because it does not produce sampling masks with small SGRs. With simulated annealing, Zhang et al. (2022) came up with a practical algorithm to generate acquisition geometries with small SGRs. In this work, we extend this approach by optimizing sparse geometries for time-lapse data acquisition.

### _Optimized sampling mask generation_

Given an initial binary mask, \(\mathbf{M}\in\{0,1\}^{n_{s}\times n_{r}}\), with \(n_{s}\) sources and \(n_{r}\) receivers, Zhang et al. (2022) proposed a methodology to minimize the SGR via \[\underset{\mathbf{M}}{\text{minimize}}\quad\mathcal{L}(\mathbf{M})\quad\text{subject to}\quad\mathbf{M}\in\mathcal{C}, \tag{1}\] with the objective, \(\mathcal{L}(\mathbf{M})=\sigma_{2}(\mathbf{M})/\sigma_{1}(\mathbf{M})\), given by the SGR. To ensure feasibility of the optimized binary masks, the constraint, \(\mathcal{C}=\bigcap\limits_{i=1}^{3}\mathcal{C}_{i}\), is included, which consists of the intersection of the cardinality constraint, \(\mathcal{C}_{1}=\{\mathbf{M}\mid\#(\mathbf{M})=\lfloor n_{s}\times\rho\rfloor\times n_{r}\}\), the binary mask constraint, \(\mathcal{C}_{2}=\{\mathbf{M}\mid\mathbf{M}\in\{0,1\}^{n_{s}\times n_{r}}\}\), and a constraint on the maximum gap size between consecutive samples, \(\mathcal{C}_{3}=\{\mathbf{M}\mid\text{maxgap}(\mathbf{M})\leq\Delta\}\), where \(\Delta\) is the maximal permitted gap size. By solving Equation 1, Zhang et al. (2022) produced binary sampling masks that improved wavefield reconstruction compared to masks generated with randomized jittered sampling (Hennenfent and Herrmann, 2008). Figure 1 contrasts jittered with optimized sampling in the midpoint-offset domain, reducing the SGR from 0.333 to 0.196. The optimized mask increases the sampling at the near offsets, where there are more ways to connect to midpoints, which favors wavefield reconstruction (Lopez et al., 2022).

### _Joint recovery model_

Lowering costs while ensuring time-lapse repeatability are the main challenges in the design of seismic monitoring systems employed to optimize reservoir management and to safeguard geological carbon storage. Both challenges can be met by inverting sparsely sampled baseline and monitor data jointly. For time-lapse acquisition with a single monitor survey, this entails inverting \[\mathbf{b}=\mathcal{A}\left(\mathbf{Z}\right)\quad\text{with}\quad\mathcal{A}\left(\cdot\right)=\begin{bmatrix}\mathcal{A}_{1}&\mathcal{A}_{1}&0\\ \mathcal{A}_{2}&0&\mathcal{A}_{2}\end{bmatrix}\left(\cdot\right). \tag{2}\]

In this JRM, the linear operators, \(\mathcal{A}_{j}\), \(j=1,2\), stand for the combined action of converting monochromatic time-lapse data from the midpoint-offset to the source-receiver domain, followed by trace collection with the acquisition geometries defined by the binary sampling masks, \(\mathbf{M}_{j}\), \(j=1,2\), where \(j=1\) and \(j=2\) index the masks for the baseline/monitor surveys. With this model, the time-lapse data, \(\mathbf{b}\), which contains the baseline data, \(\mathbf{b}_{1}\), and monitor data, \(\mathbf{b}_{2}\), are linearly related to \(\mathbf{Z}\), which contains matrices for the unknown densely sampled common component, \(\mathbf{Z}_{0}\), and the innovations with respect to this common component, \(\mathbf{Z}_{j}\), \(j=1,2\).
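To illustrate the block structure of the JRM, here is a minimal matrix sketch. For simplicity it assumes each \(\mathcal{A}_{j}\) is just a row-subsampling matrix acting on a vectorized wavefield; the actual operators also include the midpoint-offset to source-receiver conversion, which is omitted here.

```python
import numpy as np

n = 8                                   # toy wavefield length
rng = np.random.default_rng(3)
m1 = np.sort(rng.choice(n, 4, replace=False))   # baseline sample indices
m2 = np.sort(rng.choice(n, 4, replace=False))   # monitor sample indices
A1, A2 = np.eye(n)[m1], np.eye(n)[m2]   # toy restriction operators A_j
O = np.zeros_like(A1)

# Block JRM operator of Eq. (2), acting on Z = [z0; z1; z2].
A = np.block([[A1, A1, O],
              [A2, O, A2]])

z0, z1, z2 = rng.normal(size=(3, n))    # common component and innovations
b = A @ np.concatenate([z0, z1, z2])    # b = [b1; b2]
# b1 samples z0 + z1 (baseline); b2 samples z0 + z2 (monitor).
```

Because \(\mathbf{Z}_{0}\) appears in both block rows, every recorded trace constrains the common component; non-replicated masks (\(m_{1}\neq m_{2}\)) therefore sample the common component at more distinct points.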
Compared to recovering the baseline/monitor surveys separately, the JRM produces repeatable results from non-replicated (Oghenekohwo et al., 2017; Wason et al., 2017; Kumar et al., 2017), non-calibrated (Oghenekohwo and Herrmann, 2017), and noisy (Tian et al., 2018) time-lapse data. These enhanced results are due to the improved recovery of the fictitious common component.

### _Time-lapse optimized mask generation_

Based on the success of the JRM, we carry the argument of minimizing the SGR a step further by optimizing this quantity for the baseline/monitor surveys. Because \(\mathbf{Z}_{0}\) is observed by both surveys, the set of sampling points, \(\{\mathbf{M}_{0}\}\), equals the union \(\{\mathbf{M}_{0}\}=\{\mathbf{M}_{1}\}\cup\{\mathbf{M}_{2}\}\). When surveys are replicated, \(\{\mathbf{M}_{0}\}=\{\mathbf{M}_{1}\}=\{\mathbf{M}_{2}\}\). However, \(\mathbf{M}_{0}\) becomes larger when the baseline and monitor surveys are not replicated, explaining why the common component is better resolved when the surveys are not replicated. While Equation 1 leads to improved sampling masks for individual surveys, it does not exploit the fact that the common component is observed by all surveys. For this reason, we propose an optimization procedure with respect to \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\) with an objective that also includes the SGR for the common component. To avoid generating poor sampling masks, we follow a mini-max principle where the maximum, i.e., the \(\ell_{\infty}\)-norm, of the SGRs for the common and innovation components is minimized. To compensate for the likely smaller SGR of the common component when the surveys do not fully overlap (\(\#\left\{\mathbf{M}_{0}\right\}>\#\left\{\mathbf{M}_{1}\right\},\#\left\{\mathbf{M}_{2}\right\}\)), we also introduce a scaling. We base this scaling on the property (see Definition 3.1 in Bhojanapalli and Jain, 2014; Hoory et al., 2006) that the second singular value of \(d\)-regular graphs, i.e., seismic sampling masks with \(d\) non-zero entries per midpoint or offset, scales with \(\sqrt{d}\).
Given this scaling, we propose to minimize the following constrained optimization problem, for \(j=1,2\): \[\underset{\mathbf{M}_{1},\,\mathbf{M}_{2}}{\text{minimize}}\quad\mathcal{L}(\mathbf{M}_{1},\mathbf{M}_{2})\quad\text{subject to}\quad\{\mathbf{M}_{0}\}=\left\{\mathbf{M}_{1}\right\}\cup\left\{\mathbf{M}_{2}\right\},\ \mathbf{M}_{j}\in\mathcal{C}_{j}, \tag{3}\] with \(\mathcal{L}(\mathbf{M}_{1},\mathbf{M}_{2})=\left\|\left[\mathcal{L}(\mathbf{M}_{0}),\sqrt{\frac{\#(\mathbf{M}_{1})}{\#(\mathbf{M}_{0})}}\mathcal{L}(\mathbf{M}_{1}),\sqrt{\frac{\#(\mathbf{M}_{2})}{\#(\mathbf{M}_{0})}}\mathcal{L}(\mathbf{M}_{2})\right]\right\|_{\infty}.\) As before, the minimization is subject to constraints, \(\mathcal{C}_{j}\), \(j=1,2\), which can be chosen for each time-lapse survey separately.

Figure 1: (a) Jittered versus (b) optimized sampling mask in the midpoint-offset domain.

To produce time-lapse sampling masks, we employ simulated annealing as proposed by Zhang et al. (2022) but with the following differences: _(i)_ randomly perturbed masks are drawn for each survey independently; _(ii)_ the compound objectives and constraints of Equation 3 are used; _(iii)_ to-be-relocated sample points are allowed to move more freely than during jitter sampling, which allows us to better explore candidate sampling masks. A sketch of this procedure is given below.
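The following minimal Python sketch shows how the weighted objective of Equation 3 could be evaluated and decreased with random relocation moves. The move generator, acceptance rule, and cooling schedule are simplified illustrative assumptions of ours and omit the constraints \(\mathcal{C}_{j}\); they are not the authors' implementation.

```python
import numpy as np

def sgr(M):
    """Spectral gap ratio sigma_2 / sigma_1 of a binary mask."""
    s = np.linalg.svd(M.astype(float), compute_uv=False)
    return s[1] / s[0]

def objective(M1, M2):
    """Weighted ell-infinity objective of Eq. (3)."""
    M0 = np.maximum(M1, M2)              # union mask, {M0} = {M1} U {M2}
    w1 = np.sqrt(M1.sum() / M0.sum())
    w2 = np.sqrt(M2.sum() / M0.sum())
    return max(sgr(M0), w1 * sgr(M1), w2 * sgr(M2))

def move(M, rng):
    """Relocate one sampled point to a currently unsampled point."""
    M = M.flatten()                      # flatten() returns a copy
    i = rng.choice(np.flatnonzero(M == 1))
    j = rng.choice(np.flatnonzero(M == 0))
    M[i], M[j] = 0, 1
    return M.reshape(N, N)

N, K = 30, 180                           # toy grid and sample budget (20%)
rng = np.random.default_rng(4)
start = np.zeros(N * N, int)
start[rng.choice(start.size, K, replace=False)] = 1
M1 = M2 = start.reshape(N, N)            # replicated starting geometry
cur = objective(M1, M2)
for T in np.geomspace(1e-2, 1e-4, 2000): # crude geometric cooling schedule
    C1, C2 = move(M1, rng), move(M2, rng)
    val = objective(C1, C2)
    if val < cur or rng.random() < np.exp(-(val - cur) / T):
        M1, M2, cur = C1, C2, val
```

Because the two candidate masks are perturbed independently, the union mask grows as the surveys de-replicate, which tends to lower the SGR of the common component.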
Figure 2 illustrates how the algorithm progresses when initialized with a replicated jittered subsampled (removing 80% of the sources) acquisition. From Figure 2(a), we observe that the co-located source positions (denoted by the black dots) are gradually replaced by non-coincident source locations for the baseline (blue dots) and monitor surveys (red dots). Even though the objective of Equation 3 decreases non-monotonically (see Figure 2(b)), the reconstruction SNR increases for the baseline and monitor surveys at the selected points.

Figure 2: Automatic time-lapse sampling mask generation. (a) Starting from a jittered replicated sampling mask, the algorithm produces masks that have smaller SGRs but are no longer replicated. (b) Non-monotonically decaying objective and reconstruction SNR, evaluated at points where the objective decreased by more than 0.003.

## Numerical validation

To confirm the benefits of optimized acquisition, we consider time-lapse data which differ by a complex gas cloud (Wason et al., 2017; Jones et al., 2012). Using finite-differences (Witte et al., 2019; Louboutin et al., 2022, 2019; Luporini et al., 2020), fully sampled (split-spread) 2D baseline and monitor surveys are simulated, each consisting of 300 sources/receivers sampled at \(12.5\,\mathrm{m}\). By using a single jittered subsampling mask, 80% of the sources are removed, yielding an average source sampling rate of \(62.5\,\mathrm{m}\) with 100% overlap. After running the optimization, the SGRs of the baseline/monitor surveys decrease from 0.346 to 0.268 and 0.262, respectively. The reduction in the overlap ratio (to 22%) leads to improvement in wavefield recovery via matrix completion (Kumar et al., 2015, 2017), which results in better SNRs, from \(6.55\,\mathrm{dB}\) to \(17.03\,\mathrm{dB}\) for the baseline and from \(6.67\,\mathrm{dB}\) to \(16.99\,\mathrm{dB}\) for the monitor. For reasons explained by Oghenekohwo et al. (2017) and Wason et al. (2017), time-lapse difference plots are not included, because the benefits of exact replication vanish when acquisition geometries undergo relatively small (1-2 m) random shifts.

Figure 3: Time-lapse wavefield reconstruction in the time domain. (a) Wavefield reconstruction from 80% jittered subsampling for the baseline (SNR = 6.55 dB) and monitor (SNR = 6.67 dB), and errors between the ground truth and the reconstructed wavefields. (b) The same but with optimized sampling masks, yielding improved recovery of the baseline/monitor surveys with SNR = 17.03 dB and 16.99 dB, respectively.

While these improvements are encouraging, the proposed optimization is approximate, and the produced binary masks will differ for different starting masks. To investigate this effect, 30 overlapping jittered masks are generated by removing 75% of the sources. By reducing the overlap to \(29\%\pm 8\%\), the optimized masks improve the SGRs, as can be observed from the violin plots in Figure 4(a). As before, the reductions in SGRs translate into improved SNRs, as can be seen in Figure 4(b). Compared to box plots, violin plots display the entire distribution, including lines for the median (long dashes) and the first and third quartiles (short dashes). We can make the following observations: _(i)_ the SGRs for the baseline/monitor surveys decrease significantly; _(ii)_ because of the larger number of sampled sources, the SGR for the common component is smaller and more narrowly distributed; _(iii)_ the distribution of the SGRs of the baseline/monitor surveys is also narrow compared to that of the initial jittered binary sampling masks; _(iv)_ the SNRs for the recovered baseline/monitor surveys improve significantly.

Figure 4: Violin plots for the SGRs (a) and recovery SNRs (b) for 30 independent experiments. These experiments show systematic reductions in SGR and significantly improved reconstruction SNRs for optimized surveys.

Even though the above results are encouraging and consistent with published reports that claim benefits of the JRM (Oghenekohwo et al., 2017; Wason et al., 2017; Yin et al., 2023), further scrutiny is in order. To this end, additional experiments were conducted to better understand the robustness of the proposed methodology. Aside from predictable behavior for different starting masks (Figure 4), we also found that the results are relatively insensitive to different runs of SA and to random perturbations in the optimized masks. The first observation implies that while SA may produce different masks, the SGRs remain very close, yielding wavefield reconstructions of near-equal quality. The second observation indicates that postplot errors from single-gridpoint shifts (12.5 m) in the worst scenario offset the gains made by the optimization; however, on average the improvements are mostly preserved, although with higher variability. The observed robustness of the presented method is consistent with the reported behavior of the JRM. Even though we only considered the on-the-grid case, the argument can be made that the improvements will carry over to the off-the-grid situation (Wason et al., 2017; Oghenekohwo and Herrmann, 2017; Lopez et al., 2016). However, to turn this claim into a more solid argument, we would have to extend the presented approach to the infinite-dimensional case, which is beyond the scope of this letter.
Optimized sampling masks were generated automatically by minimizing a new objective function consisting of spectral gap ratios for the baseline/monitor surveys and for the common component shared by the surveys. Aside from allowing for wave-simulation-free, and therefore computationally feasible, optimized acquisition design, the proposed method also reaffirms the suggestion that deliberate relaxation of survey replication may lead to improved quality of jointly inverted surveys. This claim is based solely on connectivity arguments for the acquisition geometries associated with the baseline/monitor surveys and the common component. Because the spectral gap ratio is extremely cheap to evaluate, it lends itself very well to being extended to multiple monitoring surveys and to 3D. Off-the-grid acquisition geometries are also conducive to being improved by spectral gap ratios, but we leave that extension to future work.
2306.02455
Raman Sideband Cooling of Molecules in an Optical Tweezer Array
Ultracold molecules, because of their rich internal structures and interactions, have been proposed as a promising platform for quantum science and precision measurement. Direct laser-cooling promises to be a rapid and efficient way to bring molecules to ultracold temperatures. For trapped molecules, laser-cooling to the quantum motional ground state remains an outstanding challenge. A technique capable of reaching the motional ground state is Raman sideband cooling, first demonstrated in trapped ions and atoms. In this work, we demonstrate for the first time Raman sideband cooling of molecules. Specifically, we demonstrate 3D Raman cooling for single CaF molecules trapped in an optical tweezer array, achieving average radial (axial) motional occupation as low as $\bar{n}_r=0.27(7)$ ($\bar{n}_z=7.0(10)$). Notably, we measure a 1D ground state fraction as high as 0.79(4), and a motional entropy per particle of $s = 4.9(3)$, the lowest reported for laser-cooled molecules to date. These lower temperatures could enable longer coherence times and higher fidelity molecular qubit gates desirable for quantum information processing and quantum simulation. With further improvements, Raman cooling could also be a new route towards molecular quantum degeneracy applicable to many laser-coolable molecular species including polyatomic ones.
Yukai Lu, Samuel J. Li, Connor M. Holland, Lawrence W. Cheuk
2023-06-04T20:10:52Z
http://arxiv.org/abs/2306.02455v1
# Raman Sideband Cooling of Molecules in an Optical Tweezer Array

###### Abstract

Ultracold molecules, because of their rich internal structures and interactions, have been proposed as a promising platform for quantum science and precision measurement. Direct laser-cooling promises to be a rapid and efficient way to bring molecules to ultracold temperatures. For trapped molecules, laser-cooling to the quantum motional ground state remains an outstanding challenge. A technique capable of reaching the motional ground state is Raman sideband cooling, first demonstrated in trapped ions and atoms. In this work, we demonstrate for the first time Raman sideband cooling of molecules. Specifically, we demonstrate 3D Raman cooling for single CaF molecules trapped in an optical tweezer array, achieving average radial (axial) motional occupation as low as \(\bar{n}_{r}=0.27(7)\) (\(\bar{n}_{z}=7.0(10)\)). Notably, we measure a 1D ground state fraction as high as \(0.79(4)\), and a motional entropy per particle of \(s=4.9(3)\), the lowest reported for laser-cooled molecules to date. These lower temperatures could enable longer coherence times and higher fidelity molecular qubit gates desirable for quantum information processing and quantum simulation. With further improvements, Raman cooling could also be a new route towards molecular quantum degeneracy applicable to many laser-coolable molecular species including polyatomic ones.

Ultracold molecules have been proposed as a new platform for exploring many areas in physics ranging from simulation of quantum many-body Hamiltonians, to quantum information processing, to precision measurements in searches for physics beyond the Standard Model [1; 2; 3; 4]. Yet, cooling and fully controlling molecules have been long-standing experimental challenges. One route to produce ultracold molecules is via assembly from atoms, for which cooling techniques are well-developed. This approach has successfully been used to produce bialkali molecules, enabling explorations in ultracold chemistry [5; 6] and the creation of quantum-degenerate molecular gases well-suited for studying long-ranged interacting many-body systems [7; 8]. In contrast to assembly from ultracold atoms, methods that directly cool could be broadly applicable to a large variety of molecular species including polyatomic ones. In particular, direct laser-cooling of molecules has seen great advances recently. Starting with molecular magneto-optical traps near the Doppler limit of \(\sim 100\,\mu\)K [9; 10; 11; 12; 13], sub-Doppler cooling techniques have allowed experiments to enter into the \(\mu\)K regime [10; 13; 14; 15; 16; 17; 18; 19]. Importantly, sub-Doppler cooling has enabled optically trapped molecular samples with record densities [18; 20; 14] and arrays of single molecules trapped in optical tweezers [21; 22; 23]. Access to molecular samples at even lower temperatures could enable new possibilities. For example, molecular tweezer arrays have recently emerged as a promising platform for quantum science. Notably, recent work has shown high-fidelity control over molecular positions and internal states, coherent dipolar interactions, and implementation of an entangling two-qubit gate [22; 23]. Yet, residual thermal motion limits the achievable coherence times and gate fidelities. These limitations can be largely eliminated by cooling to the motional ground state. One technique capable of cooling to the motional ground state is Raman sideband cooling (RSC) [24].
First, a Raman process transfers a molecule initially in internal state \(\ket{\uparrow}\) to \(\ket{\downarrow}\) while removing \(\Delta n\) quanta of motional energy. Subsequently, optical pumping reinitializes the internal state to \(\ket{\uparrow}\) while largely preserving the motional state. By iterating over these two steps, cooling is achieved. RSC was first proposed and demonstrated for trapped ions and atoms in optical lattices [25; 26; 27], and has since been used to cool single atoms in tweezer traps to their motional ground states [28; 29], for imaging in quantum gas microscopes [30; 31], and to produce single molecules via assembly from two RSC-cooled atoms [32; 33; 34]. RSC also provides a rapid and efficient (low-loss) all-optical method to create atomic Bose-Einstein condensates [35; 36], circumventing the need for evaporation. Raman sideband cooling of optically trapped and laser-cooled molecules faces two main challenges that arise from the complex internal structure of molecules [37]. First, state-dependent optical trapping leads to inhomogeneous broadening of Raman transitions, preventing resolved addressing of cooling sidebands and decreasing transfer efficiency. Second, efficient optical pumping is difficult because of the large number of molecular states and the degradation of free-space selection rules in deep optical traps. In this work, we demonstrate Raman sideband cooling of molecules for the first time. We overcome the above challenges by devising a RSC scheme for CaF molecules that provides both narrow Raman transitions and efficient optical pumping. Our scheme does not require high magnetic fields, in contrast to the one proposed in [37]. We demonstrate 3D Raman cooling for CaF molecules trapped in an optical tweezer array and achieve average motional quanta as low as \(\bar{n}_{r}=0.27(7)\) and \(\bar{n}_{z}=7.0(10)\) in the radial and axial directions, respectively. Our work begins with single CaF molecules that are cooled by \(\Lambda\)-enhanced gray molasses [14] and trapped in a 1D array of linearly polarized optical tweezers [21, 22, 38]. Raman beams are sent along the radial and axial directions, and are near-detuned from the \(X^{2}\Sigma(v=0,N=1)\to B^{2}\Sigma(v=0,N=0)\) transition (Fig. 1). Optical pumping is achieved on the \(X^{2}\Sigma(v=0,N=1)\to A^{2}\Pi_{1/2}(v=0,J=1/2,+)\) transition. We identify a suitable pair of internal states \(\{\ket{\uparrow},\ket{\downarrow}\}\) for RSC. Constrained by optical pumping, we consider optically cyclable states from \(X^{2}\Sigma(v=0,N=1)\). In free space, selection rules enable optical pumping into the stretched states \(\ket{\pm}=\ket{N=1,J=3/2,F=2,m_{F}=\pm 2}\). Specifically, these states are dark to \(\sigma_{\pm}\) and \(\pi\) light addressing the \(X^{2}\Sigma(v=0,N=1)\to A^{2}\Pi_{1/2}(v=0,J=1/2,+)\) transition. In deep tweezer traps, the trapping light can admix in bright states and modify selection rules, degrading optical pumping. The admixture can be reduced by providing a well-defined quantization axis with a magnetic field \(\vec{B}\) along the polarization axis of the trapping light [29, 37]. However, at certain fields, because of tensor ac Stark shifts, level crossings can occur, increasing bright state admixtures. Our calculations indicate that \(\ket{\uparrow}=\ket{-}\) is immune from these crossings at low fields and over a large range of trap depths, therefore robustly providing low bright state admixtures [39]. 
In particular, the bright state population admixture remains below \(10^{-4}\) even for traps as deep as \(k_{B}\times 2\,\)mK at \(B=4.4\,\)G. For \(\ket{\downarrow}\), we seek a state that is connected by a two-photon Raman transition to \(\ket{\uparrow}\) and has minimal differential ac Stark shifts with \(\ket{\uparrow}\). Our calculations indicate that \(\ket{\downarrow}=\ket{N=1,J=3/2,F=1,m_{F}=0}\) satisfies these requirements. In particular, while most \(N=1\) states experience fractional differential Stark shifts at the \(10^{-1}\) level, those between \(\ket{\uparrow}\) and \(\ket{\downarrow}\) are suppressed to \(10^{-2}\) even in deep traps with depths \(\sim k_{B}\times 1\,\)mK. The shifts can be further reduced by changing the angle \(\theta\) between \(\vec{B}\) and the tweezer polarization, with \(10^{-3}\) fractional shifts possible at specific "magic" angles. For example, at a trap depth of \(k_{B}\times 0.3\,\)mK and a magnetic field of \(B=5.5\) G, the magic angle is \(\theta\approx 57^{\circ}\).

We experimentally verify the properties of \(\{\ket{\uparrow},\ket{\downarrow}\}\) through Raman spectroscopy and optical pumping dynamics. We first probe the inhomogeneous broadening of Raman transitions arising largely from differential Stark shifts. At a trap depth \(U=k_{B}\times 326(7)\,\mu\)K, we measure the linewidth \(\Gamma\) of the carrier (\(\Delta n=0\)) transition using co-propagating Raman beams. We prepare molecules in \(\ket{\uparrow}\) and measure the population in \(\ket{\downarrow}\) (\(P_{\downarrow}\)) versus the two-photon detuning \(\delta\). At \(\theta=0^{\circ}\) and \(B=4.4\,\)G, we measure a linewidth of \(\Gamma=2\pi\times 26.5(3)\,\)kHz. This is below the radial trapping frequency \(\omega_{r}=2\pi\times 117.3(4)\,\)kHz, allowing radial sidebands (\(\Delta n\neq 0\)) to be resolved. At \(B=5.5\) G, as a function of \(\theta\), we find that \(\Gamma\) reaches a minimum of \(\approx 2\pi\times 7\,\)kHz at the magic angle \(\theta_{m}=56.3(3)^{\circ}\), as predicted (Fig. 2(a)). Notably, at \(\theta_{m}\), \(\hbar\Gamma/U\approx 10^{-3}\), and the linewidth \(\Gamma\) is below the axial trapping frequency \(\omega_{z}\approx 2\pi\times 26\,\)kHz, allowing axial sidebands to be resolved [39].

Figure 1: Raman Sideband Cooling Scheme. (a) Motional-changing two-photon Raman transitions between \(\ket{\uparrow}\) and \(\ket{\downarrow}\) are driven using laser beams detuned by \(\Delta\approx-2\pi\times 42\) GHz from the \(X^{2}\Sigma(v=0,N=1)\to B^{2}\Sigma(v=0,N=0)\) transition. \(\delta\) denotes the two-photon detuning. Optical pumping into \(\ket{\uparrow}\) is performed using light addressing the \(X^{2}\Sigma(v=0,N=1)\to A^{2}\Pi_{1/2}(v=0,J=1/2,+)\) transition. (b) A magnetic field \(\vec{B}\) is applied in the radial plane, oriented at an angle \(\theta\) relative to the polarization axis of the tweezer light (\(\vec{\varepsilon}\parallel\hat{x}\)). Raman laser beams \(R_{1},R_{2}\) (\(R_{3},R_{4}\)) address the radial (axial) directions. \(R_{1}\) and \(R_{2}\) are optionally retro-reflected. Optical pumping light (OP) is applied radially.

Figure 2: Raman Linewidths and Optical Pumping Characterization. (a) Raman linewidths \(\Gamma\) versus \(\theta\), measured at a tweezer depth of \(k_{B}\times 326(7)\,\mu\)K. \(\Gamma\) is smallest at the magic angle \(\theta_{m}=56.3(3)^{\circ}\), indicated by the dashed vertical line. Representative spectra versus two-photon detuning \(\delta\), along with Lorentzian fits (solid), are shown in the sub-panels. (b) \(\ket{\uparrow}\) population \(P_{\uparrow}\) versus optical pumping time \(t\) at a tweezer depth of \(k_{B}\times 930(20)\,\mu\)K. Red triangles (blue circles) show data for \(\theta=0^{\circ}\) (\(\theta=\theta_{m}\)). Solid curves show simultaneous fits to the early- and late-time dynamics. (c) The figures-of-merit \(\kappa\) and \(P_{s}\) at \(\theta_{m}\) versus trap depth \(U/U_{0}\) (\(U_{0}=k_{B}\times 930(20)\,\mu\)K) are shown by blue circles, with solid lines as guides to the eye. The corresponding values for \(\theta=0^{\circ}\) are shown by the red shaded regions and indicate significantly better optical pumping.

Next, we measure optical pumping dynamics. Starting with molecules initially distributed over all 12 hyperfine states of \(X^{2}\Sigma(v=0,N=1)\), we measure the \(\ket{\uparrow}\) population (\(P_{\uparrow}\)) after variable durations of optical pumping. At short times, \(P_{\uparrow}\) increases as molecules are pumped into \(\ket{\uparrow}\), and subsequently saturates to \(P_{s}\). At long times, \(P_{\uparrow}\) decreases due to molecular loss arising from heating or from decay into undetected states due to imperfect darkness of \(\ket{\uparrow}\) (Fig. 2(b)). The darkness of \(\ket{\uparrow}\), essential for efficient optical pumping, can therefore be quantified by \(\kappa=\tau_{1}/\tau_{2}\), where \(\tau_{1}\) (\(\tau_{2}\)) is the \(1/e\) rise (fall) time of \(P_{\uparrow}\). \(P_{s}\) provides a complementary measure of optical pumping efficiency. When \(\theta=0^{\circ}\) and \(B=4.4\,\)G, \(\kappa\approx 8\times 10^{3}\) and \(P_{s}\approx 0.8\), even at a deep trap depth of \(k_{B}\times 930(20)\,\mu\)K (Fig. 2(c)). In comparison, in the magic configuration (\(\theta=\theta_{m}\), \(B=5.5\) G), we find that both \(\kappa\) and \(P_{s}\) decrease with increasing trap depth, indicating degrading selection rules. For all tweezer depths explored, \(\kappa\) is a factor of 6 to 20 lower compared to that at \(\theta=0^{\circ}\). This shows that optimal optical pumping (at \(\theta=0^{\circ}\)) and optimal Raman linewidths (at \(\theta=\theta_{m}\)) cannot be simultaneously achieved.

For Raman sideband cooling, deep tweezer depths help preserve the motional state during optical pumping, which is critical for cooling. Although the magic configuration at \(\theta_{m}\) provides the narrowest Raman transitions, optical pumping is severely degraded at deep depths. On the other hand, at \(\theta=0^{\circ}\), optical pumping is efficient even in deep tweezers. The Raman linewidth is slightly broader but sufficient to resolve the radial sidebands. We therefore choose to perform Raman cooling at \(\theta=0^{\circ}\).

We next verify Raman motional coupling by driving sideband transitions at \(\theta=0^{\circ}\). The Raman coupling between different motional states is characterized by the Lamb-Dicke parameter \(\eta=|\Delta\vec{k}|l/\sqrt{2}\), where \(\Delta\vec{k}\) is the difference in wave-vectors between the two Raman beams and \(l=\sqrt{\hbar/(m\omega)}\) is the harmonic oscillator length, \(m\) being the molecular mass and \(\omega\) the trapping frequency.
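As a consistency check of the Lamb-Dicke parameters quoted below, one can evaluate \(\eta=|\Delta\vec{k}|l/\sqrt{2}\) numerically. The sketch assumes Raman light near the \(\approx 531\) nm B-X transition, an orthogonal radial beam pair (\(|\Delta\vec{k}|=\sqrt{2}\,k\)), and a counter-propagating axial pair (\(|\Delta\vec{k}|=2k\)); these geometries are inferred from the quoted values rather than stated explicitly in the text.

```python
import numpy as np

hbar = 1.054571817e-34            # J s
amu = 1.66053907e-27              # kg
m = 59 * amu                      # CaF mass (40Ca 19F)
k = 2 * np.pi / 531e-9            # wavenumber of ~531 nm Raman light

def eta(omega, dk):
    """Lamb-Dicke parameter eta = |dk| l / sqrt(2), l = sqrt(hbar/(m omega))."""
    return dk * np.sqrt(hbar / (m * omega)) / np.sqrt(2)

omega_r = 2 * np.pi * 117.3e3     # radial trap frequency (rad/s)
omega_z = 2 * np.pi * 26e3        # axial trap frequency (rad/s)
print(eta(omega_r, np.sqrt(2) * k))   # ~0.45 (orthogonal pair); quoted 0.46
print(eta(omega_z, 2 * k))            # ~1.36 (counter-propagating); quoted 1.34
```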
With molecules initially prepared in \(\left|\uparrow\right\rangle\), we probe radial and axial motional state transfer at \(U=k_{B}\times 326(7)\,\mu\)K, \(B=4.4\,\)G. We pulse on Raman beams and measure \(P_{\downarrow}\) versus \(\delta\). The radial Lamb-Dicke parameter is \(\eta_{r}=0.46\) at this depth, allowing us to observe resolved radial sidebands at \(\delta=(\Delta n_{r})\omega_{r}\) up to \(|\Delta n_{r}|=4\) (Fig. 3(a)), where \(\omega_{r}=2\pi\times 117.3(4)\,\)kHz. Axially, the weaker confinement leads to a larger Lamb-Dicke parameter of \(\eta_{z}=1.34\), allowing significant motional coupling up to \(|\Delta n_{z}|\sim 10\) (Fig. 3(b)). We observe that the axial spectrum is significantly broader than the measured carrier Raman linewidth, indicating that motion-changing Raman transfers are indeed occurring. Having demonstrated motional state-changing Raman transfer, we next construct a radial cooling sequence consisting of two discrete steps: optical pumping and Raman transfer on a cooling (\(\Delta n<0\)) sideband. We optically pump at a deep tweezer depth of \(U_{0}=k_{B}\times 930(20)\,\mu\)K to minimize increase in motional quanta. We estimate that \(\sim 19\) photons are required for optical pumping, increasing the energy by an equivalent of \(\Delta n_{r}^{\mathrm{op}}\sim 1.3\) radial quanta. To attain net cooling, we therefore address the \(\Delta n_{r}=-2\) sideband. To obtain sufficient motional coupling and reduce inhomogeneous broadening, we perform Raman transfer at a reduced tweezer depth \(U_{R}=k_{B}\times 326(7)\,\mu\)K, the same depth where we measured Raman linewidths. Each cooling cycle has a duration of \(0.65\,\)ms and the trap depths are ramped adiabatically over \(0.2\,\)ms between \(U_{0}\) and \(U_{R}\) (Fig. 3(c)). To verify cooling, we first indirectly probe the temperature via adiabatic reduction of the tweezer depth [40; 41]. Hot molecules are spilled progressively as the trap is lowered. At a fixed final trap depth \(U_{\mathrm{spill}}\), the surviving fraction \(f\) increases with lower temperatures. After 90 cycles of radial cooling (RC), we indeed observe that \(f\) increases, indicating radial cooling (Fig. 3(d)). We next add axial cooling by switching on additional axial Raman beams with the same two-photon detuning. This radial/axial cooling sequence (RAC) simultaneously Figure 3: Raman Sideband Spectra at \(\theta=0^{\circ}\) and Adiabatic Trap Lowering Curves. (a) Population transfer \(P_{\downarrow}\) using radial Raman beams versus \(\delta\), the two-photon detuning from the carrier (\(\Delta n=0\)). The solid blue line shows a fit using a sum of nine Lorentzians with an offset. The black (red) dashed line marks the carrier (\(\Delta n_{r}=-2\) sideband). (b) Population transfer \(P_{\downarrow}\) using axial Raman beams versus \(\delta\). Solid curve is a guide to the eye. The red dashed line shows the two-photon detuning \(\delta=2\omega_{r}\) used during cooling. (c) Raman cooling sequence consists of (i) Raman transfers at a tweezer depth of \(U_{R}=k_{B}\times 326(7)\,\mu\)K, and (ii) optical pumping at a higher depth of \(U_{0}=k_{B}\times 930(20)\,\mu\)K. (d) Probing temperature via adiabatic trap lowering. The tweezer depth is lowered to \(U_{\mathrm{spill}}\) over \(1\,\)ms, held \(10\,\)ms to allow hot molecules to escape, and increased back to full depth for detection. 
The surviving fraction \(f\) versus \(U_{\mathrm{spill}}/U_{0}\) is shown for no cooling (blue circles), radial cooling (RC) (green diamonds), and radial/axial cooling (RAC) (red squares). Inset: \(f\) versus number of cooling cycles \(N\) at a fixed lower depth \(U_{\mathrm{spill}}=k_{B}\times 0.72(2)\,\mu\)K (dashed line in main plot). With 90 cycles of RAC, \(f\) increases further compared to radial cooling alone (Fig. 3(d)), indicating successful cooling in all directions. The cooling rate of RAC can be probed by measuring \(f\) versus the number of cooling cycles \(N\). Fixing \(U_{\rm spill}=k_{B}\times 0.72(2)\,\mu\)K, we find a \(1/e\) cooling timescale of \(N_{c}=51(14)\) cycles. We also quantify loss during cooling. With Raman and optical pumping beams off but keeping the tweezer depth ramps, we observe a \(1/e\) lifetime of \(1.28(15)\times 10^{3}\) cycles. Remarkably, with Raman cooling on, the lifetime increases to \(2.7(4)\times 10^{3}\) cycles, corresponding to a fractional loss of \(3.66(6)\times 10^{-4}\) per cycle. This difference could arise from Raman cooling compensating for technical heating, or from Raman beams repumping molecules that decay into \(X^{2}\Sigma(v=0,N=3)\) due to off-resonant scattering of tweezer light [38]. Although adiabatic trap reduction provides qualitative evidence of 3D cooling, it does not directly provide a quantitative temperature. For thermometry, we rely on Raman spectroscopy. We first cool at \(\theta=0^{\circ}\), and then perform spectroscopy in the magic configuration (\(\theta=\theta_{m}\) and \(B=5.5\,\)G) to minimize Raman broadening. Along the radial direction, the \(\Delta n_{r}=\pm 1\) sidebands are well-resolved. Given the radial Lamb-Dicke parameter and the temperature regime, the Raman motional coupling depends weakly on the motional state \(n_{r}\)[39]. This allows us to observe coherent \(\Delta n_{r}=\pm 1\) sideband transfer and also simplifies interpretation of spectra [39]. We apply \(\pi\)-pulses and measure the resulting transfer around each sideband (Fig. 4(a)). The ratio \(\mathcal{A}\) between the peak transfer fractions of the heating versus cooling sidebands allows us to extract 1D ground state fractions and temperatures. Assuming either uniform motional coupling or thermal occupation, we find a 1D radial ground state fraction of \(P_{0}=1-1/\mathcal{A}=0.60(6)\) after radial/axial cooling (RAC). Assuming only a thermal distribution, the mean radial motional occupation \(\bar{n}_{r}\) is given by \(\bar{n}_{r}=1/(\mathcal{A}-1)\). We find \(\bar{n}_{r}=0.66\,(16)\) (\(\bar{T}_{r}=k_{B}T_{r}/(\hbar\omega_{r})=1.1(2)\)) compared to \(\bar{n}_{r}=1.4\,(4)\) (\(\bar{T}_{r}=1.8(5)\)) before cooling.
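The two asymmetry relations above are straightforward to apply. A minimal sketch, assuming a purely thermal 1D distribution; the asymmetry value \(\mathcal{A}=2.5\) below is illustrative, chosen to reproduce the post-RAC numbers quoted above:

```python
# Sideband-asymmetry thermometry for a 1D thermal state:
# A = (heating peak)/(cooling peak), P0 = 1 - 1/A, nbar = 1/(A - 1).
def thermometry(A: float):
    return 1.0 - 1.0 / A, 1.0 / (A - 1.0)

P0, nbar = thermometry(2.5)   # A = 2.5 is an illustrative value
print(P0, nbar)               # -> 0.60, 0.67 (cf. P0 = 0.60(6), nbar = 0.66(16))
```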
We probe the axial temperature similarly. Although we can observe resolved axial sidebands spaced up to \(|\Delta n_{z}|\sim 10\), extracting a temperature is difficult due to the complex lineshape arising from high axial temperatures and a large Lamb-Dicke parameter [39]. Nevertheless, robust thermometry is possible by probing high-order sidebands in the unresolved regime [42, 29]. The wings of the spectra become Gaussian, and the spectra can be understood as Doppler-sensitive two-photon transfer [39]. By fitting the wings, one can robustly extract a temperature. Experimentally, we increase the Raman Rabi coupling such that the wings of the spectra appear smooth (Fig. 4(b)). Fitting the spectra gives an axial temperature of \(\bar{T}_{z}=k_{B}T_{z}/(\hbar\omega_{z})=7.5^{+1.0}_{-0.7}\) (\(\bar{n}_{z}=7.0^{+1.0}_{-0.7}\)) after RAC, compared to \(\bar{T}_{z}=26.5^{+4.0}_{-3.1}\) (\(\bar{n}_{z}=26^{+4}_{-3}\)) before cooling. Lastly, we demonstrate a way to reach even lower temperatures. In our RAC scheme, since \(\Delta n_{r}=-2\) sidebands are addressed, molecules can accumulate in \(n_{r}=1\), limiting the radial ground state fraction. To circumvent this while maintaining net cooling, we apply an additional radial cooling sequence, where we interlace radial cooling cycles that separately address the \(\Delta n_{r}=-2\) and \(\Delta n_{r}=-1\) sidebands (Fig. 4(c)). With an additional 30 cycles of interlaced radial cooling (IRC), we observe that the 1D radial ground state fraction increases to \(P_{0}=0.79(4)\), corresponding to an average radial occupation of \(\bar{n}_{r}=0.27(7)\) (\(\bar{T}_{r}=0.65(9)\)). Because the axial direction is not cooled during IRC, the axial temperature increases to \(\bar{T}_{z}=13.5(14)\) (\(\bar{n}_{z}=13.0(14)\)), still below the initial temperature. When the motion is highly quantized, as in our case, a useful figure-of-merit in addition to temperature is the motional entropy. This quantifies how many motional states are populated and indicates the level of control over the initial motional state. With RAC, we obtain a motional entropy per particle of \(s=5.2^{+0.5}_{-0.4}\) compared to \(s=7.5^{+0.8}_{-0.7}\) before cooling. With IRC, we further reduce the motional entropy per particle to \(s=4.9(3)\), the lowest reported to date for laser-cooled molecules. Figure 4: Raman Thermometry. (a,c) Radial Raman spectra showing the carrier along with \(\Delta n_{r}=\pm 1\) sidebands. Grey squares and light gray circles show the carrier after and before cooling, respectively. Red (blue) squares show the \(\Delta n_{r}=1\) (\(\Delta n_{r}=-1\)) sideband after cooling; light red (light blue) circles show the \(\Delta n_{r}=1\) (\(\Delta n_{r}=-1\)) sideband before cooling. (b,d) Unresolved axial Raman spectra. Blue squares (light blue circles) show spectra after (before) cooling. (a,b) Spectra after 90 cycles of RAC (darker colors) compared to spectra before Raman cooling (lighter colors). (c,d) Spectra after 90 cycles of RAC and 30 cycles of IRC (darker colors) and spectra before Raman cooling (lighter colors). For radial (axial) data, solid lines show fits to Lorentzian (Gaussian) distributions with a vertical offset. For all panels, the dashed line and the shaded region (\(\pm 1\) standard deviation) show the independently measured offset without Raman beams. Since most of the entropy is in the axial motion, lower entropies and temperatures could be reached with increased axial confinement and further optimized Raman pulse sequences [42; 43; 44]. The motional entropy also allows us to quantify the efficiency of our Raman cooling scheme. In evaporative cooling of atomic and molecular gases, a common efficiency metric is \(\gamma=-d\ln(\text{PSD})/d\ln(N)\) where \(N\) is the particle number, and PSD is the phase space density [45; 46; 47; 48]. One can generalize this metric to \(\gamma_{q}=ds/d\ln(N)\), where \(s\) is the motional entropy per particle [39]. \(\gamma_{q}\) coincides with \(\gamma\) in the classical regime and is convenient when in the highly quantized regime. Using our loss measurements, we estimate that \(\gamma_{q}=70(28)\) for RAC, indicating highly efficient cooling with little loss.
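For independent harmonic modes that are each thermal, the entropy per particle (in units of \(k_{B}\)) is \(s=\sum_{i}[(\bar{n}_{i}+1)\ln(\bar{n}_{i}+1)-\bar{n}_{i}\ln\bar{n}_{i}]\). A minimal sketch under this assumption, taking both radial axes to share \(\bar{n}_{r}\), reproduces the quoted entropies:

```python
# Motional entropy per particle (units of k_B) for independent thermal modes.
import numpy as np

def s_thermal(nbars):
    n = np.asarray(nbars, dtype=float)
    return float(np.sum((n + 1) * np.log(n + 1) - n * np.log(n)))

print(s_thermal([0.66, 0.66, 7.0]))    # ~5.2  (after RAC)
print(s_thermal([1.4, 1.4, 26.0]))     # ~7.5  (before cooling)
print(s_thermal([0.27, 0.27, 13.0]))   # ~4.9  (after RAC + IRC)
```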
Our demonstration of Raman sideband cooling of molecules in this work opens up several new possibilities. In the near term, the lower temperatures achieved could significantly improve coherence times and provide more coherent dipolar interactions between molecules, enabling high-fidelity quantum gates and quantum simulation with long evolution times. Longer term, our work opens the door to direct laser-cooling of trapped molecules to their 3D motional ground states. This would be a key step towards full quantum control of molecules, and could enable efficient production of ensembles with low motional entropy suited for quantum simulation of itinerant many-body systems. Potentially, this could provide an all-optical route towards quantum degeneracy [35; 36] that may be broadly applicable to other laser-coolable molecules including polyatomic ones. ## Acknowledgements We thank Jeff Thompson, Waseem Bakr, and the Bakr group for fruitful discussions. This work is supported by the National Science Foundation under Grant No. 2207518. L.W.C. acknowledges support from the Sloan Foundation. S.J.L. acknowledges support from the Princeton Quantum Initiative. ## Methods ### Preparation of Molecules CaF molecules in the \(X^{2}\Sigma(v=0,N=1)\) manifold are created in a single-stage cryogenic buffer gas cell [49], slowed via chirped slowing, and loaded into a DC magneto-optical trap (MOT). The MOT is subsequently switched off, \(\Lambda\)-cooling is applied, and the molecules are loaded into an optical dipole trap (ODT) with the aid of a repulsive ring trap in the presence of \(\Lambda\)-cooling [50]. The molecules are optically transported and loaded into a reconfigurable array of 37 optical tweezers, which are created with focused laser beams of \(781\,\mathrm{nm}\) light projected vertically through a microscope objective [38]. For normalization, after tweezer loading, the occupation of the tweezers is detected non-destructively with \(\Lambda\)-imaging [22]. After non-destructive detection, the molecules are spread out over all 12 hyperfine states in the \(X^{2}\Sigma(v=0,N=1)\) manifold. ### State-Resolved Detection To probe the population in \(\ket{\downarrow}\), we first lower the trap depth to \(U_{\text{MW}}=k_{B}\times 130(3)\,\mu\text{K}\) and rotate the magnetic field (\(B=4.4\,\text{G}\)) to \(\theta=53^{\circ}\). Microwaves at \(\sim 20.5\) GHz are used to transfer the population from \(\ket{\downarrow}=\ket{N=1,J=3/2,F=1,m_{F}=0}\) to \(\ket{N=0,J=1/2,F=1,m_{F}=-1}\) via a Landau-Zener sweep. A pulse of light resonant with the \(X^{2}\Sigma(v=0,N=1)\to A^{2}\Pi_{1/2}(v=0,J=1/2,+)\) transition removes any molecules remaining in \(X^{2}\Sigma(v=0,N=1)\)[22; 38]. A second Landau-Zener microwave sweep transfers molecules into \(\ket{N=1,J=1/2,F=0,m_{F}=0}\). Finally, the population is measured via \(\Lambda\)-imaging [14], which detects all molecules in \(X^{2}\Sigma(v=0,N=1)\). An analogous approach is used to measure population in \(\ket{\uparrow}\). ### Optical Pumping For optical pumping at \(\theta=0^{\circ}\), we use a single beam (OP) in the radial (horizontal) plane. The beam makes an angle of \(42^{\circ}\) relative to the tweezer polarization axis \(\hat{\varepsilon}\) (\(\hat{x}\)). It is optimized to have minimal \(\sigma_{+}\) component along the quantization axis set by the magnetic field (along \(\hat{x}\) at \(\theta=0^{\circ}\)). For the data exploring optical pumping at the magic angle \(\theta=\theta_{m}\) (Fig.
2(b,c)), we use a second optical pumping beam (OP2) propagating along \(\hat{y}\). This beam is optimized to have minimal \(\sigma_{+}\) component along the quantization axis set by the magnetic field. ### Raman Coupling To achieve Raman coupling between \(\ket{\uparrow}\) and \(\ket{\downarrow}\), we use light with two frequency components \(\omega_{1}\) and \(\omega_{2}\) detuned near the \(X^{2}\Sigma(v=0,N=1)-B^{2}\Sigma(v=0,N=0)\) transition (single-photon detuning of \(\Delta=-2\pi\times 42\,\)GHz). \(\omega_{1}(\omega_{2})\) nominally couples to \(\ket{\uparrow}(\ket{\downarrow})\). These components are generated by acousto-optical modulators (AOMs), allowing the two-photon detuning \(\delta\) to be varied. To achieve motional coupling in the radial (horizontal) \(xy\)-plane, we send beams \(R_{1}\) and \(R_{2}\) along the \(\hat{x}+\hat{y}\) and \(\hat{x}-\hat{y}\) directions, respectively. \(R_{1}(R_{2})\) carries a single frequency component \(\omega_{1}(\omega_{2})\) and is linearly polarized vertically along \(\hat{z}\). The two beams are optionally retro-reflected. The retro-reflections are controlled by shutters that can be actuated mid-sequence. To address axial motion, we send beams \(R_{3}\) and \(R_{4}\) along \(-\hat{z}\) and \(\hat{z}\), respectively. The beam \(R_{3}\) is linearly polarized along \(\hat{y}\), while \(R_{4}\) is linearly polarized perpendicular to the magnetic field at \(\theta=\theta_{m}\). ### Spectroscopy Sequences We use the following sequence for the data presented in Fig. 2(a). First, molecules are pumped into \(\ket{\uparrow}\) by turning on OP for \(2\,\)ms in a bias magnetic field of \(B=4.4\,\)G at \(\theta=0^{\circ}\). Next, the tweezer depth is decreased to \(U_{R}\) and the magnetic field is rotated to the desired angle \(\theta\). Both \(\omega_{1}\) and \(\omega_{2}\) are delivered via a single vertical beam \(R_{4}\) along the tweezer axis. A Raman pulse is applied for \(170\,\mu\)s with an estimated two-photon Rabi coupling of \(\Omega_{R}\approx 2\pi\times 3\) kHz. Finally, the population in \(\ket{\downarrow}\) is measured. For all spectra in Figs. 3 and 4, the radial Raman beams \(R_{1}\) and \(R_{2}\) are not retro-reflected. All spectra in Fig. 3 are taken in a bias field of \(B=4.4\,\)G at \(\theta=0^{\circ}\). For the radial spectrum in Fig. 3(a), radial beams \(R_{1}\) and \(R_{2}\) are applied for \(1\,\)ms, probing the radial direction \(\hat{y}\). For the axial spectrum in Fig. 3(b), \(\omega_{1}\) is delivered via \(R_{3}\) and \(\omega_{2}\) via \(R_{4}\), and the beams are applied for \(1\,\)ms. All spectra in Fig. 4 are taken in a magnetic field of \(B=5.5\,\)G at \(\theta=\theta_{m}\). For radial spectra in Fig. 4(a,c), \(R_{1}\) and \(R_{2}\) are used. The beams are applied for \(30\,\mu\)s (carrier) or \(50\,\mu\)s (\(\Delta n_{r}=\pm 1\) sidebands), with \(\Omega_{R}\approx 2\pi\times 20\,\)kHz. For axial spectra in Fig. 4(b,d), \(\omega_{1}\) is delivered via \(R_{3}\) and \(\omega_{2}\) via \(R_{4}\). The beams are applied for \(100\,\mu\)s, with an estimated Rabi frequency of \(\Omega_{R}\approx 2\pi\times 20\,\)kHz. ### Cooling Sequences Prior to Raman sideband cooling, we optically pump molecules into \(\ket{\uparrow}\) by applying beam OP for \(2\,\)ms. For the radial/axial cooling sequence (RAC), both radial beams \(R_{1}\) and \(R_{2}\) are on and retro-reflected. Axially, we send \(\omega_{1}\) into \(R_{3}\), and \(\omega_{2}\) into \(R_{4}\).
We perform Raman transfer on the radial \(\Delta n_{r}=-2\) sideband for \(150\,\mu\)s, and optical pumping with OP for \(150\,\mu\)s. The Raman transfer occurs at tweezer depth \(U_{R}\), and the optical pumping occurs at depth \(U_{0}\). The tweezer depth ramps occur over \(200\,\mu\)s between each of these steps. For the interlaced radial cooling sequence (IRC), we only turn on radial beams \(R_{1}\) and \(R_{2}\), with \(R_{2}\) retro-reflected. Raman transfer on the \(\Delta n_{r}=-2\) (\(\Delta n_{r}=-1\)) sideband occurs for \(150\,\mu\)s (\(300\,\mu\)s).
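For concreteness, the two cycle types described above can be summarized as ordered step lists. The encoding below is purely illustrative (the step names and structure are ours, not the authors' control software); the durations and depths are those stated in the text.

```python
# Illustrative summary of one RAC cycle (not the authors' control code).
RAC_CYCLE = [
    ("ramp_depth",     {"duration_us": 200, "target": "U_R"}),
    ("raman_transfer", {"duration_us": 150, "sideband": "dn_r = -2"}),
    ("ramp_depth",     {"duration_us": 200, "target": "U_0"}),
    ("optical_pump",   {"duration_us": 150, "beam": "OP"}),
]

# IRC alternates the addressed radial sideband between successive cycles:
# dn_r = -2 transfers last 150 us, dn_r = -1 transfers last 300 us.
```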
2308.05696
A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment
Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and human preferences. Extensive research has highlighted the importance of the quality and diversity of instruction data. However, the impact of data complexity, as a crucial metric, remains relatively unexplored from three aspects: (1) where the sustainability of performance improvements with increasing complexity is uncertain; (2) whether the improvement brought by complexity merely comes from introducing more training tokens; and (3) where the potential benefits of incorporating instructions from easy to difficult are not yet fully understood. In this paper, we propose Tree-Instruct to systematically enhance the instruction complexity in a controllable manner. By adding a specified number of nodes to instructions' semantic trees, this approach not only yields new instruction data from the modified tree but also allows us to control the difficulty level of modified instructions. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements of LLMs. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key.
Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang
2023-08-10T16:58:51Z
http://arxiv.org/abs/2308.05696v2
# A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment ###### Abstract Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and user preferences. Extensive research has highlighted that enhancing the quality and diversity of instruction data consistently improves performance. However, the impact of data complexity, as a crucial metric, remains relatively unexplored in three aspects: (1) scaling law, where the sustainability of performance improvements with increasing complexity is uncertain, (2) additional tokens, whether the improvement brought by complexity comes from introducing more training tokens, and (3) curriculum tuning, where the potential advantages of incorporating instructions ranging from easy to difficult are not yet fully understood. In this paper, we propose _tree-instruct_ to systematically enhance the complexity of instruction data in a controllable manner. This approach adds a specified number of nodes into the instruction semantic tree, yielding new instruction data based on the modified tree. By adjusting the number of added nodes, we can control the difficulty level in the modified instruction data. Our preliminary experiments reveal the following insights: (1) Increasing complexity consistently leads to sustained performance improvements. For instance, using 1,000 instruction data and 10 nodes resulted in a substantial 24% increase in win rate. (2) Under the same token budget, a few complex instructions outperform diverse yet simple instructions. (3) Curriculum instruction tuning might not yield the anticipated results; focusing on increasing complexity appears to be the key2. Footnote 2: The data and code of this work are available at [https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/tree-instruct](https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/tree-instruct) ## 1 Introduction The latest generation of large language models (LLMs) has attracted significant attention due to their immense potential in language technologies [26; 37; 44; 20]. To enhance interactive user requests and chat interfaces, these models undergo instruction-tuning using supervised input-output pairs [16; 17; 10]. This process enables the model to comprehend the required style and format for effective user interaction, showcasing the knowledge and capabilities gained during pre-training [28]. Consequently, the efficacy of instruction data significantly influences LLMs' abilities, shaping users' perceptions of their capabilities [43; 19; 9]. Recently, LIMA has demonstrated that with just 1000 carefully curated prompts and responses, an LLM can achieve remarkably strong performance [48]. This suggests that the scaling laws of instruction tuning are not solely dependent on data quantity but rather influenced by prompt diversity and quality. However, one critical and less-explored aspect of evaluating instruction data is complexity. There are at least three unanswered questions related to complexity: (1) **Scaling law of complexity**: Intuitively, more complex instruction data might elicit more potential capabilities in LLMs to address intricate problems [23, 25]. WizardLM [45] introduces in-depth and in-breadth evolving methods to rewrite prompts into more complex and diverse versions, resulting in a 12.4% increase in LLMs' win rate with the same amount of data.
Yet, whether WizardLM's performance improvement is due to complexity or merely derived from diversity remains uncertain. Moreover, the ongoing enhancements in complexity are yet to be explored. (2) **Relationship between complexity-induced performance improvement and token quantity**: Enhancing instance complexity inevitably increases the number of tokens per instance [11]. While WizardLM exhibits performance improvements with the same instance quantity, it increases the number of tokens per instance. This raises the question of whether complexity-induced improvement in LLMs results from increased training tokens. As known, enlarging LLMs' pretraining token counts can lead to better performance [24, 36]. (3) **Effectiveness of complexity-based curriculum instruction learning**: Curriculum learning is a strategy in machine learning that starts with easy instances and gradually introduces harder ones [4]. Its effectiveness has been demonstrated in various NLP tasks like machine translation [49], dialogue [50], and question answering [31]. However, its potential efficacy in instruction tuning is under-explored. To answer the aforementioned questions, the key hurdle lies in finding a controlled way to increase the complexity of instruction data without introducing unwanted factors such as diversity. WizardLM [45] employs an in-depth evolving prompt like "Your objective is to rewrite a given prompt into a more complex version to make ChatGPT and GPT4 a bit harder to handle." to complicate the existing instructions. Unfortunately, although intended to enhance complexity, this approach might inadvertently introduce diversity by diverting from the initial instruction objectives. This issue becomes particularly severe when repeatedly employing in-depth evolving to achieve varying levels of complexity. We study and analyze the instructions before and after in-depth evolving in Sec. 4.1. As illustrated in Fig. 2, the iteratively evolved instructions append additional objectives that deviate from the original instructions, showcasing a greater diversity. To address this concern, we propose Tree-Instruct, which involves prompting LLMs to add a specific number of new nodes to the semantic tree of an existing instruction, as opposed to manipulating the text sequence directly, as done in Self-Instruct [41] or WizardLM [45]. We use the number of added nodes to represent the introduced level of complexity. The advantage of this approach lies in the fact that semantic tree nodes lack any sequential order [32]. By enforcing LLMs to operate on the semantic tree, this process becomes analogous to inserting new words into the middle of the original instructions. This compels the models to complicate while adhering to the structural constraints of the initial instruction rather than merely appending new instructions. It can significantly mitigate the issue of straying from the primary theme of the initial instruction. Figure 1: The scaling law of instruction complexity. We experiment with enhancing the complexity of semantic trees for 1,000 Alpaca instructions by adding extra 3, 6, and 10 nodes. We then evaluate models fine-tuned on instruction data of varying complexities against text-davinci003 in terms of win rate on AlpacaEval (Left). Additionally, we examine win rates on different subsets of AlpacaEval (Right). In the left figure, we indicate the average token count for instructions of different complexity levels. We also use WizardLM’s in-depth deepening as the baseline.
We leverage GPT-4 to assess the consistency of evolved instructions with original ones, and the results verify that Tree-Instruct improves WizardLM's consistency score from \(0.56\) to \(0.69\). Fig. 1 highlights how the number of added nodes raises the complexity level of the samples. With the help of Tree-Instruct, we have obtained the following preliminary experimental conclusions: (1) **As the complexity of the instruction data increases, the benefits of instruction tuning continue to grow**: Following LIMA, we attempt instruction tuning using 1,000 samples from Alpaca-GPT-4 as a base. We add 3, 6, and 10 nodes to the semantic tree of each sample, resulting in performance gains of 14%, 18%, and 24%, respectively, across eight sub-skills such as commonsense, writing, and coding, showing consistent improvements. Furthermore, this scaling law can be extended to more complex instruction data. For instance, when fine-tuning 6,000 conversations filtered from ShareGPT via OpenChat[38] (which shows excellent performance among open-source LLMs), we observe that by increasing the complexity of around 3,000 users' instructions, the winning rate increases from 80.87% to 82% on the AlpacaEval leaderboard3. Footnote 3: [https://tatsu-lab.github.io/alpaca_eval/](https://tatsu-lab.github.io/alpaca_eval/) (2) **The increase in complexity partly comes from additional tokens, but a few complex instructions outperform diverse yet simple instructions, under the same token budget.**: We find that as the complexity increases, the number of tokens also increases. Adding 10 nodes in the tree increases the average token length of samples from 186 to 607. Hence, to make a fair comparison, we increase the number of original instructions from 1,000 to 4,000 to match the total token quantity of our tree-instructed samples. Under this setting, the performance gain from adding 10 nodes still achieves more than 20%. This indicates that the improvement due to complexity is partly attributed to the increased tokens, but increasing the complexity of samples is equivalent to the diversity achieved by four times the token count of simple samples. Moreover, when considering the same token count, instructions evolved from Tree-Instruct exhibit a 5% higher win rate compared to in-depth deepening of WizardLM, making it a more effective method for increasing complexity. (3) **Curriculum instruction tuning may not be effective; increasing complexity is all you need**: We try curriculum learning by gradually training on harder samples, i.e., first training on data with added three nodes, then six nodes, and finally ten nodes. We observe that, with the same training steps, the curriculum learning approach does outperform training with a mixed difficulty of samples but still falls short compared to directly training with the added ten-nodes samples. This indicates that when we have more complex samples, the importance of simpler samples diminishes significantly, suggesting that repeated training with complex samples may be sufficient. ## 2 Related Work Large Language Models (LLMs), trained on extensive textual datasets, have risen as premier solutions for a diverse array of NLP tasks [47]. Despite their remarkable performance, these models are not without their limitations. These limitations encompass potential misunderstandings of human instructions, the propensity to generate biased content, and the sporadic generation of hallucinated information.
Consequently, bringing LLMs in line with human expectations has become a central focal point within the research community [3; 34]. To attain this alignment, researchers need to amass high-quality instructional data that authentically mirrors human needs and expectations. A rational starting point for data collection involves the adaptation of existing NLP benchmarks into natural language instructions; PromptSource [2], SuperNaturalInstruction [42], Unnatural Instructions [15] and FLAN [21] are spearheading this strategy. These benchmarks encompass a wide range of NLP tasks, spanning dialogue, reasoning, and coding, all unified under the realm of language instructions. TULU[40] showcases that instructions from NLP tasks significantly bolster the reasoning prowess of aligned LLMs, where the diversity of tasks plays a pivotal role in shaping the capabilities of LLMs. Nevertheless, a notable trend in NLP datasets is their propensity to emphasize particular skills, consequently yielding instructions that possess a somewhat confined scope. This constraint has the potential to impede their capacity to meet the intricate requirements of real-world applications. Figure 2: The instruction generated by different evolving methods: Tree-instruction after adding ten nodes and WizardLM by iteratively _deepening_ three times. We also demonstrate how Tree-Instruct enhances the complexity of the original instruction’s semantic tree by introducing three nodes (orange), six nodes (green), and ten nodes (purple). In order to tackle these challenges, one possible approach is to formulate instructions via purposeful human annotations. An exemplary precursor to such a corpus is OpenAssistant [19], which comprises over 10k dialogues involving the participation of 13k annotators from around the world. Another remarkable venture into harnessing human-generated instructions through crowd-sourcing is ShareGPT 4. This platform encourages users to contribute and exchange their engaging conversations with ChatGPT and GPT4. Footnote 4: [https://sharegpt.com/](https://sharegpt.com/) While human annotation ensures both quality and diversity, it becomes challenging to ensure the quantity and complexity of instructional data due to the highly expensive annotation process [7], and the distribution of difficulty levels in human-created instructions tends to skew towards being easy [23]. To address this issue, Self-Instruct [41] leverages ChatGPT's in-context learning capability to generate a large volume of instructions from a predefined set of human-annotated instructions spanning diverse topics and task types. Building upon this foundation, LIMA [48] and Alpagasus [5] separately validate the significant impact of data diversity and quality on instructional effectiveness. The selection of thousands of high-quality and diverse instructional examples proves more advantageous in achieving better results compared to using the entire dataset. Further increasing the number of instructions could potentially induce a semantic shift in the LLMs [1]. Up to this point, three key metrics within the instructional data--diversity, quality, and quantity--have been elucidated for their impact on tuning, though exploration into complexity remains insufficient. While WizardLM [45] demonstrates that evolving both the complexity and diversity of instructions can lead to performance enhancement, it does not deeply investigate the individual importance of complexity.
This paper introduces a method, Tree-Instruct, which enhances instructional complexity while simultaneously enforcing thematic consistency to mitigate variations in diversity. Our experiments preliminarily establish a scaling law regarding complexity, show that the improvement resulting from increased complexity isn't solely due to the introduction of more training tokens, and illustrate that LLMs only require complex samples for instruction tuning, rather than simple samples serving as foundational padding for curriculum learning. ## 3 Tree-Instruct Enhancing the complexity of natural language text seems like a straightforward task for proficient LLMs. For instance, WizardLM utilizes a simple text prompt to complexify instructions as mentioned in Sec. 1. However, due to the extensive pre-training of LLMs on massive corpora, where models predict the next token based on the preceding context, we've noticed that LLMs can often exploit the given instruction by simply continuing the text beyond the initial prompt to artificially amplify complexity. While adding continuation constraints can enhance the complexity of instructions, it simultaneously leads them away from the core thematic focus. This divergence expands the topic and domain, fostering diversity that hinders our ability to solely assess the impact of increased instruction complexity. We leverage GPT-4 to automatically score the consistency (range in \(0\sim 1\)) of the instructions before and after implementing in-depth deepening following WizardLM. We found that it only gets a \(0.56\) alignment score. Furthermore, upon iteratively enhancing the instruction's complexity, the guidance might become ineffective, losing its original essence. For instance, it might cease to present a question, rendering it arduous for the LLM to generate a suitable response. This phenomenon matches observations made by WizardLM, which prompts them to introduce the Elimination Evolving procedure. To address this issue, we first consider _what determines the complexity of natural language text_. In linguistics and education, there is a lack of precise scientific consensus on determining the complexity of a text. No single source can precisely summarize a text's complexity. Currently, a widely accepted perspective suggests that qualitative measures of text complexity require an informed judgment of text difficulty based on various factors. The standards use factors like purpose, levels of meaning, structure, language conventions, clarity, and knowledge demands to measure text difficulty 5. Among these, text structure is a more measurable indicator, as we can convert text sequences into tree structures using mature dependency or semantic tree parsers [33]. Tree structures, prevalent in natural language representations, offer structural insights reflecting human text comprehension [14]. Furthermore, we can gauge text complexity accurately by measuring the width and depth of trees, as a deeper and wider grammar tree signifies more intricate sentence structures [8; 39]. Inspired by the concept of tree complexity, we propose Tree-Instruct, wherein LLMs directly add a specific number of nodes to the semantic tree of an instruction. This increases the tree's width and depth, thereby enhancing text structure complexity. In detail, Tree-Instruct encompasses three steps: **Step 1: Tree Construction** involves semantic parsing, where a structured representation is created from a natural language sentence.
This process yields a tree structure for an instruction. For instance, given the instruction "Implementing effective strategies to curb environmental pollutants in the atmosphere", we derive an original tree structure Tree-1 as shown in the first tree of Fig. 2. **Step 2: Nodes Expansion** operates on the acquired tree structure, expanding it in depth or width by adding new nodes, thus influencing the new tree's complexity. We only add meaningful nodes representing nouns or verbs, since words like adjectives or prepositions contribute little to tree complexity. The second tree in Fig. 2 illustrates Tree-2 after adding ten nodes. **Step 3: Tree Sentenceization** aims to make LLMs revert the complex new tree structure (Tree-2) back to a fluent natural language instruction by introducing connecting words.
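To make the three-step procedure concrete, here is a minimal sketch of Tree-Instruct as a prompting pipeline. The `call_llm` helper is hypothetical, and the prompts are illustrative paraphrases, not the paper's exact prompts:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM query helper -- wire this to a model of your choice."""
    raise NotImplementedError

def tree_instruct(instruction: str, n_new_nodes: int) -> str:
    # Step 1: Tree Construction -- parse the instruction into a semantic tree.
    tree = call_llm(
        "Parse the following instruction into a semantic tree of noun/verb "
        f"nodes:\n{instruction}"
    )
    # Step 2: Nodes Expansion -- add only meaningful (noun/verb) nodes,
    # widening/deepening the tree; the count controls the added complexity.
    expanded = call_llm(
        f"Add exactly {n_new_nodes} new noun or verb nodes to this semantic "
        f"tree, expanding it in depth or width:\n{tree}"
    )
    # Step 3: Tree Sentenceization -- turn the expanded tree back into one
    # fluent instruction by adding connecting words.
    return call_llm(
        "Rewrite this semantic tree as a single fluent natural-language "
        f"instruction:\n{expanded}"
    )
```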
## 4 Experiments Following LIMA, we randomly select 1,000 instruction samples to form Alpaca-1K, serving as the starting point for our evolutionary process. We query gpt-4 [27] to execute Tree-Instruct, thereby increasing the complexity of each instruction within Alpaca-1K. In order to analyze the scaling law, we introduce three levels of complexity by augmenting the instructions by adding 3, 6, and 10 additional nodes, respectively. This allows us to observe the impact of these varying complexities on the outcomes. For the modified instructions, we employed gpt-4 once again to generate corresponding responses. To validate our findings, we replicate the results by applying the in-depth evolving with deepening prompt provided by WizardLM to the same Alpaca-1K instructions. To demonstrate the scalability of our discoveries to larger datasets, we also conduct experiments on the expansive OpenChat dataset [38]. We employ the pre-trained LLaMA-13B-v1 [37] model as the initialization, fine-tuning it on instruction-tuning datasets generated through different methods. Each GPU processes batches of size 2 (for OpenChat evolved data, the batch size is set to 14), and the maximum sequence length was set to 2048. For optimization, we adopt the AdamW [22] optimizer with a learning rate of 1e-4 and a weight decay of 0.1, following the practices established by OpenChat. The training is performed across 8 A100 GPUs using Deepspeed ZeRO-2 for a duration of 3 epochs. During inference, a temperature of 0.7 and a top-p value of 0.9 are employed to evaluate all the methods under comparison. ### Tree-Instruct is Better than In-Depth Evolving We start by investigating whether operating on a tree, as opposed to a sequence, better aligns with the intended objectives of the original instruction. Recent studies have introduced the LLMs-as-evaluator paradigm, leveraging LLMs to assess candidate samples, which closely approximates human evaluative agreement [6; 13; 18; 46]. Consequently, we employ gpt-4 to gauge which approach exhibits greater consistency with the initial instructions. As depicted in Figure 3, the result indicates that employing Tree-Instruct, which entails augmenting instructions with 6 additional nodes, achieves a higher degree of alignment with the original instructions in 63% of cases, compared to WizardLM's in-depth deepening that undergoes modifications and generates instructions with similar token quantities to Tree-6-nodes. This observation serves as evidence that the presence of a tree structure constraint enables LLMs to more effectively modify instructions within the framework of the original guidance, rather than diverging and incorporating unrelated content. Furthermore, our findings demonstrate that Tree-Instruct is more effective than in-depth evolving in eliciting the capabilities of LLMs. We conduct evaluations on the AlpacaEval evaluation set for both methods. AlpacaEval is a recent authoritative leaderboard comprising 805 diverse samples, each showcasing various abilities. The evaluations are performed with gpt-4 as the evaluator, comparing the win rates of models against text-davinci003. As depicted in Table 2, under similar total token counts, Tree-Instruct exhibits a win rate improvement of 5 points over WizardLM's in-depth deepening. We attribute this enhancement to Tree-Instruct's adeptness at closely tailoring instructions to the central topic, thereby introducing complexity. In contrast, in-depth evolving might deviate from the original theme and introduce irrelevant content, resulting in instructions of inadequate difficulty. Such instructions could potentially hinder LLMs from generating appropriate responses, rendering them less effective in the generation process. ### More Complexity, Better Capability After demonstrating the effectiveness of Tree-Instruct in enhancing sample complexity, we present a scaling law pertaining to complexity, as depicted in Fig. 1 and Table 2. As the number of nodes gradually increases from Tree-3-Nodes to Tree-6-Nodes and further to Tree-10-Nodes, the model's win rate on AlpacaEval exhibits a remarkable upward trend. This scaling law underscores the significance of complexity within instruction data. Figure 3: GPT-4’s comparison for the consistency preservation between Tree-6-Nodes and WizardLM’s in-depth deepening with respect to the original instructions. Additionally, we carry out a meticulous evaluation for each skill/category within the Vicuna test sets. These sets are divided into distinct skill sets/categories, allowing for an intricate analysis of the proficiency attained through instruction tuning. Notably, Tree-10-Nodes outperforms Tree-6-Nodes across a majority of categories, encompassing Counterfactual, Roleplay, Knowledge, Generic, and more. Similar trends are evident when comparing Tree-6-Nodes with the original instructions, indicating that augmenting the complexity of instruction data leads to a comprehensive enhancement in the capabilities of the LLM. Finally, given that our experimentation is based on 1,000 instances, we extend our investigation to validate the effectiveness of Tree-Instruct across a larger dataset using OpenChat. OpenChat is built upon approximately 6K GPT-4 conversations derived from around 90K ShareGPT conversations. It has notably achieved top rankings as an open-source LLM. As we initiate these experiments, OpenChat attains an 80.87% win rate on AlpacaEval. Since OpenChat involves multi-turn conversations, we specifically complexify instructions within single-turn conversations and certain meaningful concluding turns, rather than those containing generic terms like "stop" or "continue." This modification encompasses 3,000 conversations. As delineated in Table 1, following complexification via Tree-Instruct, we enhance OpenChat's performance from 80.87% to 82.00%, underscoring the sustained efficacy of our approach across a larger volume of data.
### Less but Complex is Better Than More but Simple While we have demonstrated that increasing the complexity of instruction data can enhance the capabilities of LLMs, a new question arises: Is this improvement due to the introduction of more training tokens as complexity increases? Our analysis indicates that the average length of the original Alpaca data, combining both input and output, is 186 tokens. Upon incorporating an additional 10 nodes, this count escalates to 607 tokens - equivalent to a 3.26-fold increase in training data. With this question in mind, we introduce a new baseline: Alpaca-4K, trained with 4,000 samples (additionally sampled 3,000 instances from the original Alpaca data). As shown in Table 2, Alpaca-4K's total token count surpasses that of Tree-10-Nodes by 24%. Despite this, with the same training step, a significant 22% performance gap in win rate remains. However, compared to Alpaca-1K, there is indeed a 2% improvement. This suggests that introducing more instruction tokens does enhance model performance. Nonetheless, the effectiveness of diverse yet simple instructions still falls short compared to a smaller quantity of more complex directives. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Win Rate (\%) & Token Length \\ \hline GPT4 & 95.28 & 1365 \\ LLaMA2-Chat-70B & 92.66 & 1790 \\ Claude-2 & 91.36 & 1069 \\ OpenChat-13B-V3.1 & 89.49 & 1484 \\ ChatGPT & 89.37 & 827 \\ WizardLM-13B-V1.2 & 89.17 & 1635 \\ **OpenChat-13B\({}^{*}\)** & 80.87 & 1632 \\ UltraLM-13B & 80.64 & 1087 \\ WizardLM-13B & 75.31 & 985 \\ \hline **OpenChat-13B+Tree-Instruct** & 82.00 (+1.13) & 1543 \\ \hline \hline \end{tabular} \end{table} Table 1: Win rate of different methods vs. text-davinci003 on the AlpacaEval leaderboard. Figure 4: Evaluation of models trained on Alpaca-1K added with various nodes vs. text-davinci003 on categories of the Vicuna test set. ### Curriculum Learning May Be Not Effective for Instruction Tuning Now, armed with three sets of data featuring increasing difficulty levels and aligned themes, we can delve into an unanswered question in instruction tuning: Is it necessary to train LLMs progressively from easy to hard? As depicted in Table 3, we embark on a trial, initially training on Tree-3-Nodes data, followed by Tree-6-Nodes, and finally Tree-10-Nodes. Each segment constitutes one-third of the total training steps. We also devise two baselines: one involving the combined training of all three difficulty levels and another wherein difficult samples were trained prior to the easy ones. Experimental results reveal that, compared to mixed-difficulty training and training difficult samples before easier ones, an easy-to-hard curriculum learning approach truly enhances model performance. However, the performance gain from curriculum learning only slightly surpasses that of exclusively training on Tree-3-Nodes, the simplest dataset we construct. This outcome contrasts with previous observations of curriculum learning. We attribute this variance to the fact that modern LLMs possess parameter counts several times larger than those of earlier models like BERT [12] or T5 [30]. With this substantial parameter increase, LLMs are now capable of directly learning from challenging samples, diminishing the need for foundational exposure to simpler samples. The more exposure to challenging samples, the more the model's capabilities are ignited.
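To make the compared schedules concrete, here is a minimal sketch of the three training orders; the names are illustrative and `train` stands in for one fine-tuning stage:

```python
# Sketch of the curriculum variants compared above (names hypothetical).
def curriculum(order, total_steps, train):
    per_stage = total_steps // len(order)   # each stage gets an equal share
    for dataset in order:
        train(dataset, steps=per_stage)

# easy-to-hard: curriculum(["tree3", "tree6", "tree10"], S, train)
# hard-to-easy: curriculum(["tree10", "tree6", "tree3"], S, train)
# mix-training instead samples from the union of all three sets for S steps.
```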
## 5 Conclusion In this study, we have undertaken a preliminary exploration of the intrinsic relationship between instruction complexity and the ability to follow instructions. Our observations include: (1) As the complexity of the instruction data increases, the benefits of instruction tuning continue to amplify. (2) The rise in complexity is partly attributed to additional tokens, yet a few intricate instructions outperform a range of simpler instructions, all within the same token limit. (3) A curriculum-based instruction tuning, progressing from easier to harder, might not yield the desired effectiveness; embracing increased complexity proves essential. We anticipate that this exploration will supplement existing knowledge regarding the aspects of quality, quantity, diversity, and complexity of instruction data. This contribution aims to assist future researchers in constructing superior instruction data. \begin{table} \begin{tabular}{l c c} \hline \hline Method & Win Rate (\%) & Total Token Size \\ \hline Alpaca-1K & 26.40 & 186,464 \\ Alpaca-4K & 28.60 & 757,042 \\ WizardLM & 40.37 & 556,981 \\ \hline Tree-3-Nodes & 40.80(+14.40) & 385,760 \\ Tree-6-Nodes & 44.78(+18.38) & 546,731 \\ Tree-10-Nodes & 50.19(+23.79) & 608,556 \\ \hline \hline \end{tabular} \end{table} Table 2: Analysis of the complexity scaling laws and win rate-token count relationship. \begin{table} \begin{tabular}{l c c c c|c|c} \hline \hline Method & helpful-base & self-instruction & oasst & koala & vicuna & Overall \\ \hline Mix-training & 43.41 & 25.39 & 36.70 & 40.38 & 35.00 & 34.78 \\ Hard-to-Easy Curriculum & 49.61 & 22.22 & 43.62 & 41.02 & 46.25 & 37.69 \\ Easy-to-Hard Curriculum & 52.71 & 26.98 & 49.47 & 41.02 & 50.00 & 41.37 \\ \hline Tree-3-nodes & 50.38 & 26.58 & 46.81 & 42.31 & 52.50 & 40.80 \\ Tree-6-nodes & 55.81 & 29.76 & 52.13 & 46.15 & 53.75 & 44.78 \\ Tree-10-nodes & 67.44 & 31.74 & 53.19 & 54.48 & 65.00 & 50.19 \\ \hline \hline \end{tabular} \end{table} Table 3: Analysis of mixed difficulty training and curriculum Learning. The numbers demonstrate the win rates on various subsets and the overall AlpacaEval test set, vs. text-davinci003.
2307.15200
An Agent-Based Model Framework for Utility-Based Cryptoeconomies
In this paper, we outline a framework for modeling utility-based blockchain-enabled economic systems using Agent Based Modeling (ABM). Our approach is to model the supply dynamics based on metrics of the cryptoeconomy. We then build autonomous agents that make decisions based on those metrics. Those decisions, in turn, impact the metrics in the next time-step, creating a closed loop that models the evolution of cryptoeconomies over time. We apply this framework as a case-study to Filecoin, a decentralized blockchain-based storage network. We perform several experiments that explore the effect of different strategies, capitalization, and external factors on agent rewards, which highlight the efficacy of our approach to modeling blockchain based cryptoeconomies.
Kiran Karra, Tom Mellan, Maria Silva, Juan P. Madrigal-Cianci, Axel Cubero Cortes, Zixuan Zhang
2023-07-27T21:15:27Z
http://arxiv.org/abs/2307.15200v1
# An Agent-Based Model Framework for Utility-Based Cryptoeconomics ###### Abstract In this paper, we outline a framework for modeling utility-based blockchain-enabled economic systems using Agent Based Modeling (ABM). Our approach is to model the supply dynamics based on metrics of the cryptoeconomy. We then build autonomous agents that make decisions based on those metrics. Those decisions, in turn, impact the metrics in the next time-step, creating a closed loop that models the evolution of cryptoeconomies over time. We apply this framework as a case-study to Filecoin, a decentralized blockchain-based storage network. We perform several experiments that explore the effect of different strategies, capitalization, and external factors on agent rewards, which highlight the efficacy of our approach to modeling blockchain based cryptoeconomies. 1. Agent-Based Modeling. 2. Cryptoeconomics. 3. Digital Twin. ## 1 Introduction Cryptoeconomics is an interdisciplinary science that combines fields such as economics, cryptography, and computer science with the goal of designing and analyzing economic incentive structures for resource allocation in decentralized systems [1]. Accordingly, cryptoeconomic systems are often used to create new forms of digital currency, utilities, and markets. Because each system has its own goals and contexts in which it is applicable, cryptoeconomic incentive structures usually need to be customized for each individual application. In addition, these systems typically show features associated with Complex Systems [1]. This means that the long-term evolution of these systems cannot be easily inferred from local changes caused by individuals, which makes the task of customizing cryptoeconomic systems to support a concrete application more difficult. Even though cryptoeconomics is a relatively young field [2], some work has been done to address the complexities of designing and tuning decentralised economies. An exciting new approach is to use Agent-based modeling (ABM) [3]. ABM is a computational modeling technique that has been used to study a wide variety of complex systems, including social systems [4], economic systems [5, 6], and biological systems [7]. In ABM, the system is modeled as a collection of agents, each of which has its own set of rules and behaviors. The agents interact with each other and with the environment, and the system's behavior emerges from the interactions of the individual agents.[3] Within the cryptoeconomics space, ABM has the potential to support practitioners in three main areas: 1. Study the cryptoeconomics and robustness of the blockchain to agent behavior. As an example, Struchkov et al.[8] used ABM to test how Decentralised Exchanges would respond to stress market conditions and front-running, while Cocco et al.[9] used ABM to analyse the mining incentives in the Bitcoin network; 2. Explore the design space of blockchain networks. For instance, ABM has been applied to compare different token designs and their impact on prediction markets;[10] 3. Test new features and protocols. Following the fair launch allocation from Yearn Finance, a group of researchers[11] used ABMs to examine the concentration of voting rights tokens after the launch, under different trading modalities. This paper explores how ABM can be adapted to a particular type of decentralised system, namely utility-based decentralised networks.
These networks employ their own currency to provide consumptive rights on the services or the product being offered by the network.[12] Thus, these systems mediate a marketplace of providers of a specific good and users that want to consume the good. Since the entire system depends on the good being traded, any tool attempting to model such a system needs to consider how changes in utility impact the system and its agents. Therefore, we propose a framework for applying ABMs to utility-based cryptoeconomies. Our approach is complementary to other methods in the literature[13] and builds on the work of Zhang et al.[14] to enable multi-scale coupling between individual microeconomic preferences and protocol specific supply dynamics. Rational users of cryptoeconomic systems will base their decisions on some aspects of the network they are involved in, which in turn affects the network. This natural feedback loop is well represented in our framework. We apply the framework to Filecoin, a decentralised data storage network,[15] and conduct two experiments that uncover interesting aspects of the network. The first explores the agents' reward trajectories under different lending rates, while the other examines how the current cryptoeconomic mechanisms of Filecoin impact wealth distribution. The rest of this paper is organized as follows. We begin in Section 2 by presenting the general framework for applying ABM to utility-based systems. This framework is then applied to the Filecoin network, by first developing a mathematical model of Filecoin's supply dynamics in Section 3. In Section 4, we describe the ABM that leverages this model to simulate a closed loop interaction of programmable agents within the Filecoin economy. Section 5 follows with the two experiments that showcase the utility of our ABM framework to understand the Filecoin system. We conclude in Section 6 by framing the results in the context of utility based token economies and discussing future research paths. ## 2 ABM Framework for Utility Cryptoeconomies Agent-based models (ABMs) are a tool for modeling complex systems[3] with a high degree of granularity. They consist of two primary components which interact with each other: a) the environment, which models the system under study, and b) agents, which take actions that affect the environment. #### Formal Definition We define the general framework of our deterministic, discrete-time ABM as follows. Let \(\mathsf{E}\) denote the set of all possible _environmental variables_. For a given time \(d\in\mathbb{N}\), let \(E_{d}\in\mathsf{E}\) denote the _environment_ at time \(d\), and define the set \(\mathsf{E}\ni\mathcal{E}_{d}:=E_{0}\times\cdots\times E_{d}\). In our specific setting, \(E_{d},E_{d+1},\ldots\) correspond to the environments at day \(d\), at day \(d+1\), etc. In addition, let \(\mathscr{A}\) be the abstract set of _agents_ and let \(\mathscr{H}\) denote the abstract set of actions. To each agent \(a\in\mathscr{A}\), there corresponds a given set of actions \(h_{a}\in\mathscr{H}\). Furthermore, we define the _update rules_ \(f^{H}_{d+1}:\mathscr{A}\times\mathcal{E}_{d}\mapsto\mathscr{H}\), \(f^{E}_{d+1}:\mathscr{H}\times\mathcal{E}\mapsto\mathcal{E}\), and \(f^{A}_{d+1}:\mathcal{E}_{d}\mapsto\mathscr{A}\), as some abstract functions that update the _actions of each agent, the environment and agents_ respectively.
Given a set of agents \(A_{0}\subset\mathscr{A}\), at time \(d=0\), where each agent \(a_{0}\in A_{0}\) is equipped with a set of actions \(h_{a_{0}}\in\mathscr{H}\) and an initial environment \(E_{0}\), the ABM proceeds by iterating as follows: \[h_{a,d+1}=f^{H}_{d}(a_{d},\mathcal{E}_{d})\quad\forall a_{d}\in A_{d},\tag{1}\] \[E_{d+1}=f^{E}_{d}\left(\underset{a_{d}\in A_{d}}{\bigtimes}h_{a,d+1},\mathcal{E}_{d}\right),\tag{2}\] \[A_{d+1}=f^{A}_{d}(\mathcal{E}_{d}).\tag{3}\] #### Environment The supply dynamics, defined as factors which affect the supply of tokens in the cryptoeconomy, are modeled by the environment, \(\mathsf{E}\). Factors include the total supply of tokens, the rate at which new tokens are mined, and the rate at which tokens are taken out of circulation through means such as burning. This information is often used by investors and traders who want to understand the potential value of a cryptoeconomy to then make decisions about potential investments into that economy. Three aspects of the environment must be defined: 1. Network Performance Metrics - what are the key set of metrics that are to be modeled and used as performance indicators when evaluating the results of the ABM simulation? 2. Inputs - when actors take part in the cryptoeconomy, what is the subset of actions that they can take that would affect the network metrics? 3. Outputs - what are the outputs that flow back to miners, which in turn affect miners' decisions about their actions in the next time step? For example, in a typical blockchain, rewards are issued to miners and because the rate and trajectory of rewards affects agents' behavior, this is a key output. Additional outputs, such as a subset of the overall network metrics that miners can use to make rational decisions, can also be included. #### Agents Miners in blockchains are mapped to agents, \(A_{i}\). Miners take actions that correspond to the inputs of the environment and make those decisions based on the outputs of the environment they are interacting with. The actions they take affect the network's performance metrics, which then affects the outputs that the agents are fed. Through this feedback loop, the dynamical nature of the cryptoeconomy is modeled. Fig. 1 shows a diagram of this proposed framework for mapping cryptoeconomies to an ABM. ### Examples Three examples of cryptoeconomic projects where this framework can provide value include Helium,[16] Ethereum,[17] and Filecoin.[15] Helium is a decentralized wireless network where miners provide wireless coverage in exchange for HNT tokens. An ABM utilizing the described framework can be used to understand, for example, how the rate of token distribution and population density may affect expected network coverage. This requires a mathematical model of how tokens are minted (the supply dynamics), given agent inputs (i.e. the wireless coverage they provide to the network). This can then be used to design new incentive structures to ensure a more even coverage distribution. Ethereum is another example where the described ABM framework can be applied. A specific use case could be analyzing how user behavior (agents) could stress and affect the total circulating supply of ETH tokens after the "Shapella" upgrade, which enabled easier unlocking of staked ETH. In this setting, one could model, e.g., the propensity of a participant to unlock, against more staking inflows _due to_ the ability to unlock. Here, the agents' actions (whether to stake or unstake) have a direct effect on the supply dynamics and the outlined framework would enable one to quantify various scenarios that may play out.
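The staking example above already has the shape of the recursion in Eqs. (1)-(3): agents choose actions from the environment history, the environment updates from the joint actions, and the agent set can change. A minimal sketch of this generic loop, with hypothetical names and the update rules passed in as callables:

```python
# Sketch of the ABM recursion in Eqs. (1)-(3); names are illustrative.
def simulate(agents, env0, days, f_H, f_E, f_A):
    history = [env0]                        # E_0, ..., E_d (the tuple E_d)
    for _ in range(days):
        # Eq. (1): each agent picks its next action from the history so far
        actions = {a: f_H(a, history) for a in agents}
        # Eq. (2): the environment updates from the joint actions and history
        history.append(f_E(actions, history))
        # Eq. (3): agents may enter/leave based on the new environment history
        agents = f_A(history)
    return history

# For a stake/unstake toy model, f_H could compare the current yield to an
# agent's threshold, and f_E could add net staked tokens to locked supply.
```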
We now discuss the application of the ABM framework to Filecoin in more detail.

## 3 Modeling Filecoin Supply Dynamics

Filecoin is a distributed storage network based on the blockchain, where miners, referred to as storage providers (SPs), provide storage capacity for the network and earn units of the Filecoin cryptocurrency (FIL) by periodically producing cryptographic proofs that certify they are providing the promised storage capacity. In contrast to using Nakamoto-style proof of work to maintain consensus on the chain, Filecoin uses proof of storage: a miner's voting power -- the probability that the network elects a miner to create a new block -- is proportional to their current quality-adjusted storage in use in relation to the rest of the network. The cryptoeconomics of Filecoin are designed to incentivize storage providers to participate and grow the collective utility of the data storage network. The following subsections describe various aspects of the Filecoin supply dynamics.

Figure 1: Proposed framework for mapping cryptoeconomies to an ABM. Miners are represented by agents and the environment is mapped to a supply dynamics model.

### Circulating Supply

Filecoin's circulating supply \(S_{d}\) is modeled at a daily (\(d\)) level of aggregation and has four parts:

\[S_{d+1}=\underbrace{M_{d}+V_{d}}_{\text{inflow}}-\underbrace{(L_{d}+B_{d})}_{\text{outflow}}. \tag{4}\]

These correspond to minted block rewards \(M_{d}\), vested tokens \(V_{d}\), locked tokens \(L_{d}\), and burnt tokens \(B_{d}\).

### Power Onboarding and Renewals

The dynamics of \(M_{d}\), \(L_{d}\), and \(B_{d}\) depend on the amount of storage power onboarded and renewed in the network. In Filecoin, storage providers (SPs) participate by onboarding power onto the network by adding a sector for a committed duration. Power is measured in units of sectors, which can be either 32 GiB or 64 GiB in size. Each sector consists of a fraction of committed capacity (CC) and verified deal data (FIL+).[18] An SP can choose to renew CC sectors when they expire. We model power in aggregate terms rather than at the sector level. This means that we model the network's storage power to be split into two categories: 1) CC and 2) FIL+. This approximation is valid for the granularity that we are seeking to achieve with our modeling. Filecoin has two methods to measure the power of the network: the network's raw byte power (RBP) and the network's quality adjusted power (QAP). Network RBP is a measure of the raw storage capacity (in bytes) of the network -- it does not distinguish between the kinds of data that are stored on the network. For example, empty or random data stored on the network is counted the same as a widely used dataset when computing network RBP. A second measure of network power is quality adjusted power (QAP). QAP is a derived measurement that captures the amount of useful data being stored on the network. Considering the aggregated approximation discussed above, we compute the quality adjusted power, \(P_{d}^{\text{QA}}\), of the network on day \(d\) as

\[P_{d}^{\text{QA}}=(1-\gamma)\cdot P_{d}^{\text{RB}}+10\cdot\gamma\cdot P_{d}^{\text{RB}}, \tag{5}\]

where \(\gamma\in[0,1]\) is the overall FIL+ rate of the network, and \(P_{d}^{\text{RB}}\) is the raw byte power of the network on day \(d\). Eq. 5 reveals that FIL+ power is given a 10x multiplier when computing the QA power of the network.[18]
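As a minimal illustration of Eqs. (4) and (5), the two bookkeeping rules can be written as follows; the numbers in the example call are placeholders rather than network data.

```python
def quality_adjusted_power(p_rb: float, gamma: float) -> float:
    """Eq. (5): quality-adjusted power, with a 10x multiplier on the FIL+ share."""
    return (1.0 - gamma) * p_rb + 10.0 * gamma * p_rb

def circulating_supply(minted: float, vested: float, locked: float, burnt: float) -> float:
    """Eq. (4): inflows (minting, vesting) minus outflows (locking, burning)."""
    return minted + vested - locked - burnt

# Example: 10 EiB of raw-byte power at a 20% FIL+ rate yields 28 EiB of QAP.
print(quality_adjusted_power(p_rb=10.0, gamma=0.2))
```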
An initial pledge collateral of FIL tokens is required in order to onboard or renew power, and the specific amounts and time-windows are discussed below. In exchange for onboarding and renewing power onto the network, and continually submitting storage proofs to the chain, SPs can receive block rewards in the form of FIL tokens.

### Rewards from minting

Filecoin uses a hybrid minting model that has two components -- simple minting and baseline minting. The total number of tokens minted by day \(d\) is the sum of these two minting mechanisms:

\[M_{d}=M_{d}^{\text{S}}+M_{d}^{\text{B}} \tag{6}\]

Simple minting is defined by an exponential decay model

\[M_{d}^{\text{S}}=M_{\infty}^{\text{S}}\cdot(1-e^{-\lambda\,d}), \tag{7}\]

which decays at a rate of \(\lambda=\frac{\ln(2)}{6\,\text{yr}}\), corresponding to a 6-year half-life. \(M_{\infty}^{\text{S}}\) takes a value of 30% of the maximum possible minting supply of 1.1B tokens. Tokens emitted via simple minting are independent of network power. This is similar to minting schemes present in other blockchains.[19] The second component of minting in Filecoin is baseline minting, \(M_{d}^{\text{B}}\). Baseline minting depends on network power and aims to align incentives with the growth of the network's utility. The minting function still follows an exponential decay; however, it now decays based on the effective network time, \(\theta_{d}\). The equations describing this are:

\[\begin{split} M_{d}^{\text{B}}&=M_{\infty}^{\text{B}}\cdot(1-e^{-\lambda\,\theta_{d}})\\ \theta_{d}&=\frac{1}{g}\ln\left(\frac{g\overline{R}_{d}^{\Sigma}}{b_{0}}+1\right)\\ \overline{R}_{d}^{\Sigma}&=\sum_{\tau\leq d}\min\{b_{\tau},P_{\tau}^{\text{RB}}\}\end{split} \tag{8}\]

From these definitions, we can compute the cumulative baseline minting by day \(d\) from the cumulative capped RBP of the network:

\[M_{d}^{\text{B}}=M_{\infty}^{\text{B}}\cdot\left(1-e^{-\frac{\lambda}{g}\ln\left(\frac{g\overline{R}_{d}^{\Sigma}}{b_{0}}+1\right)}\right)=M_{\infty}^{\text{B}}\cdot\left(1-\left(\frac{g\overline{R}_{d}^{\Sigma}}{b_{0}}+1\right)^{-\frac{\lambda}{g}}\right) \tag{9}\]

In this expression \(M_{\infty}^{\text{B}}\) takes a value of 70% of the maximum possible supply of 1.1B tokens. \(\overline{R}^{\Sigma}\) is the cumulative capped network RBP; it is the sum of the point-wise minimum of the network's RBP and the baseline storage function for each day:

\[b_{d}=b_{0}\,e^{g\,d} \tag{10}\]

The baseline storage function serves as a target for the network to hit to maximize the baseline minting rate. In this expression \(g=\frac{\ln(2)}{365}\) is the daily baseline storage growth rate, which corresponds to a doubling of the baseline every year, and \(b_{0}=2.88888888\) EiB is the initial baseline storage.

### Vesting

Vesting supply, which can contribute 0.9B tokens, is modeled daily, summing across the set of recipients \(R\) as

\[V_{d}=\sum_{r\in R}V_{r,d}\,. \tag{11}\]

Different recipients have different linear vesting schedules.

### Locked tokens

Locked tokens in the network \(L_{d}\) are made up of storage collaterals \(L_{d}^{\text{S}}\) and vesting block rewards \(L_{d}^{\text{R}}\) as

\[L_{d}=L_{d}^{\rm S}+L_{d}^{\rm R} \tag{12}\]

The locked balance for vesting block rewards is modeled as

\[L_{d}^{\rm R}=0.75\sum_{\tau\leq d}\Delta M_{d-\tau}\cdot r_{\tau}^{\rm R}\,, \tag{13}\]

where \(\Delta M_{d}\) is the daily minted reward and \(r^{\rm R}\) is a vector specifying the linear release over 180 days.
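One possible reading of the convolution in Eq. (13) is sketched below, taking \(r_{\tau}^{\rm R}\) to be the fraction of a reward tranche still locked \(\tau\) days after minting; this interpretation, like the variable names, is ours rather than the exact implementation.

```python
import numpy as np

# Fraction of a tranche still locked tau days after minting (linear, 180 days).
r_R = np.maximum(0.0, 1.0 - np.arange(180) / 180.0)

def locked_rewards(daily_minted: np.ndarray, d: int) -> float:
    """Eq. (13): L^R_d = 0.75 * sum_tau dM_{d - tau} * r^R_tau."""
    taus = np.arange(min(d + 1, 180))
    return 0.75 * float(np.sum(daily_minted[d - taus] * r_R[taus]))
```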
The locked storage collateral is modeled as having the following dynamics

\[L_{d}^{\rm S}=\sum_{\tau\leq d}\Delta L_{d-\tau}^{\rm S}\cdot r_{\tau}^{\rm S}\,, \tag{14}\]

where \(\Delta L_{d}^{\rm S}\) is the newly locked storage collateral tokens, and \(r^{\rm S}\) is a vector specifying a release schedule. Newly locked collateral tokens are given by

\[\Delta L_{d}^{\rm S}=\Delta L_{d}^{\rm SP}+\Delta L_{d}^{\rm CP}\,, \tag{15}\]

where the 'storage pledge' locked tokens are

\[\Delta L_{d}^{\rm SP}=\max\left(20\cdot\Delta M_{d},0\right)\,, \tag{16}\]

and the 'consensus pledge' locked tokens are

\[\Delta L_{d}^{\rm CP}=\max\left(\frac{0.3\cdot S_{d}\cdot\Delta P_{d}^{\rm QA}}{\max\left(P_{d}^{\rm QA},\,b_{d}\right)},0\right) \tag{17}\]

where \(\Delta P_{d}^{\rm QA}\) denotes the new quality-adjusted power onboarded on day \(d\).

### Burnt tokens

Burnt tokens are modeled as consisting of termination fees \(B_{d}^{\rm T}\) and base fees from gas usage \(B_{d}^{\rm G}\) as:

\[B_{d}=B_{d}^{\rm T}+B_{d}^{\rm G} \tag{18}\]

where terminations are accounted for by aggregating agent decisions, and gas fees are modeled as increasing linearly, \(B_{d}^{\rm G}=\beta\,d\), at an average rate \(\beta\).

## 4 ABM of Filecoin

Utilizing the framework developed in Section 2, we create an ABM of Filecoin. Storage providers (SPs) are mapped to agents, and the environment consists of the supply dynamics described in Section 3. We divide the environment into three logical modules: a) Network State, b) Forecasting, and c) the External Environment. These components interact with each other in the following manner. Agents determine the amount of power they will onboard and renew onto the network for day \(d\). All agents' decisions are aggregated and passed into the network state module. Using the developed model of the supply dynamics, the network state is updated. By utilizing both historical network metrics and network forecasting information, agents can make rational decisions. Finally, an external environment simulates constraints that SPs are subject to in the real world, such as borrowing costs of pledge collateral. These components interact to create a closed-loop simulation of the Filecoin economy. Fig. 2 summarizes the components and dataflow of the Filecoin ABM. Agents directly influence the outcomes of the simulation, since their actions are aggregated and used as inputs to update the network state. We have developed three types of agents which use the network and forecasting information in different ways to make decisions regarding onboarding and renewing power:

1. DCAAgent - This is the dollar-cost-averaging agent; it does not use any forecasting or historical network information to make decisions. The agent is configured with a constant amount of power to onboard per day, the percentage of that power which corresponds to verified deals, and the percentage of expiring power which should be renewed. This dollar-cost-averaging strategy can be useful as a baseline for understanding the relative performance of more complex strategies.
2. FoFRAgent - This agent utilizes the rewards/sector forecast provided by the network to internally forecast the FIL-on-FIL returns (FoFR) of onboarding sectors for various sector durations, where \(\text{FoFR}=\frac{\text{rewards}}{\text{pledge}}\). This metric can additionally be generalized to introduce arbitrary cost structures.
Because pledge (Eq. 17) is dependent upon the Network QAP, and agents must make decisions for a given timestep before the overall Network QAP is aggregated for the day, it is approximated by using the previous day's pledge. If the estimated FoFR for any of the tested sector durations exceeds a configurable threshold (which indirectly represents the risk profile of the agent), then the agent will onboard a configured amount of power. It will also renew a configured amount of power under the same condition.
3. NPVAgent - This agent utilizes the rewards/sector forecast provided by the network to compute the net present value (NPV) of onboarding power for various sector durations. NPV is the present value of the expected rewards/sector less costs/sector. Present value is computed using the continuous discounting formula. The agent discount rate, configured upon instantiation, is a proxy for the risk profile of the agent, with a higher discount rate representing a higher risk aversion. The agent will onboard and renew power at a sector duration which maximizes NPV, but will take no action for day \(d\) if \(NPV<0\) for all durations tested (a sketch of this computation is given after the Forecasting subsection below).

Figure 2: Summary of the Agent Based Model of the Filecoin network, with arrows indicating direction of data flow.

### Network State

Network state consists of: a) network power, and b) the status of tokens. The total network power is summed across all of the agents' individual contributions for each day, and the type of power onboarded (CC or FIL+) is also tracked. This enables the network state to track both \(P_{d}^{\text{RB}}\) and \(P_{d}^{\text{QA}}\). Token status consists of three parts: a) the amount mined, b) the amount locked due to pledge, and c) the remainder that has been released into the circulating supply through vesting. The mined tokens are distributed to agents as rewards. The total rewards minted in a day are distributed to each agent in proportion to its fraction of the total network QAP. Tokens are locked when agents onboard power to provide a consensus pledge. This information is aggregated to compute the overall network state of the number of tokens locked. Using Eq. 4, the circulating supply of the network is computed.

### Forecasting

To enable agents to make rational decisions, relevant forecasts are computed and provided to agents, who can choose to utilize them. As previously mentioned, agents use rewards/sector to make rational decisions. This quantifies the amount of rewards an agent can expect to receive for onboarding a given sector. The metric is forecast by first utilizing the historical network RBP and QAP to train a time-series forecasting model, \(\mathcal{M}\). \(\mathcal{M}\) is used to forecast RBP and QAP trajectories until the end of the simulation, denoted by \(\hat{P}_{d}^{\text{RB}}\) and \(\hat{P}_{d}^{\text{QA}}\), respectively. \(\hat{P}_{d}^{\text{RB}}\) is then used to compute the expected minting rate, \(\hat{m}_{d}\), using Eqs. 6, 7, and 9. Finally, because rewards are distributed proportional to the network's QA power, \(\frac{\text{\emph{rewards}}}{\text{\emph{sector}}}[d]=\hat{m}_{d}/\hat{P}_{d}^{\text{QA}}\). There are currently two models implemented for RB and QA forecasting: linear extrapolation and a variant of Markov-Chain Monte Carlo [20, 21]. Agents may also elect to perform custom forecasting using network metrics, and this represents a competitive advantage that a certain agent may have.
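The forecasting and decision steps described above can be pieced together as in the following sketch; the constants follow Section 3, while the function names and the simple borrowing-cost term are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

LAMBDA = np.log(2) / (6 * 365)        # simple-minting decay rate, per day
G, B0 = np.log(2) / 365, 2.88888888   # baseline growth rate (per day) and b_0 (EiB)
M_S, M_B = 0.3 * 1.1e9, 0.7 * 1.1e9   # simple / baseline minting caps (FIL)

def cumulative_minted(d: float, rb_cum_capped: float) -> float:
    """Tokens minted by day d, combining Eqs. (6), (7) and (9).

    rb_cum_capped is the cumulative capped RBP (EiB-days) from Eq. (8),
    here taken as an input produced by the RBP forecast.
    """
    simple = M_S * (1.0 - np.exp(-LAMBDA * d))
    baseline = M_B * (1.0 - (G * rb_cum_capped / B0 + 1.0) ** (-LAMBDA / G))
    return simple + baseline

def npv_of_sector(rewards_per_sector: np.ndarray, pledge: float,
                  borrow_rate: float, days: int) -> float:
    """Continuously discounted rewards, less an assumed cost of borrowing the pledge."""
    t = np.arange(days)
    value = np.sum(rewards_per_sector[:days] * np.exp(-borrow_rate * t / 365.0))
    cost = pledge * (np.exp(borrow_rate * days / 365.0) - 1.0)
    return float(value - cost)

# An NPVAgent would evaluate npv_of_sector over several candidate durations,
# onboard at the maximizer, and take no action if every candidate is negative.
```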
### External Environment

Agents must borrow tokens in order to satisfy the consensus pledge (Eq. 17) needed to onboard power. The borrowing rate is modeled as an external environment process that specifies the discount rate, \(R_{d}\), at which agents can borrow tokens. Agents use this information to make rational decisions regarding onboarding and renewing power. The purpose of this is to model realities that SPs have to face when determining their strategy for being involved in the Filecoin network. Additional real-world complexities can also be modeled here.

### Model Validation

We begin by validating the model of Filecoin's supply dynamics that was developed and described in Section 3, using backtesting. Our approach is to instantiate one DCAAgent which onboards and renews the historical power that was onboarded onto the network for that day. This is in contrast to the typical use-case of an agent, which is making daily decisions about whether and how much power to onboard and renew. Then, the relevant statistics for circulating supply are calculated from the start of the simulation to the present date. This is then compared against actual statistics from the Filecoin network retrieved from Spacescope [22]. Fig. 3 shows the results of this experiment: (a) shows the minted tokens, and (b) shows the circulating supply. For each of these network statistics, the implemented model tracks the historical data with good accuracy. Slight differences are observed, and these can be attributed to not modeling certain intricacies of the Filecoin network, such as variable sector durations.

Figure 3: Validating our supply dynamics and ABM through backtesting. (a) The mined FIL of the model compared to the historical data, and (b) the circulating supply computed by the model against historical data.

## 5 Experiments and Results

In this section, we describe some experiments that showcase the utility of ABM in modeling blockchain networks, using the Filecoin network as a case study.

### Sensitivity of Rewards to External Discount Rates

In this experiment, we explore how the cryptoeconomics of Filecoin and external factors such as borrowing rates affect agent rewards. We instantiate two subpopulations of NPVAgents; one subpopulation is configured to only onboard verified deals (which corresponds to FIL+ power), while the second is configured to only onboard storage capacity (CC power). Both subpopulations of agents are configured to have identical risk profiles by instantiating the agents with the same discount rates. Fig. 4 shows the agent rewards trajectory, with different colors indicating different external discount rates. Fig. 4(a) shows that irrespective of external discount rates, FIL+ agents are more profitable than CC agents. This is a direct consequence of the cryptoeconomic mechanism in place in Filecoin to incentivize FIL+ data, through the 10x quality-adjusted (QA) multiplier. Secondly, we see the effect of external borrowing rates on agent profitability. As expected, higher rewards are correlated with lower borrowing rates. However, the rewards trajectory does not change linearly with the borrowing rate and starts to oscillate as borrowing rates increase. This is an example of an interesting dynamic that emerges as a result of the agent-based simulation. We extend this experiment by altering one aspect of the previous experimental setup: we increase the risk aversion of the CC agent to be twice the risk aversion of the FIL+ agent. We then examine the agent rewards trajectory, shown in Fig. 4(b). Because the FIL+ agents have the same risk as before, their rewards trajectories are identical.
However, we notice that when the external discount rate is 30%, the risk-averse CC agent manages a more positive rewards trajectory than the non-risk-averse CC agent. This effect disappears, however, as the external borrowing rates decrease.

### Wealth Concentration

In this experiment, we explore how the distribution of starting capital in the cryptoeconomic network affects the ability to earn rewards from the network. Our experimental setup consists of five DCAAgents which are configured to represent different levels of capitalization. This is represented with a vector \([a_{1},a_{2},a_{3},a_{4},a_{5}]\), where the relative capitalization of \(a_{i}\) is defined as \(c_{i}=a_{i}/\sum_{i=1}^{5}a_{i}\). In Filecoin, onboarding power requires, in addition to pledge collateral, sealing of sectors via cryptographic proofs that require large computational resources. It is reasonable to assume that agents with larger capitalization will have more hardware resources to perform this than agents with smaller capitalization, thereby having a larger sealing throughput. To model this, we scale how much power an agent is able to onboard and renew, per day, by its relative capitalization. The mapping from capitalization to sealing throughput captures the idea of wealth concentration. To compare and interpret the results, the overall power onboarded is kept constant across the three experiments. We test three distributions of initial capital:

1. All agents have equal starting capital (20%); this is considered the baseline and corresponds to the vector \([1,1,1,1,1]\).
2. One agent has 50% of the starting capital, and the remainder have \(50/4=12.5\%\) starting capital each; this corresponds to the vector \([4,1,1,1,1]\).
3. The agent capitalization follows the distribution \([33\%,27\%,20\%,13\%,7\%]\) and corresponds to the configuration vector \([5,4,3,2,1]\).

Figure 4: Experiment exploring the sensitivity of returns to external discount rates with two subpopulations of agents, FIL+ and CC. In (a) both exhibit the same risk profile; in (b) the CC agent has 2x the risk aversion of the FIL+ agent.

Fig. 5 shows the reward trajectories of each agent, relative to the baseline case where each agent has 20% of the starting capital. We observe that relative to the max-capitalized agent, the rewards trajectories of the other agents are on a decreasing trend. This is a consequence of the fact that both onboarding and renewals are a function of the agent capitalization.

## 6 Conclusion

In this paper, we have outlined a framework for applying ABM to modeling utility-based blockchain economies, and validated our framework with Filecoin as a case study. Our experiments shed light on some interesting aspects of Filecoin, including agent reward trajectories when taking into account external lending rates, and how the cryptoeconomic structure of Filecoin distributes wealth. The sensitivity experiments indicate that creating new, competitive lending markets with smart contracts leveraging programmable platforms such as FVM[23] can enable network growth and increase miner returns. The wealth concentration experiments indicate that starting capitalization has a significant effect on total rewards in the future. By explicitly modeling this effect with the supply dynamics, one can then design new incentive structures to either accentuate, maintain, or perhaps reverse the trend based on the goals of the project.
Insights such as these, enabled by the ABM framework, can help designers and creators of cryptoeconomies to more efficiently achieve their goals. This indicates that ABM can be a valuable tool for researchers to better understand and design blockchain economies. In the future, we plan to explore additional aspects of blockchain economies that are well mapped to ABMs, such as the effect of information quality, availability, and lag on agent reward trajectories, and related network science questions. Another potential research direction is to include uncertainty by considering a probabilistic ABM, while balancing the computational constraints using methods such as Multi-level and Multi-Index Monte Carlo methods.[24, 25, 26, 27]

Figure 5: The trajectory of rewards for agents with various starting capitalizations, relative to the baseline distribution, where all agents are equally capitalized (20%).

## Author Contributions

KK developed the ABM codebase and helped devise experiments that were conducted with the framework. TM developed the mathematical models of the Filecoin economy and steered the project. They both contributed equally to manuscript preparation. TM and MS implemented the initial mathematical models, which were then ported to the ABM framework. JC provided formalism and consulting on ABM-related topics. AC and ZZ helped put the project in a larger context and helped with manuscript preparation.
2301.06694
Photonic Characterization of Oxygen and Air-Annealed Zn3N2 Thin Films
Zinc nitride films were synthesized by reactive radio frequency (rf) magnetron sputtering of a zinc target using an Ar+N2 mixture. The as-deposited films were annealed in the air and O2 at 300 °C for 1 hr. The XRD measurements indicated that the films had a polycrystalline structure with a preferred (400) Zn3N2 orientation. The annealing process enhanced the crystallinity. After annealing, the AFM and SEM morphology revealed no significant change in the surface morphology and surface roughness. The direct bandgap of Zn3N2 was estimated to be in the range of 1.15-1.35 eV, where annealing resulted in a reduction of the bandgap. The films were confirmed to be p-type conduction and the resistivity was slightly increased by annealing. The photoconductivity measurements indicated that the as-deposited films did not have any photoresponse, whereas the annealed films exhibited photoconductivity.
Ting Wen, Ahalapitiya H Jayatissa
2023-01-17T04:17:38Z
http://arxiv.org/abs/2301.06694v1
# Photonic Characterization of Oxygen and Air-Annealed Zn\({}_{3}\)N\({}_{2}\) Thin Films

###### Abstract

Zinc nitride films were synthesized by reactive radio frequency (rf) magnetron sputtering of a zinc target using an Ar+N\({}_{2}\) mixture. The as-deposited films were annealed in the air and O\({}_{2}\) at 300 \({}^{\circ}\)C for 1 hr. The XRD measurements indicated that the films had a polycrystalline structure with a preferred (400) Zn\({}_{3}\)N\({}_{2}\) orientation. The annealing process enhanced the crystallinity. After annealing, the AFM and SEM morphology revealed no significant change in the surface morphology and surface roughness. The direct bandgap of Zn\({}_{3}\)N\({}_{2}\) was estimated to be in the range of 1.15-1.35 eV, where annealing resulted in a reduction of the bandgap. The films were confirmed to exhibit p-type conduction and the resistivity was slightly increased by annealing. The photoconductivity measurements indicated that the as-deposited films did not have any photoresponse, whereas the annealed films exhibited photoconductivity.

zinc nitride, thin films, sputtering deposition, thermal annealing, conductivity, carrier concentration

## 1 Introduction

Zinc nitride (Zn\({}_{3}\)N\({}_{2}\)) is one of the least studied A\({}^{\text{II}}\)B\({}^{\text{V}}\) type compound semiconductors. It is dark blue in colour and cubic in structure with a lattice constant of \(a=9.78\) Å [1]. Numerous deposition methods have been employed to synthesize Zn\({}_{3}\)N\({}_{2}\) films [1-10]. A widely used method is rf-magnetron sputtering deposition, which is well suited to large-area coating of uniform thin semiconductor layers. The reactive sputtering of a zinc target in a N\({}_{2}\) and Ar gas mixture has been employed. The Zn\({}_{3}\)N\({}_{2}\) film synthesized by rf-magnetron sputtering has a direct band gap of \(1.23\pm 0.02\) eV [5], which gives strong absorption in the IR region. Zn\({}_{3}\)N\({}_{2}\) with a nanowire microstructure was reported to exhibit ultraviolet and blue emission [8]. This interesting phenomenon indicated that zinc nitride may be suitable as an optoelectronic material for infrared (IR) (750-1600 nm) sensors, smart windows, and energy conversion devices. However, being a new material, its optical properties are still controversial, and its photoconductive behaviour has not yet been investigated. According to the reviewed works, the optical and electronic properties of Zn\({}_{3}\)N\({}_{2}\) films varied over a wide range, depending upon the deposition process and the nitrogen amount in the films [2-22]. The zinc and nitrogen concentrations in Zn\({}_{3}\)N\({}_{2}\) films can be controlled by the substrate temperature, the N\({}_{2}\) flow concentration in the sputtering mixture, and the deposition time. However, the film composition can easily change just after deposition, because Zn\({}_{3}\)N\({}_{2}\) is unstable and can absorb oxygen in humid air. Heat treatment improves the electronic properties while producing a more stable Zn\({}_{3}\)N\({}_{2}\) phase. Thus, thermal annealing is used to improve the crystal quality and to remove structural defects in Zn\({}_{3}\)N\({}_{2}\). In this paper, Zn\({}_{3}\)N\({}_{2}\) films were deposited by a radio frequency (rf) magnetron sputtering method. Thermal annealing was conducted in the air and O\({}_{2}\) to stabilize the films and modify the band gap. The films were investigated for structural, optical, and electrical properties.
The photoconductivity of the films under IR light was measured. The properties of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films were compared to the corresponding films deposited for 30 min [22] to study the effect of deposition time.

## 2 Experimental

Zinc nitride films were deposited by an rf magnetron sputtering system using a 6-inch Zn target (1/4-inch thick, purity 99.995%) and non-alkaline glass substrates (0.75 mm thick). The substrates were rinsed with acetone and isopropyl alcohol before being introduced into the sputtering system. The base pressure was in the range of 2x10\({}^{-6}\)-5x10\({}^{-6}\) Torr, and the deposition pressure was around 5x10\({}^{-3}\) Torr. The system was purged with Ar for 30 min prior to deposition to make the chamber oxygen-free. The target was pre-sputtered for 5 min to remove oxide layers produced on the target by exposing it to the air. Then the substrate was repositioned, and the sputtering was carried out using a mixture of nitrogen and argon (99.999%, N\({}_{2}\)/Ar: 1/1). The rf power density was 2.14 W/cm\({}^{2}\) and the films were deposited for 1 hr. without heating the substrate. The as-deposited films were annealed in the air and O\({}_{2}\), respectively, at 300 \({}^{\circ}\)C for 1 hr. using a tube furnace.

The crystallinity of the films was examined with X-ray diffraction (XRD, Cu K\(\alpha\) radiation, PANalytical XPert Pro MPD). The film surfaces were studied with atomic force microscopy (AFM) and scanning electron microscopy (SEM). An ultraviolet (UV) and visible double-beam spectrometer (UV-1650 PC Shimadzu) was used for measuring the optical transmittance and reflectance. A thin layer of Au (\(\sim\)75 nm) was coated on the films through a shadow mask by thermal vacuum evaporation to measure the electrical properties and infrared sensitivity. The width and spacing of the Au electrodes were 6 mm and 4 mm, respectively. Hence, the light-irradiated area was 0.24 cm\({}^{2}\). The illumination of IR light of wavelength 850 nm and intensity around 2.16 \(\upmu\)W/mm\({}^{2}\) was provided by a light emitting diode (LED). The photoconductor was installed in a closed glass chamber (200 cm\({}^{3}\)) with Au connecting electrodes and tested in vacuum (10\({}^{-3}\) Torr). The resistance of the zinc nitride layer between the Au electrodes was measured using a high-megaohm multimeter (Keithley Model: 22-816). The data were collected using a commercial software package (LabVIEW) available from the National Instruments Company. The electrical properties were investigated by Van der Pauw measurements.

## 3 Results and Discussion

### Structural property

The XRD patterns shown in Fig. 1 characterized the crystalline structure of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films. The (222), (321), (400), (440) and (600) diffraction peaks indicated that the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films had a cubic anti-bixbyite Zn\({}_{3}\)N\({}_{2}\) structure according to the report in the JCPDS file [23]. One ZnO peak with low intensity was also detected at 2\(\theta\) = 36.25\({}^{\circ}\), corresponding to the (101) plane, in the as-deposited film, which suggested that the as-deposited film was oxidized during storage in the ambient air prior to the XRD characterization. Compared to the Zn\({}_{3}\)N\({}_{2}\) films deposited for 30 min [22], the films in this study had a preferred orientation along the (400) plane rather than the (321) plane. This indicated that deposition time strongly affected the structural properties of Zn\({}_{3}\)N\({}_{2}\) films.
The relative intensity of the (400) peak was defined as

\[\mathrm{i}_{(400)}=\frac{\mathrm{I}_{(400)}}{\mathrm{I}_{(400)}+\mathrm{I}_{(222)}+\mathrm{I}_{(321)}+\mathrm{I}_{(101)}+\mathrm{I}_{(440)}+\mathrm{I}_{(600)}} \tag{1}\]

The calculated value of the relative intensity \(\mathrm{i}_{(400)}\) is shown in Fig. 2, and it was much higher than the \(\mathrm{i}_{(321)}\) of the Zn\({}_{3}\)N\({}_{2}\) films deposited for 30 min [22]. Furthermore, \(\mathrm{i}_{(400)}\) was dramatically increased and the ZnO phase was removed after annealing in the air and O\({}_{2}\). This suggested that a more single-crystalline structure was achieved for the annealed films with longer deposition time. The mean crystallite size of the predominant (400) phase was estimated using the Scherrer formula [24, 25],

\[D=0.9\lambda/(B\cos\theta_{B}), \tag{2}\]

where \(\lambda\), \(\theta_{B}\) and \(B\) are the X-ray wavelength (1.5418 Å), the Bragg diffraction angle and the peak broadening, respectively. Using the instrumental broadening of 0.05\({}^{\circ}\) and the FWHM at the (400) reflections, the grain sizes were estimated to be \(\approx\) 35.77, 33.48 and 41.80 nm, respectively, as illustrated in Fig. 3. The changes of the calculated crystallite size are also visible in the surface roughness obtained from AFM measurements (Table 1).

Figure 1: X-ray diffraction patterns of the (a) as-deposited, (b) air-annealed, and (c) O\({}_{2}\)-annealed Zn\({}_{3}\)N\({}_{2}\) film surfaces.

Figure 2: Relative intensity i\({}_{(400)}\) of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films.

Figure 3: Grain size and FWHM of the (400) phase in the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films.

### Surface Property

Figures 4 and 5 show the surface microstructure of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films on a 2 \(\upmu\)m scale using atomic force microscopy (AFM) and on a 5 \(\upmu\)m scale using scanning electron microscopy (SEM), respectively. The corresponding film roughness is given in Table 1. It was observed that the particle size and the surface roughness were increased compared to those of the Zn\({}_{3}\)N\({}_{2}\) films deposited for 30 min [22]. The grains illustrated in Fig. 4 were consistent with the calculated crystallite size in Fig. 3. The change of surface roughness was reasonable since a larger grain size resulted in a rougher surface. All in all, the film roughness and particle size were not significantly changed by annealing.

### Optical property

Figures 6 and 7 plot the optical transmittance and reflectance of the Zn\({}_{3}\)N\({}_{2}\) films in the photon wavelength range of 200-1800 nm. Both the transmittance and reflectance were decreased compared to those of the Zn\({}_{3}\)N\({}_{2}\) films deposited for 30 min [22]. This suggested that the Zn\({}_{3}\)N\({}_{2}\) films could absorb more light as the deposition time increased. The transmittance was decreased by about 10% after annealing in the air and O\({}_{2}\). The absorption edge was shifted from 800 nm to 980 nm after annealing. The thickness of a Zn\({}_{3}\)N\({}_{2}\) film can be calculated from the transmittance spectrum using the formula [26, 27]

\[x=\frac{1}{2n\left(\frac{1}{\lambda_{2}}-\frac{1}{\lambda_{1}}\right)}\,, \tag{3}\]

where \(x\) is the thickness of the Zn\({}_{3}\)N\({}_{2}\) film, \(n\) is the refractive index, and \(\lambda_{1}\) and \(\lambda_{2}\) are the wavelengths of the first two adjacent peaks from the low-energy side in the transmittance spectrum. By fitting the calculated spectra to the measured optical transmittance spectra, we have estimated the film thickness to be x = 2576, 2640, and 2661 nm for the as-deposited, air-annealed, and O\({}_{2}\)-annealed Zn\({}_{3}\)N\({}_{2}\) films, respectively. The thicknesses of the above films measured with AFM were 2560 nm, 2600 nm, and 2643 nm, respectively. The increased thickness can be interpreted as due to the improved surface crystalline structure in the (400) orientation and the interstitial oxygen ions introduced from the air and O\({}_{2}\).

Figure 6: Optical transmittance spectra of the (a) as-deposited, (b) air-annealed, and (c) O\({}_{2}\)-annealed Zn\({}_{3}\)N\({}_{2}\) film surfaces.

Figure 7: Optical reflectance spectra of the (a) as-deposited, (b) air-annealed, and (c) O\({}_{2}\)-annealed Zn\({}_{3}\)N\({}_{2}\) film surfaces.
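The parameter extraction in Eqs. (2) and (3) amounts to two one-line computations. The sketch below is ours; the function names are illustrative, and the inputs (FWHM, peak positions, refractive index) would come from the measurements rather than from the values hard-coded here.

```python
import numpy as np

WAVELENGTH_A = 1.5418  # Cu K-alpha wavelength, in angstroms

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float) -> float:
    """Eq. (2): D = 0.9 * lambda / (B * cos(theta_B)), returned in nm."""
    B = np.radians(fwhm_deg)                  # peak broadening in radians
    theta_B = np.radians(two_theta_deg / 2)   # Bragg angle
    return 0.9 * WAVELENGTH_A / (B * np.cos(theta_B)) / 10.0

def fringe_thickness_nm(n: float, lam1_nm: float, lam2_nm: float) -> float:
    """Eq. (3): thickness from two adjacent interference maxima (lam1 > lam2)."""
    return 1.0 / (2.0 * n * (1.0 / lam2_nm - 1.0 / lam1_nm))

# Example with a placeholder FWHM of 0.24 deg at 2theta = 36.8 deg,
# which yields roughly 35 nm, i.e., the order of the sizes quoted above.
print(scherrer_size_nm(fwhm_deg=0.24, two_theta_deg=36.8))
```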
The absorption coefficient, \(\alpha\), was calculated using the following equation [28]:

\[T=\frac{(1-R)^{2}e^{-\alpha d}}{1-R^{2}e^{-2\alpha d}}, \tag{4}\]

where \(T\), \(R\), \(\alpha\), and \(d\) represent the transmittance, reflectance, absorption coefficient, and film thickness, respectively. Fig. 8 shows the plot of \((\alpha h\nu)^{2}\) versus photon energy (\(h\nu\)). It is typical in direct semiconductors that the absorption coefficient reaches a value greater than 10\({}^{4}\) cm\({}^{-1}\) at photon energies above the absorption edge. Table 2 shows the corresponding band gap change with annealing. The calculated band gaps are close to those of the corresponding Zn\({}_{3}\)N\({}_{2}\) films deposited for 30 min [22]. The energy gap was estimated from the Tauc plot (Fig. 8), and the optical gap was taken as the photon energy corresponding to \(\alpha=10^{4}\) cm\({}^{-1}\). The optical gap at \(\alpha=10^{4}\) cm\({}^{-1}\) is used to identify absorber materials in thin-film solar cells [26].

### Electrical properties

The electrical properties shown in Table 3 were measured by Van der Pauw Hall-effect measurements using Au as the contact electrodes under identical conditions. The film resistivity was very low and similar to the values reported by Futsuhara et al. [5]. As the films were annealed, more and more N atoms were activated. Therefore, more holes were produced, which resulted in the increase of the hole concentration. When the film deposited for 30 min [22] was annealed in the air and O\({}_{2}\), the N atoms were not enough to form N acceptors to compensate the electrons produced by zinc interstitials or oxygen vacancies, and this mechanism converted the p-type as-deposited film into n-type. However, for the films deposited for 1 hr., the N atoms were enough to compensate the electrons and kept the conduction of the films p-type. The mobility decreased as the hole concentration increased, which is similar to the phenomenon found by Ji et al. [29]. It can be interpreted as the scattering of holes generated by the ionized N acceptors in the annealed films. Figure 9 shows the variation of the logarithm of the conductance against the reciprocal of temperature for the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films. The measurement was carried out in vacuum with the temperature varying from 20 \({}^{\circ}\)C to 250 \({}^{\circ}\)C. It was observed that the resistance of all films decreased with increasing temperature, which suggested the semiconducting nature of the Zn\({}_{3}\)N\({}_{2}\) thin films. Furthermore, it was seen that the resistance of the annealed films was higher than that of the as-deposited film.
The increased resistance in the annealed films can be explained as the result of the reduced carrier mobility shown in Table 3. The thermal activation energy of conductivity for the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) thin films is shown in Table 4. It was found that the activation energy was increased after annealing in the air and O\({}_{2}\). The likely reason is that the carriers with low mobility in the annealed films need more energy to be activated. Electronic excitation across this energy enables conduction between forbidden-gap energy levels, resulting in an increase in conductivity.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Samples** & **As-deposited** & **Air-annealed** & **O\({}_{2}\)-annealed** \\ \hline Resistivity (\(\Omega\)\(\cdot\)cm) & 0.77 & 2.0 & 1.9 \\ \hline Conduction Type & p-type & p-type & p-type \\ \hline Mobility [cm\({}^{2}\)/(V\(\cdot\)s)] & 294.6 & 24.0 & 33.6 \\ \hline N (cm\({}^{-3}\)) x10\({}^{20}\) & 2.74 & 13.02 & 9.47 \\ \hline \end{tabular} \end{table} Table 3: Resistivity, conduction type, mobility and carrier concentration (N) of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films. (Error of these measurements: Resistivity \(\pm\)0.001 \(\Omega\)\(\cdot\)cm and Mobility \(\pm\)0.1 cm\({}^{2}\)/(V\(\cdot\)s)).

Figure 9: Arrhenius plot of the (a) as-deposited, (b) air-annealed, and (c) O\({}_{2}\)-annealed Zn\({}_{3}\)N\({}_{2}\) films.

### Zn\({}_{3}\)N\({}_{2}\)-based Photoconductor

The photoconductor shown in Fig. 10 consists of a slab of Zn\({}_{3}\)N\({}_{2}\) thin film with ohmic contacts affixed to the opposite ends. The films were irradiated with an 850 nm light beam using a light-emitting diode (LED) array with an intensity at the surface of 2.16 \(\upmu\)W/mm\({}^{2}\). The long-wavelength cutoff is given by [30]

\[\lambda_{c}=\frac{hc}{E_{g}}=\frac{1.24}{E_{g}(\text{eV})}\ \ (\upmu\text{m}), \tag{5}\]

where \(\lambda_{c}\) is the wavelength corresponding to the Zn\({}_{3}\)N\({}_{2}\) film band gap \(E_{g}\) in Table 2. For wavelengths shorter than \(\lambda_{c}\), the incident radiation is absorbed by the film, and hole-electron pairs are generated by band-to-band transitions, resulting in an increase in conductivity. The intrinsic (band-to-band) photoexcitation is illustrated in Fig. 11.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Samples** & **As-deposited** & **Air-annealed** & **O\({}_{2}\)-annealed** \\ \hline Activation Energy (eV) & 0.12 & 0.19 & 0.58 \\ \hline \end{tabular} \end{table} Table 4: Activation energy of the as-deposited and annealed Zn\({}_{3}\)N\({}_{2}\) films.

Figure 10: Schematic diagram of the Zn\({}_{3}\)N\({}_{2}\) photoconductor.

Figure 11: Process of intrinsic photoexcitation.

On the other hand, the air- and O\({}_{2}\)-annealed films showed a very clear change of photoconductivity, as shown in Fig. 12(a) and (b). Previous investigations also suggested that oxygen plays a critical role in the electrical properties of zinc nitride films [24]. The decrease of the resistance was 1.39% and 1.02%, respectively. Compared to the as-deposited film, the annealed films exhibited a lower optical band gap and a higher carrier concentration. This can be interpreted as both the intrinsic and extrinsic photoexcitation occurring more easily under the incident light. The carrier mobility and optical band gap were close between the two annealed films. Nevertheless, the air-annealed film showed a higher carrier concentration and a lower activation energy. This indicated that, besides the intrinsic excitation, the extrinsic photoexcitation occurred more easily than in the O\({}_{2}\)-annealed film. Therefore, the air-annealed film exhibited higher sensitivity than the O\({}_{2}\)-annealed film. It was also noted that the photoconductivity was much higher than that of the corresponding films deposited for 30 min [22], which indicated that the photoconductivity increased with increasing deposition time.
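As a quick check of Eq. (5), the band gaps in Table 2 place the cutoff between roughly 0.92 \(\upmu\)m and 1.08 \(\upmu\)m, so the 850 nm LED used here lies above the absorption edge of every film; the short sketch below simply evaluates the formula.

```python
def cutoff_wavelength_um(e_g_ev: float) -> float:
    """Eq. (5): long-wavelength cutoff, lambda_c = 1.24 / E_g, in micrometers."""
    return 1.24 / e_g_ev

for e_g in (1.35, 1.15):  # band-gap range from Table 2
    print(e_g, round(cutoff_wavelength_um(e_g), 3))  # ~0.919 and ~1.078 um
```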
The simulated solar light was converted into a single wavelength between 300 nm and 1000 nm by a monochromator. Fig. 13 illustrates the change of the resistance when the monochromatic light fell on the surface of the air-annealed Zn\({}_{3}\)N\({}_{2}\) film. It can be concluded that the film was photoconductive at all the monochromatic wavelengths. The sensitivity increased with decreasing wavelength. The likely reason is that light with a shorter wavelength carries a higher photon energy, according to Eq. (5). Therefore, the more hole-electron pairs were generated, the more the resistance decreased.

Figure 12: Photo-response and photo-recovery for the Zn\({}_{3}\)N\({}_{2}\) films annealed in the (a) air and (b) O\({}_{2}\). (Resolution of resistance measurement = 0.005x10\({}^{7}\) \(\Omega\)).

## 4 Conclusion

In this investigation, Zn\({}_{3}\)N\({}_{2}\) thin films were synthesized using rf-magnetron sputtering and annealed in the air and in O\({}_{2}\) for 1 hr at 300 \({}^{\circ}\)C. The films were opaque, (400)-preferred-oriented, and conductive p-type materials. The intensity of the predominant peak was dramatically improved after annealing in the air and O\({}_{2}\). Comparatively, the Zn\({}_{3}\)N\({}_{2}\) films synthesized for 30 min were oriented along the (321) phase and converted to n-type conduction after annealing. The calculated crystal size and surface roughness were not significantly changed by annealing. However, the calculated thickness of the film increased by \(\sim\)3% with annealing. This can be interpreted as due to the improved surface crystalline structure and the interstitial oxygen ions introduced by the air and O\({}_{2}\). The annealing process reduced the optical band gap from 1.35 to 1.15 eV, which is close to that of the corresponding Zn\({}_{3}\)N\({}_{2}\) films synthesized for 30 min. According to the Van der Pauw measurements, the films exhibited low resistivity. The annealing process activated N acceptors, which decreased the mobility and increased the carrier concentration. The photoconductivity measurement showed that the as-deposited film showed no response to the incident light. Nevertheless, the conductivity of the films annealed in the air and O\({}_{2}\) was increased by 1.39% and 1.02%, respectively, which is much higher than for the corresponding films deposited for 30 min. Therefore, it can be concluded that a longer deposition time strongly affected the structural, electrical and optical properties of the Zn\({}_{3}\)N\({}_{2}\) films, and ultimately improved the film photoconductivity.

**Acknowledgements:** This work was supported by a grant (ECS-0928440) from the National Science Foundation (NSF) of USA.
2310.01862
Emergent dipole moment conservation and subdiffusion in tilted chains
We study the transport dynamics of an interacting tilted (Stark) chain. We show that the crossover between diffusive and subdiffusive dynamics is governed by $F\sqrt{L}$, where $F$ is the strength of the field, and $L$ is the wave-length of the excitation. While the subdiffusive dynamics persist for large fields, the corresponding transport coefficient is exponentially suppressed with $F$ so that the finite-time dynamics appear almost frozen. We explain the crossover scale between the diffusive and subdiffusive transport by bounding the dynamics of the dipole moment for arbitrary initial state. We also prove its emergent conservation at infinite temperature. Consequently, the studied chain is one of the simplest experimentally realizable models for which numerical data are consistent with the hydrodynamics of fractons.
S. Nandy, J. Herbrych, Z. Lenarčič, A. Głódkowski, P. Prelovšek, M. Mierzejewski
2023-10-03T07:51:46Z
http://arxiv.org/abs/2310.01862v2
# Emergent dipole moment conservation and subdiffusion in tilted chains ###### Abstract We study the transport dynamics of an interacting tilted (Stark) chain. We show that the crossover between diffusive and subdiffusive dynamics is governed by \(F\sqrt{L}\), where \(F\) is the strength of the field, and \(L\) is the wave-length of the excitation. While the subdiffusive dynamics persist for large fields, the corresponding transport coefficient is exponentially suppressed with \(F\) so that the finite-time dynamics appear almost frozen. We explain the crossover scale between the diffusive and subdiffusive transport by bounding the dynamics of the dipole moment for arbitrary initial state. We also prove its emergent conservation at infinite temperature. Consequently, the studied chain is one of the simplest experimentally realizable models for which numerical data are consistent with the hydrodynamics of fractons. _Introduction._ For generic closed many-body systems, the out-of-equilibrium dynamics results in a relaxation into a state of equilibrium specified by the conserved quantities. In such systems, the long-wavelength excitations associated with these quantities attenuate following the near-universal Fick's law of diffusion. For more than a decade, much attention has been devoted to systems that could violate these ubiquitous properties, displaying anomalous diffusion or even failing to thermalize entirely. Tilted (Stark) quantum systems, subjected to a linear potential, offer a concrete and experimentally accessible platform to explore these extraordinary phenomena. Such systems can be realized in cold-atom experiments [1; 2; 3], and their properties have served as one of the main motivations for theoretical studies concerning Stark many-body localization (SMBL), Hilbert space fragmentation, and fracton hydrodynamics. SMBL emerged as a phenomenon that was expected to exhibit physics resembling the conventional many-body localization (MBL) in that single-particle Stark localization survives despite the presence of interactions between particles [4; 5; 6]. Later on, SMBL was studied theoretically for various models [7; 8; 9; 10] and experimentally also in a trapped-ion quantum simulator [11]. Despite a similarity to MBL, the nonergodicity of SMBL systems has a distinct physical origin [2; 12] and is expected to be transient, at least in finite systems [13]. In the case of large tilt, the Schrieffer-Wolff transformation allows one to derive (approximate) effective Hamiltonians, which strictly conserve the dipole moment [2; 14]. Combination of the particle-number and dipole conservations leads to extensive fragmentation of the Hilbert space [15; 16; 17] and to a breakdown of thermalization [18; 19; 20; 21; 22]. On the one hand, the dipole moment is not strictly conserved in experimentally relevant tilted models. Therefore, some studies indicate that the nonergodicity of tilted chains is only a prethermal phenomenon, which is eventually replaced by thermalization [2; 3; 18]. On the other hand, fluids in which charge and dipole moment are conserved [23] exhibit unconventional transport properties, as described by the framework of fracton hydrodynamics [24; 25; 26]. One of the distinctive features of dipole-conserving systems is the prediction of subdiffusive relaxation of charge-density modulation. A modulation with wavevector \(q\) relaxes with the rate \(\Gamma\propto q^{4}\)[27; 28; 29], as observed experimentally in strongly tilted planar cold-atom lattices [1]. 
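Before turning to the microscopic model, it is useful to see how dipole conservation modifies relaxation at the hydrodynamic level. The sketch below is our illustration (not the paper's microscopic simulation): it evolves a single density mode under \(\partial_{t}n=-D\partial_{x}^{4}n\) exactly in Fourier space and confirms the decay rate \(\Gamma=Dq^{4}\), to be contrasted with \(\Gamma=Dq^{2}\) for ordinary diffusion.

```python
import numpy as np

L, D = 256, 1.0
x = np.arange(L)
q = 2 * np.pi / L                 # longest-wavelength mode
n0 = 0.01 * np.cos(q * x)         # small initial density modulation

def evolve(n: np.ndarray, t: float, z: int) -> np.ndarray:
    """Exact evolution of d_t n = -D |k|^z n in Fourier space (z=2 or z=4)."""
    k = 2 * np.pi * np.fft.fftfreq(L)
    return np.fft.ifft(np.fft.fft(n) * np.exp(-D * np.abs(k) ** z * t)).real

for t in (0.0, 5.0e6, 1.0e7):
    amp = np.max(np.abs(evolve(n0, t, z=4)))
    print(t, amp, 0.01 * np.exp(-D * q**4 * t))  # numerics match Gamma = D q^4
```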
A one-dimensional (1D) tilted chain with short-range interactions is the simplest system in which these phenomena may possibly occur and which is closely related to the experimental setups. However, so far, it is not clear how to properly reconcile the presence of seemingly conflicting phenomena: the subdiffusive relaxation (with rate \(\Gamma=Dq^{4}\)), originating from the conservation of the dipole moment, the transient absence of ergodicity originating from nearly fragmented Hilbert space, and the possible asymptotic thermalization originating from the fact that the dipole moment is not strictly conserved but is rather a time-dependent quantity. In this Letter, we study interacting spinless fermions on chains with \(L\) sites tilted by the electric field \(F\). We present compelling numerical evidence supporting the subdiffusive relaxation of the hydrodynamic density modes \(A_{q}\propto\exp(-Dq^{z}t)\). Upon increasing \(F\), for the smallest considered \(q\sim 1/L\), the exponent changes from the diffusive value \(z=2\) to the subdiffusive \(z=4\) and, quite unexpectedly, the crossover in \(z\) is determined by the magnitude of \(F\sqrt{L}\). Moreover, the transport coefficient \(D\) in the subdiffusive regime decreases exponentially with the field, which can be reconciled with previous observations of nonergodicity for strong \(F\). The diffusion-to-subdiffusion crossover can be linked with the emergent conservation of the dipole moment that is shown to hold true also for nonequilibrium evolution starting from an arbitrary initial state. In the case of infinite temperature, one can prove a stronger bound on the time-dependence of the dipole moment. The latter bound holds for a broad class of models and also for multidimensional systems. _Steady-state properties._ We first study a Stark chain with \(L\) sites and open boundary conditions (obc). It contains \(L/2\) spinless fermions interacting via nearest neighbor (nn) repulsion \(V\), \[H = H_{0}+FM\;,\] \[H_{0} = \sum_{l}\left(c_{l}^{\dagger}c_{l+1}+c_{l+1}^{\dagger}c_{l}\right)+ V\sum_{l}\tilde{n}_{l}\tilde{n}_{l+1}+H^{\prime}\;. \tag{1}\] Here, \(c_{l}^{\dagger}\) creates a particle at site \(l\), \(n_{l}=c_{l}^{\dagger}c_{l}\), \(\tilde{n}_{l}=n_{l}-1/2\), and \(M=\sum_{l}(l-L/2)\tilde{n}_{l}\) is the dipole moment. We have added a term \(H^{\prime}\) that breaks the integrability of the model at \(F=0\), allowing for normal diffusion at weak \(F\), and its choice is optimized for the applied numerical method. In the main text, we discuss numerical results for \(V=3\), whereas results for smaller \(V\) are shown in the Supplementary Material, Ref. [30]. First, we study an open chain, Eq. (1), that is driven via boundary Lindblad operators with a small particle current injection rate \(\mu\)[31]. We employ the time-evolving block decimation (TEBD) technique for vectorized density matrices [32; 33] to solve the Lindblad master equation. It allows us to establish the nonequilibrium steady state (NESS) for which we calculate the normalized particle current, \(I/\mu\), and the spatial profile of particles, \(\langle\tilde{n}_{l}\rangle\). When using the TEBD, it is convenient to stay within the nearest-neighbor interaction. Therefore, to break the integrability at \(F=0\), we resort to a term having the form \(H^{\prime}=\tilde{V}\sum_{l}(-1)^{l}\tilde{n}_{l}\tilde{n}_{l+1}\). We set \(\tilde{V}=0.4\) throughout the paper unless mentioned otherwise. All other details concerning numerical calculations are explained in Ref. [30]. Fig. 
1(a) shows the rescaled steady-state density profiles, \(\langle\tilde{n}_{l}\rangle/\mu\), for two different field strengths \(F\) with a system size of \(L=50\). Diffusive systems follow the standard hydrodynamic equation, \(\partial_{t}n-D\partial_{x}^{2}n=0\), characterized by a linear steady-state profile, \(\partial_{x}^{2}n=0\). In Fig. 1(a), we indeed see a linear profile at \(F=0\), consistent with the exponent being \(z=2\). However, for a large enough field \(F=0.8\), the numerical results clearly reveal the nonlinearity of the profile. Such a pronounced non-linear profile can be very accurately fitted to a cubic function, \(\langle\tilde{n}_{l}\rangle=ax+bx^{3},x=2l/L-1\), as shown in Fig. 1(a). Note that a cubic profile of the steady state is consistent with the generalized hydrodynamic equation for systems conserving the total charge and dipole moment [23], \(\partial_{t}n+D\partial_{x}^{4}n=0\), from which one obtains \(\partial_{x}^{4}n=0\) and \(z=4\). To study the crossover from diffusive (\(z=2\)) to subdiffusive (\(z=4\)) regimes in more detail, we have calculated the \(L\)-dependence of the normalized particle current \(I/\mu\) shown in Fig. 1(b). The dependence of the dynamical exponent \(z\) on \(L\) is manifested via the bending of the curves, particularly pronounced for the larger \(F\). To extract the exponent \(z\), related to the relaxation of the slowest modes on the system of size \(L\), we fit the slope between two consecutive calculated points on the plot of \(\log(I/\mu)\) vs. \(\log(L)\) and locally assume the scaling \(I\sim L^{1-z}\). Fig. 1(c) shows that the extracted \(z\) collapses onto a single curve when plotted as a function of \(F\sqrt{L}\). Later on, we explain the origin of this unexpected scaling. It indicates that macroscopic chains (\(L\to\infty\)) with nonzero \(F\) are always subdiffusive, whereas diffusive behavior can be observed only in finite systems or at shorter wave-lengths. Another important observation that follows directly from panel (d) of Fig. 1 is that the normalized current \(I/\mu\) for a fixed \(\mu\) decays exponentially with \(F\).

Figure 1: Results for NESS in chains with boundary driving. (a) Normalized amplitude of the spatial profile of particle density for \(L=50\) and two different tilts: \(F=0\) (diffusive) and \(F=0.8\) (subdiffusive). A cubic profile, \(\langle\tilde{n}_{l}\rangle=ax+bx^{3},x=2l/L-1\), consistent with the hydrodynamic equation corresponding to the \(z=4\) case, fits the data well for \(F=0.8\). (b) NESS current \(I/\mu\) vs. \(L\) for different values of field \(F\), where \(F\in[0.0,0.1,0.2,0.4,0.6,0.8]\). The arrow points in the direction of increasing \(F\). (c) Exponent \(z\) as a function of \(F\sqrt{L}\). (d) NESS current \(I/\mu\) reveals the exponential decay with \(F\).

_Dynamics of the density modulations._ Next we confirm that the conclusions formulated from the NESS are consistent also with the time-evolution of initial states with spatially modulated densities of particles, \(\delta n_{l}=\langle\tilde{n}_{l}\rangle\propto\cos(ql)\) with \(q=2\pi m/L\ll 1\) and \(m=1,2\). The evolution is obtained via the Lanczos propagation method [34; 35], which allows us to study chains with up to \(L=26\) sites. In order to reduce boundary effects, the time-evolution is carried out for a system equivalent to Eq. (1), but with a time-dependent flux and periodic boundary conditions \[H_{F}(t)=\sum_{l}\left(e^{-iFt}c_{l}^{\dagger}c_{l+1}+e^{iFt}c_{l+1}^{\dagger}c_{l}+Vn_{l}n_{l+1}\right)+H^{\prime}\;. \tag{2}\] Here, it is more convenient to choose another form of the integrability-breaking term that does not affect the translational symmetry, namely \(H^{\prime}=V^{\prime}\sum_{l}n_{l}n_{l+2}\), where we take \(V^{\prime}=2\). We start the time-evolution from a microcanonical thermal state obtained numerically [36; 37; 38] for the Hamiltonian \(H_{t<0}=H_{F=0}(t)+\sum_{l}\cos(ql)n_{l}\), where the first term is defined in Eq. (2).
In order to be in the linear response regime and to obtain a small but nonzero particle-density modulation, we use a large but finite temperature \(kT=1/\beta=10\). Within the high-temperature expansion one estimates \(\langle\tilde{n}_{l}\rangle\simeq-(\beta/4)\cos(ql)\ll 1/2\). At time \(t=0\), we quench the electric field \(F\) and determine the evolution under the Hamiltonian (2) with \(F\neq 0\). We calculate \(\delta n(t)=\sqrt{\langle(\tilde{n}_{l})^{2}\rangle_{l}}\), where \(\langle...\rangle_{l}\) denotes averaging over all lattice sites. Technical details are discussed in the Supplementary Material, Ref. [30]. Results in Figs. 2(a) and 2(b) show the time-dependence of the ratio \(\delta_{t}=\delta n(t)/\delta n(0)\) at different fields. It decays (up to an additive finite-size constant [30]) exponentially with time, as expected for systems with either diffusive or subdiffusive transport. We therefore use the ansatz \(\delta_{t}=\exp(-\Gamma_{q}t)\) with \(\Gamma_{q}=Dq^{z}\). Numerical results for \(\Gamma_{q}\) at the two smallest wave vectors \(q=2\pi/L,4\pi/L\) determine the exponent \(z\) and the transport coefficient \(D\), which are shown in Figs. 2(c) and 2(d), respectively. Moreover, results for the decay rates \(\Gamma_{min}\) for the smallest \(q=2\pi/L\) in Fig. 2(d) reveal an exponential dependence on \(F\), and they also agree with results obtained from the density correlation-function analysis within the original tilted model, Eq. (1), as described in Ref. [30]. Results obtained from the NESS properties (Fig. 1) as well as from the dynamics of the spatial profiles (Fig. 2) support a consistent picture of transport in the studied system. The crossover from diffusive transport (\(z=2\)) to subdiffusive (\(z=4\)) depends on the strength of the field as well as on the length-scale of the modulation (or equivalently on the related wave vector \(q\propto 1/L\)) and follows the scaling \(z=z(F\sqrt{L})\). While the transport remains subdiffusive even for strong \(F\), the corresponding transport coefficient, \(D\), is strongly suppressed, i.e., the decrease of \(D\) with increasing \(F\) is at least exponential. The latter is in agreement with a Floquet interpretation of the problem, where \(F\) plays the role of a large frequency, causing transitions that are exponentially suppressed in the number of excitations needed to absorb such a large energy [39; 40; 41]; however, one should note that we observe exponential suppression already at intermediate \(F\). It should also be noted that the Floquet Hamiltonian, Eq. (2), and the tilted model, Eq. (1), are equivalent in finite systems only up to boundary terms. Furthermore, the \(D(F)\) dependence of the transport coefficient also partially resembles the disorder-dependence of the diffusion constant in disordered interacting chains [42; 43; 44; 45; 46; 38; 47].
As a consequence, finite systems appear almost localized for very strong fields, in agreement with previous numerical studies that suggested Stark many-body localization and the absence of thermalization in the effective models with fragmented Hilbert spaces.

_Bound on the dynamics of the dipole moment._ The dynamics of the dipole moment remains an important open issue, which determines whether transport in the studied systems is correctly captured by the fracton hydrodynamics. A simple bound on the maximal variation of the dipole moment during the time-evolution can be obtained from the identity, \[\frac{\mathrm{d}\langle M\rangle_{t}}{\mathrm{d}t}=\frac{1}{F}\frac{\mathrm{d}\langle H-H_{0}\rangle_{t}}{\mathrm{d}t}=-\frac{1}{F}\frac{\mathrm{d}\langle H_{0}\rangle_{t}}{\mathrm{d}t}, \tag{3}\] where \(H\) and \(H_{0}\) are introduced in Eq. (1), while \(\langle M\rangle_{t}=\langle\psi(0)|\exp(iHt)M\exp(-iHt)|\psi(0)\rangle\) and \(|\psi(0)\rangle\) is the initial state. The change of \(\langle H_{0}\rangle_{t}\) is limited by the span of the eigenvalues of \(H_{0}\), leading to a bound that is linear in \(L\), i.e., \(|\langle H_{0}\rangle_{t}-\langle H_{0}\rangle_{t^{\prime}}|<\alpha L\). Here \(\alpha\) is an \(F\)- and \(L\)-independent constant determined by the parameters of \(H_{0}\). Then one gets from Eq. (3) also a bound on the variation of the dipole moment, \(|\langle M\rangle_{t}-\langle M\rangle_{t^{\prime}}|<\delta_{M}=\alpha L/F\). The latter bound should be compared to the width of the spectrum of the dipole moment operator, \(M|\psi_{n}\rangle=d_{n}|\psi_{n}\rangle\). The density of its eigenvalues, \(\rho(d)=\frac{1}{Z}\sum_{n}\delta(d-d_{n})\), is described by a Gaussian with the variance \[\sigma_{M}^{2}=\frac{1}{Z}\mathrm{Tr}(M^{2})=\sum_{l=-\frac{L}{2}+1}^{\frac{L}{2}}\frac{l^{2}}{4}\simeq\frac{L^{3}}{48}. \tag{4}\] Consequently, one gets \[\frac{\delta_{M}}{\sigma_{M}}=\frac{4\alpha\sqrt{3}}{F\sqrt{L}}. \tag{5}\]

Figure 2: Panels (a) and (b) depict the time evolution of the normalized amplitudes, \(\delta_{t}\), of the spatially modulated profile for \(L=24\) with the wave-vectors \(q=2\pi/L\) and \(q=4\pi/L\), respectively. For both panels: \(F\in[0.2,0.5,1.0,1.5,2.0,3.0]\) and the arrow points in the direction of increasing \(F\). Panels (c) and (d) respectively show the field (\(F\)) dependence of the exponent \(z\) and that of the transport coefficient \(D\), along with the decay rate, \(\Gamma_{min}=Dq^{z}\), for the smallest \(q=2\pi/L\). Results in (c) and (d) are obtained from exponential fits to the data in the shaded areas in (a) and (b). Panel (d) also shows the \(\Gamma_{min}\) obtained for the Hamiltonian (1) with obc (see the Supplementary Material [30] for details).

For arbitrary nonzero \(F\) and sufficiently large \(L\), changes in the dipole moment become negligible and the fractonic dynamics sets in. The ratio in Eq. (5) also explains the unexpected scaling of the crossover from normal diffusive transport to subdiffusive dynamics shown in Fig. 1(c). We stress that Eq. (5) holds for an arbitrary initial \(|\psi(0)\rangle\) and hence is applicable also to nonequilibrium dynamics. This equation originates from the exceptionally broad spectrum of the dipole moment. Namely, the width of the spectrum of typical extensive operators (e.g., \(H_{0}\) or \(H^{\prime}\)) increases as \(L^{1/2}\), in contrast to the \(L^{3/2}\) dependence of \(\sigma_{M}\) in Eq. (4).
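As a quick numerical check of Eq. (4) and of the ratio in Eq. (5), the following sketch (our own illustration; the values of \(\alpha\) and \(F\) are arbitrary placeholders) evaluates the exact sum \(\sum_{l}l^{2}/4\) and compares it with \(L^{3}/48\):

```python
import numpy as np

def sigma_M_squared(L):
    # Exact variance of the dipole-moment spectrum, Eq. (4):
    # sum of l^2 / 4 over l = -L/2 + 1, ..., L/2.
    l = np.arange(-L // 2 + 1, L // 2 + 1)
    return np.sum(l**2) / 4.0

alpha, F = 1.0, 0.5  # placeholder constants
for L in (24, 50, 200, 1000):
    exact = sigma_M_squared(L)
    # Bound on the dipole variation relative to the spectral width, Eq. (5).
    ratio = (alpha * L / F) / np.sqrt(exact)
    print(f"L={L:5d}  sigma_M^2={exact:.3e}  L^3/48={L**3 / 48:.3e}  "
          f"delta_M/sigma_M={ratio:.3f}")
```

The ratio decreases as \(1/\sqrt{L}\), illustrating why the accessible fraction of the dipole spectrum is controlled by the combination \(F\sqrt{L}\).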
_Emergent conservation of the dipole moment at infinite temperature._ In the case of dynamics at \(T\rightarrow\infty\), one can formulate a stronger bound on the variation of \(M\). Below we prove that the dipole moment is conserved in macroscopic systems, i.e., \[\lim_{L\rightarrow\infty}\frac{\langle M(t)M\rangle}{\langle MM\rangle}=1,\qquad M(t)=e^{iHt}Me^{-iHt}\;. \tag{6}\] Eq. (6) will be shown to hold at any time \(t\) for arbitrary nonzero \(F\). From now on, we use the symbol \(\langle...\rangle\) to denote the \(T\rightarrow\infty\) average, \(\langle...\rangle=(1/Z)\mathrm{Tr}(...)\). We recall that \(||A||^{2}=\langle AA\rangle\) is the (squared) Hilbert-Schmidt (HS) norm of a Hermitian operator \(A\), while \(\langle AB\rangle\) is the HS inner product of Hermitian \(A\) and \(B\). We note also that the two terms entering the Hamiltonian (1) are mutually orthogonal, \(\langle H_{0}M\rangle=0\). Due to this orthogonality, one obtains \[||H||^{2} = ||H_{0}||^{2}+F^{2}||M||^{2}, \tag{7}\] \[\frac{||H_{0}||}{||M||} \propto \frac{L^{1/2}}{L^{3/2}}, \tag{8}\] where we used Eq. (4). As a central step, we split the dipole moment into two mutually orthogonal parts, \(M=M^{\parallel}+M^{\perp}\), defined via the projection \[M^{\parallel} = \frac{\langle MH\rangle}{\langle HH\rangle}H,\qquad M^{\perp}=M-M^{\parallel}, \tag{9}\] \[||M||^{2} = ||M^{\parallel}||^{2}+||M^{\perp}||^{2}, \tag{10}\] so that \(M^{\parallel}\) is conserved by construction. Using Eqs. (7), (9), and (10) we calculate the HS norms of both components of the dipole moment \[||M^{\parallel}||^{2} = \frac{\langle MH\rangle^{2}}{||H||^{2}}=||M||^{2}\frac{F^{2}||M||^{2}}{||H_{0}||^{2}+F^{2}||M||^{2}},\] \[||M^{\perp}||^{2} = ||M||^{2}\frac{||H_{0}||^{2}}{||H_{0}||^{2}+F^{2}||M||^{2}}. \tag{11}\] Finally, we obtain a bound on the numerator in Eq. (6), \[\langle[M^{\parallel}+M^{\perp}(t)]M\rangle \geq ||M^{\parallel}||^{2}-|\langle M^{\perp}(t)M\rangle| \tag{12}\] \[\geq ||M^{\parallel}||^{2}-||M^{\perp}||\;||M||,\] where we used the orthogonality \(\langle M^{\parallel}M^{\perp}\rangle=0\) as well as the Cauchy-Schwarz (CS) inequality for \(|\langle M^{\perp}(t)M\rangle|\). The CS inequality also implies that the correlation function in Eq. (6) is not larger than unity, hence \[1\geq\frac{\langle M(t)M\rangle}{||M||^{2}}\geq 1-\frac{||M^{\perp}||}{||M||}-\frac{||M^{\perp}||^{2}}{||M||^{2}}. \tag{13}\] These inequalities, together with Eqs. (8) and (11), prove Eq. (6). The details of the model enter the above derivation only via Eqs. (7) and (8), which hold true for a broad class of Hamiltonians. Moreover, the same reasoning is applicable also to multidimensional systems with \(L\) sites when \(||H_{0}||\propto L^{1/2}\) while \(||M||\propto L_{F}L^{1/2}\), where \(L_{F}\) is the size of the system along the field. Then the ratio in Eq. (8) vanishes when \(L_{F}\) becomes infinite. We note that the bound on the variation of the dipole moment for a chain at \(T\rightarrow\infty\), Eq. (13), is stronger than the one obtained in Eq. (5). Therefore, the emergent conservation of the dipole moment at infinite temperature takes place for much weaker fields, \(F\gg 1/L\) (see also Refs. [1; 48]), compared with dynamics starting from an arbitrary nonequilibrium state, where one needs \(F\gg 1/\sqrt{L}\).

_Concluding remarks._ We have studied tilted (Stark) chains with short-range interactions.
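The projection argument of Eqs. (9)-(11) is easy to verify numerically. The sketch below (our own illustration with random Hermitian matrices, not the physical model) enforces \(\langle H_{0}M\rangle=0\) by construction and then checks the norm identities:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, F = 64, 0.5

def hermitian(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

def hs(A, B):
    # Hilbert-Schmidt inner product <AB> = (1/Z) Tr(AB), with Z = dim.
    return np.trace(A @ B).real / A.shape[0]

H0, M = hermitian(dim), hermitian(dim)
M = M - hs(M, H0) / hs(H0, H0) * H0   # enforce <H0 M> = 0, as in the model
H = H0 + F * M

# Split M into the conserved part parallel to H and the rest, Eq. (9).
M_par = hs(M, H) / hs(H, H) * H
M_perp = M - M_par

# Norm identities of Eqs. (10) and (11).
lhs = hs(M_par, M_par)
rhs = hs(M, M) * F**2 * hs(M, M) / (hs(H0, H0) + F**2 * hs(M, M))
print(np.isclose(lhs, rhs))                             # True
print(np.isclose(hs(M, M), lhs + hs(M_perp, M_perp)))   # True
```

Since \(||H_{0}||/||M||\propto 1/L\) for a chain, \(||M^{\perp}||/||M||\) vanishes with growing \(L\), which is exactly what makes the bound (13) tight.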
Such a system has previously been studied as a prototype model for several phenomena, ranging from Stark many-body localization and the subdiffusive relaxation of density profiles to the Hilbert-space fragmentation found in the effective models. Our numerical studies provide a coherent and unifying picture that captures these phenomena, as well as quantitative results for the dynamics of tilted chains. We have found that the relaxation rate, \(\Gamma_{q}=Dq^{z}\), of the density profile with wave-vector \(q\sim 1/L\) exhibits a crossover from a diffusive behavior, \(z=2\), for small \(F\sqrt{L}\) to a subdiffusive one, \(z=4\), when \(F\sqrt{L}\) is large. The subdiffusive transport coefficient \(D\) is exponentially suppressed for strong fields \(F\), so that the finite-time dynamics can hardly be distinguished from nonergodic behavior. The exponential suppression of \(D\) can be explained via an analogy with driven Floquet systems, as used in Eq. (2). Nevertheless, we have not found any signature of the strict nonergodicity expected in SMBL or in the effective models with a strictly conserved dipole moment and finite-range interactions. The unexpected scaling \(z=z(F\sqrt{L})\) can be linked to the fraction of the spectrum of the dipole moment, \(M\), that is accessible to the system during its evolution. It means that macroscopic chains (isolated in the bulk) subject to nonzero \(F\) are subdiffusive (at small enough \(q\)) and that they evolve within a vanishingly small part of the spectrum of \(M\), independently of the initial state of the evolution. At the same time, the dynamics of \(M\) at \(T\rightarrow\infty\) reveals the emergent conservation of the dipole moment [in the sense of Eq. (6)], shown formally for a broad class of models and also for multidimensional systems. Our results indicate that an interacting tilted chain can be considered one of the simplest systems that realize fractonic hydrodynamics; however, the corresponding transport coefficient is exponentially suppressed by strong fields.

## Acknowledgments

S.N. and Z.L. thank Marko Ljubotina and Michael Knap for illuminating discussions. We acknowledge the support by the National Science Centre, Poland, via projects 2020/37/B/ST3/00020 (M.M.) and 2019/34/E/ST3/00405 (A.G.), and the support by the Slovenian Research Agency via the program P1-0044 (P.P.) and projects J1-2463 and N1-0318 (Z.L. and S.N.). Z.L. and S.N. were also supported by the QuantERA grants QuSiED and T-NiSQ, by MVZI, QuantERA II JTC 2021. The numerical calculations were carried out at the facilities of the Wroclaw Centre for Networking and Supercomputing, at the Spinon cluster of the Jozef Stefan Institute, Ljubljana, and on the supercomputer Vega at the Institute of Information Science (IZUM) in Maribor, Slovenia. Our TEBD code was written using the ITensors library in Julia [49].
2305.13008
Heuristics Optimization of Boolean Circuits with application in Attribute Based Encryption
We propose a method of optimizing monotone Boolean circuits by re-writing them in a simpler, equivalent form. We use in total six heuristics: Hill Climbing, Simulated Annealing, and variations of them, which operate on the representation of the circuit as a logical formula. Our main motivation is to improve performance in Attribute-Based Encryption (ABE) schemes for Boolean circuits. Therefore, we show how our heuristics improve ABE systems for Boolean circuits. Also, we run tests to evaluate the performance of our heuristics, both as a standalone optimization for Boolean circuits and also inside ABE systems.
Alexandru Ionita, Denis-Andrei Banu, Iulian Oleniuc
2023-05-22T13:12:09Z
http://arxiv.org/abs/2305.13008v1
# Heuristics Optimization of Boolean Circuits with application in Attribute Based Encryption

###### Abstract

We propose a method of optimizing monotone Boolean circuits by re-writing them in a simpler, equivalent form. We use in total six heuristics: Hill Climbing, Simulated Annealing, and variations of them, which operate on the representation of the circuit as a logical formula. Our main motivation is to improve performance in Attribute-Based Encryption (ABE) schemes for Boolean circuits. Therefore, we show how our heuristics improve ABE systems for Boolean circuits. Also, we run tests to evaluate the performance of our heuristics, both as a standalone optimization for Boolean circuits and also inside ABE systems.

Keywords: Heuristics · Nature-inspired models · Encryption · Public-key cryptography · Boolean circuit minimization · Hill Climbing · Simulated Annealing

## 1 Introduction

Modern software systems rely more and more on cloud services for hosting. These systems raise a significant privacy problem: when using such a service for hosting, including database storage, the cloud service provider usually has access to all the sensitive data, such as client names, addresses, and, depending on the application hosted, possibly medical data or other types of private documents. The obvious solution would be to encrypt the data stored in the cloud. However, this raises the problem of finding expressive encryption systems, in order to obtain a fine-grained access-granting mechanism, as we do not wish to offer access to a sensitive document to an unauthorized person. For example, let us consider a scenario where a healthcare provider's app is hosted in the cloud. There, people could upload medical documents, download medical test results, and talk with doctors. In order to grant access only to those who should have it, and to no one else, the old-fashioned way would be to generate a new encryption key for each document and encrypt it with the respective key. However, a document's decryption key would have to be shared with all the persons who should be able to decrypt it. Having hundreds or thousands of such documents makes this approach infeasible, as each one of the documents would have its own keys. This is where Attribute-Based Encryption (ABE), a relatively new encryption system first introduced in [1], comes in handy. With ABE, we can encrypt a document under certain attributes (such as "Cardiovascular" or "Neurological"), or even numerical ones, such as "Year:2023" or "Priority:3". The decryption keys issued will contain an access structure that operates on such attributes. An example access structure could be: (Cardiovascular OR Neurological) AND (Year:2022 OR Year:2021). Multiple documents can be encrypted, each one under its own attribute set. A single decryption key will then decrypt many documents, as long as the attributes in the ciphertexts match the access policy in the key. Thus, we can create a system with access control based on descriptive attributes, using a single decryption key for multiple documents. The type of ABE presented above, where ciphertexts have associated attributes that are matched against the access policies linked to decryption keys, is called _key-policy_ ABE. In contrast, there also exists another flavor of ABE, namely _ciphertext-policy_ [1], where the access policy is linked to the ciphertext, and the decryption keys have attributes associated with them.
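To make the access-policy mechanism concrete, here is a minimal Python sketch (our own illustration, not part of any ABE library; the tuple encoding is a hypothetical choice) that checks whether a ciphertext's attribute set satisfies a monotone policy such as the one above:

```python
# A monotone access policy as a nested tuple:
# ("AND" | "OR", child, child, ...) with attribute names at the leaves.
policy = ("AND",
          ("OR", "Cardiovascular", "Neurological"),
          ("OR", "Year:2022", "Year:2021"))

def satisfies(node, attrs):
    # A leaf is satisfied iff the attribute appears in the ciphertext.
    if isinstance(node, str):
        return node in attrs
    op, *children = node
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "AND" else any(results)

print(satisfies(policy, {"Neurological", "Year:2021"}))  # True
print(satisfies(policy, {"Neurological", "Year:2019"}))  # False
```

In a real KP-ABE system this check is, of course, performed cryptographically during decryption rather than on plaintext attributes.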
As the complexity of a system increases, so does the complexity of the access structures. Therefore, a challenge for ABE systems is to find more and more expressive access policies for which we can build efficient ABE systems. For example, the first ABE system [10] uses Boolean trees as access structures, where nodes can be AND and OR gates. A more expressive access structure is a monotone Boolean circuit. Note that not all Boolean circuits can be expressed as access trees, but rather as _directed acyclic graphs_. However, for such access structures, the existing ABE schemes are inefficient [21], [14], [15] or rely on mathematical primitives for which there is no secure construction known [16], [17], [18]. From this problem we draw our motivation: we need to re-write a Boolean circuit in an equivalent form such that the current ABE systems, like [21], improve in efficiency.

### Our Contribution

The most efficient Attribute-Based Encryption scheme for Boolean circuits from bilinear maps uses a secret sharing technique on Boolean circuits, which results in an exponential expansion in key (or ciphertext) size in the worst-case scenario. Our algorithms optimize the access structure (Boolean circuits) by finding equivalent circuits for which the secret sharing is more efficient. Besides our optimization results, we provide open access to our source code as a library which can be used to optimize Boolean circuits for the [21] scheme. Moreover, we also provide an archive with the test cases we used to evaluate the performance of our work, so that subsequent works can be compared to ours using the same datasets.

## 2 Related Work

### Attribute-Based Encryption for Boolean Circuits

The first ABE systems were introduced in two flavors, _key-policy_ [10] and _ciphertext-policy_ [1], both of them supporting Boolean trees as access structures. Then, the problem of finding more expressive access structures arose. Boolean circuits, for example, cover a much larger range of access structures. Unlike a Boolean tree, a Boolean circuit does not limit the fan-out of its gates to one. Finding an efficient ABE scheme for Boolean circuit access structures is an important open problem in cryptography. Garg et al. [16] introduced the first ABE system for Boolean circuits. However, their system relies on multi-linear maps and the MDDH assumption, for which there is no known mathematical construction [1], [2]. Other approaches, such as [21], [14], offer constructions relying on secure and efficient mathematical primitives (bilinear maps). However, the decryption key could expand exponentially for some circuits.

### Boolean Circuit Minimization

The problem of re-writing Boolean circuits in order to obtain a more compact form with fewer gates is well known in the scientific literature as Boolean formula minimization. One of the most well-known algorithms is the Quine-McCluskey algorithm [17], [18]. However, the problem of Boolean formula minimization requires the functions to be given as truth tables. This is impractical for use in an ABE scenario, since the Boolean truth table is exponentially larger than the access structure. In ABE we are given an access structure as a Boolean circuit, and we need to minimize it without computing the truth table. Another algorithm for logic optimization of Boolean circuits is Espresso [1], [1].
The main advantage of Espresso is that it uses various heuristics to minimize the circuits, which makes it far more efficient than Quine-McCluskey and allows it to run on larger inputs. It operates on multiple-valued and multiple-output Boolean functions described by min-terms, where each element is assigned a state: "True", "False", or "Don't care". Other, newer Boolean minimizers following similar ideas have been proposed [13], [12]. However, all these approaches differ from our scope, since their optimization of Boolean circuits is not the same as the one required for ABE systems. We give more details on this in Section 4.

### Heuristic Optimizations in Cryptography

The problem of optimizing cryptographic schemes for better time performance has been approached before. However, most of the time, the optimization is very particular, in order to be compatible with a specific cryptosystem. For example, [14] presents a survey on such optimizations for cryptographic systems with applicability in Wireless Sensor Networks. The heuristics compared include genetic algorithms and nature-inspired methods such as "Particle Swarm" or "Ant Colony". To our knowledge, there is no previous work on optimizing the decryption phase of ABE using heuristic approaches. The only work related to ABE systems is a very recent one [15] that applies a heuristic optimization to ABE for conversion between different types of underlying bilinear-map primitives. This is, however, a very different type of optimization, both in means and scope, compared to ours.

## 3 Prerequisites

When referring to a logical formula, we distinguish between variables and literals: variables are symbols that can take values \(1\) or \(0\), while literals represent atomic logical formulas (a variable or its negation). Therefore, if a variable appears multiple times in a formula, it is counted each time as a distinct literal. For example, the formula \(A\wedge B\lor A\wedge C\) has \(3\) variables and \(4\) literals. A Boolean circuit is a directed acyclic graph over a set of input wires, concluding in a single output wire, with internal nodes representing logical gates of type AND, OR or NOT. These gates may have fan-out greater than \(1\). A _monotone_ Boolean circuit is a circuit without negation gates. A Boolean tree is a Boolean circuit where each node has a fan-out of at most one.

**Access Structures** [1]. Let \(\{p_{1},\ldots,p_{n}\}\) be a set of parties. A collection \(A\subseteq 2^{\{p_{1},\ldots,p_{n}\}}\) is _monotone_ if \((B\in A\wedge B\subseteq C)\to C\in A\). An access structure is a monotone collection \(A\subseteq 2^{\{p_{1},\ldots,p_{n}\}}\) of non-empty subsets of \(\{p_{1},\ldots,p_{n}\}\). Sets in \(A\) are called _authorized_, while sets not in \(A\) are called _unauthorized_.

### KP-ABE Model

A Key-Policy Attribute-Based Encryption scheme, as first described in [1], consists of four algorithms:

**setup(\(\lambda\))**: A randomized algorithm that takes as input the implicit security parameter \(\lambda\) and returns the public and secret keys (_MPK_ and _MSK_).

**encrypt(\(m,A,\mathit{MPK}\))**: A probabilistic algorithm that encrypts a message \(m\) under a set of attributes \(A\) with the public key \(\mathit{MPK}\), and outputs the ciphertext \(E\).

**keygen(\(\mathcal{C},\mathit{MPK},\mathit{MSK}\))**: This algorithm receives an access structure \(\mathcal{C}\), the public and master keys \(\mathit{MPK}\) and \(\mathit{MSK}\), and outputs the corresponding decryption keys \(\mathit{DK}\).
**decrypt(\(E,\mathit{DK},\mathit{MPK}\))**: Given the ciphertext \(E\) and the decryption keys \(\mathit{DK}\), the algorithm decrypts the ciphertext and outputs the original message.

### High-Level Description of Secret Sharing in KP-ABE for Boolean Circuits from [14]

The _key-policy_ ABE for Boolean circuits scheme from [14] uses bilinear maps as a key component of the construction. Therefore, the running time of the four algorithms is strictly related to the number of pairing operations that are computed, since these are by far the most expensive ones; accordingly, our goal will be to minimize the number of pairings that are computed. Unfortunately, due to space limitations, we are not able to provide more details about bilinear maps as mathematical primitives. However, these details are not needed in order to understand our work; it suffices to acknowledge the fact that the pairing operations resulting from the bilinear maps are the most expensive operations in these ABE systems.

The _keygen_ algorithm uses the Boolean circuit as an access structure in order to generate decryption keys. At the top of the Boolean circuit we will have an output node (we will refer to it as \(O_{\mathcal{C}}\)), and each of the input nodes (\(\mathit{In}_{i}\), where \(i=\overline{1,n}\)) of the circuit will have an attribute (labeled from \(1\) to \(n\)) attached to it. Then, a secret sharing technique will be applied to the circuit top-down, starting from \(O_{\mathcal{C}}\) and ending in the input nodes. Each input node \(\mathit{In}_{i}\) will have some values associated with it. For each of these values, in the decryption phase, we must compute a _pairing_ operation. Due to the construction of the secret sharing technique from [14], the number of shares each attribute will receive in the end is equal to the number of paths from that input node (associated with the attribute) to the output node of the circuit. Therefore, the total number of pairings that will be executed in the decryption phase is equal to the total number of paths from the input nodes to the output node, and our goal is to find equivalent forms of the circuit such that the total number of paths is minimized.

We will define a cost function \(c\) for a circuit \(\mathcal{C}\): \(c(\mathcal{C})\) computes the number of shares the secret sharing technique in ABE produces on \(\mathcal{C}\). This function can also be computed from the logical formula of the circuit: the number of literals in the formula equals the number of paths from the top of the circuit to the bottom, since each literal corresponds to a leaf in the Abstract Syntax Tree of the logical formula. A good example of two equivalent Boolean circuits is depicted in Fig. 1. The first circuit (Fig. 1(a)) leads, in the secret sharing scheme of the Tiplea-Dragan scheme [14], to a total of \(4\) shares: one for \(\mathit{In}_{1}\), two for \(\mathit{In}_{2}\), and one for \(\mathit{In}_{3}\). However, the equivalent circuit from Fig. 1(b) leads to only \(3\) shares. Therefore, the decryption time will be roughly \(25\%\) smaller if we use the second circuit.

## 4 Boolean Circuit Minimization for ABE

Since the problem of Boolean minimization is well studied, the first obvious choice would be to try to use an existing algorithm. However, the existing algorithms optimize the circuit for constructing Programmable Logic Arrays (PLAs). Therefore, the input and output circuits will be in a format similar to DNF, which allows the easy construction of a PLA.
Even if we converted the Boolean circuits from ABE's input into the format of a logic minimizer such as Espresso, we would then also have to process the output. The DNF format in which these minimizers output the formula is an inefficient form for a circuit w.r.t. the secret sharing scheme used in ABE; the DNF is the most uncompressed form a circuit can have. The logic minimizers try to optimize the number of gates used, while we want to minimize the number of literals in the logic expression associated with the circuit, or, equivalently, the number of paths from the output to the input nodes.

### The Approach

We considered how we can obtain, in general, equivalent logical formulas starting from some given expression. We have observed three operations that we may apply to a monotone (negation-free) logical formula to obtain equivalent ones. We will refer to these operations as _factorization_, _defactorization_ and _absorption_:

1. _factorization_ - Using the fact that OR is distributive under AND (and vice-versa), we can search for common factors inside logical formulas. For example, \((A\wedge B)\vee(A\wedge C)\) can be factorized into \(A\wedge(B\lor C)\). Note that this operation always obtains a formula with a strictly lower cost, since it reduces the common factor.
2. _defactorization_ - This is the inverse operation of _factorization_. We choose a conjunction, and we "split the parenthesis", resulting in a cross-product of the elements involved. For example, \(A\wedge(B\lor C)\) can be defactorized into \((A\wedge B)\vee(A\wedge C)\). Note that this operation gives us an equivalent expression, but with a strictly higher cost.
3. _absorption_ is the operation of eliminating 1s after the factorization process. For example, \(A\vee(A\wedge B)\) can be factorized into something like \(A\wedge(1\lor B)\). However, the \(B\) term is actually "shadowed" by 1. Therefore, we can replace the initial formula with \(A\), ignoring the term \(B\).

In our implementation, we have embedded the _absorption_ in the _factorization_ procedure. Therefore, throughout the rest of the paper, we will only refer to the first two operations, _factorization_ and _defactorization_.

Figure 1: Two equivalent Boolean circuits, alongside their equivalent logical formulas

### Implementation of Operations

In order to find a common factor inside multiple terms in a Boolean expression, we make use of the Abstract Syntax Tree (AST) associated with it. Let \(\mathcal{T}_{\varphi}\) be the AST for some formula \(\varphi\). For each node \(T_{i}\), we will denote by \(f(T_{i})\) the logical formula associated with the subtree rooted in \(T_{i}\), by \(\mathit{parent}(T_{i})\) the parent node of \(T_{i}\), and by \(\mathit{children}(T_{i})\) the set of children nodes. Then, in order to get a common factor, we need two nodes \(T_{1}\) and \(T_{2}\) such that:

* \(f(T_{1})=f(T_{2})\): the formula \(f(T_{1})\) will be the common factor.
* \(T_{1}\) and \(T_{2}\) are not siblings: this is more of a consistency check, since a well-formed formula should not have in its AST two siblings with the same formula, as one of them is clearly irrelevant.
* Nodes \(T_{1}\) and \(T_{2}\) should have a common grandparent in the AST, as shown in Fig. 2.

After this operation, the overall cost of the circuit will drop by \(c(T_{2})\), since we eliminate this part of the circuit. There are some particular cases in the _factorization_ and _defactorization_ processes, but for the sake of simplicity, we decided to omit these cases here.
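To make the cost function and the factorization rule concrete, here is a minimal Python sketch (our own illustration, not the library's actual code; the tuple-based AST encoding is an assumption). It counts literals, i.e., shares, and applies the basic binary factorization to the example of Fig. 1:

```python
# An AST node is either an attribute name (a literal) or a tuple
# ("AND" | "OR", child, child).
def cost(node):
    # Number of literals = number of paths from inputs to the output,
    # i.e., the number of shares produced by the secret sharing.
    if isinstance(node, str):
        return 1
    return sum(cost(c) for c in node[1:])

def factor_once(node):
    # Rewrite (X AND Y) OR (X AND Z) as X AND (Y OR Z); this covers only
    # the basic binary case and ignores absorption and the special cases.
    if isinstance(node, str):
        return node
    op, left, right = node[0], factor_once(node[1]), factor_once(node[2])
    if (op == "OR" and isinstance(left, tuple) and isinstance(right, tuple)
            and left[0] == right[0] == "AND"):
        common = set(left[1:]) & set(right[1:])
        if common:
            x = common.pop()
            l_rest = [c for c in left[1:] if c != x]
            r_rest = [c for c in right[1:] if c != x]
            if l_rest and r_rest:
                return ("AND", x, ("OR", l_rest[0], r_rest[0]))
    return (op, left, right)

# The example of Fig. 1: (In1 AND In2) OR (In2 AND In3), cost 4.
f = ("OR", ("AND", "In1", "In2"), ("AND", "In2", "In3"))
g = factor_once(f)
print(cost(f), g, cost(g))  # 4 ('AND', 'In2', ('OR', 'In1', 'In3')) 3
```

The cost drops from 4 to 3 shares, matching the roughly \(25\%\) decryption-time reduction discussed above.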
### Hill Climbing

The factorization operation always reduces the cost of the formula, which makes it suitable for use in a Hill Climbing algorithm: we always move towards an optimum, and at the end we reach that optimum. However, the optimum found can be a _local_ one, which means that the result depends on the initial form of the formula. Therefore, until there is no possible factorization left, we randomly choose two nodes that can be factorized, and we apply the operation. When no factorization is possible anymore, it means we have reached a local optimum.

### Simulated Annealing

Simulated Annealing, first proposed in [10], is a probabilistic method for finding the global minimum of a cost function. This method is inspired by the cooling of metals.

Figure 2: Modification of an AST after a factorization

In general, at each iteration of the algorithm, a new solution is considered. The probability that the solution is accepted varies with the temperature and the solution score: the higher the temperature, the higher the probability of accepting the solution; similarly, the higher the score, the higher the probability of accepting the solution. We have a temperature that starts with a value \(t_{\max}\) and drops over time with a cooling rate \(c\). The acceptance probability function used is \(e^{-\Delta/t}\), where \(\Delta\) is the difference between the neighbor formula cost and the current formula cost. The neighbor is obtained from the current formula by randomly choosing one operation between factorization and defactorization, and a random place where it can be applied in the circuit. The probability used to pick defactorization is \(d=25\%\). Then, if defactorization is chosen, the probability to accept the new neighbor is given by the acceptance function. The algorithm looks as follows:

```
 1: t ← t_max
 2: for i ∈ {1, 2, ..., L} do
 3:     operation ← defactorization if random(0,1) < d else factorization
 4:     choose a neighbor u using operation
 5:     Δ ← cost(u) - current_cost
 6:     if operation = defactorization then
 7:         accept u with probability e^(-Δ/t)
 8:     else
 9:         accept u
10: t ← (1 - c) · t        # cooling
11: if t > t_min then
12:     goto 2
13: apply factorization until the formula is not improved anymore
```

We used the following parameters: \(t_{\max}=100\), \(t_{\min}=10\), \(c=0.1\), \(L=25\). The values for \(t_{\max}\) and \(t_{\min}\) were chosen in this way because the cost of each logical formula from the dataset is between 3 and 100. The factorization operation decreases the cost, while defactorization increases it. Using the simulated annealing algorithm, at the beginning we have a higher chance to accept a defactorization, in order to better explore the solution space. Over time, the acceptance rate for defactorizations decreases until we accept only factorizations, in order to improve the final solution as much as possible.

### Custom Heuristic

Besides the classical Hill Climbing and Simulated Annealing heuristics, we tried to create a new one that combines both _factorization_ and _defactorization_ operations, but in a simpler way than Simulated Annealing. In this algorithm, we have \(k_{\max}\) iterations. At each iteration, we choose either to factorize or to defactorize the current formula. At iteration \(k\), the probability to choose defactorization is \(\frac{k_{\max}-k}{5k_{\max}}\).
This means that, at the first iteration, we have a \(25\%\) chance to choose defactorization, and it slowly decreases until reaching \(0\%\) at the last iteration. Then we choose the formula with the smallest cost among all \(k_{\max}\) iterations. Finally, we apply factorization on this formula until it cannot be improved anymore.

### Iterated Versions

In the simple Hill Climbing algorithm the solution converges very quickly to a local optimum. But if at some step we choose to factorize a different pair of nodes, we can end up in a different local optimum, which can have a smaller cost. In order to find better solutions, we run the Hill Climbing algorithm multiple times and choose the best solution among the runs. This gives us the opportunity to explore more and, eventually, to end up in the _global_ optimum. However, there is the possibility that the best equivalent formula (the _global_ optimum) cannot be obtained from the initial formula only by doing _factorizations_. There are many formulas where Iterated Hill Climbing gives good results but cannot find the global optimum regardless of the number of iterations. For similar reasons, we have also constructed iterated versions of our other heuristics: Simulated Annealing and the Custom Heuristic. In total, we have six algorithms: three main ones and their three iterated versions.

## 5 Practical Tests

### Dataset Description

We have generated four datasets, each of them with some particularities. The first three datasets consist of randomly generated logical formulas. Their numbers of variables and literals follow the values in Table 1. In this case, we tried to simulate real-world scenarios: when dealing with access structures, the most common way of defining an access structure is by enumerating the minimal sets of participants that should have access to decrypt the data. Therefore, we constructed our formulas bottom-up, first building formulas for minimal authorized sets and then linking them together with AND or OR nodes. Moreover, in the generation of all our datasets, we have run a special procedure called "trim", which ensures that the generated logical formulas do not have obvious design flaws. One such example could be the formula \((A\wedge B)\vee(A\wedge B)\vee(A\wedge C)\); here it is obvious that one of the \((A\wedge B)\) terms could be removed from the expression.

### Results

The fourth dataset is created by taking a concrete example of a complex access policy that can arise in practical systems: comparison queries. There are several works on ABE with access structures supporting such queries [10], [11], [12]. When dealing with numerical attributes, the access structure may need to allow decryption only for values smaller than some threshold value. Therefore, we created the fourth dataset with such cases. In order to perform a comparison query, a numerical attribute \(A\) in the range \(0\) to \(N\) can be split into \(\log_{2}(N)\) smaller attributes, each of them representing a bit in \(A\)'s binary representation. Then, we can create Boolean circuits which can handle multiple comparison queries. An example of such a Boolean circuit is seen in Fig. 3.
There, \(A_{0},A_{1},A_{2},A_{3},A_{4}\) represent attributes for the bits of the numerical value of \(A\). The attribute \(A_{i}\) is True if and only if \(A\) has the \(i\)th bit set in its binary representation. The access structures corresponding to such Boolean circuits could be something like ((Year <= 2022 AND Year >= 2020) OR (Year <= 1990)) AND (Month >= 2 AND Month <= 5).

\begin{table} \begin{tabular}{l|c|c} & Variable count & Literal count \\ \hline Dataset 1 & \(20-25\) & \(20-40\) \\ Dataset 2 & \(20\) & \(60-90\) \\ Dataset 3 & \(25-35\) & \(160-200\) \\ Dataset 4 & \(20-25\) & \(20-40\) \\ \end{tabular} \end{table} Table 1: Dataset parameters

Figure 3: Subcircuit for the comparison “\(A\geq 11\)” (\(11=1011_{(2)}\))

Since our heuristics are probabilistic, we want to compute the expected optimization we will obtain. Therefore, for each formula, we ran each algorithm 16 times and computed the average optimization percentage over the runs. We also keep track of the maximum optimization obtained over all runs; this value will be close to the upper bound of the algorithms. In Table 2 we show the _mean optimization_ (MO), _best over iterations_ (BOI), and _average running time_ (ART) for all the formulas in each set. We have run all three algorithms presented above: HC stands for _Hill Climbing_, SA for _Simulated Annealing_, and CH for _Custom Heuristic_. We also have the iterated versions of these algorithms, denoted by IHC, ISA, and ICH, respectively. We see that Hill Climbing (HC in the table) is by far the fastest, and it also gives decent results for the first three datasets; however, its results are very low for the dataset with more practical formulas. Iterated Simulated Annealing (ISA) obtains some very good results on all sets, but its running time is the largest, making it even slower than the Iterated Custom Heuristic. The latter beats (or at least ties) all the other algorithms on every set and every metric, and it also has a decently low running time. Depending on the circumstances, one of the algorithms above could be chosen to optimize the ABE access structure, trading off running time against optimization. In order to better understand this trade-off, we have performed the following experiment:

1. Generate 5 Boolean circuit access structures, with 50, 100, 150, 200, and 250 literals each.
2. For each of these access structures, run the HC, CH, and ISA optimization heuristics presented in this paper.
3. Construct an ABE system with each of these access structures (20 in total) and run the KeyGeneration and Decryption (after a previous Encryption) algorithms.
4. Add the running time of the optimization heuristic to the KeyGeneration algorithm.
5. Repeat steps 1-4 30 times and compute the mean value for each case.

To ensure clarity and ease of interpretation of the plot in Fig. 4, we chose only three of our heuristics for the second test, namely HC, CH, and ISA.
The first two choices were made based on their low running time while having similar optimization results. Lastly, we have also chosen an iterated version of one of our algorithms (ISA) to be able to evaluate the trade-off between running time and optimization in the iterated versions of the algorithms.

\begin{table} \begin{tabular}{c||c c c|c c c|c c c|c c c} & \multicolumn{3}{c|}{Dataset 1} & \multicolumn{3}{c|}{Dataset 2} & \multicolumn{3}{c|}{Dataset 3} & \multicolumn{3}{c}{Dataset 4} \\ & MO & BOI & ART & MO & BOI & ART & MO & BOI & ART & MO & BOI & ART \\ \hline \hline HC & 15.0 \% & 16.3 \% & 0.00 s & 35.1 \% & 41.9 \% & 0.00 s & 42.6 \% & 56.5 \% & 0.01 s & 4.8 \% & 7.2 \% & 0.00 s \\ \hline IHC & 16.3 \% & 16.3 \% & 0.08 s & 42.0 \% & 42.1 \% & 0.34 s & 56.5 \% & 56.5 \% & 2.03 s & 7.2 \% & 7.2 \% & 0.11 s \\ \hline SA & 26.9 \% & 43.1 \% & 0.16 s & 41.0 \% & 59.1 \% & 0.86 s & 43.0 \% & 58.6 \% & 0.76 s & 32.3 \% & 50.0 \% & 0.10 s \\ \hline ISA & 40.1 \% & 46.9 \% & 2.39 s & 57.8 \% & 66.0 \% & 13.1 s & 59.3 \% & 63.1 \% & 13.6 s & 48.5 \% & 50.4 \% & 1.44 s \\ \hline CH & 24.8 \% & 44.0 \% & 0.13 s & 35.8 \% & 61.8 \% & 0.38 s & 39.8 \% & 59.4 \% & 0.42 s & 20.8 \% & 47.8 \% & 0.14 s \\ \hline ICH & 43.0 \% & 47.8 \% & 2.74 s & 61.3 \% & 68.6 \% & 7.50 s & 60.6 \% & 64.9 \% & 7.82 s & 48.5 \% & 50.4 \% & 2.94 s \\ \end{tabular} \end{table} Table 2: (Iterated) Hill Climbing / Simulated Annealing / Custom Heuristic results on each dataset

The results can be observed in Fig. 4: panel (a) shows how the decryption time relates to the size of the original access structure. Since the Boolean circuit optimization takes place in the KeyGeneration phase, the second plot (Fig. 4b) presents the relation between the increase in KeyGeneration time and the access structure size. Here, the key generation time for the unoptimized access structure is the same as with the HC optimization, since the running time of HC is negligible. It is obvious that the more aggressive the optimization process is, the more time the KeyGeneration takes. Depending on the circumstances, an inefficient KeyGeneration which produces efficient decryption algorithms may be preferred. This is actually the case in applications where the key generation takes place on some server in the cloud, while the decryption is done on systems with limited capabilities, such as mobile phones, or even nodes in Wireless Sensor Networks (such an ABE system was proposed, for example, in [1]).

We can also observe a drop in optimization for the access structure with 200 shares, compared to the one with 150 shares. This is due to the fact that we have run these tests with a single access structure for each number of shares, rather than generating multiple access structures and computing a median optimization. We also want to emphasize that the optimization of the Boolean circuit depends on the exact circuit: two different circuits will have different potential for optimization. From the two tests we ran, we can conclude that HC produces a decent optimization while its computation overhead is negligible; CH and SA produce better optimizations while having some small computational overhead; the iterated versions exploit the full potential of these algorithms but require a considerable additional amount of time.

### Library

Our heuristics are publicly available on GitHub [14] as a library. The library provides the possibility of running a specific algorithm over a logical formula and returns the best equivalent formula found.
Another goal of the library is to be integrated with existing ABE systems, in order to optimize their performance in real-world systems.

**Benchmarking dataset.** We have also provided txt files with the logical expressions we used to test our heuristics. These can be used as references for further work on similar problems, in order to make relevant comparisons with algorithms similar to ours. The tests can be found in the inputs folder in our repository.

## 6 Conclusions

We have proposed multiple heuristic optimizations for minimizing monotone Boolean circuits, which, as we can see from the tests we ran, provide a substantial improvement in circuit size. Each of our heuristics behaves differently in terms of optimization and running time, depending on the size and structure of the Boolean circuit. Since we drew our motivation from the problem of optimizing ABE systems for Boolean circuits, we emphasize that our optimizations translate into much smaller decryption keys and decryption times for these encryption systems. This can have an important impact on cloud systems that use ABE schemes to implement cryptographic access control over data: the heuristics are applied only once, when the decryption key is generated, and then, each time the respective decryption key is used, the decryption time will be smaller than in the scenario where we do not optimize it using one of our algorithms. For example, using the HC version of our heuristics adds almost no time overhead to the key generation process, while still providing an optimization between 7% and 40% in decryption time.

We have compiled our work into a library [git23] written in C++, which is publicly available for anyone to use. It could easily be integrated with an existing ABE implementation written in C++, such as OpenABE [1]. Furthermore, there is still space for even bigger improvements by combining the heuristics with cryptographic improvements: at the moment, our algorithms are limited to searching for Boolean circuits that are equivalent to the original one. However, by using cryptographic strategies for improving the secret sharing for parts of a circuit, we could offer our heuristics more space for searching for better solutions. This remains an interesting problem to be studied.
2306.12646
Learnability and Algorithm for Continual Learning
This paper studies the challenging continual learning (CL) setting of Class Incremental Learning (CIL). CIL learns a sequence of tasks consisting of disjoint sets of concepts or classes. At any time, a single model is built that can be applied to predict/classify test instances of any classes learned thus far without providing any task related information for each test instance. Although many techniques have been proposed for CIL, they are mostly empirical. It has been shown recently that a strong CIL system needs a strong within-task prediction (WP) and a strong out-of-distribution (OOD) detection for each task. However, it is still not known whether CIL is actually learnable. This paper shows that CIL is learnable. Based on the theory, a new CIL algorithm is also proposed. Experimental results demonstrate its effectiveness.
Gyuhak Kim, Changnan Xiao, Tatsuya Konishi, Bing Liu
2023-06-22T03:08:42Z
http://arxiv.org/abs/2306.12646v1
# Learnability and Algorithm for Continual Learning

###### Abstract

This paper studies the challenging _continual learning_ (CL) setting of _Class Incremental Learning_ (CIL). CIL learns a sequence of tasks consisting of disjoint sets of concepts or classes. At any time, a single model is built that can be applied to predict/classify test instances of any classes learned thus far without providing any task related information for each test instance. Although many techniques have been proposed for CIL, they are mostly empirical. It has been shown recently that a strong CIL system needs a strong within-task prediction (WP) and a strong out-of-distribution (OOD) detection for each task. However, it is still not known whether CIL is actually learnable. This paper shows that CIL is learnable. Based on the theory, a new CIL algorithm is also proposed. Experimental results demonstrate its effectiveness.

## 1 Introduction

Learning a sequence of tasks incrementally, called _continual learning_, has attracted a great deal of attention recently (Chen & Liu, 2018). In the supervised learning context, each task consists of a set of concepts or classes to be learned. It is assumed that all tasks are learned in one neural network, which results in the key challenge of _catastrophic forgetting_ (CF): when learning a new task, the system has to modify the network parameters learned from old tasks in order to learn the new task, which may cause performance degradation on the old tasks (McCloskey & Cohen, 1989). Two continual learning settings have been popularly studied: _task incremental learning_ (TIL) (van de Ven & Tolias, 2019) and _class incremental learning_ (CIL). In TIL, each task is an independent classification problem and has a separate model (the tasks may overlap). At test time, the task-id of each test instance is provided to locate the task-specific model to classify the test instance.

**Definition 1.1** (Task Incremental Learning (TIL)).: TIL learns a sequence of tasks, \(1,2,...,T\). The training set of task \(k\) is \(\mathcal{D}_{k}=\{((x_{k}^{i},k),y_{k}^{i})_{i=1}^{n_{k}}\}\), where \(n_{k}\) is the number of samples in task \(k\in\mathcal{T}=\{1,2,...,T\}\), and \(x_{k}^{i}\in\mathcal{X}\) is an input sample and \(y_{k}^{i}\in Y_{k}\subseteq\mathcal{Y}\) (\(=\bigcup_{k=1}^{T}Y_{k}\)) is its label. A TIL system learns a function \(f:\mathcal{X}\times\mathcal{T}\rightarrow\mathcal{Y}\) to assign a class label \(y\in Y_{k}\) to \((x,k)\) (a test instance \(x\) from task \(k\)).

For CIL, a single model is built for all tasks/classes learned thus far (the classes in each task are distinct). At test time, no task-id is provided for a test instance.

**Definition 1.2** (Class Incremental Learning (CIL)).: CIL learns a sequence of tasks, \(1,2,...,T\). The training set of task \(k\) is \(\mathcal{D}_{k}=\{(x_{k}^{i},y_{k}^{i})_{i=1}^{n_{k}}\}\), where \(n_{k}\) is the number of samples in task \(k\in\mathcal{T}=\{1,2,...,T\}\), and \(x_{k}^{i}\in\mathcal{X}\) is an input sample and \(y_{k}^{i}\in Y_{k}\subset\mathcal{Y}\) (\(=\bigcup_{k=1}^{T}Y_{k}\)) is its label. All \(Y_{k}\)'s are disjoint (\(Y_{k}\cap Y_{k^{\prime}}=\emptyset,\,\forall k\neq k^{\prime}\)). A CIL system learns a function (predictor or classifier) \(f:\mathcal{X}\rightarrow\mathcal{Y}\) that assigns a class label \(y\) to a test instance \(x\).

CIL is a more challenging setting because, in addition to CF, it has the _inter-task class separation_ (ICS) (Kim et al., 2022) problem.
ICS refers to the situation where, since the learner has no access to the training data of the old tasks when learning a new task, it has no way to establish the decision boundaries between the classes of the old tasks and the classes of the new task, which results in poor classification accuracy. Kim et al. (2022) showed that a good within-task prediction (WP) and a good _out-of-distribution_ (OOD) detection for each task are _necessary_ and _sufficient_ conditions for a strong CIL model.

**Definition 1.3** (out-of-distribution (OOD) detection).: Given the training set \(\mathcal{D}=\{(x^{i},y^{i})_{i=1}^{n}\}\), where \(n\) is the number of data samples, \(x^{i}\in\mathcal{X}\) is an input sample and \(y^{i}\in\mathcal{Y}\) is its class label. \(\mathcal{Y}\) is the set of all classes in \(\mathcal{D}\) and is called the set of _in-distribution_ (IND) classes. Our goal is to learn a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\cup\{O\}\) that can detect test instances that do not belong to any classes in \(\mathcal{Y}\); such instances are assigned to the class \(O\), denoting _out-of-distribution_ (OOD).

The intuition of the theory in (Kim et al., 2022) is that if OOD detection is perfect for each task, then a test instance will be assigned for classification to the model of the correct task, i.e., the task to which the test instance belongs, where within-task prediction (WP) takes over. However, (Kim et al., 2022) does not prove that CIL is learnable. To our knowledge, no existing work has reported a learnability study for CIL (see Sec. 2). This paper performs the **CIL learnability** study. The proposed learnability proof requires two assumptions: **(1)** OOD detection is learnable. Fortunately, this has been proven in a recent paper (Fang et al., 2022). **(2)** There is a mechanism that can completely overcome catastrophic forgetting (CF) for the model of each task. Again, fortunately, there are many existing TIL methods that can eliminate forgetting, e.g., parameter-isolation methods such as HAT (Serra et al., 2018) and SupSup (Wortsman et al., 2020), which work by learning a sub-network in a shared network for each task. The sub-networks of all old tasks are protected when training a new task. Orthogonal projection methods such as PCP (Kim and Liu, 2020) and CUBER (Lin et al., 2022) can also overcome forgetting in the TIL setting.

CIL can be solved by a combination of **[a]** a _TIL method_ that is able to protect each task model with no CF, **[b]** a _normal supervised learning method for WP_, and **[c]** an _OOD detection_ method. **[b]** and **[c]** can be easily combined, either **(i)** via an OOD detection model, since it also learns the IND classes (see **Definition** 1.3), or **(ii)** via a WP model that can also perform OOD detection. That is, for CIL, we simply replace the classification model built for each task in HAT/SupSup with a combined WP and OOD detection model. Based on the theory, we propose a new replay-based CIL method that uses the combination of **[a]** and **(ii)** (two separate heads for each task, one for WP and the other for OOD detection, based on the same feature extractor).

This paper thus makes two main contributions: (1). It performs **the first learnability study of CIL**. To the best of our knowledge, no such study has been reported so far. (2). Based on the theory, **a new CIL method**, called ROW (_Replay, OOD, and WP_ for CIL), is proposed. Experimental results show that it outperforms existing strong baselines.
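To illustrate the combination of **[a]** and **(ii)** described above, the following PyTorch-style sketch (our own simplified illustration, not the released implementation of ROW; the layer sizes are placeholders) gives each task a feature extractor, which a TIL method such as HAT would protect after training, with two heads sharing its features: one for WP and one for OOD detection, the latter trained with replay data serving as OOD samples:

```python
import torch
import torch.nn as nn

class TaskModel(nn.Module):
    # Per-task model: shared features feeding a WP head and an OOD head.
    def __init__(self, in_dim, feat_dim, n_classes):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.wp_head = nn.Linear(feat_dim, n_classes)       # within-task prediction
        self.ood_head = nn.Linear(feat_dim, n_classes + 1)  # extra class for OOD,
                                                            # trained on replay data
    def forward(self, x):
        z = self.features(x)
        return self.wp_head(z), self.ood_head(z)

model = TaskModel(in_dim=784, feat_dim=128, n_classes=10)
wp_logits, ood_logits = model(torch.randn(4, 784))
print(wp_logits.shape, ood_logits.shape)  # (4, 10) and (4, 11)
```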
It is interesting to note that our theory, including our earlier work in (Kim et al., 2022), in fact unifies OOD detection and continual learning, as it covers both (Kim et al., 2023). Additionally, the theory is also applicable to _open world learning_, because OOD detection and class incremental learning are two critical components of an open world learning system (Liu et al., 2023).

## 2 Related Work

We are not aware of any paper that studies the learnability of CIL. Below, we survey the existing CL literature on both the theoretical and empirical sides.

On the **theoretical side**, Pentina and Lampert (2014) propose a PAC-Bayesian framework that bounds the expected error by the average loss on the observed tasks. However, this work is not about CIL but about TIL. It focuses on knowledge transfer and assumes that all the tasks have the same input space and the same label space and that the tasks are very similar. However, in CIL, every task has a distinct set of class labels. Furthermore, this work is not concerned with CF, as earlier research in lifelong learning builds a separate model for each task. Lee et al. (2021) studied the generalization error by task similarity; it is again about TIL. Bennani et al. (2020) showed that a specific method called orthogonal gradient descent (OGD) gives a tighter generalization bound than SGD. As noted in Sec. 1, empirically, the CF problem for TIL has been solved (Serra et al., 2018; Kim et al., 2022). Several techniques have also been proposed to carry out knowledge transfer (Ke et al., 2020, 2021; Lin et al., 2022). Our work is entirely different, as we study the learnability of CIL, which is a more challenging setting than TIL because of the additional difficulty of ICS (Kim et al., 2022) in CIL. In this work, we are not concerned with knowledge transfer, which is mainly studied for the TIL setting. Recently, Kim et al. (2022) showed that a good within-task prediction (WP) and a good OOD detection for each task are necessary and sufficient conditions for a strong CIL model. However, Kim et al. (2022) did not show that CIL is learnable. This paper performs this study. It also proposes a new CIL algorithm.

On the **empirical side**, a large number of algorithms have been proposed. They belong to several families. **(1). _Regularization-based methods_** mitigate CF by restricting the parameters learned for old tasks from being updated significantly when learning a new task, using regularizations (Kirkpatrick et al., 2017; Zenke et al., 2017) or knowledge distillation (Li and Hoiem, 2016; Zhu et al., 2021). Many other systems have used similar ideas (Ritter et al., 2018; Schwarz et al., 2018; Xu and Zhu, 2018; Castro et al., 2018; Dhar et al., 2019; Lee et al., 2019; Ahn et al., 2019; Liu et al., 2020). **(2). _Replay-based methods_** alleviate CF by saving a small amount of training data from old tasks in a memory buffer and jointly training the model using the current data and the saved buffer data (Rusu et al., 2016; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019; Hou et al., 2019; Wu et al., 2019; Rolnick et al., 2019; Buzzega et al., 2020; Rajasegaran et al., 2020; Liu et al., 2021; Cha et al., 2021; Yan et al., 2021; Wang et al., 2022; Guo et al., 2022; Kim et al., 2022). Some methods in this family also study which samples in memory should be used in replay (Aljundi et al., 2019) or which samples in the training data should be saved (Rebuffi et al., 2017; Liu et al., 2020).
**(3). _Pseudoreplay methods_** generate pseudo replay data for old tasks to serve as the replay data (Kamra et al., 2017; Shin et al., 2017; Wu et al., 2018; Seff et al., 2017; Kemker and Kanan, 2018; Hu et al., 2019; Rostami et al., 2019; Ostapenko et al., 2019). (Zhu et al., 2021) generates features instead of raw data. **(4). _Parameter-isolation methods_** train and protect a sub-network for each task (Mallya & Lazebnik, 2017; Abati et al., 2020; von Oswald et al., 2020; Rajasegaran et al., 2020; Hung et al., 2019; Henning et al., 2021). Several systems, e.g., HAT (Serra et al., 2018) and SupSup (Wortsman et al., 2020), have largely eliminated CF. A limitation is that the task-id of each test instance must be provided; these methods are thus mainly used for TIL. **(5). _Orthogonal projection methods_** learn each task in a space orthogonal to those of the other tasks to reduce task interference or CF (Zeng et al., 2019; Kim & Liu, 2020; Chaudhry et al., 2020; Lin et al., 2022).

The empirical part of our work is related to, but also very different from, the above methods. We use the replay data as OOD training data to fine-tune an OOD detection head for each task based on the features learned for the WP head, and we use the TIL method HAT to overcome CF. Some existing methods have used a TIL method for CIL with an additional task-id prediction technique. iTAML (Rajasegaran et al., 2020)'s task-id prediction needs the test data to come in batches, and each batch must be from the same task, which is unrealistic, as test samples usually come one after another. CCG (Abati et al., 2020), Expert Gate (Aljundi et al., 2017), HyperNet (von Oswald et al., 2020) and PREnt (Henning et al., 2021) either build a separate network or use entropy to predict the task-id. LMC (Ostapenko et al., 2021) learns task-specific knowledge via local modules capable of task-id prediction. However, they all perform poorly because none of these systems deals with the ICS problem, which is the key issue and is what our OOD detection tackles. In this line of work, the most closely related work to ours is MORE (Kim et al., 2022), which builds a model for each task treating the replay data as OOD data. However, at inference, it considers only the IND classes of each task, without an OOD detector. Our method is more principled and outperforms MORE. The methods in (Kim et al., 2022) do not use replay data and perform worse.

## 3 Learnability of CIL

Before going to the learnability analysis, we first describe the intuition behind it. Kim et al. (2022) showed that, given a test sample, the CIL prediction probability for each class in a task is the product of two prediction probabilities: _within-task prediction_ (WP) and _task-id prediction_ (TP), \[\mathbf{P}(X_{k,j}|x)=\mathbf{P}(X_{k,j}|x,k)\mathbf{P}(X_{k}|x), \tag{1}\] where \(X_{k,j}\) is the domain of class \(j\) of task \(k\) and \(x\) is an instance. The first probability on the right-hand side (RHS) is WP and the second probability on the RHS is TP. However, as mentioned earlier, Kim et al. (2022) did not study whether CIL is learnable. We also note that (Kim et al., 2022) proved that TP and OOD detection are correlated and only differ by a constant factor. Based on the definition of OOD detection (**Definition** 1.3), the OOD detection model can also perform WP. In the recent work (Fang et al., 2022), it is proven that OOD detection is learnable.
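A minimal sketch of how Eq. (1) can be used at inference (our own illustration; the probability values below are hypothetical placeholders): for each task, the WP probability of each class is multiplied by the task-id probability derived from the per-task OOD scores, and the argmax over all (task, class) pairs gives the CIL prediction:

```python
import numpy as np

# Hypothetical outputs for one test instance and k = 3 learned tasks:
# wp[i][j] = P(class j | x, task i), one softmax per task (the WP term).
wp = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
# ood_score[i]: how in-distribution x looks to task i's OOD detector.
ood_score = np.array([0.2, 0.1, 3.0])

# Task-id probabilities P(task i | x), the TP term of Eq. (1).
tp = np.exp(ood_score) / np.exp(ood_score).sum()

# CIL probability for each (task, class) pair is the product WP * TP.
cil = [p_wp * p_tp for p_wp, p_tp in zip(wp, tp)]
task = int(np.argmax([c.max() for c in cil]))
cls = int(cil[task].argmax())
print(f"predicted task {task}, class {cls}")  # predicted task 2, class 0
```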
We show that if the learning of each task does not cause catastrophic forgetting (CF) for previous tasks, then CIL is learnable.1 Fortunately, CF can be prevented for each task, as several existing _task incremental learning_ (TIL) methods, including but not limited to HAT (Serra et al., 2018) and SupSup (Wortsman et al., 2020) in the parameter-isolation family and PCP (Kim & Liu, 2020) and CUBER (Lin et al., 2022) in the orthogonal projection family, can ensure no CF (Kim et al., 2022). HAT and SupSup essentially train a sub-network as the model for each task. In learning each new task, all the sub-networks for the previous tasks are protected with masks so that their parameters will not be modified in backpropagation. Thus, in this section, we assume that all tasks are learned without _catastrophic forgetting_ (CF). Footnote 1: The current learnability result only applies to offline CIL, but not to online CL, where the task boundary may be blurry. We now discuss the learnability of class incremental learning (CIL). The notations for the following discussion are described in Tab. 1. Let \(\mathcal{X}\) be a feature space, \(\mathcal{Y}\) a label space, and \(\mathcal{H}\) a hypothesis function space. Assume \(\mathcal{H}\) is a _ring_, because we construct hypothesis functions by addition and multiplication in the proofs of **Theorem** \(3.3\) and **Theorem** \(3.7\). We use \(D_{(X_{k},Y_{k})}\) to denote the distribution of task \(k\). \(X_{k}\in\mathcal{X}\) and \(Y_{k}\in\mathcal{Y}\) are random variables following \(D_{(X_{k},Y_{k})}\). \(\mathcal{D}=\{D_{(X,Y)}\}\) denotes the set of all distributions. Let \(l(y_{1},y_{2})\geq 0\) denote a loss function and \(h\in\mathcal{H}\) a hypothesis function. For any \(X\in\mathcal{X},\,Y\in\mathcal{Y}\), the risk function is \(\mathbf{R}_{D_{(X,Y)}}(h)\stackrel{{ def}}{{=}}\mathbb{E}_{(x,y)\sim D_{(X,Y)}}[l(h(x),y)]\). \(S\stackrel{{ def}}{{=}}\{(x,y)\sim D_{(X,Y)}\}\) is sampled from \(D_{(X,Y)}\), denoted as \(S\sim D_{(X,Y)}\). \begin{table} \begin{tabular}{c c} \hline \hline Notation & Description \\ \hline \(\mathcal{X}\) & feature space \\ \(\mathcal{Y}\) & label space \\ \(\mathcal{H}\) & hypothesis space \\ \(X_{k}\) & random variable in \(\mathcal{X}\) of task \(k\) \\ \(Y_{k}\) & random variable in \(\mathcal{Y}\) of task \(k\) \\ \(D_{(X_{k},Y_{k})}\) & distribution of task \(k\) \\ \(l\) & loss function \\ \(h\) & hypothesis function in \(\mathcal{H}\) \\ \(\mathbf{R}_{D_{(X,Y)}}\) & risk function, expectation of the loss function on \(D_{(X,Y)}\) \\ \(\mathcal{D}\) & set of all distributions \\ \(S\) & set of samples \\ \(D_{[1:k]}\) & weighted mixture of the first \(k\) distributions \\ \(\pi_{k}\) & mixture weight \\ \(D_{k}\) & equivalent to \(D_{[k:k]}\) and \(D_{(X_{k},Y_{k})}\) \\ \(S|_{[k_{1}:k_{2}]}\) & set of support samples for \(D_{[k_{1}:k_{2}]}\) \\ \(S|_{k}\) & equivalent to \(S|_{[k:k]}\) \\ \(\mathbf{A}_{k}\) & algorithm after training the \(k\)-th task \\ \(z_{k}^{i,j}\) & score function of the \(j\)-th class of the \(i\)-th task at time \(k\) \\ \(O\) & distribution of OOD \\ \(\alpha\) & constant in \([0,1)\) \\ \(D^{\alpha}\) & mixture of \(D\) and \(O\) with weight \(\alpha\) \\ \(z_{k}^{o}\) & score for the OOD class \\ \(\emptyset\) & empty set \\ \(\epsilon_{n}\) & error rate with total number of samples \(n\) \\ \hline \hline \end{tabular} \end{table} Table 1: Notations used in Sec. 3. 
For a series of distributions \(D_{(X_{1},Y_{1})},\ldots,D_{(X_{T},Y_{T})}\), we denote the mixture of the first \(k\) distributions as \(D_{[1:k]}=\frac{\sum_{i=1}^{k}\pi_{i}D_{(X_{i},Y_{i})}}{\sum_{i=1}^{k}\pi_{i}}\), where the mixture weights \(\pi_{1},\ldots,\pi_{T}>0\) with \(\sum_{k}\pi_{k}=1\). For brevity, \(D_{k}=D_{[k:k]}=D_{(X_{k},Y_{k})}\). Denote \(S|_{[k_{1}:k_{2}]}\stackrel{{ def}}{{=}}\{s\in S\mid s\in supp\,D_{[k_{1}:k_{2}]}\}\). For simplicity, \(S|_{k}=S|_{[k:k]}\). Since continual learning tasks arrive sequentially, we denote the hypothesis function that is found by an algorithm \(\mathbf{A}\) after training the \(k\)-th task as \(\mathbf{A}_{k}(S)\) with \(S\sim D_{[1:k]}\). Strictly speaking, \(h_{k}(x)=\mathbf{A}_{k}(S)(x)\) is only well-defined for \((x,y)\sim D_{[1:k]}\), and is not well-defined for \((x^{\prime},y^{\prime})\sim D_{k^{\prime}}\), \(k^{\prime}>k\). Even if some implementation may predict a real value for \(h_{k}(x^{\prime})\), we regard it as meaningless at time \(k\); it only becomes meaningful at time \(k^{\prime}\). For the risk function, we will encounter \(\mathbf{R}_{D_{[k_{1}:k_{2}]}}(h_{k})\) and we guarantee that \(k_{1}\leq k_{2}\leq k\). Denote \[h_{k}=\arg\max_{1\leq i\leq k,\,j\in\{1,\ldots\}}\{\ldots,z_{k}^{i,j},\ldots\},\] where \(z_{k}^{i,j}\) is the score function of the \(j\)-th class of the \(i\)-th task. The score function is any function that indicates which class the sample belongs to. For example, the score function could be the predicted logit of each class for a classification algorithm. We calculate \[\mathbf{R}_{D_{[k_{1}:k_{2}]}}(h_{k})=\mathbb{E}_{(x,y)\sim D_{[k_{1}:k_{2}]}}\big[l(\arg\max_{k_{1}\leq i\leq k_{2},\,j\in\{1,\ldots\}}\{\ldots,z_{k}^{i,j}(x),\ldots\},y)\big].\] When we write \(\mathbf{R}_{D^{\alpha}}(h)\) with \(D^{\alpha}=(1-\alpha)D+\alpha O\) (where \(O\) denotes OOD and \(\alpha\in[0,1)\)), we require \(h\) to predict one additional OOD class as \[h_{k}=\arg\max\{\ldots,z_{k}^{i,j},\ldots;z_{k}^{o}\},\] where \(z_{k}^{o}\) is the score function of the OOD class. **Definition 3.1** (Fully-Observable Separated-Task Closed-World Learnability).: Given a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), we say CIL is learnable if there exists an algorithm \(\mathbf{A}\) and a sequence \(\{\epsilon_{n}\mid\lim_{n\rightarrow+\infty}\epsilon_{n}=0\}\) s.t. **(i)** for any \(D_{1},\ldots,D_{T}\in\mathcal{D}\) with \(supp\,D_{k}\cap supp\,D_{k^{\prime}}=\emptyset,k\neq k^{\prime}\), and **(ii)** for any \(\pi_{1},\ldots,\pi_{T}>0\) with \(\sum_{k}\pi_{k}=1\), \[\max_{k=1,\ldots,T}\mathbb{E}_{S\sim D_{[1:k]}}[\mathbf{R}_{D_{[1:k]}}(\mathbf{A}_{k}(S))-\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{[1:k]}}(h)]<\epsilon_{n}.\] We use \(\epsilon\) to represent the error rate, where the index \(n\) of \(\epsilon_{n}\) is the total number of samples. The equation \(\lim_{n\rightarrow+\infty}\epsilon_{n}=0\) means that the error rate decreases to \(0\) as \(n\) goes to \(+\infty\). In **Definition 3.1**, the risk function is calculated over \(D_{[1:k]}\) at task \(k\), which means the data of all the past tasks and the current task are observable for optimization. It is a desirable property for CIL to take the expectation over \(D_{[1:k]}\), as it constructs a model that is equivalent to the model built with the full training data of all tasks. 
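As an entirely illustrative reading of this notation, the sketch below draws a sample from the mixture \(D_{[1:k]}\) and estimates a risk \(\mathbf{R}_{D}(h)\) by a Monte-Carlo average; all names are our own assumptions.

```python
import random

def sample_mixture(task_samplers, pis, k, n):
    """Draw n labeled points from D_[1:k]: pick task i <= k with probability
    proportional to pi_i, then sample one (x, y) from task_samplers[i]."""
    idx = random.choices(range(k), weights=pis[:k], k=n)
    return [task_samplers[i]() for i in idx]

def empirical_risk(h, samples, loss):
    """Monte-Carlo estimate of R_D(h) = E_{(x,y)~D}[ l(h(x), y) ]."""
    return sum(loss(h(x), y) for x, y in samples) / len(samples)

# a simple 0-1 loss, as used for classification accuracy
zero_one = lambda y_pred, y: float(y_pred != y)
```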
Generally, when an algorithm satisfies **Definition 3.1**, the system is already learnable, because this is just traditional supervised learning, which can observe all the training data of all tasks with no OOD data involved (hence _closed-world_). However, when we apply the algorithm \(\mathbf{A}\) to solve for \(\mathbf{A}(S)\) in practice, we usually cannot access all samples in \(S\); the setting is partially-observable instead of fully-observable. That is the case for continual learning, as it assumes that in learning the new/current task, the training data of the previous/past tasks is not accessible, or at least most of it is not. Due to the lack of full observations, we have to define \(\mathbf{A}_{k}^{r}\) recursively. For any \(S\sim D_{[1:k]}\), we define \[\mathbf{A}_{k}^{r}(S)=\mathbf{A}_{k}^{r}(S|_{k},\mathbf{A}_{k-1}^{r}(S|_{[1:k-1]})). \tag{2}\] The concrete algorithm depends on the implementation. In the following discussion, we assume that learning a new task does not invalidate the error bounds of previous tasks. This is a valid assumption, as existing algorithms (Serra et al., 2018; Wortsman et al., 2020) achieve little or no forgetting. The version of **Definition 3.1** for partial observations is as follows. **Definition 3.2** (Partially-Observable Separated-Task Closed-World Learnability).: Given a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), we say CIL is learnable if there exists an algorithm \(\mathbf{A}\) and a sequence \(\{\epsilon_{n}\mid\lim_{n\rightarrow+\infty}\epsilon_{n}=0\}\) s.t. **(i)** for any \(D_{1},\ldots,D_{T}\in\mathcal{D}\) with \(supp\,D_{k}\cap supp\,D_{k^{\prime}}=\emptyset,k\neq k^{\prime}\), **(ii)** for any \(\pi_{1},\ldots,\pi_{T}>0\) with \(\sum_{k}\pi_{k}=1\), \[\max_{k=1,\ldots,T}\mathbb{E}_{S\sim D_{[1:k]}}[\mathbf{R}_{D_{k}}(\mathbf{A}_{k}^{r}(S))-\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{k}}(h)]<\epsilon_{n}.\] In **Definition 3.2**, the risk function is calculated over \(D_{k}\) alone, as only the current task data \(D_{k}\) is observable while the past tasks are not. It would be desirable if **Definition 3.2** implied **Definition 3.1**, which would transform the learnability of a CIL problem into the learnability of a supervised problem. Unfortunately, **Definition 3.2** does not imply **Definition 3.1**. **Theorem 3.3** shows that there exists a trivial hypothesis function that satisfies **Definition 3.2** but does not satisfy **Definition 3.1**. **Theorem 3.3** (**Definition 3.2** does not imply **Definition 3.1**).: _For a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), if **Definition 3.2** holds and \(\mathcal{H}\) has the capacity to learn more than one task, then there exists \(h\in\mathcal{H}\) s.t. **Definition 3.2** holds but **Definition 3.1** does not._ The proof is given in Appendix A. The main reason is that only the samples of the current task are observable, while samples of both past and future tasks are hard to exploit. Looking forward, when training the current task we have no access to any information about future tasks, whose samples are regarded as _out-of-distribution_ (OOD) with respect to the current and past tasks. Inspired by **Theorem 3.3**, we take OOD detection into consideration and generalize **Definition 3.1** to the open-world setting. 
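As an aside, the recursive structure of Eq. 2 corresponds to the usual continual learning loop; a minimal sketch follows, with a hypothetical `train_one_task` API that is assumed (as in the discussion above) to avoid catastrophic forgetting.

```python
def train_recursive(task_splits, train_one_task):
    """Recursive CIL training per Eq. 2: the model after task k is built
    from the current split S|_k and the model after task k-1 only.

    task_splits: list [S_1, ..., S_T] of per-task sample sets
    train_one_task(S_k, prev_model): returns a new model; assumed to
        preserve previous tasks' behavior (no catastrophic forgetting)
    """
    model = None
    for S_k in task_splits:
        # A^r_k(S) = A^r_k(S|_k, A^r_{k-1}(S|_[1:k-1]))
        model = train_one_task(S_k, model)
    return model
```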
**Definition 3.4** (Fully-Observable Separated-Task Open-World Learnability).: Given a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), we say CIL is learnable if there exists an algorithm \(\mathbf{A}\) and a sequence \(\{\epsilon_{n}\mid\lim_{n\rightarrow+\infty}\epsilon_{n}=0\}\) s.t. **(i)** for any \(D_{1},\ldots,D_{T}\in\mathcal{D}\) with \(supp\,D_{k}\cap supp\,D_{k^{\prime}}=\emptyset,k\neq k^{\prime}\), **(ii)** for any \(\pi_{1},\ldots,\pi_{T}>0\) with \(\sum_{k}\pi_{k}=1\), and **(iii)** for any \(O_{(X_{1},Y_{1})},\ldots,O_{(X_{T},Y_{T})}\in\mathcal{D}\) and any \(\alpha_{1},\ldots,\alpha_{T}\in[0,1)\), \[\max_{k=1,\ldots,T}\mathbb{E}_{S\sim D_{[1:k]}}[\mathbf{R}_{D_{[1:k]}^{\alpha_{[1:k]}}}(\mathbf{A}_{k}(S))-\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{[1:k]}^{\alpha_{[1:k]}}}(h)]<\epsilon_{n},\] where \(D_{[1:k]}^{\alpha_{[1:k]}}=\sum_{i=1}^{k}(1-\alpha_{i})D_{i}+\alpha_{i}O_{(X_{i},Y_{i})}\). The learnability in **Definition 3.4** is guaranteed by previous work (Fang et al., 2022), which studies the learnability of OOD detection. Clearly, when **Definition 3.4** is satisfied, **Definition 3.1** is satisfied, as shown in **Theorem 3.5**. **Theorem 3.5** (**Definition 3.4** implies **Definition 3.1**).: _For a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), if **Definition 3.4** holds, then **Definition 3.1** holds._ The proof is given in Appendix A. When we have no access to samples of past tasks in practice, we define \(\mathbf{A}_{k}^{r}\) recursively as in Eq. 2. The partially-observable version of **Definition 3.4** is stated below. In **Definition 3.6**, the risk function is over \(D_{k}\) instead of \(D_{[1:k]}\) because only partial observations are available. **Definition 3.6** (Partially-Observable Separated-Task Open-World Learnability).: Given a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), we say CIL is learnable if there exists an algorithm \(\mathbf{A}\) and a sequence \(\{\epsilon_{n}\mid\lim_{n\rightarrow+\infty}\epsilon_{n}=0\}\) s.t. **(i)** for any \(D_{1},\ldots,D_{T}\in\mathcal{D}\) with \(supp\,D_{k}\cap supp\,D_{k^{\prime}}=\emptyset,k\neq k^{\prime}\), **(ii)** for any \(\pi_{1},\ldots,\pi_{T}>0\) with \(\sum_{k}\pi_{k}=1\), and **(iii)** for any \(O_{(X_{1},Y_{1})},\ldots,O_{(X_{T},Y_{T})}\in\mathcal{D}\) and any \(\alpha_{1},\ldots,\alpha_{T}\in[0,1)\), \[\max_{k=1,\ldots,T}\mathbb{E}_{S\sim D_{[1:k]}}[\mathbf{R}_{D_{k}^{\alpha_{k}}}(\mathbf{A}_{k}^{r}(S))-\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{k}^{\alpha_{k}}}(h)]<\epsilon_{n},\] where \(D_{k}^{\alpha_{k}}=(1-\alpha_{k})D_{k}+\alpha_{k}O_{(X_{k},Y_{k})}\). Note that Fang et al. (2022) showed that OOD detection is learnable under a compatibility condition for a single OOD detection problem, while **Definition 3.6** concerns learnability with respect to an ensemble of multiple OOD detection problems. Once each OOD detection problem is learnable, the ensemble is also learnable. With this definition, we derive that CIL is learnable because OOD detection is learnable. In contrast to **Theorem 3.3**, which shows that partially-observable learnability does not imply fully-observable learnability in the closed-world setting, **Theorem 3.7** shows that in the open-world setting (i.e., with OOD data), the learnability of a CIL system can be converted to a series of OOD learnability problems. 
**Theorem 3.7** (**Definition 3.6** implies **Definition 3.4**).: _For a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), if **Definition 3.6** holds, \(\mathcal{H}\) enjoys enough capacity, i.e., \(\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{[1:k]}^{\alpha_{[1:k]}}}(h)=0\), and the loss function on all tasks is bounded by the sum of the loss functions on each task (i.e., Eq. 8 in the Appendix), then **Definition 3.4** holds and the upper bound \(\epsilon_{n}\) is multiplied by \(\max_{k=1,\ldots,T}\sum_{t=1}^{k}\frac{\pi_{[t:T]}}{\pi_{[1:k]}}\)._ The proof is given in Appendix A. **Theorem 3.7** connects **Definition 3.6** to **Definition 3.4**, and **Theorem 3.5** connects **Definition 3.4** to **Definition 3.1**, which together give the desirable property of CIL. When all tasks have the same weight \(\pi_{1}=\cdots=\pi_{T}=1/T\), the multiplier \(\max_{k=1,\ldots,T}\sum_{t=1}^{k}\frac{\pi_{[t:T]}}{\pi_{[1:k]}}=T\), i.e., it grows linearly with the number of tasks. Though **Theorem 3.7** gives an upper bound to induce **Definition 3.4** from **Definition 3.6**, the hypothesis function that satisfies **Definition 3.4** is recursively derived from the previous tasks (see the proof). We can also observe that when tasks have different weights, the multiplier \(\max_{k=1,\ldots,T}\sum_{t=1}^{k}\frac{\pi_{[t:T]}}{\pi_{[1:k]}}\) depends on the order of tasks. It is undesirable for the hypothesis function to depend on the order of tasks. When we can acquire some replay data of past tasks and treat them as OOD data, we have the following corollary, which gives an order-free hypothesis function. **Corollary 3.8**.: _For a set of distributions \(\mathcal{D}\) and a hypothesis function space \(\mathcal{H}\), if **Definition 3.6** holds, \(\mathcal{H}\) enjoys enough capacity, i.e., \(\inf_{h\in\mathcal{H}}\mathbf{R}_{D_{[1:k]}^{\alpha_{[1:k]}}}(h)=0\), the loss function on all tasks is bounded by the sum of the loss functions on each task (i.e., Eq. 10 in the Appendix), and we treat data of past tasks as OOD data, then **Definition 3.4** holds and the upper bound \(\epsilon_{n}\) is multiplied by \(\max_{k=1,\ldots,T}\frac{k\pi_{[1:T]}}{\pi_{[1:k]}}\)._ The proof is given in Appendix A. ## 4 Proposed Method The learnability in **Definition 3.6** is defined over the OOD function of each task. By **Definition 1.3** of OOD, an OOD function is capable of classification (i.e., WP) for IND instances and rejection of OOD instances (or TP, as TP can be defined using OOD and differs from it by a constant factor (Kim et al., 2022b)). As discussed earlier, we use the masks in HAT (Serra et al., 2018) to protect each OOD model to ensure there is no forgetting. Following exactly this theoretical framework, an algorithm can be designed that works quite well (see ROW (-WP) in Tab. 4). However, it is possible to do better by introducing a WP head, so that the OOD head is used only for estimating TP rather than for handling both WP and TP. The proposed method ROW is a replay-based method. At each task \(k\), the system receives dataset \(D_{k}\) and leverages the replay data saved from previous tasks in the replay memory buffer \(\mathcal{M}\) as the OOD data of the task to train an OOD detection head and also to fine-tune the WP head. Specifically, the model of each task has two heads: one for OOD (for computing TP) and one for WP. 
That is, we optimize the set of parameters \((\Psi_{k},\theta_{k},\phi_{k})\), where \(\Psi_{k}\) is the parameter set of the feature extractor \(f_{k}\), \(\theta_{k}\) is the parameter set of the OOD head \(h_{k}\), and \(\phi_{k}\) is the parameter set of the WP head (i.e., classifier) \(g_{k}\). The two task-specific heads \(h_{k}\) and \(g_{k}\) receive the feature \(u\) from the shared feature extractor \(f_{k}\) and produce the TP and WP probabilities, respectively. The training consists of three steps: 1) training the feature extractor \(f_{k}\) and the OOD head \(h_{k}\) using both IND instances in \(D_{k}\) and OOD instances in \(\mathcal{M}\) (i.e., the replay data), 2) fine-tuning a WP head \(g_{k}\) for the task using \(D_{k}\) based on the fixed feature extractor \(f_{k}\), and 3) fine-tuning the OOD heads of all tasks that have been learned so far. Training steps 2 and 3 are fast, as both simply fine-tune the single-layer classifiers (details below). The outputs from the two heads are used to compute the final CIL prediction probability in Eq. 1. An overview of the training and prediction process is illustrated in Fig. 1. **1) Training Feature Extractor and OOD Head.** This step trains the OOD head \(h_{k}\) for task \(k\). Its feature extractor \(f_{k}\) is also shared by the WP head (see below). An illustration of the training process is given in Fig. 1(a). Since OOD instances are any instances whose labels do not belong to task \(k\), we leverage the task data \(D_{k}\) as IND instances and the saved replay instances of tasks \(k^{\prime}\neq k\) in the memory buffer \(\mathcal{M}\) as OOD instances, represented by an OOD class (in red) in the OOD head. The network \(h_{k}\circ f_{k}\) is trained to maximize the probability \(p(y|x,k)=\text{softmax}(h_{k}(f(x,k;\Psi_{k});\theta_{k}))_{y}\) for an IND instance \(x\in D_{k}\) and to maximize the probability \(p(ood|x,k)\) for an OOD instance \(x\in\mathcal{M}\). Formally, this is achieved by minimizing the sum of cross-entropy losses \[\begin{split}\mathcal{L}_{ood}(\Psi_{k},\theta_{k})=&-\frac{1}{2N}\Big{(}\sum_{(x,y)\in D_{k}}\log p(y|x,k)\\ &+\sum_{(x,y)\in\mathcal{M}}\log p(ood|x,k)\Big{)},\end{split} \tag{3}\] where \(N\) is the number of instances in \(D_{k}\). We upsample the replay instances so that their number matches that of the current task data \(D_{k}\). The first term discriminates among the IND classes, while the second distinguishes IND from OOD instances. To deal with forgetting, we use the HAT method (Serra et al., 2018) (see Appendix B). **2) Fine-Tuning the WP Head.** Given the feature extractor trained in the first step, we fix the feature extractor and fine-tune the WP head \(g_{k}\) (i.e., the WP classifier) using only \(D_{k}\) by adjusting the parameters \(\phi_{k}\). This is achieved by minimizing the cross-entropy loss \[\mathcal{L}_{WP}(\phi_{k})=-\frac{1}{N}\sum_{(x,y)\in D_{k}}\log p(y|x,k). \tag{4}\] **WP probabilities** for the classes of task \(k\) are just the output softmax probabilities. **3) Fine-Tuning the OOD Heads of All Tasks.** The OOD head \(h_{k}\) built in step 1) is biased: for early tasks, the instances in \(\mathcal{M}\) are less diverse, so their OOD heads are weaker than those of later tasks, for which the instances in \(\mathcal{M}\) are more diverse. To mitigate this bias, after training each task we fine-tune the OOD heads of all tasks using only the replay data in \(\mathcal{M}\). 
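Before detailing this third step, here is a minimal PyTorch-style sketch of the losses in steps 1) and 2) (Eq. 3 and Eq. 4). The module names (`feature_extractor`, `ood_head`, `wp_head`) and the toy class count are our own assumptions, and HAT masking and replay upsampling are omitted.

```python
import torch
import torch.nn.functional as F

num_classes_k = 10            # classes in task k (toy value, an assumption)
OOD_LABEL = num_classes_k     # index of the extra OOD class in the OOD head h_k

def ood_loss(feature_extractor, ood_head, x_ind, y_ind, x_ood):
    """Eq. 3 (sketch): IND cross-entropy plus a term pushing replay (OOD)
    instances to the OOD class; trains f_k and h_k jointly."""
    logits_ind = ood_head(feature_extractor(x_ind))   # (N, num_classes_k + 1)
    logits_ood = ood_head(feature_extractor(x_ood))
    y_ood = torch.full((x_ood.size(0),), OOD_LABEL, dtype=torch.long)
    return 0.5 * (F.cross_entropy(logits_ind, y_ind)
                  + F.cross_entropy(logits_ood, y_ood))

def wp_loss(feature_extractor, wp_head, x_ind, y_ind):
    """Eq. 4 (sketch): fine-tune only the WP head g_k on frozen features."""
    with torch.no_grad():                 # feature extractor is fixed in step 2
        feats = feature_extractor(x_ind)
    return F.cross_entropy(wp_head(feats), y_ind)
```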
After training task \(k\), we have \(\mathcal{M}\) with replay instances of classes from task \(1\) to \(k\). For each task \(k^{\prime}\leq k\), we reconstruct a new IND dataset \(\tilde{D}_{k^{\prime}}\) by selecting the instances corresponding to task \(k^{\prime}\) from \(\mathcal{M}\), and a new pseudo memory buffer \(\tilde{\mathcal{M}}\) by removing the instances in \(\tilde{D}_{k^{\prime}}\) from \(\mathcal{M}\). We then fine-tune every OOD head by minimizing the loss function \[\begin{split}\mathcal{L}_{TP}(\theta_{k^{\prime}})=-\frac{1}{M}&\Big{(}\sum_{(x,y)\in\tilde{\mathcal{M}}}\log p(ood|x,k^{\prime})\\ &+\sum_{(x,y)\in\tilde{D}_{k^{\prime}}}\log p(y|x,k^{\prime})\Big{)},\end{split} \tag{5}\] where \(M\) is \(|\tilde{D}_{k^{\prime}}|+|\tilde{\mathcal{M}}|\). Figure 1: Overview of training steps at task \(k\) and inference. (a): the first step of training the feature extractor and OOD head for task \(k\). The system receives both an IND instance \(x\in D_{k}\) and an OOD instance \(x_{o}\in\mathcal{M}\). The output has IND classes (in blue) and the OOD class or label (in red). (b): the second step of fine-tuning the WP head using the IND training data only. (c): fine-tuning all OOD heads using both IND and OOD instances. (d): inference/prediction. For a test instance \(x\), obtain TP and WP probabilities, and compute the CIL probability as in Eq. 1. Although the TP probability can be defined simply using the fine-tuned OOD heads, it can be further improved, which we discuss next. ### Distance-Based Coefficient We can further improve the performance by incorporating a distance-based coefficient, defined at the feature level, into the output of the OOD head. The intuition is based on the observation that samples identified as OOD by a score function defined at the feature level are not recognized by a score function defined at the output level, and vice versa (Wang et al., 2022). Their combination usually produces a better OOD detector. After training task \(k\), we compute the mean of the feature vectors for each class of the task and the covariance of the features. Denote the mean of class \(y\) by \(\mu_{y}\) and the covariance by \(\Sigma_{k}=\sum_{y}\Sigma_{y}\), where \(\Sigma_{y}\) is the covariance of the features of class \(y\). Using the Mahalanobis distance (MD), the coefficient of an instance \(x\) for task \(k\) is \[c_{k}(x)=\max_{y}1/MD(x;\mu_{y},\Sigma_{k}). \tag{6}\] The coefficient is large if the feature of a test instance \(x\) is close to one of the sample means of the task and small otherwise. We finally define the **TP probability** for task \(k\) as \[\mathbf{P}(X_{k}|x)=c_{k}(x)\max_{j}\text{softmax}(h_{k}(x))_{j}/Z, \tag{7}\] where \(Z\) is the normalizing factor and the maximum is taken over the softmax outputs of the IND classes \(j\) obtained by the OOD head \(h_{k}\). The \(\max_{j}\) operation can also be seen as the maximum softmax probability score (Hendrycks and Gimpel, 2016). With the WP and TP probabilities, we now make a CIL prediction based on Eq. 1. ## 5 Empirical Evaluation **Baselines.** We compare the proposed ROW2 with 12 baselines. Five are exemplar-free (i.e., saving no previous data) methods and seven are replay-based methods. The exemplar-free methods are: **HAT**(Serra et al., 2018), **OWM**(Zeng et al., 2019), **SLDA**(Hayes and Kanan, 2020), **PASS**(Zhu et al., 2021), and **L2P**(Wang et al., 2022). For the multi-head method HAT, we make predictions by taking \(\arg\max\) over the concatenated logits from each task model, as in (Kim et al., 2022). 
The replay methods are: **iCaRL**(Rebuffi et al., 2017), **A-GEM**(Chaudhry et al., 2019), **EEIL**(Castro et al., 2018), **GD**(Lee et al., 2019) without external data, **DER++**(Buzzega et al., 2020), **HAL**(Chaudhry et al., 2021), and **MORE**(Kim et al., 2022). Footnote 2: [https://github.com/k-gyuhak/CLOOD](https://github.com/k-gyuhak/CLOOD) We could not run the recent system in (Wu et al., 2022), which uses a pre-trained model, as no code is released. We also do not include the existing parameter-isolation methods that deal with CIL problems, as they are very weak. HyperNet (von Oswald et al., 2020) and PR (Henning et al., 2021) find the task-id via an entropy function, and SupSup (Wortsman et al., 2020) finds it via gradient update. They then make a within-task prediction. SupSup, PR, and iTAML (Rajasegaran et al., 2020) assume the test instances come in batches and all samples in a batch belong to one task. When tested per sample, HyperNet, SupSup, PR and iTAML achieve 22.4, 11.8, 45.2 and 33.5 on 10 tasks of CIFAR100, respectively, which are much lower than the 51.4 of iCaRL. CCG (Abati et al., 2020) has no public code. The systems in (Kim et al., 2022) are also not included because they are quite weak, as their contrastive learning does not work well with a pre-trained model. The results reported in their paper based on ResNet-18 are also weaker than ROW's. **Datasets.** We use three popular continual learning benchmark datasets. **1). CIFAR10**(Krizhevsky and Hinton, 2009). This is an image classification dataset consisting of 60,000 color images of size 32x32, among which 50,000 are training data and 10,000 are testing data. It has 10 different classes. **2). CIFAR100**(Krizhevsky and Hinton, 2009). This dataset consists of 50,000 training images and 10,000 testing images with 100 classes. Each image is in color and of size 32x32. **3). Tiny-ImageNet**(Le and Yang, 2015). This dataset has 200 classes with 500 training images of size 64x64 per class. The validation data has 50 samples per class. Since no label is provided for the test data, we use the validation set for testing, as in (Zhu et al., 2021). **Backbone Architecture and Pre-Training.** We use the backbone architecture of the transformer DeiT-S/16 (Touvron et al., 2021). As pre-trained models or feature extractors are increasingly used in all types of applications, including continual learning (Wang et al., 2022; Kim et al., 2022; Wu et al., 2022), we also take this approach. Following (Kim et al., 2022), to ensure there is no information leak from pre-training to continual learning, the pre-trained model/network is trained using 611 classes of ImageNet after removing 389 classes which are similar or identical to the classes of CIFAR and Tiny-ImageNet. To leverage the strong performance of the pre-trained model while adapting to new knowledge, we fix the feature extractor and append trainable **adapter modules** consisting of fully-connected networks with one hidden layer at each transformer layer (Houlsby et al., 2019), except for SLDA and L2P.3 The number of neurons in each hidden layer is 64 for CIFAR10 and 128 for the other datasets. Note that _all baselines and ROW use the same architecture and the same pre-trained model for fairness_, as using a pre-trained model improves the performance (Kim et al., 2022a; Ostapenko et al., 2022). Footnote 3: For SLDA and L2P, we follow the original papers. SLDA fine-tunes only the classifier with a fixed feature extractor, and L2P trains learnable prompts. 
Note that we do not use pre-trained models like CLIP (Radford et al., 2021) or others trained using the full ImageNet data, due to **information leak** both in terms of features and class labels, because our experimental data have been used in training these pre-trained models. This leakage can seriously affect the results. For example, the L2P system using the model pre-trained on the full ImageNet data performs extremely well, but after the overlapping classes are removed from pre-training, its performance drops greatly. In Tab. 2, we can see that it is in fact quite weak. **Training Details.** For the replay-based methods, we follow the budget sizes of (Rebuffi et al., 2017; Buzzega et al., 2020). For our method, we use the memory budget strategy (Chaudhry et al., 2019) to save an equal number of samples per class. Denote the budget size by \(|\mathcal{M}|\). For CIFAR10, we split the 10 classes into 5 tasks with 2 classes per task. We refer to this experiment as C10-5T. The memory budget size \(|\mathcal{M}|\) is 200. For CIFAR100, we conduct two experiments. We split the 100 classes into 10 and 20 tasks, where each task has 10 classes and 5 classes, respectively. We refer to these experiments as C100-10T and C100-20T. We choose \(|\mathcal{M}|=\) 2,000. For Tiny-ImageNet, we conduct two experiments. We split the 200 classes into 5 tasks with 40 classes per task and 10 tasks with 20 classes per task. We refer to these experiments as T-5T and T-10T, respectively. We save 2,000 samples in total for both experiments. Following the random class order protocol of the existing methods (Rebuffi et al., 2017; Lee et al., 2019), we randomly generate 5 different class orders for each experiment and report the average accuracy over the 5 random orders. For all the experiments of our system, we find a good set of learning rates and the number of epochs via a validation set made of 10% of the training data. The hyper-parameters of our system are reported in Appendix C. For the baselines, we use the experiment setups reported in their official papers. If we could not reproduce a result, we find the hyper-parameters via the validation set. **Evaluation Metrics.** We use two metrics: average classification accuracy (ACA) and average forgetting rate. ACA after the last task \(t\) is \(\mathcal{A}_{t}=\sum_{i=1}^{t}A_{i}^{t}/t\), where \(A_{i}^{t}\) is the accuracy of the model on the task-\(i\) data after learning task \(t\). The average forgetting rate after task \(t\) is \(\mathcal{F}_{t}=\frac{1}{t-1}\sum_{i=1}^{t-1}(A_{i}^{i}-A_{i}^{t})\) (Liu et al., 2020). This is also referred to as backward transfer in other literature (Lopez-Paz and Ranzato, 2017). ### Results and Comparison **Average Classification Accuracy.** Tab. 2 shows the average classification accuracy after the final task. The last column, Average, indicates the average performance of each method over the 5 experiments. Our proposed method ROW consistently performs the best. On average, ROW achieves 73.72% while the best replay baseline (MORE) achieves 71.59%. We observe that MORE is significantly better than the other baselines. This is because MORE actually builds an OOD model for each task, which is close to the proposed theory but less principled than ROW. The baselines SLDA and L2P were proposed to leverage a strong pre-trained feature extractor in their original papers. SLDA freezes the feature extractor and only fine-tunes the classifier. It performs well on the simple experiment C10-5T, but is significantly poorer than ROW on all experiments. 
This is because the fixed feature extractor does not adapt to new knowledge. Our method updates the feature extractor via adapter modules and is thus able to learn more complex problems. L2P trains a set of prompt embeddings. In the original paper, it uses a feature extractor that was pre-trained on ImageNet-21k, which already includes the classes of the continual learning (CL) evaluation datasets (information leak). After we remove the classes similar to those in the datasets used in CL, its performance drops dramatically (60.47% on average over the 5 experiments) and is much poorer than our ROW (73.72% on average). We conduct additional experiments with the size of the memory buffer reduced by half to show the effectiveness of our method. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Average \\ \hline HAT & 79.36 & 68.99 & 61.83 & **65.85** & 62.05 & 67.62 \\ OWM & 41.69 & 21.39 & 16.98 & 24.55 & 17.52 & 24.43 \\ SLDA & 88.64 & 67.82 & 67.08 & 57.93 & 57.93 & 68.02 \\ PASS & 86.12 & 68.90 & 66.77 & 61.03 & 58.34 & 68.25 \\ L2P & 73.59 & 61.72 & 53.84 & 59.12 & 54.09 & 60.47 \\ \hline iCaRL & 87.55 & 68.90 & 69.15 & 53.13 & 51.88 & 66.12 \\ A-GEM & 56.33 & 22.51 & 21.99 & 53.91 & 36.99 \\ EEIL & 82.34 & 68.08 & 63.79 & 53.34 & 50.38 & 63.59 \\ GD & 89.16 & 43.62 & 60.10 & 53.01 & 42.48 & 61.82 \\ DER++ & 84.63 & 69.73 & 70.03 & 55.84 & 54.02 & 66.89 \\ HAL & 84.38 & 67.17 & 67.37 & 52.80 & 55.25 & 65.39 \\ MORE & 89.16 & 70.23 & 70.53 & 64.97 & 63.06 & 71.59 \\ \hline ROW & **90.97** & **74.72** & **74.60** & 65.11 & **63.21** & **73.72** \\ \hline \hline \end{tabular} \end{table} Table 2: Average classification accuracy after the final task. ‘-XT’ means X number of tasks. Our system ROW and all baselines use the pre-trained network. The last 7 baselines are replay-based systems. The last column shows the average of each row. We highlight the best results in each column in bold. The new memory buffer sizes for CIFAR10, CIFAR100, and Tiny-ImageNet are 100, 1,000, and 1,000, respectively. Tab. 3 shows that our method ROW experiences little reduction in performance, whereas the other replay-based baselines suffer from significant performance reductions. On average over the 5 experiments, ROW achieves 72.70%, while with the original memory buffer it achieved 73.72%. In contrast, the second best baseline DER++ drops from 66.89% to 62.16%. 
MORE is also robust with small memory sizes, but its average accuracy is 71.44%, which is still lower than ROW's. **Average Forgetting Rate.** We compare the forgetting rate of each system after learning the last task in Fig. 2. The forgetting rates of the proposed method ROW are 7.05, 7.99, and 9.72 on C10-5T, C100-10T and C100-20T, respectively. iCaRL forgets less than ours on C10-5T and C100-20T, as it achieves 4.95 and 8.31, respectively. However, iCaRL was not able to adapt to new knowledge effectively, as its accuracies are much lower than ROW's on the same experiments. The forgetting rate of SLDA on C10-5T is 5.74, but a similar observation applies as for iCaRL. The average accuracy over the 5 experiments of ROW is 73.72, while those of iCaRL and SLDA are only 66.12 and 68.02, respectively. According to the forgetting rates, the best baseline (MORE) adapts to new knowledge well, but it was not able to retain the knowledge as effectively as ROW. Its forgetting rates are 10.30, 22.96, and 22.90 on C10-5T, C100-10T, and C100-20T, respectively, which are much larger than ours. This results in the lower performance of MORE compared to ROW. It is **important to note** that our system actually has no forgetting, due to the CF prevention by HAT. The _'forgetting'_ occurs not because the system forgets task knowledge, but because classification becomes harder with more classes. ### Ablation Experiments We conduct an ablation study to measure the performance after each component is removed from ROW. We consider removing two components: the WP head and the distance-based coefficient (MD) of Sec. 4.1. The method without the WP head (ROW (-WP)) simply uses the OOD head obtained from step 1) with Eq. 3. This method makes the final prediction by taking \(\operatorname*{arg\,max}\) over the concatenated logit values (excluding the OOD label) from each task network (i.e., the OOD head). Tab. 4 shows the average classification accuracy. The model after removing WP also works well, as it already outperforms most of the baselines on C10-5T and outperforms all the baselines on C100-10T and 20T. In other words, using the OOD head constructed following the theoretical framework is effective. The model is still functional after removing both components (the WP head and the distance-based coefficient MD), as shown in the last row of the table (ROW (-WP-MD)). ## 6 Conclusion To the best of our knowledge, there was no reported study on the learnability of class incremental learning (CIL). This paper performed such a study and showed that CIL is learnable under some practically reasonable assumptions. A new CIL algorithm (called ROW) was also proposed based on the theory. Experimental results demonstrated that ROW outperforms strong baselines. ## Acknowledgements The work of Gyuhak Kim and Bing Liu was supported in part by a research contract from KDDI and three NSF grants (IIS-1910424, IIS-1838770, and CNS-2225427). \begin{table} \begin{tabular}{l c c c} \hline \hline & C10-5T & C100-10T & C100-20T \\ \hline ROW & 90.97 & 74.72 & 74.60 \\ ROW (-WP) & 88.50 & 72.29 & 71.97 \\ ROW (-WP-MD) & 84.06 & 67.53 & 65.85 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance gains with the proposed techniques. The method -WP indicates removing the WP head and using only the OOD head obtained in step 1). The method -MD indicates removing the distance-based coefficient. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Method & C10-5T & C100-10T & C100-20T & T-5T & T-10T & Avg. \\ \hline iCaRL & 86.08 & 66.96 & 68.16 & 47.27 & 49.51 & 63.60 \\ A-GEM & 56.64 & 23.18 & 20.76 & 31.44 & 23.73 & 31.15 \\ EEIL & 77.44 & 62.95 & 57.66 & 48.36 & 44.59 & 58.24 \\ GD & 85.96 & 57.17 & 50.30 & 46.09 & 32.41 & 54.39 \\ DER++ & 80.09 & 64.89 & 65.84 & 50.74 & 49.24 & 62.16 \\ HAL & 79.16 & 62.65 & 63.96 & 48.17 & 47.11 & 60.21 \\ MORE & 88.13 & 71.69 & 71.29 & 64.17 & 61.90 & 71.44 \\ \hline ROW & **89.70** & **73.63** & **71.86** & **65.42** & **62.87** & **72.70** \\ \hline \hline \end{tabular} \end{table} Table 3: The classification accuracy of the replay-based baselines and our method ROW with smaller memory buffer sizes. The buffer sizes are reduced by half, and the new sizes are: 100 for CIFAR10 and 1,000 for CIFAR100 and Tiny-ImageNet. Numbers in bold are the best results in each column. Figure 2: Average forgetting rate (CIL). The lower the rate, the better the method is.
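For reference, the two evaluation metrics defined in Sec. 5 can be computed from an accuracy matrix as in the following small sketch; the matrix values are toy numbers, not results from the paper.

```python
def average_accuracy(A, t):
    """ACA after task t (0-indexed): mean of A[t][i] over tasks i = 0..t,
    where A[t][i] is the accuracy on task i after learning task t."""
    return sum(A[t][:t + 1]) / (t + 1)

def average_forgetting(A, t):
    """Average forgetting after task t: mean over past tasks of the drop
    from the accuracy right after learning task i to the accuracy now."""
    if t == 0:
        return 0.0
    return sum(A[i][i] - A[t][i] for i in range(t)) / t

# toy accuracy matrix for 3 tasks (rows: after task t, columns: task i)
A = [[0.90],
     [0.85, 0.88],
     [0.80, 0.84, 0.86]]
print(average_accuracy(A, 2))    # (0.80 + 0.84 + 0.86) / 3
print(average_forgetting(A, 2))  # ((0.90 - 0.80) + (0.88 - 0.84)) / 2
```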
2305.04574
TAPS: Connecting Certified and Adversarial Training
Training certifiably robust neural networks remains a notoriously hard problem. On one side, adversarial training optimizes under-approximations of the worst-case loss, which leads to insufficient regularization for certification, while on the other, sound certified training methods optimize loose over-approximations, leading to over-regularization and poor (standard) accuracy. In this work we propose TAPS, an (unsound) certified training method that combines IBP and PGD training to yield precise, although not necessarily sound, worst-case loss approximations, reducing over-regularization and increasing certified and standard accuracies. Empirically, TAPS achieves a new state-of-the-art in many settings, e.g., reaching a certified accuracy of $22\%$ on TinyImageNet for $\ell_\infty$-perturbations with radius $\epsilon=1/255$. We make our implementation and networks public at https://github.com/eth-sri/taps.
Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev
2023-05-08T09:32:05Z
http://arxiv.org/abs/2305.04574v2
# TAPS: Connecting Certified and Adversarial Training ###### Abstract Training certifiably robust neural networks remains a notoriously hard problem. On one side, adversarial training optimizes under-approximations of the worst-case loss, which leads to insufficient regularization for certification, while on the other, sound certified training methods optimize loose over-approximations, leading to over-regularization and poor (standard) accuracy. In this work we propose TAPS, an (unsound) certified training method that combines IBP and PGD training to yield precise, although not necessarily sound, worst-case loss approximations, reducing over-regularization and increasing certified and standard accuracies. Empirically, TAPS achieves a new state-of-the-art in many settings, e.g., reaching a certified accuracy of \(22\%\) on TinyImageNet for \(\ell_{\infty}\)-perturbations with radius \(\epsilon=1/255\). We make our implementation and networks public at [https://github.com/eth-sri/taps](https://github.com/eth-sri/taps). ## 1 Introduction Robustness to adversarial attacks, _i.e_., small, imperceptible input perturbations that reduce the performance of neural networks (Biggio et al., 2013; Szegedy et al., 2014), has established itself as an important research area. Adversarial training methods, such as PGD (Madry et al., 2018), compute concrete perturbations that maximize the training loss, before training the network on these samples. This can be seen as optimizing an under-approximation of the worst-case loss. While it empirically improves robustness significantly, it generally does not induce sufficient regularization for certification and has been shown to fail in the face of more powerful attacks (Tramer et al., 2020). Neural Network Certification rigorously proves a network's adversarial robustness. While complete verification methods (Tjeng et al., 2019; Bunel et al., 2020; Zhang et al., 2022c; Ferrari et al., 2022) can decide every robustness property given enough (exponential) time, incomplete methods (Wong and Kolter, 2018; Singh et al., 2019; Zhang et al., 2018) trade precision for scalability. Sound Certified Training methods are empirically successful at increasing certified accuracies but induce over-regularization, leading to severely reduced standard accuracies. These techniques generally optimize an upper bound on the worst-case loss, typically computed via a form of bound propagation (Mirman et al., 2018; Gowal et al., 2018; Wong and Kolter, 2018; Zhang et al., 2018; Shi et al., 2021). We illustrate this idea using the popular interval bound propagation (IBP) in Figure 1. There, we show histograms (left side) of the worst-case loss approximation error over the test set. Positive values (right of the y-axis) correspond to over-approximations and negative values (left of the y-axis) to under-approximations. On the right side, we illustrate the corresponding certified and standard accuracies. Sound methods (IBP: top left in Figure 1), by definition, always yield over-approximations (positive errors) of the optimization objective, _i.e_., the true worst-case loss. Intuitively, this leads to already robust points inducing high losses, resulting in over-regularization and thus reduced network capacity for the remaining not yet robust points, which in turn reduces accuracy (right in Figure 1). Unsound Certified Training methods reduce this over-regularization by sacrificing the soundness of the worst-case loss over-approximations in favor of more precise but not necessarily sound approximations. 
This tends to result in networks that achieve higher standard and certified accuracies, but can be harder to verify (Balunovic and Vechev, 2020; Palma et al., 2022; Muller et al., 2022a). Recent advances in certification techniques, however, have made their certification possible (Ferrari et al., 2022; Zhang et al., 2022c). We illustrate this idea for SABR in Figure 1, where we observe a \(6\)-fold reduction in the mean worst-case loss approximation error (middle left), leading to reduced regularization and yielding a more accurate network (right). Figure 1: Histograms of the worst-case loss approximation errors over the test set (left) for different training methods show that TAPS (our work) achieves the most precise approximations and highest certified accuracy (right). This Work proposes TAPS1, a novel (unsound) certified training method which yields precise worst-case loss approximations, reducing over-regularization and thus increasing certified and standard accuracies. Compared to SABR (the current state-of-the-art) in Figure 1, TAPS enjoys a further \(5\)-fold mean approximation error reduction and significantly reduced variance (bottom left), leading to improved certified and natural accuracies (right). The key technical insight behind TAPS is to combine IBP and PGD training via a gradient connector, a novel mechanism which allows for joint training, such that over-approximation errors of IBP and under-approximations of PGD cancel out. As we demonstrate empirically, TAPS yields exceptionally tight worst-case loss approximations and obtains state-of-the-art results on MNIST, CIFAR-10, and TinyImageNet. Footnote 1: Training via Adversarial Propagation through Subnetworks ## 2 Adversarial and Certified Training Here we provide the necessary background on adversarial and certified training. We consider a classifier \(F\colon\mathcal{X}\mapsto\mathcal{Y}\) parameterized by weights \(\boldsymbol{\theta}\) and predicting a class \(y_{\text{pred}}=F(\boldsymbol{x})=\operatorname*{arg\,max}_{y\in\mathcal{Y}}f_{y}(x)\) for every input \(\boldsymbol{x}\in\mathcal{X}\subseteq\mathbb{R}^{d}\) with label \(y\in\mathcal{Y}=\{1,\ldots,K\}\), where \(\boldsymbol{f}\colon\mathcal{X}\mapsto\mathbb{R}^{|\mathcal{Y}|}\) is a neural network, assigning a logit \(o_{i}\!:=\!f_{i}(\boldsymbol{x})\) to each class \(i\). Adversarial Robustness We call a classifier adversarially robust on an \(\ell_{p}\)-norm ball \(\mathcal{B}_{p}(\boldsymbol{x},\epsilon)\) if it classifies all elements within the ball to the correct class, \(F(\boldsymbol{x}^{\prime})=y\) for all perturbed inputs \(\boldsymbol{x}^{\prime}\in\mathcal{B}_{p}(\boldsymbol{x},\epsilon)\). In this work, we focus on \(\ell_{\infty}\)-robustness with \(\mathcal{B}_{\infty}(\boldsymbol{x},\epsilon):=\{\boldsymbol{x}^{\prime}\mid\|\boldsymbol{x}^{\prime}-\boldsymbol{x}\|_{\infty}\leq\epsilon\}\) and thus drop the subscript \(\infty\). Neural Network Verification is used to formally verify the robustness of a neural network for a given sample and robustness specification, _i.e_., to _prove_ that all inputs in the region \(\mathcal{B}(\boldsymbol{x},\epsilon)\) yield the correct classification. We call such samples \(\boldsymbol{x}\) certifiably robust and denote the portion of such samples as _certified accuracy_, forming a lower bound to the true robustness. Interval bound propagation (IBP) (Mirman et al., 2018; Gowal et al., 2018) is a particularly simple yet effective verification method. 
Conceptually, it computes over-approximations of a network's reachable set by propagating the input region \(\mathcal{B}(\boldsymbol{x},\epsilon)\) through the network. Then, it checks whether all outputs in the thus obtained reachable set yield the correct classification. This propagation sequentially computes a Box over-approximation (each dimension is described as an interval) of a layer's output, given a Box input. As an example, consider an \(L\)-layer network \(\boldsymbol{f}=\boldsymbol{h}_{L}\circ\operatorname*{ReLU}\circ\boldsymbol{h}_{L-1}\circ\ldots\circ\boldsymbol{h}_{1}\), with linear layers \(\boldsymbol{h}\) and \(\operatorname*{ReLU}\) activation functions. Given an input region \(\mathcal{B}(\boldsymbol{x},\epsilon)\), we over-approximate it as a hyper-box \([\underline{\boldsymbol{x}}^{0},\overline{\boldsymbol{x}}^{0}]\), centered at \(\boldsymbol{c}^{0}:=\boldsymbol{x}\) and with radius \(\boldsymbol{\delta}^{0}:=\epsilon\), such that the \(i^{\text{th}}\) dimension of the input satisfies \(x_{i}^{0}\in[c_{i}^{0}-\delta_{i}^{0},c_{i}^{0}+\delta_{i}^{0}]\). Given a linear layer \(\boldsymbol{h}_{l}(\boldsymbol{x}^{l-1})=\boldsymbol{W}\boldsymbol{x}^{l-1}+\boldsymbol{b}=:\boldsymbol{x}^{l}\), we then obtain the hyper-box relaxation of its output defined by center \(\boldsymbol{c}^{l}=\boldsymbol{W}\boldsymbol{c}^{l-1}+\boldsymbol{b}\) and radius \(\boldsymbol{\delta}^{l}=|\boldsymbol{W}|\boldsymbol{\delta}^{l-1}\), where \(|\cdot|\) denotes the elementwise absolute value. A ReLU activation \(\operatorname*{ReLU}(\boldsymbol{x}^{l-1}):=\max(0,\boldsymbol{x}^{l-1})\) can be relaxed by propagating the lower and upper bound separately, resulting in the output hyper-box \([\operatorname*{ReLU}(\underline{\boldsymbol{x}}^{l-1}),\operatorname*{ReLU}(\overline{\boldsymbol{x}}^{l-1})]\). Proceeding this way for all layers, we obtain lower and upper bounds on the network outputs \([\underline{\boldsymbol{o}},\overline{\boldsymbol{o}}]\) and can check if the score of the target class is greater than that of all other classes by computing upper bounds \(\overline{\boldsymbol{o}}^{\Delta}\) on the logit differences \(\boldsymbol{o}^{\Delta}:=\boldsymbol{o}-o_{y}\boldsymbol{1}\). Note that this is equivalent to showing that the maximum margin loss \(\mathcal{L}_{\text{MA}}(\boldsymbol{x}^{\prime},y)\) is less than \(0\) for all perturbed inputs \(\boldsymbol{x}^{\prime}\in\mathcal{B}(\boldsymbol{x},\epsilon)\), where \[\mathcal{L}_{\text{MA}}(\boldsymbol{x},y):=\max_{i\neq y}\overline{o}_{i}^{\Delta}. \tag{1}\] We note that other, more precise methods for bound computation exist (Singh et al., 2019, 2018; Xu et al., 2020; Zhang et al., 2020; Tjeng et al., 2019). Adversarial Training is used to train neural networks \(\boldsymbol{f}\) with weights \(\boldsymbol{\theta}\) that are _empirically_ robust to adversarial examples, by solving the following optimization problem: \[\boldsymbol{\theta}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\mathbb{E}_{\boldsymbol{x},y}\left[\max_{\hat{\boldsymbol{x}}\in\mathcal{B}(\boldsymbol{x},\epsilon)}\mathcal{L}_{\text{CE}}(\hat{\boldsymbol{x}},y)\right],\text{where} \tag{2}\] \[\mathcal{L}_{\text{CE}}(\boldsymbol{x},y):=\ln\big{(}1+\sum_{i\neq y}\exp(f_{i}(\boldsymbol{x})-f_{y}(\boldsymbol{x}))\big{)}. 
\tag{3}\] As the inner maximization objective in Equation (2) can generally not be solved exactly, adversarial training optimizes a lower bound by first computing concrete examples \(\hat{\boldsymbol{x}}\in\mathcal{B}(\boldsymbol{x},\epsilon)\) that maximize this inner loss term and then training the network \(\boldsymbol{\theta}\) with these samples. A well-established method for this is so-called _Projected Gradient Descent (PGD)_ training (Madry et al., 2018). Starting from a random initialization point \(\hat{\boldsymbol{x}}_{0}\in\mathcal{B}(\boldsymbol{x},\epsilon)\), it performs \(N\) update steps \(\hat{\boldsymbol{x}}_{n+1}=\Pi_{\mathcal{B}(\boldsymbol{x},\epsilon)}\big[\hat{\boldsymbol{x}}_{n}+\eta\operatorname*{sign}(\nabla_{\hat{\boldsymbol{x}}_{n}}\mathcal{L}_{\text{CE}}(\hat{\boldsymbol{x}}_{n},y))\big]\) with step size \(\eta\) and projection operator \(\Pi\). Networks trained this way typically exhibit good empirical robustness but remain hard to formally verify and vulnerable to stronger or different attacks (Tramer et al., 2020; Croce and Hein, 2020). Certified Training is used to train neural networks \(\mathbf{f}\) with weights \(\mathbf{\theta}\) that are _certifiably_ robust to adversarial examples. To this end, methods in the _certification aligned_ paradigm optimize an upper bound of the inner maximization objective in Equation (2), while _training aligned_ methods aim to find a precise approximation. Often, methods in both paradigms are based on evaluating the \(\mathcal{L}_{\text{CE}}\) loss function with upper bounds \(\overline{\boldsymbol{o}}^{\Delta}\) on the logit differences, obtained via bound propagation as described above. Following the certification aligned paradigm, IBP, for example, uses sound Box bounds on the logit differences, yielding the loss \[\mathcal{L}_{\text{IBP}}(\mathbf{x},y,\epsilon):=\ln\big{(}1+\sum_{i\neq y}\exp(\overline{o}_{i}^{\Delta})\big{)}. \tag{4}\] SABR (Muller et al., 2022), in contrast, belongs to the training aligned paradigm and achieves a more precise (although not sound) worst-case loss approximation \(\mathcal{L}_{\text{IBP}}(\mathbf{x}^{\prime},y,\tau)\) by not propagating the whole input region \(\mathcal{B}(\mathbf{x},\epsilon)\) but only an adversarially selected subset \(\mathcal{B}(\mathbf{x}^{\prime},\tau)\subset\mathcal{B}(\mathbf{x},\epsilon)\) with \(\tau<\epsilon\). This reduces approximation errors (see Figure 5) and thus regularization, leading to improved certified and standard accuracy. Orthogonally, COLT (Balunovic and Vechev, 2020) uses the tighter Zonotope domain (Singh et al., 2019) instead of Box bounds, combined with adversarial training, to reduce approximation errors: it splits a network into two parts \(\mathbf{f}=\mathbf{f}_{2}\circ\mathbf{f}_{1}\). During every stage of training, COLT fixes the weights \(\mathbf{\theta}_{1}\) of \(\mathbf{f}_{1}\) and only trains \(\mathbf{f}_{2}\) as follows: First, it computes a Zonotope over-approximation of the reachable set of \(\mathbf{f}_{1}\). Then, it conducts adversarial training of the remaining network \(\mathbf{f}_{2}\) within the thus obtained bounds. This process is repeated in stages, at first assigning the whole network to \(\mathbf{f}_{2}\), before slowly moving the interface between Zonotope and PGD propagation through the network. Crucially, COLT does not facilitate the computation of any gradients with respect to \(\mathbf{f}_{1}\). 
Thus the weights \(\mathbf{\theta}_{1}\) are effectively frozen and \(\mathbf{f}_{2}\) is trained to be robust to the approximation errors of \(\mathbf{f}_{1}\). However, \(\mathbf{f}_{1}\) is _not_ trained to minimize these approximation errors. This makes training very slow and limited to small networks (even for certified training methods). ## 3 Precise Worst-Case Loss Approximation In this section, we introduce TAPS, our novel certified training method combining IBP and PGD training to obtain precise worst-case loss estimates while permitting joint training and inducing a well-behaved optimization problem. ### TAPS The key insight behind TAPS is that adversarial training with PGD and certified training with IBP complement each other perfectly: First, both yield well-behaved optimization problems, as witnessed by their empirical success, and second, we can combine them such that the over-approximation errors incurred during IBP are compensated by the under-approximations of PGD. We do this as follows: For every sample, we first propagate the input region part-way through the network using IBP and then conduct PGD training within the thus obtained Box approximation. The key technical challenge with this approach (and the main difference to COLT (Balunovic and Vechev, 2020)) is that we connect the gradients of the two approaches, enabling joint training. We illustrate this in Figure 2. In more detail, we partition a neural network \(\mathbf{f}\) with weights \(\mathbf{\theta}\) into a _feature extractor_ \(\mathbf{f}_{E}\) and a _classifier_ \(\mathbf{f}_{C}\) with parameters \(\mathbf{\theta}_{E}\) and \(\mathbf{\theta}_{C}\), respectively, such that we have \(\mathbf{f}_{\mathbf{\theta}}=\mathbf{f}_{C}\circ\mathbf{f}_{E}\) and \(\mathbf{\theta}=\mathbf{\theta}_{E}\cup\mathbf{\theta}_{C}\). We refer to the output space of the feature extractor as the _embedding space_. Figure 2: Overview of TAPS training. First, forward propagation of a region \(B(\mathbf{x},\epsilon)\) (\(\blacksquare\), left) around an input \(\mathbf{x}\) (\(\diamond\)) through the feature extractor \(\mathbf{f}_{E}\) yields the exact reachable set (\(\blacksquare\), middle) and its IBP approximation \([\underline{\mathbf{z}},\overline{\mathbf{z}}]\) (\(\square\), middle) in the embedding space. Further IBP propagation through the classifier \(\mathbf{f}_{C}\) would yield an imprecise box approximation (\(\square\), right) of the reachable set (\(\blacksquare\), right). Instead, TAPS conducts an adversarial attack in the embedding space IBP approximation (\(\square\)), yielding an under-approximation (\(\square\)) of its reachable set (\(\blacksquare\), right). We illustrate the points realizing the worst-case loss in every output region with \(\bullet\) and enable back-propagation through the adversarial attack by introducing the gradient connector (discussed in Section 3.2). Given an input sample \(\mathbf{x}\) (illustrated as \(\mathbf{\times}\) in Figure 2) and a corresponding input region \(\mathcal{B}(\mathbf{x},\epsilon)\) (\(\blacksquare\) in the input panel), training proceeds as follows: During the forward pass, we first use IBP to compute a box over-approximation \([\underline{\mathbf{z}},\overline{\mathbf{z}}]\) (\(\square\)) of the feature extractor's exact reachable set (\(\blacksquare\)), shown in the middle panel of Figure 2. 
Then, we conduct separate adversarial attacks in this embedding space to bound all output dimensions of the classifier, yielding latent adversarial examples \(\hat{\mathbf{z}}\in[\underline{\mathbf{z}},\overline{\mathbf{z}}]\). This way, we _under-approximate_ the classifier, partially compensating the IBP over-approximation of the feature extractor and yielding the dotted box in the output space of Figure 2. Full IBP propagation, in contrast, continues to exponentially accumulate approximation errors (Müller et al., 2022a; Shi et al., 2021), yielding the much larger dashed box. We now compute the TAPS loss analogously to \(\mathcal{L}_{\text{IBP}}\) (Equation (4)) by plugging the TAPS bound estimate into the Cross-Entropy loss. Comparing the resulting losses (growing towards the top right in Figure 2), we see that while the TAPS bounds are not necessarily sound, they yield a much better approximation of the true worst-case loss. During the backward pass (dotted arrows in Figure 2), we compute the gradients w.r.t. the classifier's parameters \(\mathbf{\theta}_{C}\) and the latent adversarial examples \(\hat{\mathbf{z}}\) (classifier input) as usual. However, to compute the gradients w.r.t. the feature extractor's parameters \(\mathbf{\theta}_{E}\), we have to compute (pseudo) gradients of the latent adversarial examples \(\hat{\mathbf{z}}\) w.r.t. the box bounds \(\underline{\mathbf{z}}\) and \(\overline{\mathbf{z}}\). As these gradients are not well defined, we introduce the _gradient connector_, discussed next, as an interface between the feature extractor's output and the classifier's input to impose pseudo gradients. This allows us to train \(\mathbf{f}_{E}\) and \(\mathbf{f}_{C}\) _jointly_, leading to a feature extractor that minimizes approximation errors and a classifier that is resilient to the spurious points included in the remaining approximation errors. ### Gradient Connector The key function of the gradient connector is to enable gradient computation through the adversarial example search in the embedding space. Using the chain rule, this only requires us to define the (pseudo) gradients \(\frac{\partial\hat{\mathbf{z}}}{\partial\underline{\mathbf{z}}}\) and \(\frac{\partial\hat{\mathbf{z}}}{\partial\overline{\mathbf{z}}}\) of the latent adversarial examples \(\hat{\mathbf{z}}\) w.r.t. the box bounds \([\underline{\mathbf{z}},\overline{\mathbf{z}}]\) on the feature extractor's outputs. Below, we focus on the \(i^{\text{th}}\) dimension of the lower box bound and note that all other dimensions and the upper bounds follow analogously. As the latent adversarial examples can be seen as multivariate functions in the box bounds, we obtain the general form \(\frac{d\mathcal{L}}{d\underline{z}_{i}}=\sum_{j}\frac{d\mathcal{L}}{d\hat{z}_{j}}\frac{\partial\hat{z}_{j}}{\partial\underline{z}_{i}}\), depending on all dimensions of the latent adversarial example. However, as a box approximation does not encode any information on the dependence between dimensions, we, intuitively, want the gradients w.r.t. the \(i^{\text{th}}\) dimension of the box bounds \(\underline{z}_{i}\) to only depend on the corresponding dimension of the latent adversarial example \(\hat{z}_{i}\).
Therefore, we set \(\frac{\partial\hat{z}_{j}}{\partial\underline{z}_{i}}=0\) for \(i\neq j\) and obtain \(\frac{d\mathcal{L}}{d\underline{z}_{i}}=\frac{d\mathcal{L}}{d\hat{z}_{i}}\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}\), leaving only \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}\) for us to define. The most natural and mathematically correct gradient connector is the _binary connector_, i.e., setting \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=1\) when \(\hat{z}_{i}=\underline{z}_{i}\) and \(0\) otherwise. However, the latent adversarial input often does not lie on a corner (extremal vertex) of the bounding box, leading to sparse gradients and thus a less well-behaved optimization problem. More importantly, the binary connector is very sensitive to the distance between (local) loss extrema and the box boundary and thus inherently ill-suited to gradient-based optimization. For example, a local extremum at \(\hat{z}_{i}\) would induce \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=1\) in the box \([\hat{z}_{i},0]\), but \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=0\) for \([\hat{z}_{i}-\epsilon,0]\), even for arbitrarily small \(\epsilon\). To alleviate both of these problems, we consider a _linear connector_, i.e., setting \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=\frac{\overline{z}_{i}-\hat{z}_{i}}{\overline{z}_{i}-\underline{z}_{i}}\). However, even when our latent adversarial example is very close to one bound, the linear connector induces non-zero gradients w.r.t. the opposite bound. To remedy this undesirable behavior, we propose the _rectified linear connector_, setting \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=\max\big(0,1-\frac{\hat{z}_{i}-\underline{z}_{i}}{c(\overline{z}_{i}-\underline{z}_{i})}\big)\), where \(c\in[0,1]\) is a constant. We visualize it for \(c=0.3\) in Figure 3 and note that it recovers the binary connector for \(c=0\) and the linear connector for \(c=1\). To prevent gradient sparsity (\(c\leq 0.5\)) while avoiding the above-mentioned counterintuitive gradient connections (\(c\geq 0.5\)), we set \(c=0.5\) unless indicated otherwise. This ensures that, for every dimension, exactly one of the bounds has a non-zero gradient, unless the latent adversarial input is centered between them, \(\hat{z}_{i}=(\underline{z}_{i}+\overline{z}_{i})/2\), where both have zero gradients. To avoid numerical instability when the upper and lower bounds are (nearly) identical in a dimension, i.e., \(\underline{z}_{i}\approx\overline{z}_{i}\), we set \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=\max\big(0,1-\frac{\max(\hat{z}_{i}-\underline{z}_{i},\delta/2)}{\max(c(\overline{z}_{i}-\underline{z}_{i}),\delta)}\big)\), where \(\delta=10^{-5}\) is a small positive constant. Observe that this clipping is negligible unless the lower and upper bounds are very close, while yielding \(\frac{\partial\hat{z}_{i}}{\partial\underline{z}_{i}}=\frac{\partial\hat{z}_{i}}{\partial\overline{z}_{i}}=0.5\) when they are identical, thus turning the gradient connector into an identity function. As outlined before, our approach is conceptually similar to COLT (Balunovic and Vechev, 2020). However, our construction and gradient connector allow us to overcome the main issue of COLT, i.e., that gradients do not flow into the earlier part of the neural network - in our case \(\mathbf{f}_{E}\). Figure 3: Gradient connector visualization for \(c=0.3\).
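A minimal PyTorch sketch of the rectified linear connector as a custom autograd function is given below; the class name and constants are our own, the forward pass is the identity, and the backward pass imposes the pseudo-gradients derived above, including the \(\delta\)-clipping.

```python
import torch

C_CONNECT = 0.5   # connector parameter c
DELTA = 1e-5      # numerical-stability constant delta

class RectifiedLinearConnector(torch.autograd.Function):
    """Identity in the forward pass; the backward pass imposes the
    rectified linear pseudo-gradients from z_hat onto the box bounds."""

    @staticmethod
    def forward(ctx, z_hat, lb, ub):
        ctx.save_for_backward(z_hat, lb, ub)
        return z_hat

    @staticmethod
    def backward(ctx, grad_out):
        z_hat, lb, ub = ctx.saved_tensors
        denom = (C_CONNECT * (ub - lb)).clamp(min=DELTA)
        # d z_hat_i / d lb_i = max(0, 1 - max(z_hat_i - lb_i, DELTA/2) / denom)
        d_lb = (1 - (z_hat - lb).clamp(min=DELTA / 2) / denom).clamp(min=0)
        d_ub = (1 - (ub - z_hat).clamp(min=DELTA / 2) / denom).clamp(min=0)
        # z_hat comes from a detached PGD search and gets no gradient itself.
        return None, grad_out * d_lb, grad_out * d_ub
```

A classifier forward pass would then read `logits = f_C(RectifiedLinearConnector.apply(z_hat.detach(), lb, ub))`, so that loss gradients reach \(\underline{\mathbf{z}}\) and \(\overline{\mathbf{z}}\), and through them \(\mathbf{\theta}_{E}\).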
### TAPS Loss & Multi-estimator PGD The standard PGD attack, used in adversarial training and henceforth called _single-estimator_ PGD, is based on maximizing the Cross-Entropy loss \(\mathcal{L}_{\text{CE}}\) (Equation (3)) of a single input. In the context of TAPS, this results in the overall loss \[\mathcal{L}_{\text{TAPS}}^{\text{single}}(\mathbf{x},y,\epsilon)=\max_{\hat{\mathbf{z}}\in[\underline{\mathbf{z}},\overline{\mathbf{z}}]}\ln\Big(1+\sum_{i\neq y}\exp\big(f_{C}(\hat{\mathbf{z}})_{i}-f_{C}(\hat{\mathbf{z}})_{y}\big)\Big),\] where the embedding-space bounding box \([\underline{\mathbf{z}},\overline{\mathbf{z}}]\) is obtained via IBP. However, this loss is not well aligned with the robustness objective (Equation (1)). Consider the example illustrated in Figure 4, where only points in the lower-left quadrant are classified correctly (as \(o_{i}^{\Delta}:=o_{i}-o_{y}<0\)). We compute the latent adversarial example \(\hat{\mathbf{z}}\) by conducting a standard adversarial attack on the Cross-Entropy loss (optimally, for illustration purposes) over the reachable set and observe that the corresponding output \(\mathbf{f}(\hat{\mathbf{z}})\) is classified correctly. However, if we instead use the logit differences \(o_{1}^{\Delta}\) and \(o_{2}^{\Delta}\) as attack objectives, we obtain two misclassified points. Combining their dimension-wise worst-case bounds yields a point equivalent to the maximum-loss point in an optimal box approximation. As this directly corresponds to true robustness, we propose the _multi-estimator_ PGD variant of \(\mathcal{L}_{\text{TAPS}}\), which estimates the upper bounds on the logit differences \(o_{i}^{\Delta}\) using separate samples and then computes the loss function using the per-dimension worst cases as: \[\mathcal{L}_{\text{TAPS}}(\mathbf{x},y,\epsilon)=\ln\Big(1+\sum_{i\neq y}\exp\Big(\max_{\hat{\mathbf{z}}\in[\underline{\mathbf{z}},\overline{\mathbf{z}}]}f_{C}(\hat{\mathbf{z}})_{i}-f_{C}(\hat{\mathbf{z}})_{y}\Big)\Big).\] ### Training Objective & Regularization While complete certification methods can decide any robustness property, they can require exponential time. Therefore, networks should not only be regularized to be robust but to be certifiable as well. Thus, we propose to combine the IBP loss for easy-to-learn-and-certify samples with the TAPS loss for harder samples as follows: \[\mathcal{L}(\mathbf{x},y,\epsilon)=\mathcal{L}_{\text{TAPS}}(\mathbf{x},y,\epsilon)\cdot\mathcal{L}_{\text{IBP}}(\mathbf{x},y,\epsilon).\] This expresses that every sample should be certifiable with either TAPS or IBP bounds2. Footnote 2: See Fischer et al. (2019) for further discussion. Further, as by construction \(\mathcal{L}_{\text{TAPS}}\leq\mathcal{L}_{\text{IBP}}\), we scale the two gradient terms with a parameter \(\alpha\): \[\frac{d\mathcal{L}}{d\mathbf{\theta}}:=\alpha\frac{d\mathcal{L}_{\text{TAPS}}}{d\mathbf{\theta}}\cdot\mathcal{L}_{\text{IBP}}+(1-\alpha)\frac{d\mathcal{L}_{\text{IBP}}}{d\mathbf{\theta}}\cdot\mathcal{L}_{\text{TAPS}}.\] Here, \(\alpha=0\) and \(\alpha=1\) correspond to (weighted) IBP and TAPS gradients only, respectively. Henceforth, we express this as the regularization weight \(w_{\text{TAPS}}=\frac{\alpha}{1-\alpha}\), which intuitively expresses the weight put on TAPS, and use \(w_{\text{TAPS}}=5\) unless specified otherwise.
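As a hedged sketch of these two ingredients (our own naming; `latent_pgd_margin` is a hypothetical helper that runs PGD over the embedding box to maximize the logit difference \(o_i - o_y\), and a single unbatched sample is assumed), the multi-estimator loss and the \(\alpha\)-weighted gradient combination could look as follows:

```python
import torch

def taps_loss_multi(f_c, z_lb, z_ub, y, num_classes):
    """Multi-estimator TAPS loss: one latent attack per logit difference,
    whose per-dimension worst cases enter the cross-entropy form."""
    worst = []
    for i in range(num_classes):
        if i == y:
            continue
        # Hypothetical helper: PGD in [z_lb, z_ub] maximizing logit i - logit y.
        z_i = latent_pgd_margin(f_c, z_lb, z_ub, target=i, label=y)
        logits = f_c(z_i)
        worst.append(logits[i] - logits[y])
    return torch.log1p(torch.stack(worst).exp().sum())

def combined_loss(loss_taps, loss_ibp, w_taps=5.0):
    """L = L_TAPS * L_IBP with the alpha-weighted gradient, realized via
    detach(): dL/dtheta = a*(dL_TAPS)*L_IBP + (1-a)*(dL_IBP)*L_TAPS."""
    alpha = w_taps / (1.0 + w_taps)  # from w_TAPS = alpha / (1 - alpha)
    return (alpha * loss_taps * loss_ibp.detach()
            + (1.0 - alpha) * loss_ibp * loss_taps.detach())
```

Because each factor multiplies the detached value of the other, differentiating `combined_loss` reproduces exactly the \(\alpha\)-weighted gradient defined above.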
Lastly, we reduce the variance of \(\mathcal{L}\) by averaging \(\mathcal{L}_{\text{IBP}}\) and \(\mathcal{L}_{\text{TAPS}}\) over a data batch before multiplying (see App. A). ## 4 Experimental Evaluation In this section, we evaluate TAPS empirically. First, we compare it to a range of state-of-the-art certified training methods before conducting an extensive ablation study. ### Experimental setup We implement TAPS in PyTorch (Paszke et al., 2019) and use MN-BaB (Ferrari et al., 2022) for certification. We conduct experiments on MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky et al., 2009), and TinyImageNet (Le and Yang, 2015) using the challenging \(\ell_{\infty}\) perturbations and a 7-layer convolutional architecture, CNN7 (Shi et al., 2021; Müller et al., 2022a). For more details, see App. B. ### Main Results In Table 1, we compare TAPS to state-of-the-art certified training methods. The most closely related ones are IBP, recovered by TAPS if the classifier size is zero, and COLT, which also combines bound propagation with adversarial attacks but does not allow for joint training. TAPS dominates IBP, improving on its certified and natural accuracy in all settings and demonstrating the importance of avoiding over-regularization. Compared to COLT, TAPS improves certified accuracies significantly, highlighting the importance of joint optimization. In some settings, this comes at the cost of slightly reduced natural accuracy, potentially due to COLT's use of the more precise Zonotope approximations. Compared to the recent SABR and IBP-R, we observe that TAPS often achieves higher certified accuracies at the cost of slightly reduced natural accuracies. We therefore combine TAPS with SABR, by replacing the IBP portion of TAPS with SABR propagation, to yield STAPS. STAPS then achieves better certified accuracies in almost all settings and better natural accuracies in many. In Table 2, we compare TAPS to SortNet, a generalization of a range of recent architectures (Zhang et al., 2021, 2022a; Anil et al., 2019) introducing novel activation functions tailored to yield networks with high \(\ell_{\infty}\)-robustness. While SortNet performs well on CIFAR-10 at \(\epsilon=8/255\), it is dominated by STAPS in every other setting, especially at smaller perturbation magnitudes. ### Ablation Study **Approximation Precision.** To evaluate whether TAPS yields more precise approximations of the worst-case loss than other certified training methods, we train a small CNN3 on MNIST using IBP, SABR, and TAPS and compute approximations of the maximum margin loss with IBP, PGD (50 steps, 3 restarts), SABR (tuned to \(\lambda=0.4\)), and TAPS for all test set samples. We report histograms over the difference to the exact worst-case loss, computed with a MILP encoding (Tjeng et al., 2019), in Figure 5 and note that positive values correspond to over-approximations while negative values correspond to under-approximations. We observe that regardless of the training method, the TAPS approximation is by far the most precise, achieving the smallest mean and mean absolute error as well as variance. \begin{table} \begin{tabular}{l l l l l l} \hline \hline Dataset & \(\epsilon_{\infty}\) & Training Method & Source & Nat. [\%] & Cert. [\%] \\ \hline \multirow{12}{*}{MNIST} & \multirow{6}{*}{0.1} & COLT & Balunovic \& Vechev (2020) & 99.2 & 97.1 \\ & & CROWN-IBP & Zhang et al. (2020) & 98.83 & 97.76 \\ & & IBP & Shi et al. (2021) & 98.84 & 97.95 \\ & & SABR & Müller et al.
(2022a) & **99.25** & 98.06 \\ & & TAPS & this work & 99.19 & **98.39** \\ & & STAPS & this work & 99.15 & 98.37 \\ \cline{2-6} & \multirow{6}{*}{0.3} & COLT & Balunovic \& Vechev (2020) & 97.3 & 85.7 \\ & & CROWN-IBP & Zhang et al. (2020) & 98.18 & 92.98 \\ & & IBP & Shi et al. (2021) & 97.67 & 93.10 \\ & & SABR & Müller et al. (2022a) & **98.82** & 93.38 \\ & & TAPS & this work & 97.94 & **93.62** \\ & & STAPS & this work & 98.53 & 93.51 \\ \hline \multirow{14}{*}{CIFAR-10} & \multirow{7}{*}{2/255} & COLT & Balunovic \& Vechev (2020) & 78.4 & 60.5 \\ & & CROWN-IBP & Zhang et al. (2020) & 71.52 & 53.97 \\ & & IBP & Shi et al. (2021) & 66.84 & 52.85 \\ & & IBP-R & Palma et al. (2022) & 78.19 & 61.97 \\ & & SABR & Müller et al. (2022a) & 79.52 & 62.57 \\ & & TAPS & this work & 75.09 & 61.56 \\ & & STAPS & this work & **79.76** & **62.98** \\ \cline{2-6} & \multirow{7}{*}{8/255} & COLT & Balunovic \& Vechev (2020) & 51.7 & 27.5 \\ & & CROWN-IBP & Xu et al. (2020) & 46.29 & 33.38 \\ & & IBP & Shi et al. (2021) & 48.94 & 34.97 \\ & & IBP-R & Palma et al. (2022) & 51.43 & 27.87 \\ & & SABR & Müller et al. (2022a) & 52.00 & **35.25** \\ & & TAPS & this work & 49.76 & 35.10 \\ & & STAPS & this work & **52.82** & 34.65 \\ \hline \multirow{5}{*}{TinyImageNet} & \multirow{5}{*}{1/255} & CROWN-IBP & Shi et al. (2021) & 25.62 & 17.93 \\ & & IBP & Shi et al. (2021) & 25.92 & 17.87 \\ & & SABR & Müller et al. (2022a) & 28.64 & 20.34 \\ & & TAPS & this work & 28.34 & 20.82 \\ & & STAPS & this work & **28.98** & **22.16** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of natural (Nat.) and certified (Cert.) accuracy over certified training methods on the MNIST, CIFAR-10, and TinyImageNet test sets. We report results for other methods from the relevant literature. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{\(\epsilon\)} & \multicolumn{2}{c}{SortNet} & \multicolumn{2}{c}{STAPS (**ours**)} \\ \cline{3-6} & & Nat. & Cert. & Nat. & Cert. \\ \hline \multirow{2}{*}{MNIST} & 0.1 & 99.01 & 98.14 & **99.15** & **98.37** \\ & 0.3 & 98.46 & 93.40 & **98.53** & **93.51** \\ \hline \multirow{2}{*}{CIFAR-10} & 2/255 & 67.72 & 56.94 & **79.76** & **62.98** \\ & 8/255 & **54.84** & **40.39** & 52.82 & 34.65 \\ \hline TinyImageNet & 1/255 & 25.69 & 18.18 & **28.98** & **22.16** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of natural (Nat.) and certified (Cert.) accuracy [\%] to SortNet (Zhang et al., 2022b). **IBP Regularization.** To analyze the effectiveness of the multiplicative IBP regularization discussed in Section 3.4, we train with IBP in isolation (\(\mathcal{L}_{\text{IBP}}\)), IBP with TAPS-loss-weighted gradients (\(w_{\text{TAPS}}=0\)), varying levels of gradient scaling for the TAPS component (\(w_{\text{TAPS}}\in[1,20]\)), TAPS with IBP loss weighting (\(w_{\text{TAPS}}=\infty\)), and TAPS in isolation. We observe that IBP in isolation yields comparatively low standard and adversarial but moderate certified accuracies with fast certification times. Increasing the weight \(w_{\text{TAPS}}\) of the TAPS gradients reduces regularization, leading to longer certification times.
Initially, this also translates to higher adversarial and certified accuracies, peaking at \(w_{\text{TAPS}}=15\) and \(w_{\text{TAPS}}=5\), respectively, before certified accuracy, in particular, decreases as regularization becomes insufficient for certification. **Split Location.** TAPS splits a given network into a feature extractor and classifier, which are then approximated using IBP and PGD, respectively. As IBP propagation accumulates over-approximation errors while PGD is an under-approximation, the location of this split has a strong impact on the regularization level induced by TAPS. To analyze this effect, we train multiple CNN7s such that we obtain classifiers with between \(0\) and \(6\) (all) ReLUs and illustrate the resulting standard, adversarial, and certified (using different methods) accuracies in Figure 6. For small perturbations (\(\epsilon=2/255\)), natural and adversarial accuracy increase with classifier size and thus decreasing regularization. While the precise MN-BaB certification can translate this into increasing certified accuracies up to large classifier sizes, regularization quickly becomes insufficient for the less precise IBP and CROWN-IBP certification, leading to dropping certified accuracies. For larger perturbations (\(\epsilon=8/255\)), the behavior is more complex, with an initial increase of all accuracies with classifier size being followed by a sudden drop and a slow recovery. We hypothesize that this effect is due to the IBP regularization starting to dominate optimization. We observe increased training difficulty as well (see App. C for details). For both perturbation magnitudes, gains in certified accuracy can only be realized with the precise MN-BaB certification (Müller et al., 2022b), highlighting the importance of recent developments in neural network verification for certified training. **Gradient Connector.** In Figure 7, we illustrate the effect of our gradient connector's parameterization (Section 3.2) on training. We report TAPS accuracy (the portion of samples where all latent adversarial examples are classified correctly) as a proxy for the goodness of fit. Figure 5: Distribution of the worst-case loss approximation errors over test set samples, depending on the training and bounding method. Positive values correspond to over-approximations and negative values to under-approximations. We use an exact MILP encoding (Tjeng et al., 2019) as reference. Figure 6: Effect of split location on the standard and robust accuracy of TAPS-trained networks, depending on the perturbation magnitude \(\epsilon\), for different certification methods on CIFAR-10. 0 ReLUs in the classifier recovers IBP training. \begin{table} \begin{tabular}{l c c c c} \hline \hline \(w_{\text{TAPS}}\) & Avg time (s) & Nat (\%) & Adv. (\%) & Cert. (\%) \\ \hline \(\mathcal{L}_{\text{IBP}}\) & 2.3 & 97.6 & 93.37 & 93.15 \\ 0 & 2.7 & 97.37 & 93.32 & 93.06 \\ 1 & 4.5 & 97.86 & 93.80 & 93.36 \\ 5 & 6.9 & 98.16 & 94.18 & **93.55** \\ 10 & 15.7 & 98.25 & 94.43 & 93.02 \\ 15\({}^{\dagger}\) & 42.8 & 98.53 & **95.00** & 91.55 \\ 20\({}^{\dagger}\) & 73.7 & **98.75** & 94.33 & 82.67 \\ \(\infty^{\dagger}\) & 569.7 & 98.0 & 94.00 & 45.00 \\ \(\mathcal{L}_{\text{TAPS}}\)\({}^{\dagger}\) & 817.1 & 98.5 & 94.50 & 17.50 \\ \hline \hline \end{tabular} * Only evaluated on part of the test set within a 2-day time limit. \end{table} Table 3: Effect of IBP regularization and the TAPS gradient scaling coefficient \(\alpha\) for MNIST \(\epsilon=0.3\).
Recall that \(c=0\) corresponds to the binary connector and \(c=1\) to the linear connector. We observe that the binary connector achieves poor TAPS and natural accuracy, indicating a less well-behaved optimization problem. TAPS accuracy peaks at \(c=0.5\), indicating high goodness-of-fit and thus a well-behaved optimization problem. This agrees well with our theoretical considerations aiming to avoid sparsity (\(c<0.5\)) and contradicting gradients (\(c>0.5\)). **Single-Estimator vs Multi-Estimator PGD.** To evaluate the importance of our proposed multi-estimator PGD, we compare its performance to single-estimator PGD across a range of split positions, reporting results in Table 4. We observe that across all split positions, multi-estimator PGD achieves better certified and better or equal natural accuracy. Further, training collapses reproducibly for single-estimator PGD with small classifiers, indicating that multi-estimator PGD additionally improves training stability. ## 5 Related Work **Verification Methods.** In this work, we only consider deterministic verification methods, which analyze a given network as is. While _complete_ (or _exact_) methods (Tjeng et al., 2019; Palma et al., 2021; Wang et al., 2021; Müller et al., 2022b; Zhang et al., 2022c) can decide any robustness property given enough time, _incomplete_ methods (Singh et al., 2018; Raghunathan et al., 2018; Dathathri et al., 2020) sacrifice some precision for better scalability. However, recent complete methods can be used with a timeout to obtain effective incomplete methods. **Certified Training.** Most certified training methods compute and minimize sound over-approximations of the worst-case loss using different approximation methods: DiffAI (Mirman et al., 2018) and IBP (Gowal et al., 2018) use Box approximations, Wong et al. (2018) use DeepZ relaxations (Singh et al., 2018), Wong and Kolter (2018) back-substitute linear bounds using fixed relaxations, and Zhang et al. (2020) use dynamic relaxations (Zhang et al., 2018; Singh et al., 2019) and compute intermediate bounds using Box relaxations. Shi et al. (2021) significantly shorten training schedules by combining IBP training with a special initialization. Some more recent methods instead compute and optimize more precise, but not necessarily sound, worst-case loss approximations: SABR (Müller et al., 2022a) reduces the regularization of IBP training by propagating only small but carefully selected subregions. IBP-R (Palma et al., 2022) combines adversarial training with large perturbation radii and an IBP-based regularization. COLT (Balunovic and Vechev, 2020) propagates DeepZ relaxations through part of the network, before conducting adversarial attacks in the resulting latent space. However, in contrast to TAPS, COLT does not enable gradient flow between these components, preventing joint training. In our experimental evaluation (Section 4.2), we compare TAPS in detail to the above methods. **Robustness by Construction.** Li et al. (2019), Lecuyer et al. (2019), and Cohen et al. (2019) construct probabilistic classifiers by introducing randomness into the inference process of a base classifier. This allows them to derive robustness guarantees with high probability, at the cost of significant (100x) runtime penalties. Zhang et al. (2021, 2022a) introduce \(\ell_{\infty}\)-distance neurons, generalized to SortNet by Zhang et al.
(2022b), which inherently exhibits \(\ell_{\infty}\)-Lipschitzness properties, yielding good robustness for large perturbation radii but poor performance for smaller ones. ## 6 Conclusion We propose TAPS, a novel certified training method that reduces over-regularization by constructing and optimizing a precise worst-case loss approximation based on a combination of IBP and PGD training. Crucially, TAPS enables joint training over the IBP- and PGD-approximated components by introducing the gradient connector to define a gradient flow through their interface. Empirically, we confirm that TAPS yields much more precise approximations of the worst-case loss than existing methods, and demonstrate that this translates to state-of-the-art performance in certified training. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{\#ReLU in} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Multiple} \\ \cline{2-5} Classifier & Certified & Natural & Certified & Natural \\ \hline 1 & - & 31.47\({}^{\dagger}\) & **93.62** & 97.94 \\ 3 & 92.91 & 98.56 & 93.03 & 98.63 \\ 6 & 92.41 & **98.88** & 92.70 & **98.88** \\ \hline \hline \end{tabular} * Training encounters mode collapse. Last epoch performance reported. \end{table} Table 4: Comparison of TAPS training with single-estimator and multi-estimator PGD propagation, depending on the number of ReLUs in the network's classifier portion. Figure 7: Effect of the gradient connector parameter \(c\) on TAPS (left) and natural (right) accuracy.
2305.12316
One-Shot Federated Learning for LEO Constellations that Reduces Convergence Time from Days to 90 Minutes
A Low Earth orbit (LEO) satellite constellation consists of a large number of small satellites traveling in space with high mobility and collecting vast amounts of mobility data such as cloud movement for weather forecast, large herds of animals migrating across geo-regions, spreading of forest fires, and aircraft tracking. Machine learning can be utilized to analyze these mobility data to address global challenges, and Federated Learning (FL) is a promising approach because it eliminates the need for transmitting raw data and hence is both bandwidth and privacy-friendly. However, FL requires many communication rounds between clients (satellites) and the parameter server (PS), leading to substantial delays of up to several days in LEO constellations. In this paper, we propose a novel one-shot FL approach for LEO satellites, called LEOShot, that needs only a single communication round to complete the entire learning process. LEOShot comprises three processes: (i) synthetic data generation, (ii) knowledge distillation, and (iii) virtual model retraining. We evaluate and benchmark LEOShot against the state of the art, and the results show that it drastically expedites FL convergence by more than an order of magnitude. Also surprisingly, despite the one-shot nature, its model accuracy is on par with or even outperforms regular iterative FL schemes by a large margin.
Mohamed Elmahallawy, Tie Luo
2023-05-21T01:57:56Z
http://arxiv.org/abs/2305.12316v1
One-Shot Federated Learning for LEO Constellations that Reduces Convergence Time from Days to 90 Minutes ###### Abstract A Low Earth orbit (LEO) satellite constellation consists of a large number of small satellites traveling in space with high mobility and collecting vast amounts of mobility data such as cloud movement for weather forecast, large herds of animals migrating across geo-regions, spreading of forest fires, and aircraft tracking. Machine learning can be utilized to analyze these mobility data to address global challenges, and Federated Learning (FL) is a promising approach because it eliminates the need for transmitting raw data and hence is both bandwidth and privacy-friendly. However, FL requires many communication rounds between clients (satellites) and the parameter server (\(\mathcal{PS}\)), leading to substantial delays of up to several days in LEO constellations. In this paper, we propose a novel _one-shot_ FL approach for LEO satellites, called _LEOShot_, that needs only _a single communication round_ to complete the entire learning process. LEOShot comprises three processes: (i) _synthetic data generation_, (ii) _knowledge distillation_, and (iii) _virtual model retraining_. We evaluate and benchmark LEOShot against the state of the art, and the results show that it drastically expedites FL convergence by _more than an order of magnitude_. Also surprisingly, despite the one-shot nature, its model accuracy is on par with or even outperforms regular iterative FL schemes by a large margin. _Index terms:_ Satellite communications, low Earth orbit (LEO), federated learning, knowledge distillation, ensemble model, synthetic data generation, teacher-student framework. ## I Introduction Low Earth orbit (LEO) satellite constellations have recently been burgeoning due to the rapid advances in satellite communications (SatCom) technology. Positioned at an altitude of 160-2,000 km above the Earth's surface, LEO satellites are often equipped with sensors and high-resolution cameras to collect a vast amount of mobility-related data, such as tracking and monitoring cloud movements for weather forecast [1], hurricane and forest fire movements [2], flooding situations, migration of large herds of animals across geographic regions, and aircraft tracking. Large-scale machine learning (ML) models can be utilized to analyze these mobility data to address global challenges such as climate change, natural disasters, and abnormal wildlife conditions. However, traditional ML methods require downloading raw data such as satellite images to a ground station (GS) or gateway for centralized model training, which is not practical for SatCom because of its limited bandwidth, large propagation delay, and privacy concerns (e.g., satellite data and images may contain sensitive information such as military activities or critical infrastructure locations). Introducing federated learning (FL) [3] to SatCom appears to be a viable solution because FL eliminates the need for transmitting raw data by allowing satellites to train ML models locally (i.e., on-board) using their own respective data and to send only the resulting model parameters to the GS, which acts as the \(\mathcal{PS}\) to aggregate those local models into a global model. However, FL requires many communication rounds between clients (satellites) and the \(\mathcal{PS}\) to re-train and re-aggregate the models until the training converges to a well-functioning global model.
As a result, this iterative process can take several days or even weeks in the context of SatCom [4, 5], because of the long propagation delay and, most importantly, the highly _sporadic and irregular connectivity_ between LEO satellites and the GS. The latter is attributed to the distinct trajectories of satellites and the GS,1 which leads to very intermittent and non-cyclic visits of satellites to the GS. Footnote 1: A satellite orbits at an _inclination angle_ between 0\({}^{o}\) and 90\({}^{o}\) (typically 50–80\({}^{o}\)), whereas a GS constantly rotates on the 0\({}^{o}\) plane. These degrees are in reference to the Equator of the Earth. In this paper, we propose LEOShot, a novel _one-shot_ FL approach for LEO satellite constellations that accomplishes the FL training process in a single communication round, yet still obtains a global model with competitive performance. One-shot FL is an emerging paradigm [6], but its existing methods cannot be directly applied to SatCom because they (i) require the \(\mathcal{PS}\) to have a publicly shareable dataset that represents the client data distribution [7, 8], which is hardly available, (ii) require clients to upload raw or synthetic data [9, 10], which contradicts the FL principle, or (iii) still need extra communication rounds to achieve a satisfactory accuracy [11]. In contrast, LEOShot does not share or transmit data in any form (e.g., raw or synthetic), yet still attains a high-performing model in just a single communication round. In summary, this paper makes the following contributions: * To the best of our knowledge, LEOShot is the first one-shot FL approach proposed for SatCom. It drastically reduces the adverse effect of highly sporadic and irregular connectivity between satellites and the GS by instating only one communication round. Not only this, but it also operates without relying on any auxiliary datasets or raw-data sharing, yet attains competitive model performance. * LEOShot comprises (i) _synthetic data generation_, which generates images mimicking real satellite images instead of downloading them; (ii) _knowledge distillation_, which trains (instead of aggregates) a global model using a teacher-student framework; and (iii) _virtual model retraining_, which refines the global model iteratively toward high performance but without any extra communication rounds. * Unlike conventional FL, which assumes an identical ML model architecture for all clients, LEOShot allows clients to use _different_ model architectures to cater to their own preferences and resource constraints. This is much more flexible and helps address the _data heterogeneity_ and _system heterogeneity_ among LEO satellites. * We demonstrate via extensive simulations that LEOShot dramatically accelerates FL convergence by _more than an order of magnitude_ as compared to the state-of-the-art FL-SatCom approaches. In the meantime, the accuracy of its trained models outperforms existing methods by a large margin despite its one-shot nature. **Paper Organization.** Section II provides an overview of recent work that applies FL to SatCom. Section III describes the FL-SatCom network model as well as the communication links between satellites and the GS. Section IV explains the proposed LEOShot framework and its components in detail. The performance evaluation of LEOShot is provided in Section V. Finally, Section VI concludes this paper. ## II Related Work ### _FL for SatCom_ While FL-SatCom is still in its infancy, a few studies have attempted to apply FL to SatCom.
Chen et al. [12] directly applied FedAvg [13] to SatCom to demonstrate FL's advantages in avoiding the need to download raw data to a GS. FedISL [14] leverages intra-plane inter-satellite links (ISLs) to reduce the long waiting time for satellites to become visible to the \(\mathcal{PS}\), but it only performs well under an ideal setup where the \(\mathcal{PS}\) is either a GS located at the North Pole (NP) or a medium Earth orbit (MEO) satellite positioned directly above the Equator. In addition, FedISL overlooks the _Doppler shift_ resulting from the high relative speed between the LEO satellites (clients) and the MEO satellite (\(\mathcal{PS}\)). Another approach, FedHAP [15], introduces one or multiple high altitude platforms (HAPs), floating 18-25 km above the Earth's surface, as the \(\mathcal{PS}\). As a result of using HAPs, the convergence speed of FL is improved by having more visible satellites, but more hardware is required. Another recent approach, FedLEO [16], has been proposed to enhance the convergence of FL in LEO constellations. FedLEO achieves this through intra-plane model propagation and the scheduling of sink satellites. It is, however, necessary for each satellite to run a scheduler to determine which satellite will be the sink to send its model to, resulting in a delay before models can be exchanged with the GS. The above are all _synchronous FL_ approaches. There are also _asynchronous FL_ approaches proposed for SatCom that allow the \(\mathcal{PS}\) to proceed to the next training round without waiting for the model updates from _all_ the clients. Razmi et al. [17] proposed FedSat, which assumes that the GS is located at the NP to simplify the problem, so that every satellite visits the GS at _regular intervals_ (once every orbital period). To overcome this limitation, they proposed another approach, FedSatSchedule [5], which uses a scheduler to reduce model staleness by predicting satellites' visiting patterns to the GS. However, several days are still required to reach model convergence. Another recent approach, FedSpace [4], formulated an optimization problem to dynamically schedule model aggregation based on predicting the connectivity between LEO satellites and the GS, but it requires each satellite to upload a small portion of its data so that the GS can schedule model aggregation, which contradicts FL's principle of avoiding raw data sharing. The most recently proposed FL-SatCom approach, AsyncFLEO [18], offers a drastic solution to the staleness challenge that causes asynchronous FL-SatCom approaches to take several days to converge. It is capable of achieving convergence within a few hours, but it is still subject to a high number of communication rounds, and hence there is still large room for improvement. ### _One-Shot FL_ To the best of our knowledge, one-shot FL has not been introduced into SatCom before. The studies [7, 8] employ knowledge distillation [19], in which client-side models are used as teacher models to train a server-side student model (global model). However, the server is required to have access to a public dataset in order to provide pseudo samples for training purposes, which conflicts with FL's privacy principles. As an alternative to knowledge distillation, data distillation [20] is used in [9, 10]. These studies, however, show notable underperformance; moreover, they require uploading distilled/synthetic data generated by clients to the \(\mathcal{PS}\), which is incompatible with FL principles.
Lastly, a theoretical analysis of one-shot FL is presented in [21], focusing on independently and identically distributed (IID) data only. Given the above challenges, none of the existing one-shot frameworks can be applied in practice without issues or drawbacks. In addition, to develop a one-shot FL approach for SatCom, it is necessary to take into account the SatCom challenges such as long propagation delays, intermittent and irregular visibility of LEO satellites, and bandwidth limitations. ## III Network Model Fig. 1 illustrates an LEO constellation \(\mathcal{M}\) consisting of \(N\) satellites equally distributed over \(O\) orbits. Each orbit \(o\in\mathcal{O}=\{o_{1},o_{2},...,o_{O}\}\) is located at an altitude \(h_{o}\) above the Earth with an inclination angle \(\alpha_{o}\) and has a set of satellites \(\mathcal{M}_{o}\). Satellites in an orbit \(o\) travel with the same velocity \(v_{o}\) and have the same orbital period \(T_{o}\). Here, \(v_{o}=\sqrt{\frac{GM_{E}}{(R_{E}+h_{o})}}\) and \(T_{o}=\frac{2\pi}{\sqrt{GM_{E}}}(R_{E}+h_{o})^{3/2}\), where \(G\) is the gravitational constant, \(M_{E}\) is the Earth's mass, and \(R_{E}=6,371\) km is the Earth's radius. In SatCom, a satellite \(m\) can communicate with a GS \(g\) if the Earth does not obstruct their line-of-sight (LoS) link. Mathematically, this means \(\vartheta_{m,g}(t)\triangleq\angle(r_{g}(t),(r_{m}(t)-r_{g}(t)))\leq\frac{\pi}{2}-\vartheta_{min}\), where \(r_{m}(t)\) and \(r_{g}(t)\) are the trajectories of \(m\) and \(g\), respectively, and \(\vartheta_{min}\) is the minimum elevation angle, which depends on the GS location. ### _FL-SatCom System_ Consider an FL-SatCom system in which each satellite \(m\in\mathcal{M}\) collects a set of Earth images \(\mathcal{D}_{m}\) of size \(d_{m}=|\mathcal{D}_{m}|\). In addition, we assume that these images are non-IID among satellites, since the satellites orbit the Earth at irregular intervals and scan different areas. In a synchronous FL system such as FedAvg [13], the \(\mathcal{PS}\) (i.e., a GS) and the satellites train an ML model collaboratively to solve the following problem: \[\arg\min_{w\in\mathbb{R}^{d}}F(w)=\sum_{m\in\mathcal{M}}\frac{d_{m}}{d}F_{m}(w), \tag{1}\] where \(F(w)\) denotes the overall loss function (e.g., SSE) of the target model; \(w\) is the model weights; \(d=\sum_{m\in\mathcal{M}}d_{m}\) is the total data size; and \(F_{m}(w)\) is the loss function of satellite \(m\), which can be expressed as \[F_{m}(w)=\frac{1}{d_{m}}\sum_{j=1}^{d_{m}}f_{m}(w;x_{m,j}), \tag{2}\] where \(f_{m}(\cdot)\) is the training loss on a sample point \(x_{m,j}\). During the training process, there are multiple communication rounds \(\beta=\{0,1,2,\dots\}\). In each round, the GS first transmits the latest global model weights \(w^{\beta}\) to all satellites, and each satellite \(m\) employs a local optimization scheme such as gradient descent to update its model for \(I\) epochs as \[w_{m}^{\beta,i+1}=w_{m}^{\beta,i}-\eta\nabla F_{m}(w_{m}^{\beta,i};x_{m}^{i}),\;i=0,1,2,...,I-1, \tag{3}\] where \(\eta\) is the learning rate. Following that, each satellite uploads its locally updated model to the GS for aggregation as \[w^{\beta+1}=\sum_{m\in\mathcal{M}}\frac{d_{m}}{d}w_{m}^{\beta,I}, \tag{4}\] which completes a communication round. The above procedure iterates with an incremental \(\beta\) until the FL model converges (e.g., a target accuracy, a target loss, or the maximum number of training rounds is reached).
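For concreteness, a minimal sketch of one such synchronous round (Eqs. (3) and (4)) is shown below; the `satellites` objects with a `.loader` data loader are our own illustrative stand-ins for the per-satellite datasets \(\mathcal{D}_m\), not part of the paper's system.

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, satellites, lr=0.001, local_epochs=1):
    """One synchronous FedAvg round: local SGD (Eq. (3)), then
    data-size-weighted averaging at the GS (Eq. (4))."""
    states, sizes = [], []
    for sat in satellites:
        local = copy.deepcopy(global_model)       # start from w^beta
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):
            for x, y in sat.loader:
                opt.zero_grad()
                F.cross_entropy(local(x), y).backward()
                opt.step()
        states.append(local.state_dict())
        sizes.append(len(sat.loader.dataset))     # d_m
    d = sum(sizes)
    with torch.no_grad():                         # w^{beta+1} = sum (d_m/d) w_m
        for name, p in global_model.named_parameters():
            p.copy_(sum(s[name] * (n / d) for s, n in zip(states, sizes)))
    return global_model
```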
**A key challenge** of this learning process arises from the fact that the convergence of FL requires several communication rounds between LEO satellites and the GS, and each communication can only happen when a satellite transiently comes into the GS's visible zone. As a result, FL would take several days or even longer to reach convergence. This motivated us to develop LEOShot, which requires only a single communication round between satellites and the GS. ### _Communication Model_ In a symmetric radio frequency channel with additive white Gaussian noise (AWGN), the signal-to-noise ratio (SNR) between a satellite \(m\) and a GS \(g\) can be expressed as [22]: \[SNR(m,g)=\frac{PG_{m}G_{g}}{KTBL_{m,g}}, \tag{5}\] where \(P\) is the transmitter power, \(G_{m}\) and \(G_{g}\) are the total antenna gains of satellite \(m\) and GS \(g\), respectively, \(K\) is the Boltzmann constant (\(1.38\times 10^{-23}\) J/K), \(T\) is the noise temperature at the receiver, \(B\) is the channel bandwidth, and \(L_{m,g}\) is the free-space path loss between satellite \(m\) and GS \(g\). As long as the LoS link between a satellite \(m\) and a GS \(g\) is not obstructed by the Earth, \(L_{m,g}\) can be expressed as \[L_{m,g}=\left(\frac{4\pi\|m,g\|_{2}f}{c}\right)^{2}, \tag{6}\] where \(\|m,g\|_{2}\) is the Euclidean distance between satellite \(m\) and GS \(g\) when they are visible to each other, \(f\) is the carrier frequency, and \(c\) is the speed of light. For exchanging local or global model weights (\(w_{m}\) or \(w\)) between a satellite \(m\) and a GS \(g\), the total required time \(t_{t}\) can be calculated as \[t_{t}=\underbrace{\frac{z|\mathcal{P}|}{R}}_{\text{Transmission delay}}+\underbrace{\frac{\|m,g\|_{2}}{c}}_{\text{Propagation delay}}+t_{m}+t_{s}, \tag{7}\] where \(t_{m}\) and \(t_{s}\) are the processing delays at the \(m\)-th satellite and the GS \(g\), respectively, \(|\mathcal{P}|\) is the number of sample points, \(z\) is the number of bits in each sample, and \(R\) is the maximal achievable data rate, which can be computed by the Shannon formula as \[R\approx B\log_{2}(1+SNR(m,g)). \tag{8}\] Fig. 1: FL-LEO network architecture, comprised of multiple orbits, each having multiple satellites. Fig. 2: Overview of the proposed LEOShot framework.
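Putting Eqs. (5)-(8) together, the per-link model exchange time can be estimated in a few lines. The sketch below is illustrative only; it uses linear (non-dB) power and antenna gain values of our own choosing rather than the dBm/dBi figures of the simulation setup.

```python
import math

C = 3e8        # speed of light [m/s]
K = 1.38e-23   # Boltzmann constant [J/K]

def model_exchange_time(dist_m, model_bits, P=10.0, G_m=5.0, G_g=5.0,
                        T=354.81, B=20e6, f=2.4e9, t_proc=0.1):
    """Estimate t_t of Eq. (7): transmission + propagation + processing."""
    L = (4 * math.pi * dist_m * f / C) ** 2      # free-space path loss, Eq. (6)
    snr = (P * G_m * G_g) / (K * T * B * L)      # Eq. (5)
    rate = B * math.log2(1 + snr)                # Shannon rate, Eq. (8)
    return model_bits / rate + dist_m / C + t_proc

# e.g., shipping a ~100 MB model over a 1,000 km link:
print(model_exchange_time(1e6, 8e8))
```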
## IV The LEOShot Framework LEOShot operates at the server end. Fig. 2 provides an overview of the LEOShot framework, which comprises three phases: (a) _synthetic data generation_, which synthesizes a representative dataset as a _proxy_ of all the satellites' local data; (b) _knowledge distillation_, for distilling information from the satellite models (teacher models) to train a server model (student model); and (c) _virtual model retraining_, for training virtual local models iteratively until the server model converges. Algorithm 1 outlines the entire process of LEOShot.

```
Input: satellites' models, L, eta_g, eta_s, x, y, J, I, gamma_1, gamma_2
Output: server model parameters w_s

# Phase 1: Synthetic data generation
 1: generate batches of random noises x and labels y
 2: initialize the server model (student) and the generator G
 3: for each epoch j of training G do
 4:     calculate the losses R_CE, R_BN, and R_KL_Gen(x_hat) using Eqs. (10), (11), and (12)
 5:     calculate the generator loss R'_Gen via Eq. (14)
 6:     generate/update x_hat
 7:     retrain the weights of G on the generated x_hat via Eq. (15)
 8: end for

# Phase 2: Knowledge distillation
 9: for each epoch j of training the server model do
10:     calculate the loss R_KL_S of the server model via Eq. (16)
11:     retrain the weights of the server model via Eq. (17)
12: end for
13: return w_s

# Phase 3: Virtual model retraining
14: generate L virtual models
15: cluster x_hat into L partitions
16: for each epoch j of updating the server model do
17:     for each virtual model w_l do   # all virtual models train in parallel
18:         initialize w_l <- w_s
19:         train w_l on its assigned partition of x_hat
20:     end for
21:     update w_s <- (1/L) * sum_{l=1..L} w_l
22: end for
23: return w_s
```

**Algorithm 1** LEOShot's 3-phase process As background, the server begins its operation once it receives all the client models from all the orbits via _sink satellites_. A sink satellite [15] is a satellite on each orbit that (1) collects all the models from the other satellites on the same orbit (via intra-orbit model relay), (2) assembles them into a _partial global model_ [15], and (3) sends it to the server. ### _Synthetic Data Generation_ The objective of this phase is to build a generator \(\mathcal{G}\) to generate high-quality unlabeled synthetic data (i.e., data having sufficient features for correct classification) without having to download any real data from satellites to a GS or requiring any auxiliary (i.e., publicly available) datasets. To achieve this objective, we use the ensemble of the client (satellite) models uploaded by the various sink satellites to generate synthetic data (see Fig. 2(a)). Note that a salient feature of LEOShot is that it allows _heterogeneous model architectures_ among these client models. That is, each client can decide and use its own preferred neural network rather than a common architecture used by all the satellites (as dictated by standard FL). We achieve this by using an _ensemble_ of all the client models and _training_ the target global model via _knowledge distillation_ (instead of by averaging model weights). Our generator \(\mathcal{G}\) was inspired by the generative adversarial network (GAN) [23], but is distinct from it as we do not require public datasets. Given a randomly generated input \(x\) (e.g., Gaussian white noise) and a random label \(y\), our generator attempts to generate synthesized data \(\hat{x}\) with features similar to the data collected by satellites, by solving the following optimization problem: \[\min_{\hat{x}}\mathcal{R}_{Gen}\triangleq\mathcal{R}_{CE}(D(\hat{x}),y)+\mathcal{R}(\hat{x}), \tag{9}\] where \(\mathcal{R}_{Gen}\) denotes the overall loss of the generator, \(\mathcal{R}_{CE}(\cdot)\) is a cross-entropy loss (e.g., classification loss), and \(\mathcal{R}(\hat{x})\) is a regularization term used to steer \(\hat{x}\) towards more realistic data (e.g., images).
Here, \(D(\hat{x})\) is the average logits of the ensemble model given an input \(\hat{x}\) (where the logits are the output of, usually, the last fully connected layer), and is defined by \[D(\hat{x})=\frac{1}{|\mathcal{L}|}\sum_{l\in\mathcal{L}}f_{l}(\hat{x},w_{\mathcal{K}_{l}}), \tag{10}\] where \(f_{l}(\hat{x},w_{\mathcal{K}_{l}})\) is an estimation function of the ensemble model received from orbit \(l\), which returns the logits of \(\hat{x}\) when the model parameter \(w_{\mathcal{K}_{l}}\) is given. Eq. (10) allows us to (indirectly) measure how well the generated synthetic data \(\hat{x}\) mimics both the distribution and the particular instances of the original satellite data, without accessing the original data. In fact, attempting to access satellites' training data would contradict the FL principles. Moreover, unlike traditional FL algorithms (e.g., FedAvg), our design of the generator (9) uses logits (as in \(D(\cdot)\)) instead of client model weights, which enables our approach to deal with _heterogeneous_ client models. The regularizer \(\mathcal{R}(\hat{x})\) serves the purpose of improving the stability of the generator. The need arises from the fact that the ensemble of satellite models is trained on non-IID datasets and therefore tends to make the generator unstable and get stuck in suboptimal local minima. Additionally, there is a risk of overfitting the synthetic data, which ultimately hinders the generator from achieving high levels of accuracy [24]. Therefore, we use a BatchNorm (BN) layer during training to normalize the feature maps, to reduce the impact of covariate shifts [25] and overcome the gradient vanishing problem (so that the distance between synthetic images and original satellite images can be continuously reduced). In addition, this BN layer also implicitly stores the average logits as channel-wise means and variances. Thus, the regularizer \(\mathcal{R}(\hat{x})\) realized by this BN layer can be expressed as \[\mathcal{R}_{BN}(\hat{x})=\frac{1}{L}\sum\limits_{l\in\mathcal{L}}\sum\limits_{b}\Bigl(\left\lVert\mu_{b}(\hat{x})-\mu_{l,b}\right\rVert_{2}+\left\lVert\sigma_{b}^{2}(\hat{x})-\sigma_{l,b}^{2}\right\rVert_{2}\Bigr), \tag{11}\] where \(\mu_{b}(\hat{x})\) and \(\sigma_{b}^{2}(\hat{x})\) are the batch-wise estimates of the mean and variance, respectively, associated with the \(b\)-th BN layer of the generator, and \(\mu_{l,b}\) and \(\sigma_{l,b}^{2}\) are the mean and variance of the \(b\)-th BN layer of \(f_{l}(\hat{x},w_{\mathcal{K}_{l}})\). As a result of adding this regularization term, our generator outputs high-quality synthetic images that are close to the original satellite images. ### _Knowledge Distillation_ In this phase, we strive to distill the knowledge from the ensemble of all the client models to train a server (i.e., global) model. Although the generated synthetic data has high quality, it is not directly useful for knowledge distillation because of the large gap between the ensemble model's decision boundaries and the server model's decision boundary [26]. To address this issue, we force the generator to generate more synthetic data with different distributions and then employ the Kullback-Leibler (KL) divergence to minimize the distance between the predictions (a proxy of the decision boundary) of the ensemble model and the server model during synthetic data generation (bottom of Fig. 2(a)).
The new regularizer term that represents the KL divergence is \[\mathcal{R}_{KL_{Gen}}(\hat{x})=1-\frac{1}{2}\big\{KL(D_{G}(\hat{x}),Q)+KL(D_{E}(\hat{x}),Q)\big\}, \tag{12}\] \[Q=\frac{1}{2}\cdot(D_{G}(\hat{x})+D_{E}(\hat{x})), \tag{13}\] where \(KL(\cdot)\) denotes the KL divergence loss, \(D_{G}(\hat{x})\) is the server model's logits, and \(D_{E}(\hat{x})\) is the ensemble model's average logits. Thus, with our BN and KL regularization terms, we reformulate our generator optimization problem (9) as \[\mathcal{R}^{\prime}_{Gen}=\min_{\hat{x}}\left[\mathcal{R}_{CE}(D(\hat{x}),y)+\gamma_{1}\mathcal{R}_{BN}(\hat{x})+\gamma_{2}\mathcal{R}_{KL_{Gen}}(\hat{x})\right], \tag{14}\] where \(\gamma_{1}\) and \(\gamma_{2}\) are hyper-parameters to trade off between the two losses. In addition, we optimize the weights of our generator \(\mathcal{G}\) for \(J\) epochs using SGD as follows: \[w^{j}_{Gen}=w^{j-1}_{Gen}-\eta_{g}\nabla\mathcal{R}_{Gen}(w^{j-1}_{Gen};(\hat{x},y)^{j-1}),\quad j=1,2,\ldots,J, \tag{15}\] where \(w^{j}_{Gen}\) is the generator's model at iteration \(j\), and \(\eta_{g}\) is \(\mathcal{G}\)'s learning rate. After these optimization procedures, our generator can generate synthetic images that are not only of high quality but also of a distribution that resembles the original satellite data, enabling effective knowledge distillation. Referring to Fig. 2(b), the next step after generating the synthetic data \(\hat{x}\) is to commence the updating and retraining of the server model until it attains an acceptable accuracy. To this end, the synthetic data \(\hat{x}\) is fed to both the ensemble of satellite models, which acts as the teacher, and the server model, which acts as the student. Subsequently, the average logits are computed as the outcome of training the teacher model using (10), which can be used for both homogeneous and heterogeneous ensemble models. The average logits are then applied to distill the knowledge from the ensemble (teacher) model to the server (student) model by minimizing the prediction error between the two models through the KL divergence as follows: \[\mathcal{R}_{KL_{S}}(\hat{x})=KL(D(\hat{x}),D_{s}(\hat{x})), \tag{16}\] where \(D_{s}(\hat{x})\) is the logits of the server model after being trained on the generated synthetic data \(\hat{x}\). Since the satellite models are trained on non-IID data, we aim to improve the accuracy of the server model and address the issue of poor performance or divergence encountered in [24]. To achieve this, we further optimize the server model parameters by employing SGD as \[w^{j}_{s}=w^{j-1}_{s}-\eta_{s}\nabla\mathcal{R}_{KL_{S}}(w^{j-1}_{s};\hat{x}^{j-1}),\quad j=1,2,\ldots,J, \tag{17}\] where \(w^{j}_{s}\) is the server model at iteration \(j\), and \(\eta_{s}\) denotes the learning rate of the server model. Note that our method is different from [24], which uses local batch normalization at each client to harmonize local data distributions but requires several rounds of communication between the server and clients. Instead, we use the generated synthetic data to resemble the original satellite data, which allows us to directly train a server model locally using SGD, without the need for communication or for averaging satellite models that are influenced by non-IID data distributions. On the downside, synthetic data may not be as good as real data; to address this, in Phase 3 we retrain our server model to improve model accuracy.
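A minimal sketch of this distillation step (Eqs. (10), (16), and (17)) is given below, under the assumption that the per-orbit ensemble models and the student share the synthetic batch `x_hat`; computing the KL divergence between softmax distributions of the logits is a common distillation choice, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distill_step(student, ensemble_models, x_hat, opt):
    """One KD update: match student logits to the ensemble's average logits."""
    with torch.no_grad():  # teachers are fixed during distillation
        # D(x_hat): average logits over the per-orbit ensemble models, Eq. (10)
        teacher_logits = torch.stack([m(x_hat) for m in ensemble_models]).mean(0)
    student_logits = student(x_hat)
    # R_KL_S, Eq. (16): KL divergence between teacher and student predictions.
    loss = F.kl_div(F.log_softmax(student_logits, dim=1),
                    F.softmax(teacher_logits, dim=1),
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()   # SGD update of the server model, Eq. (17)
    opt.step()
    return loss.item()
```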
As a result of this phase, we have successfully trained a server model that leverages the knowledge from the ensemble of satellite models and the generated synthetic data, and we have taken into account the possible heterogeneity of both the data and the models involved. Fig. 2(b) illustrates the entire knowledge distillation process. ### _Virtual Model Retraining_ Although the knowledge distillation phase outputs a functional server model, the model performance still has notable room for improvement (e.g., the classification accuracy was merely 70% on the MNIST dataset). Certainly, allowing for extra communication rounds (and aggregations) between the GS and the LEO satellites would improve model accuracy, but this clearly contradicts our goal of one-shot learning and would also negatively impact the convergence speed. Thus, we propose a novel method that transforms the distributed FL into a _localized_ version by creating _virtual local models_ on the server and training those models _locally_ until the server model converges. Specifically, our method consists of four steps: (i) clone \(L\) copies of the server model to serve as the initial virtual local models; (ii) partition the generated synthetic data (after labeling it using a K-means clustering algorithm) into \(L\) groups, each with the same class distribution as one of the \(L\) orbits (these distributions were received at the end of model dissemination); (iii) train each virtual model on one of the \(L\) data groups; (iv) aggregate the weights of these trained virtual models to obtain an updated server model. The above repeats until the server model converges, yielding the final global model. Fig. 2(c) illustrates the entire retraining process for the virtual models. ## V Performance Evaluation ### _Simulation setup_ **LEO Constellation.** We consider a Walker-delta constellation \(\mathcal{M}\), which consists of 40 LEO satellites distributed over five orbits, each with eight satellites. Each orbit is located at an altitude \(h_{o}\) of 500 km above the Earth's surface with an inclination angle of 80\({}^{\circ}\). A GS is located in Rolla, MO, USA (it could be anywhere), with a minimum elevation angle \(\vartheta_{min}\) of 10\({}^{\circ}\). For both LEOShot and the baselines, Table I (upper part) summarizes the parameters pertaining to the communication links described in Section III-B. Using Systems Tool Kit (STK), a software tool for analyzing satellite constellations, we extract the visibility between satellites and the GS. To obtain each set of results, we simulate communication between satellites and the GS over a period of three days. **Baselines.** We compare LEOShot with the state-of-the-art approaches that were proposed most recently and are reviewed in Section II, including FedSpace [4], FedISL [14], FedHAP [15], FedSat [17], and FedSatSchedule [5]. **ML models and Dataset.** An important implication of LEOShot's capability of allowing heterogeneous client models (cf. Section IV-A) is that the server no longer needs to broadcast an initial model \(w^{0}\) to all the clients, which in the context of SatCom saves a significant amount of time. However, for comparison with existing methods, we assume a standard (homogeneous) FL setting where the neural network architecture is common and broadcasting \(w^{0}\) is still required. _We highlight that this setup substantially favors the baseline approaches_ and not ours. In the experiment, all satellites train a ResNet-50 model.
For each baseline, the GS aggregates the satellite models into a global ResNet-50 model. For LEOShot, however, since a key component of knowledge distillation is to train a smaller global model, the GS trains a ResNet-18 model. For comparison purposes, we use the same datasets as all the baseline approaches use: MNIST [27] and CIFAR-10 [28]. Additionally, we consider a non-IID setting for both datasets, where satellites in two orbits are trained with 4 classes while satellites in the other three orbits are trained with the remaining 6 classes. The lower part of Table I summarizes the training hyperparameters. \begin{table} \begin{tabular}{|l|l|} \hline **Parameters** & **Values** \\ \hline \hline Transmission power (satellite \& GS) \(P\) & 40 dBm \\ \hline Antenna gain (satellite \& GS) \(G_{m},G_{s}\) & 6.98 dBi \\ \hline Carrier frequency \(f\) & 2.4 GHz \\ \hline Noise temperature \(T\) & 354.81 K \\ \hline Transmission data rate \(R\) & 16 Mb/sec \\ \hline \hline Number of local training epochs \(I\) & 300 \\ \hline Learning rate \(\eta\) & 0.001 \\ \hline Mini-batch size \(b_{k}\) & 32 \\ \hline Generator learning rate \(\eta_{g}\) & 0.001 \\ \hline Weighting factors \(\gamma_{1}\&\gamma_{2}\) & 1 \& 10 \\ \hline \end{tabular} \end{table} TABLE I: Simulation Parameters (upper: communication; lower: training) ### _Results_ **Comparison with Baselines.** As shown in Fig. 3, LEOShot achieves the fastest convergence (on both datasets) in only **90 minutes** with an accuracy of 85.64% on MNIST attained in a single communication round (the convergence time includes waiting for at least one satellite per orbit to be visible to upload a partial model to the GS). The second fastest approach is FedISL [14] with the _ideal_ setup described in Section II (GS at NP or MEO above the Equator), which takes 3.5 hours for FL to converge with an accuracy of 82.67%. Without the ideal setup, its convergence time spikes to 72 hours, and accuracy drops to 61.19%. FedSat [17] and FedHAP [15] have marginally higher accuracy than LEOShot but their convergence is significantly slower. Also importantly, FedSat assumes an ideal setup (GS at NP) similar to FedISL, and FedHAP requires extra hardware (HAP) of substantial extra cost. FedSatSchedule [5] does not have FedSat's ideal assumption, and as a result, its accuracy is only 76.32% and its convergence time is roughly double that of FedSat. For all methods, accuracy on CIFAR-10 is lower than on MNIST after the same amount of training time, which is particularly prominent for FedSat [17]. On the other hand, LEOShot maintains a very small difference, which demonstrates certain robustness. Recall that LEOShot achieves high accuracy in just a _single communication round_. **Effectiveness of using synthetic data in knowledge distillation.** Pertaining to Sections IV-A and IV-B, this subsection (i) investigates whether using model _logits_ or model _parameters_ (weights or gradients) is more effective in producing a server model, and (ii) assesses the test accuracy on multiple deep learning (DL) models with the generated synthetic data. Fig. 4 shows a sample of the generated synthetic images using logits from satellites trained on MNIST and CIFAR-10 datasets. These images approximate the distribution and mimic the content of the real images very well, enabling an effective knowledge distillation. For the first purpose, we conducted two experiments on LEOShot. In the first experiment, satellite model parameters are uploaded to the GS similarly to traditional FL.
In the second experiment, model logits are uploaded instead. All the satellites train a ResNet-50 model on MNIST in the non-IID setting. Fig. 5 shows the resulting test accuracy of each partial global model derived from each orbit, as well as the accuracy after averaging the weights or logits. The results indicate that the test accuracy of each partial global model varies depending on the data imbalance from each orbit. The accuracy of the server model is only 31.74% when averaging these partial models' parameters, which underperforms the individual partial models. In contrast, when averaging the logits, the server model achieves an accuracy of 62.36%, which outperforms all the individual partial models. This interesting finding confirms that a single communication round is far from sufficient for weight averaging (as in traditional FL) to accommodate the discrepancy between client models trained on non-IID data; on the other hand, averaging logits allows knowledge distillation through our _local_ training that minimizes the distance between the logits of our teacher and student models. In addition, using logits also allows us to accommodate model heterogeneity. \begin{table} \begin{tabular}{|l|l|l|} \hline DL model & \multicolumn{2}{c|}{Accuracy (\%)} \\ \cline{2-3} & MNIST & CIFAR-10 \\ \hline CNN (2 layers) & 62.26 & 60.88 \\ \hline VGG-11 & 69.15 & 62.72 \\ \hline Wide-ResNet-40-1 [29] & 71.51 & 66.79 \\ \hline **ResNet-18** & **73.64** & **70.67** \\ \hline \end{tabular} \end{table} TABLE II: Comparison of DL models trained on synthetic images Fig. 3: Accuracy and Convergence Time comparison on non-IID data. For all approaches, convergence time was first measured on MNIST and then fixed for CIFAR-10 to measure the accuracy. For the second purpose, Table II reports the performance of various DL models when trained using our synthesized images. We can see that all the DL models achieve acceptable accuracy, with ResNet-18 achieving the highest accuracy of 73.64% on MNIST. This set of results validates that the quality of our synthesized images is reliable, which plays an important role in transferring knowledge to the server model. **Impact of virtual model retraining.** Pertaining to Section IV-C, here we investigate how our use of virtual local models affects the accuracy of the server model. As reported in Table II, the server model achieves an accuracy of 73.64% by using ResNet-18 as the target model and MNIST as the training dataset. Note that this is achieved without virtual model retraining. Building on this, we clone five virtual ResNet-18 models initialized with the initial server model weights and train each virtual model on one of five partitions of synthetic images. The final server model is obtained via weight averaging. Our result in Fig. 3 shows that the server model accuracy increases to 85.64%, a significant relative improvement of about 16%, without requiring any extra communication round.
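The following is a hedged sketch of the virtual model retraining loop of Section IV-C, assuming PyTorch models and pre-partitioned synthetic-data loaders (one per orbit-like group, i.e., step (ii) already done); all names are hypothetical.

```python
import copy
import torch
import torch.nn.functional as F

def virtual_retrain(server, group_loaders, rounds=1, epochs=1, lr=1e-3):
    """Clone L virtual models, train each on one synthetic-data group,
    and average the weights back into the server model (steps i, iii, iv)."""
    for _ in range(rounds):
        states = []
        for loader in group_loaders:
            vm = copy.deepcopy(server)                      # step (i): clone
            opt = torch.optim.SGD(vm.parameters(), lr=lr)
            for _ in range(epochs):                         # step (iii): local training
                for x, y in loader:
                    opt.zero_grad()
                    F.cross_entropy(vm(x), y).backward()
                    opt.step()
            states.append(vm.state_dict())
        avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
               for k in states[0]}                          # step (iv): weight averaging
        server.load_state_dict(avg)
    return server
```

Because the loop runs entirely on the server with synthetic data, it adds no communication rounds, which is consistent with the 16% accuracy gain reported above.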
In addition, unlike standard FL, which dictates a common neural network architecture for all the clients and the server, LEOShot allows each client to choose its own preferred ML model based on its computing resources and data properties. In our quantitative comparison with state-of-the-art benchmarks, we find that LEOShot reduces FL training/convergence time drastically, by up to 80 times (converging in as little as 90 minutes); in the meantime, it achieves high accuracy even under challenging non-IID settings and outperforms the benchmarks by large margins. In our future work, we aim to examine LEOShot on real and diverse satellite datasets in different settings. This would include exploring a variety of LEO constellations ranging from sparse to dense, with the GS located at different geographical locations, as well as training heterogeneous ML models across satellites and constellations. Fig. 4: Samples of synthetic images created by our generator when satellites are trained on various datasets. Fig. 5: Test accuracy of partial models from sink satellites on different orbits (solid lines) and the best accuracy obtained by averaging logits or weights (dotted lines).
2304.04885
Forward Sensitivity Analysis and Mode Dependent Control for Closure Modeling of Galerkin Systems
Model reduction by projection-based approaches is often associated with losing some of the important features that contribute towards the dynamics of the retained scales. As a result, a mismatch occurs between the predicted trajectories of the original system and the truncated one. We put forth a framework to apply a continuous time control signal in the latent space of the reduced order model (ROM) to account for the effect of truncation. We set the control input using parameterized models by following energy transfer principles. Our methodology relies on observing the system behavior in the physical space and using the projection operator to restrict the feedback signal into the latent space. Then, we leverage the forward sensitivity method (FSM) to derive relationships between the feedback and the desired mode-dependent control. We test the performance of the proposed approach using two test cases, corresponding to viscous Burgers and vortex merger problems at high Reynolds number. Results show that the ROM trajectory with the applied FSM control closely matches its target values in both the data-dense and data-sparse regimes.
Shady E. Ahmed, Omer San
2023-04-10T22:14:17Z
http://arxiv.org/abs/2304.04885v1
# Forward Sensitivity Analysis and Mode Dependent Control for Closure Modeling of Galerkin Systems ###### Abstract Model reduction by projection-based approaches is often associated with losing some of the important features that contribute towards the dynamics of the retained scales. As a result, a mismatch occurs between the predicted trajectories of the original system and the truncated one. We put forth a framework to apply a continuous time control signal in the latent space of the reduced order model (ROM) to account for the effect of truncation. We set the control input using parameterized models by following energy transfer principles. Our methodology relies on observing the system behavior in the physical space and using the projection operator to restrict the feedback signal into the latent space. Then, we leverage the forward sensitivity method (FSM) to derive relationships between the feedback and the desired mode-dependent control. We test the performance of the proposed approach using two test cases, corresponding to viscous Burgers and vortex merger problems at high Reynolds number. Results show that the ROM trajectory with the applied FSM control closely matches its target values in both the data-dense and data-sparse regimes. keywords: Reduced order models, forward sensitivity, inverse problem, latent control, outer-loop applications, sparse sensors ## 1 Introduction Engineers always tend to increase gains and reduce costs. For example, the airfoil design of an airplane wing is optimized to increase lift, reduce drag, and enhance stability. The design optimization process involves multiple forward runs to simulate the system's response to different inputs, parameters, and operating conditions. This multi-query nature is often labeled as outer-loop applications while the individual forward simulations are known as inner-loop computations. For high dimensional systems (e.g., fluid flows), the wall-clock time for such computations becomes incompatible with desired turnaround times for design cycles as well as real-time control. This computational burden presents a roadblock to the routine use of simulation tools by industry. Therefore, lightweight surrogates are often sought to approximate the effective dynamics and reduce the computational costs of inner-loop computations without compromising the integrity of the computational pipeline [9; 6; 33; 8; 7; 54; 55; 72; 27; 59; 32; 48; 29]. With the advent of data-driven tools and open-source software libraries, machine learning (ML) algorithms have been exploited to build computationally light emulators solely from data. The complex input-output relationships are learnt from pre-collected recordings of the system's dynamics during a compute-intensive process known as training. More recently, there has been an increasing interest in embedding existing knowledge to build hybrid physics informed ML frameworks [63; 31; 17; 73], possibly by considering feature enhancement [40], using prediction from simplified models as the bias [53; 51; 52], adopting transfer learning mechanisms [19; 23; 15], designing composite networks [41], implementing physics-informed neural networks [57; 30] and residual forcing [21], imposing conservation laws of physical quantities or analytical constraints into neural networks [42; 11; 24], and embedding tensorial invariance and equivariance properties [38; 75; 46; 70].
Alternatively, projection-based reduced order models (PROMs) can be viewed as a physics-constrained ML methodology to emulate the system's dynamics. In particular, an effective low rank subspace is identified by means of modal analysis techniques that tailor a set of basis functions or modes representative of the dominant recurrent structures. The underlying physical constraints are imposed by performing a Galerkin projection of the governing equations onto the respective basis functions. To ensure computational efficiency, only a few basis functions are retained to build the Galerkin reduced order model (GROM). The combination of modal decomposition and projection techniques has been widely applied to build lightweight computational models in flow control systems [45; 13; 49]. Nonetheless, the number of required modes to sufficiently describe systems of interest can be quite large. This is especially true for systems with strong nonlinearity or extreme variations in the parameter space. For such systems, the GROM fails to accurately represent the system's trajectory. Moreover, GROM can yield long-term instabilities even if the original system is stable [1]. Therefore, correcting the GROM dynamics by introducing closure terms, stabilization schemes, or regularizers is a critical step toward adopting them in a reliable framework. The closure problem has been studied extensively in the fluid dynamics and flow control community. Structural and/or functional relationships are often postulated, and then physical and mathematical arguments are imposed to define the required parameterization. Alternatively, we address the closure modeling problem by viewing its effect as a control input applied in the latent space (i.e., latent control or latent action) to counteract the instabilities and inaccuracies induced by the GROM truncation. In particular, we employ a continuous time control signal to correct and stabilize the GROM trajectory by deriving low-rank closure models using principles from the Kolmogorov energy cascade of turbulence and energy conservation. We utilize the forecast error, measured as the discrepancy between GROM predictions and collected sensor data, as the feedback and develop a variational approach to update the control input. In addition, we leverage the forward sensitivity method (FSM) to derive first-order estimates of the relationships between the feedback and the desired control parameters [35]. When dealing with deterministic models, whether they are continuous or discrete, the FSM approach can be employed to effectively rectify the forecast errors that arise from inaccuracies in the initial conditions, boundary conditions, and model parameters (collectively called control) [36]. Specifically, the FSM framework possesses a significant benefit, which is its independence from a backward adjoint formulation. Instead, it transforms a dynamic data assimilation problem into a static, deterministic inverse problem, thereby constituting the primary tenet of this approach. The FSM technique employs a linear Taylor series approximation to derive sensitivity dynamics for the control parameters, which yields recurrence matrix equations that are propagated forward in time. While the computation of these recurrence relations may be computationally intensive for high dimensional state problems, the FSM method holds considerable appeal for models based on latent space projection.
Our approach addresses the challenge of closure problems in under-resolved regimes by using a novel strategy to optimize control input parameters at the ROM level, which allows for a more accurate description of complex physical phenomena using a small number of modes. We highlight that one key aspect of the proposed framework is its flexibility in dealing with state variables and observables that live in distinct spaces. For example, the original system has a high dimensional state variable that lives in the physical space. In contrast, the reduced order system has a latent state variable defined in a low rank subspace. Finally, the observable output can be a different measurable quantity related to either space. This is conceptually related to the reduced order observers [18] and functional observers [34; 60; 44] developed in the control community. We demonstrate the proposed framework using the semi-discretized high dimensional flow problems corresponding to the Bateman-Burgers system and vortex merger at a large Reynolds number for the sensor data-rich and data-sparse regimes. This paper is organized as follows. In Section 2, we introduce the key elements of building GROM for high dimensional dynamical systems using a combination of proper orthogonal decomposition (Section 2.1) and Galerkin projection (Section 2.2). The closure problem is formally defined in Section 3 and the proposed FSM-based control approach is presented in Section 4. Numerical experiments are provided in Section 5 with the corresponding discussions. Finally, Section 6 draws the main conclusions of the study and offers an outlook for future work. ## 2 Galerkin Reduced Order Models We consider an autonomous dynamical system defined as follows: \[\dot{\mathbf{u}}=\mathcal{F}(\mathbf{u}), \tag{1}\] where \(\mathbf{u}\in\mathbb{R}^{N}\) is the state vector (e.g., the value of the velocity field at discrete grid points) and \(\mathcal{F}:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) represents the system's dynamics (e.g., the spatial discretization of the Navier-Stokes equations). We note that Eq. (1) is often called the full order model (FOM) or high dimensional model (HDM) in ROM studies. Due to the computational complexity of solving Eq. (1) for large scale systems with millions of degrees of freedom (DOFs), FOMs are not feasible for multi-query applications (e.g., inverse problems and model predictive control). A possible mitigation strategy is to replace the state vector \(\mathbf{u}\) with a lower rank approximation, where the solution is approximated using a few basis functions that capture the main characteristics of the system. ### Proper Orthogonal Decomposition Proper orthogonal decomposition (POD) is one of the modal decomposition techniques that has been used successfully over the last few decades to define optimal low rank bases for the quantities of interest [66; 5; 10; 26; 16; 37; 71; 72]. The POD procedure begins with a set of pre-collected realizations of the system's behavior (known as flow snapshots) at different times as follows: \[\mathcal{U}:=\{\mathbf{u}^{(1)},\mathbf{u}^{(2)},\ldots,\mathbf{u}^{(K)}\}, \tag{2}\] where \(\mathbf{u}^{(i)}\) denotes the \(i^{\rm th}\) snapshot reshaped into a column vector.
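As a minimal illustration of Eqs. (1)-(2), a snapshot set can be assembled by integrating a (placeholder) discretized right-hand side and sampling the state at the snapshot times; the helper below is a sketch, not the solver used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def collect_snapshots(F, u0, t_snap):
    """Integrate the FOM (Eq. 1) and sample the state at the snapshot times,
    returning the N x K snapshot set of Eq. (2); `F` is a placeholder for
    the spatially discretized dynamics F(u)."""
    sol = solve_ivp(lambda t, u: F(u), (t_snap[0], t_snap[-1]), u0,
                    t_eval=t_snap)
    return sol.y   # column k is the snapshot u^(k)
```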
A Reynolds decomposition of the flow field \(\mathbf{u}\) can be written as: \[\mathbf{u}=\bar{\mathbf{u}}+\mathbf{u}^{\prime}, \tag{3}\] where \(\bar{\mathbf{u}}\) is a reference field usually defined by the ensemble mean as follows: \[\bar{\mathbf{u}}=\frac{1}{K}\sum_{i=1}^{K}\mathbf{u}^{(i)}, \tag{4}\] and thus \(\mathbf{u}^{\prime}\) represents the fluctuating component of the field. POD (using the method of snapshots) seeks low rank basis functions for the span of \(\mathcal{U}^{\prime}:=\{\mathbf{u}^{\prime(1)},\mathbf{u}^{\prime(2)},\ldots,\mathbf{u}^{\prime(K)}\}\) by defining a correlation matrix \(\mathbf{C}\in\mathbb{R}^{K\times K}\) as follows: \[[\mathbf{C}]_{ij}=(\mathbf{u}^{\prime(i)},\mathbf{u}^{\prime(j)}), \tag{5}\] where \((\cdot,\cdot)\) denotes the appropriate inner product. An eigenvalue decomposition of \(\mathbf{C}\) yields a set of eigenvectors \(\mathbf{V}=[\mathbf{v}_{1},\mathbf{v}_{2},\ldots\mathbf{v}_{K}]\) and the corresponding eigenvalues \(\mathbf{\Lambda}=\mathrm{diag}[\lambda_{1},\lambda_{2},\ldots\lambda_{K}]\) as: \[\mathbf{C}\mathbf{V}=\mathbf{V}\mathbf{\Lambda}. \tag{6}\] For optimal basis selection, the eigenvalues are stored in descending order of magnitude (i.e., \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{K}\geq 0\)). The POD basis functions \(\mathbf{\Phi}=\{\mathbf{\phi}_{1},\mathbf{\phi}_{2},\ldots\mathbf{\phi}_{K}\}\) can be recovered as follows: \[\mathbf{\phi}_{i}=\frac{1}{\sqrt{\lambda_{i}}}\sum_{j=1}^{K}\mathbf{v}_{i,j}\mathbf{u}^{\prime(j)}, \tag{7}\] where \(\mathbf{v}_{i,j}\) is the \(j^{\rm th}\) component of the \(i^{\rm th}\) eigenvector. Finally, the \(n^{\rm th}\) rank POD approximation of \(\mathbf{u}\) is obtained by considering only the first \(n\) basis functions as follows: \[\mathbf{u}\approx\bar{\mathbf{u}}+\sum_{i=1}^{n}a_{i}\mathbf{\phi}_{i}. \tag{8}\] ### Galerkin Projection In Eq. (8), the mean field \(\bar{\mathbf{u}}\) and the basis functions \(\mathbf{\Phi}\) are computed from the collected snapshots during an offline stage. In order to estimate \(\mathbf{u}\) at arbitrary times and/or parameters, a model that describes the variation of the coefficients \(\mathbf{a}=[a_{1},a_{2},\ldots,a_{n}]^{T}\) is required. The Galerkin ROM (GROM) of the dynamical system governed by Eq. (1) is obtained by replacing \(\mathbf{u}\) by its \(n^{\rm th}\) rank POD approximation from Eq. (8), followed by an inner product with arbitrary POD modes to yield the following system of ordinary differential equations: \[\dot{a}_{k}=\bigg{(}\mathcal{F}(\bar{\mathbf{u}}+\sum_{i=1}^{n}a_{i}\mathbf{\phi}_{i}),\mathbf{\phi}_{k}\bigg{)},\quad\text{for }k=1,2,\ldots,n. \tag{9}\] We note that the simplification in the left-hand side of Eq. (9) takes advantage of the orthonormality of the POD basis functions (i.e., \((\mathbf{\phi}_{i},\mathbf{\phi}_{j})=\delta_{ij}\) where \(\delta_{ij}\) is the Kronecker delta). Without loss of generality, we consider the following incompressible Navier-Stokes equation (NSE) for demonstration purposes: \[\begin{split}\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot \nabla)\mathbf{u}&=-\nabla p+\nu\Delta\mathbf{u},\\ \nabla\cdot\mathbf{u}&=0,\end{split} \tag{10}\] where \(\mathbf{u}\) is the velocity vector field, \(p\) the pressure field, and \(\nu\) is the kinematic viscosity. This form of the NSE captures the main characteristics of a large class of flow problems with quadratic nonlinearity and second order dissipation.
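A compact NumPy sketch of the method of snapshots (Eqs. (4)-(7)) follows; it assumes a plain dot-product inner product (the problem-specific inner product may carry quadrature weights) and that snapshots are stored as columns of an \(N\times K\) matrix.

```python
import numpy as np

def pod_basis(U, n):
    """Method of snapshots, Eqs. (4)-(7). U is N x K with snapshots as columns."""
    ubar = U.mean(axis=1)                          # Eq. (4)
    Up = U - ubar[:, None]                         # fluctuations u'
    C = Up.T @ Up                                  # Eq. (5)
    lam, V = np.linalg.eigh(C)                     # Eq. (6)
    order = np.argsort(lam)[::-1]                  # descending eigenvalues
    lam, V = lam[order], V[:, order]
    Phi = Up @ V[:, :n] / np.sqrt(lam[:n])         # Eq. (7); columns are phi_i
    return ubar, Phi, lam

# projected (target) coefficients, cf. Eq. (13): a = Phi.T @ (u - ubar)
```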
In our results section, we showcase the applicability of the presented approach in the 1D Burgers problem and the 2D vortex-merger flow problem governed by the vorticity transport equation. Applying the Galerkin method to Eq. (10), the resulting GROM reads as follows: \[\dot{\mathbf{a}}=\mathbf{b}+\mathbf{L}\mathbf{a}+\mathbf{a}^{T}\mathbf{N}\mathbf{a}, \tag{11}\] where \(\mathbf{b}\in\mathbb{R}^{n}\), \(\mathbf{L}\in\mathbb{R}^{n\times n}\), and \(\mathbf{N}\in\mathbb{R}^{n\times n\times n}\) respectively represent the constant, linear, and nonlinear terms that result from the inner product between the FOM operators and the POD basis functions. ## 3 The Closure Problem Due to the modal truncation (i.e., using \(n\ll K\) in Eq. (8)), the effects of the truncated scales on the resolved scales are not captured by Eq. (9). Therefore, the resulting GROM fails to accurately represent the dynamics of the ROM variables \(\mathbf{a}\). Previous studies have shown that GROM yields inaccurate and sometimes unstable behavior even if the actual system is stable [25]. Therefore, efforts have been focused on developing techniques to correct the GROM trajectory, including works to correct and/or stabilize the GROM using closure and/or regularization schemes [67]. As introduced in Section 1, we view the process of adjusting the GROM trajectory as a control task with a _computational actuator_ in the latent space of the reduced order model. To this end, we modify Eq. (11) by adding a control input \(\mathbf{c}(t)=[c_{1},c_{2},\ldots,c_{n}]^{T}\) as follows: \[\dot{\mathbf{a}}=\mathbf{b}+\mathbf{L}\mathbf{a}+\mathbf{a}^{T}\mathbf{N}\mathbf{a}+\mathbf{c}(t). \tag{12}\] The goal of the control \(\mathbf{c}\) is to steer the GROM predictions toward the target trajectory defined as follows: \[\widehat{a}_{k}(t)=\bigg{(}\mathbf{u}(t)-\bar{\mathbf{u}},\mathbf{\phi}_{k}\bigg{)}, \tag{13}\] where the superscript \(\widehat{(\cdot)}\) denotes the target values. It can be verified that the trajectory given in Eq. (13) with the POD basis functions \(\mathbf{\phi}_{k}\) yields the minimum approximation error among all possible reconstructions of rank-\(n\) (or less) [26]. The control input that would result in values of \(\mathbf{a}\) that are exactly equal to their optimal values in Eq. (13) can be defined as follows: \[c_{k}(t)=\bigg{(}\mathcal{F}(\mathbf{u}),\mathbf{\phi}_{k}\bigg{)}-\bigg{(}\mathcal{F}(\bar{\mathbf{u}}+\sum_{i=1}^{n}a_{i}\mathbf{\phi}_{i}),\mathbf{\phi}_{k}\bigg{)}. \tag{14}\] Equation (14) essentially leads to the following: \[\dot{a}_{k}=\bigg{(}\mathcal{F}(\mathbf{u}),\mathbf{\phi}_{k}\bigg{)}, \tag{15}\] which is an exact equation (i.e., no truncation) for the dynamics of \(\mathbf{a}\). However, we highlight that Eq. (14) is not useful in practice as it requires solving the FOM to compute \(\mathbf{u}\) at each time step. Therefore, alternative approximate models are sought to estimate \(\mathbf{c}\) as a function of the available information in the ROM subspace (i.e., \(\{a_{k},\phi_{k}\}_{k=1}^{n}\)). Accounting for the effect of ROM truncation on the dynamics of the resolved ROM scales themselves is often referred to as the _closure problem_. The development of closure models for ROMs has been largely influenced by turbulence modeling and especially large eddy simulation (LES) studies. For example, by analogy between POD modes and Fourier modes, it is often postulated that the high-index modes (i.e., \(\{\phi_{k}\}_{k>n}\)) are responsible for dissipating the energy.
In turn, by truncating these modes, energy accumulates in the system, causing instabilities. In this regard, the addition of artificial dissipation through eddy viscosity has shown substantial success in improving ROM accuracy [12; 3; 74; 61]. Nonetheless, the determination of the eddy viscosity term has been a major challenge. Several studies relied on brute-force search to select optimal values while other works were inspired by state-of-the-art LES models [1], such as the Smagorinsky model [3; 62] or its dynamic counterparts [74; 56]. Noack et al. [45] utilized a finite-time thermodynamics approach to quantify a nonlinear eddy viscosity by matching the modal energy transfer effect. A notably distinct closure model was proposed in [14] by adding a linear damping term to the ROM equation. This model is predefined using the collected ensemble of snapshots following an energy conservation analysis. The present study draws concepts from the Kolmogorov energy cascade and energy conservation principles to define the effect of the modal truncation on ROM dynamics. In order to derive the form of the closure model, we add a combination of linear friction and diffusion terms to Eq. (10) as follows: \[\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+\nu\Delta\mathbf{u}+\gamma\mathbf{u}+\beta\Delta\mathbf{u}, \tag{16}\] where \(\gamma\) and \(\beta\) are the friction and diffusion parameters, respectively. Projecting Eq. (16) onto the POD subspace leads to a model for \(\mathbf{c}\) as follows: \[\mathbf{c}(t) =\gamma\mathbf{e}+\gamma\mathbf{a}+\beta\mathbf{q}+\beta\mathbf{Da} \tag{17}\] \[\text{where:} [\mathbf{e}]_{k} =\big{(}\bar{\mathbf{u}},\mathbf{\phi}_{k}\big{)},\quad[\mathbf{q}]_{k}=\big{(}\Delta\bar{\mathbf{u}},\mathbf{\phi}_{k}\big{)},\quad[\mathbf{D}]_{k,i}=\big{(}\Delta\mathbf{\phi}_{i},\mathbf{\phi}_{k}\big{)}. \tag{18}\] Thus, we aim at correcting the GROM trajectory by estimating optimal values for \(\gamma\) and \(\beta\), and we refer to them as the _control parameters_ or simply the _control_. However, we highlight that even if we hypothesize that the correction term can be _approximated_ using Eq. (17), this approximation is by no means exact. Instead, we are interested in values of \(\gamma\) and \(\beta\) that would result in the least error (with respect to a set of reference data points). In addition, it should be noted here that inconsistency issues (between the full order model and reduced order model) might arise from the introduction of arbitrary closure models. We refer the interested readers to [47; 22; 68; 69]. In the present study, we set \(\gamma=[\gamma_{1},\gamma_{2},\ldots,\gamma_{n}]^{T}\in\mathbb{R}^{n}\) and \(\beta=[\beta_{1},\beta_{2},\ldots,\beta_{n}]^{T}\in\mathbb{R}^{n}\) to allow variability of the closure model with different modes. The use of mode-dependent correction has been shown to provide better closure models, e.g., by matching energy levels between FOM and ROM [58], incorporating spectral kernels [65; 61], or utilizing the variational multiscale framework [74; 28; 20]. ## 4 Forward Sensitivity Method We leverage the forward sensitivity method (FSM) [35; 36] to estimate the parameters \(\gamma\) and \(\beta\) from a combination of the underlying dynamical model and collected observational data. To simplify our notation, we rewrite Eq. (12), with \(\mathbf{c}(t)\) defined using Eq.
(17), as follows: \[\dot{\mathbf{a}}=\mathbf{f}(\mathbf{a},\mathbf{\theta}), \tag{19}\] where \[\mathbf{f}(\mathbf{a},\mathbf{\theta})=\mathbf{b}+\mathbf{L}\mathbf{a}+\mathbf{a}^{T}\mathbf{N}\mathbf{a}+\gamma\odot(\mathbf{e}+\mathbf{a})+\beta\odot(\mathbf{q}+\mathbf{D}\mathbf{a}) \tag{20}\] with \(\odot\) denoting the elementwise (Hadamard) product since \(\gamma\) and \(\beta\) are mode-dependent vectors, and \[\mathbf{\theta}=[\gamma_{1},\gamma_{2},\ldots,\gamma_{n},\beta_{1},\beta_{2},\ldots,\beta_{n}]^{T}\in\mathbb{R}^{2n} \tag{21}\] denotes the control parameters. In what follows, we use a set of collected, possibly sparse and noisy, measurements to approximate how the predictions of Eq. (19) deviate from their target values (Section 4.1). In addition, we describe how these predictions depend on the control parameter \(\mathbf{\theta}\) in Section 4.2. Finally, we fuse these two pieces of information to derive a relationship between the model predictions, temporal measurements, and the corrected parameter values in Section 4.3. ### Forecast Error and Feedback We monitor the model behavior by collecting a set of measurements \(\mathbf{z}\in\mathbb{R}^{m}\) as follows: \[\mathbf{z}(t)=\mathbf{h}(\widehat{\mathbf{a}}(t))+\eta(t), \tag{22}\] where \(\mathbf{h}(\cdot):\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) represents the observational operator, \(\widehat{\mathbf{a}}\) is the true value of the model state \(\mathbf{a}\), and \(\eta\) is the sensor measurement noise. We note that the measurement \(\mathbf{z}\) is a function of the ground truth and the observation operator can involve a sampling operation, an interpolation, or even a mapping between different spaces. Therefore, the dimensionality \(m\) of the observation \(\mathbf{z}\) is not necessarily equal to the dimensionality \(n\) of the state \(\mathbf{a}\). For the measurement noise, we consider a white Gaussian perturbation (i.e., \(\eta(t)\sim\mathcal{N}(\mathbf{0},\mathbf{R}(t))\), where \(\mathbf{R}(t)\) denotes the measurement noise covariance matrix). In most cases, \(\mathbf{R}(t)\) is a diagonal matrix, implying that the noise signals from different sensors are uncorrelated with each other. For simplicity, we assume that \(\mathbf{R}(t)=\sigma^{2}\mathbf{I}_{m}\), where \(\sigma\) is the standard deviation for the measurement noise and \(\mathbf{I}_{m}\) is the \(m\times m\) identity matrix. Due to deviations between the target trajectory (corresponding to Eq. (13)) and the GROM solution, the resulting forecast error can be written as follows: \[\mathbf{\epsilon}(t)=\mathbf{z}(t)-\mathbf{h}(\mathbf{a}(t)). \tag{23}\] The definition of the operator \(\mathbf{h}(\cdot)\) is an important component of the whole setup. One option is to stick to the fact that measurements are often collected in the physical space and thus define the forecast error in this space. Considering the Burgers problem with a velocity field \(\mathbf{u}\in\mathbb{R}^{N}\), we might be able to collect only data for \(\mathbf{v}\in\mathbb{R}^{M}\) that is related to \(\mathbf{u}\) as \(\mathbf{v}=\mathcal{H}(\mathbf{u})\), where \(\mathcal{H}(\mathbf{u})\) is an observation operator in the physical space of \(\mathbf{u}\), compared to \(\mathbf{h}(\mathbf{a})\) that is applied in the latent space of \(\mathbf{a}\). Thus, by setting \(\mathbf{z}=\mathbf{v}\) (i.e., \(m=M\)), we have \(\mathbf{h}(\mathbf{a})=\mathcal{H}\big{(}\bar{\mathbf{u}}+\sum_{i=1}^{n}a_{i}\phi_{i}\big{)}\).
Another computationally attractive option is to define the forecast error in a latent space defined as follows: \[\mathbf{v}=\sum_{i=1}^{m}z_{i}\mathbf{\psi}_{i}, \tag{24}\] where \(\{\mathbf{\psi}_{i}\}_{i=1}^{m}\) is some low rank basis for \(\mathbf{v}\)[50]. Considering a POD basis \(\mathbf{\psi}\), the components of \(\mathbf{z}\) can be computed by projecting the field \(\mathbf{v}\) on the respective basis functions as \(z_{i}(t)=\big{(}\mathbf{v}(t),\mathbf{\psi}_{i}\big{)}\). Therefore, the observation operator \(\mathbf{h}(\cdot)\) can be defined as follows: \[[\mathbf{h}(\mathbf{a})]_{k}=\bigg{(}\mathcal{H}\big{(}\bar{\mathbf{u}}+\sum_{i=1}^{n}a_{i}\phi_{i}\big{)},\mathbf{\psi}_{k}\bigg{)}. \tag{25}\] ### Sensitivity Dynamics Since our objective is to relate the feedback \(\mathbf{\epsilon}\) to the desired control parameters \(\mathbf{\theta}\), we first need to define how these parameters affect the model predictions. Thus, we define the sensitivity of the model forecast \(\mathbf{a}\) at any time \(t\) with respect to the model's parameters \(\mathbf{\theta}\) using \(\mathbf{V}\in\mathbb{R}^{n\times 2n}\) as follows: \[[\mathbf{V}(t)]_{ij}=\bigg{[}\frac{\partial a_{i}(t)}{\partial\theta_{j}}\bigg{]}. \tag{26}\] By differentiating the dynamical model (i.e., Eq. (19)) with respect to its parameters \(\mathbf{\theta}\) and using the chain rule, it can be verified that \(\mathbf{V}(t)\) evolves according to the following linear system: \[\dot{\mathbf{V}}(t)=\mathbf{D}_{\mathbf{f}}(t)\mathbf{V}(t)+\mathbf{D}_{\mathbf{f}}^{\mathbf{\theta}}(t), \tag{27}\] where \(\mathbf{D_{f}}\) and \(\mathbf{D_{f}^{\theta}}\) symbolize the Jacobians of the model \(\mathbf{f}\) with respect to the state \(\mathbf{a}\) and the parameters \(\mathbf{\theta}\), respectively, as follows: \[\mathbf{D_{f}}=\begin{bmatrix}\dfrac{\partial f_{1}}{\partial a_{1}}&\dfrac{\partial f_{1}}{\partial a_{2}}&\dots&\dfrac{\partial f_{1}}{\partial a_{n}}\\ \dfrac{\partial f_{2}}{\partial a_{1}}&\dfrac{\partial f_{2}}{\partial a_{2}}&\dots&\dfrac{\partial f_{2}}{\partial a_{n}}\\ \vdots&\vdots&\ddots&\vdots\\ \dfrac{\partial f_{n}}{\partial a_{1}}&\dfrac{\partial f_{n}}{\partial a_{2}}&\dots&\dfrac{\partial f_{n}}{\partial a_{n}}\end{bmatrix}\in\mathbb{R}^{n\times n}, \tag{28}\] \[\mathbf{D_{f}^{\theta}}=\begin{bmatrix}\dfrac{\partial f_{1}}{\partial\gamma_{1}}&\dots&\dfrac{\partial f_{1}}{\partial\gamma_{n}}&\dfrac{\partial f_{1}}{\partial\beta_{1}}&\dots&\dfrac{\partial f_{1}}{\partial\beta_{n}}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \dfrac{\partial f_{n}}{\partial\gamma_{1}}&\dots&\dfrac{\partial f_{n}}{\partial\gamma_{n}}&\dfrac{\partial f_{n}}{\partial\beta_{1}}&\dots&\dfrac{\partial f_{n}}{\partial\beta_{n}}\end{bmatrix}\in\mathbb{R}^{n\times 2n}. \tag{29}\] Since the initial condition of the model state (i.e., \(\mathbf{a}(0)\)) is independent of the model's parameters, we can set \(\mathbf{V}(0)=\mathbf{0}\). Therefore, Eq. (27) can be solved along with Eq. (19) to compute the model's predictions at any time as well as the sensitivity of such predictions to the model's parameters.
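For the quadratic GROM with the closure of Eq. (20), the Jacobians in Eqs. (28)-(29) can be written in closed form, so Eqs. (19) and (27) can be co-integrated. The sketch below assumes the index convention \(\dot{a}_k = b_k + L_{ki}a_i + N_{kij}a_ia_j\) and uses a forward Euler step for brevity (the paper uses Runge-Kutta schemes); the names are illustrative.

```python
import numpy as np

def f(a, theta, b, Lmat, N, e, q, Dmat):
    # Eq. (20): GROM terms plus mode-dependent friction/diffusion closure.
    gam, bet = theta[:a.size], theta[a.size:]
    quad = np.einsum('kij,i,j->k', N, a, a)
    return b + Lmat @ a + quad + gam * (e + a) + bet * (q + Dmat @ a)

def jacobians(a, theta, Lmat, N, e, q, Dmat):
    gam, bet = theta[:a.size], theta[a.size:]
    # Eq. (28): d f_k / d a_m, differentiating N_{kij} a_i a_j in both slots.
    Df = (Lmat
          + np.einsum('kmj,j->km', N, a)
          + np.einsum('kim,i->km', N, a)
          + np.diag(gam)
          + bet[:, None] * Dmat)
    # Eq. (29): d f_k / d gamma_m and d f_k / d beta_m are diagonal blocks.
    Dth = np.hstack([np.diag(e + a), np.diag(q + Dmat @ a)])
    return Df, Dth

def propagate(a0, theta, dt, steps, b, Lmat, N, e, q, Dmat):
    """Co-integrate Eqs. (19) and (27) with V(0) = 0 (forward Euler)."""
    a, V = a0.copy(), np.zeros((a0.size, 2 * a0.size))
    for _ in range(steps):
        Df, Dth = jacobians(a, theta, Lmat, N, e, q, Dmat)
        a = a + dt * f(a, theta, b, Lmat, N, e, q, Dmat)
        V = V + dt * (Df @ V + Dth)
    return a, V
```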
### Parameter Estimation The deviations in the GROM trajectory and closure model parameterization are denoted \(\delta\mathbf{a}=\widehat{\mathbf{a}}-\mathbf{a}\) and \(\delta\mathbf{\theta}=\widehat{\mathbf{\theta}}-\mathbf{\theta}\), respectively, where the superscript \(\widehat{(\cdot)}\) corresponds to their target values. Thus, Eq. (26) can be used to relate \(\delta\mathbf{a}\) to \(\delta\mathbf{\theta}\) as: \[\delta\mathbf{a}(t)=\mathbf{V}(t)\delta\mathbf{\theta}. \tag{30}\] The first order Taylor expansion of \(\mathbf{h}(\cdot)\) around \(\mathbf{a}\) can be written as \(\mathbf{h}(\widehat{\mathbf{a}})=\mathbf{h}(\mathbf{a})+\mathbf{D_{h}}(\mathbf{a})\delta\mathbf{a}\), where \(\mathbf{D_{h}}\) is the Jacobian of the observation operator \(\mathbf{h}\). Therefore, the forecast error \(\mathbf{\epsilon}(t)\) in Eq. (23) can be rewritten as follows: \[\begin{split}\mathbf{\epsilon}(t)&=\mathbf{z}(t)-\mathbf{h}(\mathbf{a}(t))\\ &=\mathbf{h}(\widehat{\mathbf{a}}(t))+\eta(t)-\mathbf{h}(\mathbf{a}(t))\\ &=\mathbf{h}(\mathbf{a}(t)+\delta\mathbf{a}(t))+\eta(t)-\mathbf{h}(\mathbf{a}(t))\\ &\approx\mathbf{D_{h}}(\mathbf{a}(t))\mathbf{V}(t)\delta\mathbf{\theta}+\eta(t).\end{split} \tag{31}\] Collecting the forecast errors at the observation instants into a single vector \(\mathbf{\xi}\) and stacking the corresponding matrices \(\mathbf{D_{h}}(\mathbf{a}(t_{k}))\mathbf{V}(t_{k})\) row-wise into \(\mathbf{H}\) yields the linear inverse problem \(\mathbf{H}\delta\mathbf{\theta}=\mathbf{\xi}\), which is solved in a weighted linear least squares sense with the weight \(\mathbf{R}^{-1}\), as summarized in Algorithm 1 (a minimal numerical sketch of this update step is given at the end of this section). ``` 1:Start with the initial value of control \(\mathbf{\theta}\) and compute the model trajectory \(\mathbf{a}(t)\) 2:Compute sensitivity dynamics \(\mathbf{V}(t)\) 3:Assemble \(\mathbf{H}\) 4:Solve \(\mathbf{H}\delta\mathbf{\theta}=\mathbf{\xi}\) as a weighted linear least squares problem using the weight \(\mathbf{R}^{-1}\) 5:Set \(\mathbf{\theta}\leftarrow\mathbf{\theta}+\delta\mathbf{\theta}\) ``` **Algorithm 1** Pseudocode of the forward sensitivity method for parameter estimation. Figure 1 depicts the process of identifying a lower order model for a high dimensional system and the use of the forward sensitivity framework to parameterize closure models that correct the truncated GROM predictions. Figure 1: A schematic illustration of the algorithmic steps for deriving a Galerkin reduced order model (top) and closure modeling in the form of a latent control input using a forward sensitivity analysis (bottom). It is worth noting that once the measurement data are incorporated to estimate the parameter \(\mathbf{\theta}\), the model Eq. (19) can be used to make predictions at any time (i.e., not restricted to the instants when measurement data are available). ## 5 Results & Discussion We demonstrate the FSM closure framework using two canonical test problems with different levels of complexity. The first case deals with a one dimensional nonlinear advection diffusion system governed by the viscous Burgers equation, which is considered the 1D version of the NSE (see Eq. (10)).
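As a concrete illustration of Algorithm 1's core step, the following sketch assembles and solves \(\mathbf{H}\delta\mathbf{\theta}=\mathbf{\xi}\), assuming the sensitivities \(\mathbf{V}(t_k)\) and forecast errors \(\mathbf{\epsilon}(t_k)\) are available at the observation instants and \(\mathbf{R}=\sigma^{2}\mathbf{I}\); the names are illustrative.

```python
import numpy as np

def fsm_update(Dh_list, V_list, eps_list, sigma=0.1):
    # Assemble H (one block row D_h(a(t_k)) V(t_k) per measurement time)
    # and xi (the stacked forecast errors).
    H = np.vstack([Dh @ V for Dh, V in zip(Dh_list, V_list)])
    xi = np.concatenate(eps_list)
    # Weighted least squares with R^{-1} = I / sigma^2: scale rows by 1/sigma.
    dtheta, *_ = np.linalg.lstsq(H / sigma, xi / sigma, rcond=None)
    return dtheta   # then theta <- theta + dtheta (step 5 of Algorithm 1)
```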
In the second demonstration, we consider the two dimensional vortex merger problem governed by the vorticity transport equation (i.e., the curl of the two dimensional NSE). We study the cases of full field measurement and the more practical scenario when only very sparse sensor data are available. ### Viscous Burgers Problem The 1D viscous Burgers problem can be written as: \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^{2}u}{\partial x^{2}}. \tag{35}\] We perform our numerical experiments at a Reynolds number \(\text{Re}=10,000\), which is equivalent to setting \(\nu=10^{-4}\) in Eq. (35). For the FOM solution, we utilize a family of compact finite difference schemes for spatial discretization and the third order total variation diminishing Runge-Kutta (TVD-RK3) scheme for temporal integration [61]. We assume an initial condition of a unit step function as follows: \[\mathbf{u}(x,0)=\begin{cases}1,&\text{if}\quad x\in[0,0.5],\\ 0,&\text{if}\quad x\in(0.5,1].\end{cases} \tag{36}\] We divide the spatial domain into 4096 equally spaced intervals and utilize a time step of \(\Delta t_{FOM}=10^{-4}\) for the FOM solution. We store velocity field snapshots every 100 time steps to perform the POD analysis. For the GROM, we retain \(n=6\) modes to approximate the velocity field. An eigenvalue analysis reveals that 6 modes capture about 92% of the total system turbulent kinetic energy defined as \(\frac{1}{2}\langle u_{i}u_{i}\rangle\), where \(\langle\cdot\rangle\) denotes an averaging operation. In particular, we use the relative information content (RIC) metric, as shown in Fig. 2 and defined as follows: \[\text{RIC}(n)=\frac{\sum_{i=1}^{n}\lambda_{i}}{\sum_{i=1}^{K}\lambda_{i}}\times 100. \tag{37}\] The inner product between Eq. (35) and the POD basis functions leads to the GROM in Eq. (11), where the corresponding coefficients can be defined as follows: \[\begin{split}[\mathbf{b}]_{k}&=\bigg{(}\nu\frac{\partial^{2}\bar{u}}{\partial x^{2}}-\bar{u}\frac{\partial\bar{u}}{\partial x},\mathbf{\phi}_{k}\bigg{)},\\ [\mathbf{L}]_{k,i}&=\bigg{(}\nu\frac{\partial^{2}\mathbf{\phi}_{i}}{\partial x^{2}}-\bar{u}\frac{\partial\mathbf{\phi}_{i}}{\partial x}-\mathbf{\phi}_{i}\frac{\partial\bar{u}}{\partial x},\mathbf{\phi}_{k}\bigg{)},\\ [\mathbf{N}]_{k,i,j}&=\bigg{(}-\mathbf{\phi}_{i}\frac{\partial\mathbf{\phi}_{j}}{\partial x},\mathbf{\phi}_{k}\bigg{)}.\end{split} \tag{38}\] The time integration of the GROM model is carried out using a time step of \(\Delta t_{ROM}=0.01\), which is 100 times larger than \(\Delta t_{FOM}\). Although 6 POD modes represent more than 92% of the total system energy, the GROM fails to correctly capture their temporal dynamics as we shall see in the following discussions. This exemplifies the need for a mechanism to correct the GROM predictions. The Burgers equation modified to admit the closure model can be written as follows: \[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=\nu\frac{\partial^{2}u}{\partial x^{2}}+\gamma u+\beta\frac{\partial^{2}u}{\partial x^{2}}, \tag{39}\] and the resulting closure model terms in Eq. (17) are \[[\mathbf{e}]_{k}=\big{(}\bar{u},\mathbf{\phi}_{k}\big{)},\qquad[\mathbf{q}]_{k}=\bigg{(}\frac{\partial^{2}\bar{u}}{\partial x^{2}},\mathbf{\phi}_{k}\bigg{)},\qquad[\mathbf{D}]_{k,i}=\bigg{(}\frac{\partial^{2}\mathbf{\phi}_{i}}{\partial x^{2}},\mathbf{\phi}_{k}\bigg{)}.
\tag{40}\] In Section 5.1.1, we use full field measurement data to parameterize such a model and in Section 5.1.2 we deal with the sparse data regime. #### 5.1.1 Full field observations We assume that the sensor signal is contaminated by a white Gaussian noise with zero mean and standard deviation of 0.1, which represents 10% of the peak velocity. In other words, we define \(\mathbf{v}=\mathbf{u}+\eta\) where \(\eta\sim\mathcal{N}(0,0.01\mathbf{I})\) and thus \([\mathbf{z}(t)]_{i}=(\mathbf{v}(t)-\bar{\mathbf{u}},\mathbf{\phi}_{i})\) (see Section 4.1). We collect measurements after every 10 time integrations of the GROM (i.e., \(\Delta t_{Obs}=0.1\)). We refer to the solution following the target trajectory (given by Eq. (13)) as the "True Closure" prediction. On the other hand, the solution of the _uncontrolled_ GROM (i.e., Eq. (11)) is denoted as the "No Closure" solution. Finally, the solution of the _controlled_ GROM (i.e., Eq. (12)) with FSM used to parameterize the presumed closure model in Eq. (17) is labeled as "FSM Closure". Figure 2: The decay of eigenvalues (left) and behavior of relative information content (right) for the 1D Burgers problem at \(\mathrm{Re}=10^{4}\). Figure 3 depicts the predicted dynamics in the latent ROM space using the different approaches considered. We observe that GROM leads to inaccuracies and significantly amplifies the magnitude of predicted coefficients, especially for the last mode. This behavior is likely to cause long term instabilities in the solution even if the actual system is stable. On the other hand, the FSM effectively controls the GROM trajectory and keeps it closer to the target trajectory. We emphasize that we implement a mode-dependent control to respect the distinct characteristics of the resolved modes defining recurrent flow structures. In Fig. 4, we evaluate the performance in the physical space by computing the reconstructed flow field using Eq. (8) and comparing it to the FOM fields. In addition, the relative error for the predicted POD coefficients as well as the reconstructed velocity fields as a function of time is shown in Fig. 5. We see that results from FSM Closure are close to the True Closure, which represents the minimum reconstruction error that could be obtained using 6 modes. On the other hand, vanilla-type GROM without closure yields an inaccurate and even non-physical solution in the spatio-temporal space. Figure 3: Dynamics of the first and last modal coefficients with full field measurement for the FSM Closure. Figure 4: Spatio-temporal field predictions of Burgers problem using FOM and GROM approaches. Full field measurements are considered for the FSM Closure. Figure 5: The relative error between the predicted values for the POD modal coefficients (left) and reconstructed velocity field (right) compared to their target values for 1D Burgers problem. Full field measurements are considered for the FSM Closure. #### 5.1.2 Sparse field observations We extend our numerical experiments to explore incomplete field measurement scenarios. In particular, we consider a sparse signal \(\mathbf{s}\in\mathbb{R}^{S}\) of the observable field \(\mathbf{v}\) as follows: \[\mathbf{s}=\mathbf{\Theta}\mathbf{v}, \tag{41}\] where \(\mathbf{\Theta}\in\mathbb{R}^{S\times M}\) is a sampling matrix, constructed by taking \(S\) rows of the \(M\times M\) identity matrix (i.e., \([\mathbf{\Theta}]_{ij}=1\) if the \(i^{th}\) sensor is located at the \(j^{th}\) location and \([\mathbf{\Theta}]_{ij}=0\) otherwise).
Sensors can be placed at equally-spaced locations, random locations, or carefully selected places. Optimal sensor placement is an active field of research, also known as optimal experimental design (OED). We refer to [4] and references therein for more information. In this regard, we utilize a greedy compressed sensing algorithm based on QR decomposition with column pivoting to set up a near-optimal sensor placement strategy as follows: \[\boldsymbol{\Psi}^{T}\mathbf{P}:=\mathbf{Q}\mathbf{R}, \tag{42}\] where \(\boldsymbol{\Psi}=[\boldsymbol{\psi}_{1},\boldsymbol{\psi}_{2},\ldots,\boldsymbol{\psi}_{S}]\in\mathbb{R}^{M\times S}\) includes the first \(S\) POD basis functions for \(\boldsymbol{v}\), and \(\mathbf{P}\in\mathbb{R}^{M\times M}\) is the permutation matrix. Manohar et al. [39] showed that by using the first \(S\) rows of \(\mathbf{P}\) to define the sampling matrix \(\boldsymbol{\Theta}\), a near optimal sensor placement is obtained with similarities to the A- and D-optimality criteria in OED studies (a minimal sketch of this selection and reconstruction procedure is given at the end of this subsection). Finally, the field \(\boldsymbol{v}\) can be reconstructed as \(\boldsymbol{v}\approx\boldsymbol{\Psi}(\boldsymbol{\Theta}\boldsymbol{\Psi})^{-1}\boldsymbol{s}\). Again, if we assume that the observable \(\boldsymbol{v}\) is the velocity field \(\boldsymbol{u}\) itself, the latent measurement \(\boldsymbol{z}\) can be computed as \([\boldsymbol{z}(t)]_{i}=(\boldsymbol{\Psi}(\boldsymbol{\Theta}\boldsymbol{\Psi})^{-1}\boldsymbol{s}(t)-\bar{\boldsymbol{u}},\boldsymbol{\phi}_{i})\). Figure 6 displays the time evolution of the first and sixth modal coefficients with the adopted FSM closure methodology in the case of sparse measurements. In particular, we selected 25 locations (about 0.6% of the total number of grid points) using the described QR-based algorithm to define the sensor data. We see that FSM closure yields very accurate results that are close to the target trajectory even with the sparse measurement data. The reconstruction accuracy is also demonstrated in Fig. 7, showing significant improvements compared to the GROM predictions without control. Similar observations can be found in Fig. 8, displaying the relative error for the predicted POD coefficients and the reconstructed velocity fields with respect to the target values that represent the minimum reconstruction error with 6 modes. Figure 6: Dynamics of the first and last modal coefficients with sparse field measurement for the FSM Closure. Figure 7: Spatio-temporal field predictions of Burgers problem using FOM and GROM approaches. Sparse field measurements are considered for the FSM Closure. Figure 8: The relative error between the predicted values for the POD modal coefficients (left) and reconstructed velocity field (right) compared to their target values for 1D Burgers problem. Sparse field measurements are considered for the FSM Closure.
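The sketch below implements the QR-based selection of Eq. (42) and the reconstruction \(\boldsymbol{v}\approx\boldsymbol{\Psi}(\boldsymbol{\Theta\Psi})^{-1}\boldsymbol{s}\) using SciPy's column-pivoted QR; the least squares solve is an equivalent, better-conditioned form of the explicit inverse, and the names are illustrative.

```python
import numpy as np
from scipy.linalg import qr

def qr_sensor_placement(Psi, S):
    # Column-pivoted QR of Psi^T (Eq. 42); the first S pivots give the
    # grid indices where sensors are placed (the S rows selected by Theta).
    _, _, piv = qr(Psi.T, pivoting=True, mode='economic')
    return np.sort(piv[:S])

def reconstruct(Psi, sensors, s):
    # v ~ Psi (Theta Psi)^{-1} s, with Theta Psi = Psi[sensors, :].
    coef, *_ = np.linalg.lstsq(Psi[sensors, :], s, rcond=None)
    return Psi @ coef
```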
### Vortex Merger Problem One of the key benefits of the proposed FSM closure framework is that it deals with the reduced order model of the problem instead of the full fledged high dimensional model. Therefore, the computational complexity depends on the number of employed POD modes, rather than the spatial dimensionality of the problem. In order to highlight this aspect, we consider the two dimensional (2D) vortex merger problem [64], governed by the following vorticity transport equation: \[\frac{\partial\omega}{\partial t}+J(\omega,\psi)=\frac{1}{\text{Re}}\Delta\omega,\qquad\text{in }\Omega\times[0,T], \tag{43}\] where \(\omega\) and \(\psi\) denote the vorticity and streamfunction fields, and \(J(\cdot,\cdot)\) is the Jacobian operator defined as: \[J(\omega,\psi)=\frac{\partial\omega}{\partial x}\frac{\partial\psi}{\partial y}-\frac{\partial\omega}{\partial y}\frac{\partial\psi}{\partial x}. \tag{44}\] The vorticity and streamfunction are linked by the kinematic relationship: \[\Delta\psi=-\omega. \tag{45}\] We consider a spatial domain of dimensions \((2\pi\times 2\pi)\) with periodic boundary conditions in both the \(x\) and \(y\) directions. The flow is initiated with a pair of co-rotating Gaussian vortices with equal strengths centered at \((x_{1},y_{1})=(5\pi/4,\pi)\) and \((x_{2},y_{2})=(3\pi/4,\pi)\) as follows: \[\omega(x,y,0)=\exp\left(-\rho\left[(x-x_{1})^{2}+(y-y_{1})^{2}\right]\right)+\exp\left(-\rho\left[(x-x_{2})^{2}+(y-y_{2})^{2}\right]\right)\!, \tag{46}\] where \(\rho\) is a parameter that controls the mutual interactions between the two vortical motions. In the present study, we consider \(\text{Re}=5000\) and set \(\rho=\pi\). For the FOM simulations, we define a regular Cartesian grid with a resolution of \(256\times 256\) (i.e., \(\Delta x=\Delta y=2\pi/256\)). For temporal integration of the FOM model, we use the TVD-RK3 scheme with a time-step of \(10^{-3}\). Vorticity snapshots are collected every 100 time-steps for \(t\in[0,50]\), resulting in a total of 500 snapshots. The evolution of the vortex merger problem is depicted in Fig. 9, which illustrates the convective and interactive mechanisms affecting the transport and development of the two vortices. This makes it a challenging problem for standard ROM approaches and a good test bed for the proposed FSM framework. In terms of POD analysis, we use \(n=6\) to define the total number of resolved scales and hence the dimensionality of the GROM system. The decay of the POD eigenvalues and the RIC values for the current setup are shown in Fig. 10. Finally, the GROM terms for the vortex merger problem can be written as follows: \[[\mathbf{b}]_{k}=\bigg{(}-J(\bar{\omega},\bar{\psi})+\frac{1}{\mathrm{Re}}\nabla^{2}\bar{\omega},\mathbf{\phi}_{k}^{\omega}\bigg{)}, \tag{47}\] \[[\mathbf{L}]_{k,i}=\bigg{(}-J(\bar{\omega},\mathbf{\phi}_{i}^{\psi})-J(\mathbf{\phi}_{i}^{\omega},\bar{\psi})+\frac{1}{\mathrm{Re}}\Delta\mathbf{\phi}_{i}^{\omega},\mathbf{\phi}_{k}^{\omega}\bigg{)},\] \[[\mathbf{N}]_{k,i,j}=\bigg{(}-J(\mathbf{\phi}_{i}^{\omega},\mathbf{\phi}_{j}^{\psi}),\mathbf{\phi}_{k}^{\omega}\bigg{)}.\] Similar to Eq. (39), we modify Eq. (43) to derive the closure model as follows: \[\frac{\partial\omega}{\partial t}+J(\omega,\psi)=\frac{1}{\mathrm{Re}}\Delta\omega+\gamma\omega+\beta\Delta\omega, \tag{48}\] which results in the following terms for the closure model in Eq. (17): \[[\mathbf{e}]_{k}=\big{(}\bar{\omega},\mathbf{\phi}_{k}\big{)},\qquad[\mathbf{q}]_{k}=\big{(}\Delta\bar{\omega},\mathbf{\phi}_{k}\big{)},\qquad[\mathbf{D}]_{k,i}=\big{(}\Delta\mathbf{\phi}_{i},\mathbf{\phi}_{k}\big{)}. \tag{49}\] Figure 9: Samples of temporal snapshots of the vorticity field for the vortex merger problem at \(\mathrm{Re}=5000\). #### 5.2.1 Full field observations We first explore the idealized case where full field measurements of the vorticity fields are collected. We also consider additive Gaussian noise with zero mean and standard deviation of \(0.1\). We record data every \(5\) time units, corresponding to a total of \(10\) measurement instants.
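For reference, a small sketch of generating the noisy latent measurements used here (noise is added to the observable field before projection, consistent with Section 4.1); `Phi` is assumed to hold the vorticity POD modes as columns.

```python
import numpy as np

def latent_observation(v, vbar, Phi, sigma=0.1, rng=np.random.default_rng()):
    # z_i(t) = (v(t) + eta - vbar, phi_i), with eta ~ N(0, sigma^2 I); cf. Eq. (22)
    noisy = v + sigma * rng.standard_normal(v.shape)
    return Phi.T @ (noisy - vbar)
```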
Figure 10: The decay of eigenvalues (left) and behavior of relative information content (right) for the 2D vortex merger problem at \(\mathrm{Re}=5000\). We apply the approach presented in Section 4 to compute the mode-dependent parameters \(\gamma_{i}\) and \(\beta_{i}\) for \(i=1,2,\ldots,n\). The estimated parameter values are then plugged into Eq. (17) to define the closure model. The corresponding predictions of the POD modal coefficients are shown in Fig. 11, where we see that the higher amplitude oscillations are damped and the ROM trajectory gets closer to the target values. In addition, the reconstruction of the vorticity field at two different time instants is depicted in Fig. 12. We see that the GROM model without closure results in flow field predictions that miss significant flow features. On the other hand, the FSM Closure framework is capable of parameterizing the latent control model, resulting in a reduced reconstruction error. Figure 13 also shows the relative error for the predicted POD coefficients as well as the reconstructed vorticity fields as a function of time. Figure 11: The time evolution of the first 6 modes of the vortex merger problem when full field measurements are collected every 5 time units. Figure 12: Comparison between the vorticity field at \(t=40\) (top) and \(t=50\) (bottom) with True Closure (ground truth from FOM data), No Closure (standard GROM) and the proposed FSM Closure approach with full field measurements. #### 5.2.2 Sparse field observations In this section, we investigate the performance of the proposed forward sensitivity approach for mode-dependent control when only spatially sparse observations are available. In particular, we consider a relatively data-scarce regime with 25 spatial locations (that is, less than 0.04% of the total number of grid points). We also incorporate additive Gaussian noise similar to Section 5.2.1. Although it is typically possible to _clean_ this data a bit by considering its spectrum, we intentionally avoid this step to assess the robustness of the FSM framework to data noise and sparsity. We illustrate the predictions of the system's dynamics in the latent space in Fig. 14. As expected, the predictions of the GROM with FSM closure are noticeably less accurate than in the case with full field measurements (i.e., Fig. 11), especially at later times. However, compared to the uncontrolled GROM model, we see
In addition, since the first modes have a larger contribution to the data reconstruction, the FSM algorithm tends to give higher importance to those modes as it minimizes the error with respect to the measurements. One way to address this issue could be to define a different scaling to ensure that all modal coefficients are equally important. We also show the reconstructed vorticity fields from the GROM without closure as well as the GROM with FSM closure at \(t=40\) and \(t=50\) in Fig. 15. We observe that the FSM closure results in a more accurate recovery of the underlying flow features with respect to the target values (denoted as True Closure). Finally, Fig. 16 shows the relative error of the predicted POD coefficients as well as of the reconstructed vorticity fields as a function of time.

## 6 Concluding Remarks

We propose a variational approach for correcting nonlinear reduced order models (ROMs) using the forward sensitivity method (FSM). We cast the closure as a control input in the latent space of the ROM and utilize physical arguments to build parameterized models with damping and dissipation terms. We leverage FSM to blend the predictions from the ROM with available sparse and noisy observations to estimate the unknown model parameters. We apply this approach to a projection-based ROM of two test problems of varying complexity, corresponding to the one-dimensional viscous Burgers equation and the two-dimensional vortex-merger problem. These are often considered as canonical test beds for broad transport phenomena governed by nonlinear partial differential equations. We investigate the capability of the approach to approximate optimal values for the mode-dependent parameters without constraining the direction of energy transfer between different modes. Results show that equipping the GROM with FSM-based control dramatically increases the ROM accuracy. The predicted trajectories get closer to the values that provide the minimum reconstruction error. The presented framework can effectively enhance digital twin technologies, where computationally light models are required and sensor data are continuously collected.

Figure 15: Comparison between the vorticity field at \(t=40\) (top) and \(t=50\) (bottom) with True Closure (ground truth from FOM data), No Closure (standard GROM) and the proposed FSM Closure approach with sparse field measurements.

Figure 16: The relative error between the predicted values for the POD modal coefficients (left) and reconstructed vorticity field (right) compared to their target values for the 2D vortex merger problem. Sparse field measurements are considered for the FSM Closure.

## Acknowledgments

The authors are grateful to Sivaramakrishnan Lakshmivarahan for his efforts that greatly helped us in understanding the mechanics of the FSM method. Omer San would like to acknowledge support from the U.S. Department of Energy under the Advanced Scientific Computing Research program (grant DE-SC0019290) and the National Science Foundation under the Computational Mathematics program (grant DMS-2012255). Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.
2305.16170
Multi-Agent Reinforcement Learning for Network Routing in Integrated Access Backhaul Networks
We investigate the problem of wireless routing in integrated access backhaul (IAB) networks consisting of fiber-connected and wireless base stations and multiple users. The physical constraints of these networks prevent the use of a central controller, and base stations have limited access to real-time network conditions. We aim to maximize the packet arrival ratio while minimizing packet latency; for this purpose, we formulate the problem as a multi-agent partially observed Markov decision process (POMDP). To solve this problem, we develop a Relational Advantage Actor Critic (Relational A2C) algorithm that uses Multi-Agent Reinforcement Learning (MARL) and information about similar destinations to derive a joint routing policy on a distributed basis. We present three training paradigms for this algorithm and demonstrate its ability to achieve near-centralized performance. Our results show that Relational A2C outperforms other reinforcement learning algorithms, leading to increased network efficiency and reduced selfish agent behavior. To the best of our knowledge, this work is the first to optimize routing strategy for IAB networks.
Shahaf Yamin, Haim Permuter
2023-05-12T13:03:26Z
http://arxiv.org/abs/2305.16170v1
# Multi-Agent Reinforcement Learning for Network Routing in Integrated Access Backhaul Networks ###### Abstract In this study, we examine the problem of wireless routing in integrated access backhaul (IAB) networks involving fiber-connected base stations, wireless base stations, and multiple users. Physical constraints prevent the use of a central controller, leaving base stations with limited access to real-time network conditions. These networks operate in a time-slotted regime, where base stations monitor network conditions and forward packets accordingly. Our objective is to maximize the arrival ratio of packets, while simultaneously minimizing their latency. To accomplish this, we formulate this problem as a multi-agent partially observed Markov Decision Process (POMDP). Moreover, we develop an algorithm that uses Multi-Agent Reinforcement Learning (MARL) combined with Advantage Actor Critic (A2C) to derive a joint routing policy on a distributed basis. Due to the importance of packet destinations for successful routing decisions, we utilize information about similar destinations as a basis for selecting specific-destination routing decisions. For portraying the similarity between those destinations, we rely on their relational base-station associations, i.e., which base station they are currently connected to. Therefore, the algorithm is referred to as Relational Advantage Actor Critic (Relational A2C). To the best of our knowledge, this is the first work that optimizes routing strategy for IAB networks. Further, we present three types of training paradigms for this algorithm in order to provide flexibility in terms of its performance and throughput. Through numerical experiments with different network scenarios, Relational A2C algorithms were demonstrated to be capable of achieving near-centralized performance even though they operate in a decentralized manner in the network of interest. Based on the results of those experiments, we compare Relational A2C to other reinforcement learning algorithms, like Q-Routing and Hybrid Routing. This comparison illustrates that solving the joint optimization problem increases network efficiency and reduces selfish agent behavior. ## 1 Introduction The increasing demand for wireless communication and the limitations of the electromagnetic spectrum have led to the development of more efficient methods for managing networks. To meet these needs, the 3rd Generation Partnership Project (3GPP) has established a standard, New Radio (NR), which includes novel designs and technologies to support fifth-generation (5G) networks [1]. One of the key features of this new protocol is the inclusion of new bands at millimeter wave (mmWave) frequencies. These frequencies offer the potential for increased data rates by exploiting the spatial diversity available in these bands, which are currently less congested compared to traditional bands. However, operating at mmWave frequencies also introduces new physical challenges, such as severe path and penetration losses. To overcome these challenges, network density can be increased and beam-forming methods can be used [2]. Although increasing network density has potential benefits, the deployment and operation of fiber between the Next-Generation Node Base Station (gNB) and the core network can be costly. 
Integrated Access and Backhaul (IAB) is a promising solution for successful 5G adoption as it allows for only a fraction of the gNBs to be connected to traditional fiber-like infrastructures, thus reducing redundant deployment and operational costs by utilizing spatial diversity [3]. The gNBs connected to fiber are called IAB donors, while the remaining gNBs are called IAB nodes and use multi-hop wireless connections for backhaul traffic. While IAB networks are cost-effective in terms of deployment and operation, ensuring reliable network performance remains a challenging research area due to their highly non-stationary nature. The dynamic nature of the topology, tight delay constraints, and limited information regarding the status of the network are some of the factors to be considered when supporting these requirements. Routing plays a crucial role in the context of network congestion control, where each destination may have multiple paths, and base stations monitor network conditions to make routing decisions. There are two main approaches for implementing routing algorithms in wireless networks: centralized and distributed. In a centralized approach, there is a central network processor that is responsible for path selection, while in a distributed approach, each node makes next-hop decisions based only on its own observations, without knowledge of other nodes' decisions. In practical implementations, due to bandwidth limitations and multi-hop structures, information sharing is limited to the base station's neighborhood. This implies that base stations can only observe a part of the current network state, and hence, when operating in a distributed manner, next-hop transmission decisions are based only on partial observations. This paper focuses on the analysis of distributed routing algorithms developed for networks that exhibit physical limitations, specifically IAB-based networks. We design a deep reinforcement-learning-based algorithm to achieve optimal routing policies. One unique aspect of our approach is that, given that successful routing decisions rely heavily on packet destinations, we consider it beneficial for the agent to use knowledge of similar destinations when making routing decisions. To portray the similarity between these destinations, we use their _relational_ base-station associations, which refer to the base stations they are currently connected to. Our study differs from previous works that focused on designing routing policies to optimize packet paths to specific destinations without sharing information with other similar destinations. These previous algorithms are generally not suitable for a dynamic topology like the one in this study. We propose a novel algorithm for learning a decentralized routing policy using deep policy gradients. To make the learning efficient, we use the Advantage Actor Critic (A2C) [4] algorithm, which is a combination of policy gradient [5], temporal difference estimation [6], and deep neural networks, and enables learning from experience in an unknown environment with a large state space through interactions with the environment. Our proposed algorithm is called _Relational A2C_. In the current study, we present numerical results that evaluate the performance of the Relational A2C algorithm in several network scenarios. We also compare the results obtained by Relational A2C with those from other methods such as [7, 8, 9, 10].
Our results show that the proposed approach outperforms existing methods and is able to achieve performance comparable to that of centralized systems. To the best of our knowledge, this is the first work that addresses the routing problem in IAB networks using deep reinforcement learning (RL). Additionally, our algorithm formulates the routing problem as a joint optimization problem, which promotes agent cooperation and reduces selfish behavior, resulting in more efficient use of network resources.

### Related Work

Routing in networks has been and still is the subject of extensive research, as seen in [11], [12]. There is a large body of literature on routing strategies for wireless networks, including earlier protocols such as DSR [13] and AODV [14] for ad-hoc networks, various routing protocols for delay and disruption tolerant networks (DTNs) [15], as well as strategies for resource-constrained wireless networks (such as sensor networks or internet-of-things networks) [16]. However, many existing routing protocols were designed for specific wireless network scenarios and may not be easily adaptable to other scenarios. For example, routing protocols for ad-hoc networks assume a connected network, while routing protocols for DTNs assume a disconnected network. In this study, we focus on routing in IAB networks, which are characterized by dynamic topology changes and strict time constraints, and cannot be expected to satisfy these assumptions. This has motivated the introduction of methods that can acquire a nearly optimal policy without requiring a priori knowledge. A major technique that is capable of achieving this goal is RL, a class of machine learning algorithms that can learn an optimal policy via interaction with the environment without knowledge of the system dynamics (such algorithms are also known as model-free algorithms) [17, Ch. 1]. One of the most popular RL techniques is Q-learning [18], which can learn the optimal policy online by estimating the optimal action-value function. Early works that applied Q-learning to network routing used the classical tabular Q-learning method [7, 19, 20]. In these schemes, each device forwards a limited number of packets in each time slot and receives a reward over the ACK signal. However, it becomes computationally difficult to apply this method when the state space becomes large. This issue has motivated the combination of deep learning [21] with RL, giving rise to the deep RL class of algorithms. These algorithms have attracted much attention in recent years due to their ability to approximate the action-value function for large state and action spaces. Recently, the authors in [22] proposed a deep RL-based algorithm called deep Q-network (DQN), which combines deep neural networks and Q-learning. Recent studies that derived DRL-based algorithms for network routing problems can be found in [23, 24, 25, 26, 27, 8, 28]: A DRL approach for routing was developed in [23] with the objective of minimizing the delay in the network. In this approach, a single controller finds the paths of all source-destination pairs given the demand in the network, which represents the bandwidth request of each source-destination pair. However, this approach results in complex state and action spaces, and does not scale to large networks as it depends on a centralized controller. Moreover, in this approach, the state representation does not capture the network topology, which is highly dynamic in our scenario.
Motivated by the high complexity of the state and action representations of the approaches proposed in [23, 24], a feature engineering approach has recently been proposed in [25] that considers only some candidate end-to-end paths for each routing request. This representation was shown to outperform the representations of the approaches proposed in [23, 24] in some use cases. In [26], the authors employed a deep recurrent Q-Network (DRQN) to determine the routing policy; this is a combination of a DQN and a long short-term memory (LSTM) network. Using device-specific characteristics, such as the last \(k\) actions taken and the next \(m\) destinations of packets in queues, the algorithm is trained for each device. The LSTM layer in the DRQN algorithm uses past observations to predict the whole network state, which, in turn, allows the agent to select the next hop for a specific packet in the next time step. The work in [27] applied an algorithm called hierarchical-DQN (h-DQN) [29]. h-DQN facilitates exploration in complicated environments by decomposing the original problem into a hierarchy of sub-problems such that higher-level tasks invoke lower levels as if they were primitive actions. In [8], the authors used another RL algorithm, the Actor Critic algorithm, which is a policy-based RL algorithm where the policy is learned directly via parametrization. They compared their results with those of the algorithms in [7, 20] and showed that their proposed algorithm achieves better performance.

### Paper Structure and Notations

The organization of this paper is as follows. In Section II, we present the problem formulation and assumptions. In Section III, we provide the mathematical background for our proposed solution, including Markov Decision Processes (MDPs), RL, and Multi-Agent Reinforcement Learning (MARL). Section IV presents our mathematical formulation of the problem and explains the rationale behind the chosen MARL approach. In Section V, we provide a review of existing routing algorithms. Section VI describes our proposed algorithm, which is based on the A2C method and includes three different training paradigms, ranging from fully decentralized to centralized training. Simulation results, including a comparison with existing routing algorithms, are presented in Section VII. Finally, in Section VIII, we conclude this work and discuss future research directions. Throughout this work, we use \(\mathbb{N}\) to denote the natural numbers and bold letters (e.g., \(\mathbf{X}\)) to denote vectors; \(\mathbf{X_{i}}\) denotes the \(i\)th element of the vector \(\mathbf{X}\), \(i\geq 0\), and \(\mathbf{X_{t,i}}\) denotes the \(i\)th element of the vector \(\mathbf{X}\) at the \(t\)th time-step, \(i\geq 0,t\in\mathbb{N}\). We use calligraphic letters to denote sets, e.g., \(\mathcal{X}\), and the cardinality of a set is denoted by \(|\cdot|\); e.g., \(|\mathcal{X}|\) is the cardinality of the set \(\mathcal{X}\). Lastly, \(\mathbb{E}[\cdot]\) denotes the stochastic expectation.

## 2 Problem Formulation

We consider a multi-hop IAB wireless network with an IAB donor, IAB nodes and User Equipments (UEs) [3], as shown in Fig. 1. The IAB donor is wired to the core network, whereas IAB nodes use wireless communication to backhaul their traffic to the core network via a multi-hop connection. Both the IAB donor and the IAB nodes provide access and backhaul interfaces for the UEs and IAB nodes, respectively.
We model this network by an undirected weighted graph \(\mathcal{G}=(\mathcal{N},\mathcal{L},d)\), where \(\mathcal{N}\), \(\mathcal{L}\) denote the sets of nodes and wireless links, respectively, and \(d:\mathcal{L}\rightarrow\mathbb{N}\) assigns a delay to each wireless link. There are three sets present in \(\mathcal{N}\): a set \(\mathcal{D}\) of the IAB donor, a set \(\mathcal{B}\) of the IAB nodes and a set \(\mathcal{U}\) of the UEs, i.e., \(\mathcal{N}=\mathcal{D}\cup\mathcal{B}\cup\mathcal{U}\). Each of the nodes \(n\in\mathcal{D}\cup\mathcal{B}\) is equipped with an independent buffer queue and a transceiver with beam-forming and routing capabilities. Each of the links \((n,m)\in\mathcal{L}\) is a bidirectional link between node \(n\) and node \(m\), portraying a time-varying wireless channel. We assume that time is slotted by \(t\in\mathbb{N}\) and, for simplification, we assume that packets are constant in length and that transmission rates are limited to integer numbers of packets per slot. As another simplification, we represent the wireless link's spectral efficiency as a delay between the two nodes of the graph using the mapping \(d\). This assumption is based on the fact that links with varying degrees of spectral efficiency require a different number of transmissions to transfer the same amount of data, so using a link with low spectral efficiency rather than one with high spectral efficiency results in an increased delay. Once an IAB node or UE is activated, it is connected to an already active node, i.e., either an IAB donor or another IAB node which has a path to an IAB donor. Thus, we build the network topology in an iterative greedy fashion, similarly to [30], where we set constraints on the maximal number of IAB parents (\(P_{parent}^{max}\)), IAB children (\(C_{children}^{max}\)), the number of users each base station has (\(U_{children}^{max}\)), and the number of associated base stations each user has (\(U_{parent}^{max}\)). It should be noted that by using this topology generation scheme, we obtain a connected graph, i.e., there is a path from any base station to any other node in the network.

Figure 1: IAB network illustration with 1 IAB donor, 3 IAB nodes and 9 UEs.

We assume that all our nodes operate at mmWave bands for backhaul and access transmission and reception (in-band backhauling) with beam-forming capabilities. Therefore, similarly to [31], we disregard the interference between non-assigned nodes, since narrow-beam mmWave links are power-limited rather than interference-limited. Considering the stochastic nature of packet arrivals, we use a Poisson process with parameter \(\lambda\), which we refer to as the network load. In each time slot we sample the arrival process, which gives the number of packets generated. To distribute the packets across the network's base stations, the donor receives the number of packets that corresponds to its available wireless bandwidth; the remainder of the packets are distributed uniformly among the network's base stations. As each packet is generated, it is given a time limit, referred to as its Time To Live (TTL). If this TTL expires, the packet is dropped. To prioritize packets with lower TTLs, the base stations use an unlimited-size priority queue ordered by TTL. Consequently, the base station always processes packets in accordance with this priority queue.
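To make the traffic model concrete, here is a small Python sketch (our own illustration; class and field names are hypothetical) of the per-slot Poisson packet generation and a TTL-ordered priority queue:

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

class BaseStationQueue:
    """Unlimited-size queue that always serves the packet with the lowest TTL."""
    def __init__(self):
        self._heap = []
        self._tie = 0                      # tie-breaker so dicts are never compared

    def push(self, packet):
        heapq.heappush(self._heap, (packet["ttl"], self._tie, packet))
        self._tie += 1

    def pop(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

def generate_packets(lam, destinations, ttl=50):
    """Sample the Poisson arrival process once per time slot (lam = network load)."""
    n_new = rng.poisson(lam)
    return [{"dst": int(rng.choice(destinations)), "ttl": ttl} for _ in range(n_new)]
```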
We denote by \(\mathcal{N}_{i}\) the set of neighbors of node \(i\in\mathcal{N}\), i.e., \((i,j)\in\mathcal{L},\ \forall j\in\mathcal{N}_{i}\), and let \(C,K\in\mathbb{N}\) be the number of wireless channels and of activated base stations, respectively. In each time step, each base station extracts a set of packets from its queue and then decides where to send each packet; that is, the \(i^{th}\) base station has to choose one destination from \(\mathcal{N}_{i}\) for each packet. In our model, users may move or change their base-station associations between two consecutive time slots, which results in a change of the network topology. In addition, the edge delays vary slowly around their initial values to model the changes in the wireless medium.

## 3 Background

In the following section, we introduce the mathematical foundations on which we base our proposed solution. We begin by exploring MDPs and Partially Observed MDPs (POMDPs). Then, we describe the tools from the field of RL and MARL that we use as the basis for our method.

### MDPs and Partially Observed MDPs Preliminaries

We define an MDP as a tuple \(<\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P}>\), where \(\mathcal{S},\mathcal{A}\) and \(\mathcal{R}\) are the sets of environment states, actions and rewards, respectively. In addition, let \(\mathcal{P}\) be the set of probabilities \(\big{\{}\Pr(s^{\prime},r|s,a)\big{\}}_{s^{\prime},s\in\mathcal{S},a\in \mathcal{A},r\in\mathcal{R}}\), where \(\Pr(s^{\prime},r|s,a)\) represents the probability that the agent observes the next state \(s^{\prime}\) and reward \(r\) after being in state \(s\) and performing action \(a\). The agent's policy can be represented as the mapping \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) from the current state to a probability distribution on the action space. A Partially Observed MDP (POMDP) is defined as a tuple \((\mathcal{O},\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P})\)[32]. An agent interacting with the environment at state \(\mathbf{s}\in\mathcal{S}\) receives an observation \(o(s)\in\mathcal{O}\). After observing \(o\), the agent selects an action \(a\in\mathcal{A}\) based on this observation, which means that the agent's policy is now determined by its observations. That is, the policy can be represented as the mapping \(\pi:\mathcal{O}\times\mathcal{A}\rightarrow[0,1]\) from the current observation to a probability distribution on the action space. After performing an action, the remainder of the decision process is the same as in the MDP case.

### RL Preliminaries

We examine a general model-free RL framework [17] applied to a specific MDP, in which the agent interacts with the environment and learns to accomplish the task at hand through a series of discrete actions. In this case, the agent does not assume any prior knowledge of the underlying environment statistics. Figure 2 describes the decision-making process between the agent and the environment: at each discrete time step \(t\), the agent observes the current state of the environment \(S_{t}\) and executes an action \(A_{t}\) according to its policy \(\pi\). Then, the agent receives an immediate reward \(R_{t}\) and the environment transitions, based on the transition kernel \(\Pr(S_{t+1}|S_{t},A_{t})\), to the next state \(S_{t+1}\).
Under this framework, the agent's goal is to select actions that maximize the expected cumulative discounted reward \(G_{t}\), where we define \(G_{t}\) as follows: \[G_{t}=\sum_{n=0}^{\infty}\gamma^{n}R_{t+n+1},\;\gamma\in[0,1). \tag{1}\] The discount factor \(\gamma\) determines how much immediate rewards are favored over more distant rewards. For a general MDP framework, the action-value function \(Q_{\pi}(S_{t},A_{t})\) represents the expected accumulated discounted reward starting from state \(S_{t}\), picking action \(A_{t}\), and following policy \(\pi\) afterwards, whereas the value function \(V_{\pi}(S_{t})\) represents the expected accumulated discounted reward starting from state \(S_{t}\) and following policy \(\pi\)[17, Ch. 3]: \[Q_{\pi}(S_{t},A_{t})\triangleq E_{\pi}[G_{t}|S_{t},A_{t}],\,V_{\pi}(S_{t}) \triangleq E_{\pi}[G_{t}|S_{t}].\] The optimal policy \(\pi^{*}\) is a policy that satisfies \(Q_{*}(S_{t},A_{t})\triangleq Q_{\pi^{*}}(S_{t},A_{t})\geq Q_{\pi}(S_{t},A_{t})\) for any policy \(\pi\) and for every possible state-action pair. By continuously interacting with the environment, the RL agent aims to learn the optimal policy \(\pi^{*}\).

### MARL Preliminaries

MARL also addresses sequential decision-making problems, but with more than one agent involved, as illustrated in Fig. 3. In particular, both the evolution of the system state and the reward received by each agent are influenced by the joint actions of all agents. More intriguingly, each agent has to optimize its own long-term reward, which now becomes a function of the policies of all other agents [33].

Figure 2: Decision process framework.

Figure 3: Multi-agent decision process framework.

As we are interested in optimizing the network performance in this study, it is essential that our agents act cooperatively. In a fully cooperative setting, all agents will tend to share a common reward function, such as \(R_{1}=R_{2}=\cdots=R_{N}=R\). We note that this model may also be referred to as a multi-agent MDP (MMDP) in the AI community [34]. In this model, the value function and Q-function are identical for all agents, which enables the application of single-agent RL algorithms, where all agents are coordinated as one decision maker [33]. Besides the common-reward model, another slightly more general and emerging model for cooperative MARL considers team-average rewards [35, 33]. Specifically, agents are allowed to have different reward functions, which may be kept private for each agent, while the goal of cooperation is to optimize the long-term reward corresponding to the average reward \(R\triangleq\frac{1}{N}\sum_{i=0}^{N-1}R_{i}\). The average-reward model, which allows more heterogeneity among agents, includes the model above as a special case. It also preserves privacy among agents and facilitates the development of decentralized MARL algorithms [35, 33].

## 4 RL for IAB Network Routing

In the following section, we formulate and evaluate the IAB network routing problem using a Multi-Agent POMDP. First, we propose and analyze an approach to solve this problem. Next, we proceed to formulate it mathematically as a Multi-Agent POMDP. We conclude our discussion by describing our evaluation metrics. During the agents' training phase, their experience is formed from tuples of \(<s,a,r>\) that are chained together into time sequences by the next state, \(s^{\prime}\). We can define this experience in two ways. On the one hand, we can select the base station as our decision-maker.
Hence, every base station acts as an independent agent, making its own decisions about how to forward packets. On the other hand, we can select the packet as our decision-maker. In this case, every packet in the network is an agent and makes an independent decision when it reaches the front of a base station's queue. In our work, we employ the packet approach to define our decision process mathematically, due to its convenience. Since our network contains multiple packets, we consider this to be a multi-agent problem.

### Formulating IAB Routing Using a Multi-Agent POMDP Framework

This section outlines the mathematical formulation of our problem. Due to the presence of multiple agents with a joint goal, we formulate the problem of multi-agent routing as a Multi-Agent POMDP [32] with discrete actions. Each agent observes statistics relevant to itself and does not observe the entire global state. We denote the Multi-Agent POMDP by a tuple \((\mathcal{S},\mathcal{O},\mathcal{A}_{1},\mathcal{A}_{2},\ldots,\mathcal{A}_ {N},\mathcal{P},\mathcal{R},N,\gamma)\), where \(N\) is the number of agents and \(\mathcal{S}\) is the environment state space. The environment state \(\mathbf{s}\in\mathcal{S}\) is not fully observable. Instead, agent \(i\) draws a private observation \(\mathbf{o}_{i}\in\mathcal{O}\) that is correlated with \(\mathbf{s}\). \(\mathcal{A}_{i}\) is the action space of agent \(i\), yielding a joint action space \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\times\cdots\times\mathcal{A} _{N}\). \(\Pr(\mathbf{s}^{\prime},r|\mathbf{s},\mathbf{a})\in\mathcal{P}\) is the state-reward transition probability, where \(r\in\mathcal{R}\), \(\mathbf{s},\mathbf{s}^{\prime}\in\mathcal{S}\) and \(\mathbf{a}\in\mathcal{A}\). \(\mathcal{R}\) and \(\gamma\) represent the set of available rewards and the discount factor, respectively. Agent \(i\in\{1,2,\cdots,N\}\) uses a policy \(\pi_{i}:\mathcal{O}\times\mathcal{A}_{i}\rightarrow[0,1]\) to choose actions after drawing observation \(\mathbf{o}_{i}\). After all agents have taken actions, the joint action \(\mathbf{a}\) triggers a state transition \(\mathbf{s}\rightarrow\mathbf{s}^{\prime}\) based on the state transition probability \(\Pr(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})\). In a Multi-Agent POMDP, MARL can be used as a computational tool. Below, we specify each component using the MARL definitions.

* **Observations.** The agent can only observe information relevant to the packet it controls. Specifically, in this work we consider the packet's current node, time to live (TTL), and queue delay. Let \(\mathbf{o}_{i}=(n,t,QueueDelay(i,n))\) be the observation of agent \(i\). The scope of \(n\) in this context is limited to the base stations and the final destination only, i.e., other users cannot relay messages.
* **Actions.** If agent \(i\) is authorized to perform a wireless hop based on \(\mathbf{o}_{i}\), the allowable action set includes all wireless links that are available from node \(n\). For example, \(a_{i}=l_{n,n^{\prime}}\), where \(l_{n,n^{\prime}}\in\mathcal{L}\) represents the link between nodes \(n\) and \(n^{\prime}\). Let \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{N})\) be the joint action. Whenever the agent does not have permission to conduct a wireless hop, its action is defined as null.
* **Transitions**.
Joint action \(\mathbf{a}\) triggers a state transition \(\mathbf{s}\rightarrow\mathbf{s}^{\prime}\) with probability \(\Pr(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})\), which depends on the dynamics of the environment and on the frequency at which the agent is polled to provide an action. In this case, that frequency depends on the duration of each time slot in the system.
* **Reward.** Let \(D_{i}\) represent the immediate delay of the \(i^{th}\) agent. To be more specific, \(D_{i}\) consists of two components that cause delay to the \(i^{th}\) agent: the instant wireless link delay and the delay caused by waiting at the current base-station queue. We define the current agent's delay as the sum of these two terms. If we define the observation as \(\mathbf{o}_{i}=(n,t)\) and the action as \(a_{i}=l_{n,n^{\prime}}\), we can write the immediate delay of the agent as \(D_{i}=-(q_{i}^{n}+d((n,n^{\prime})))\), where \(q_{i}^{n}\) represents the delay induced by the \(n^{th}\) node's queue on the \(i^{th}\) agent. Accordingly, we define the agents' joint reward using the immediate delay representation as follows: \(R\triangleq\frac{1}{N}\sum_{i\in\mathcal{I}}D_{i}\in\mathcal{R}\), where \(\mathcal{I}\) represents the set of active agents.

Let \(\Pi\triangleq\times_{i=0}^{N-1}\pi_{i}\) represent the joint agent policy, and let \(\mathbf{s}_{t}\in\mathcal{S}\) denote the state of the network at time \(t\in\mathbb{N}\). Our objective is to derive an RL-based algorithm that identifies the set of policies \(\{\pi_{i}\}_{i=0}^{N-1}\) maximizing the expected accumulated discounted reward over a finite time horizon, i.e., \[\Pi^{\star}=\underset{\Pi}{\operatorname{argmax}}\left\{\mathbb{E}_{\Pi} \bigg{[}\sum_{t=1}^{T}\gamma^{t-1}R_{t+1}\Big{|}\mathbf{s}_{1}\bigg{]}\right\}. \tag{2}\]

### Evaluation Metrics

A reactive routing scheme is used in a multi-hop IAB network to dynamically minimize packet delay while ensuring that packets reach their destination on time. A packet's path may involve multiple hops, links with low spectral efficiency, and nodes with overloaded queues, all of which may cause delay. There are various metrics to measure or estimate the congestion within the network. In our scenario, we quantify congestion using the following metrics:

* The average packet delay: the time it takes for a packet to travel from its source to its destination.
* The arrival ratio: the percentage of packets that successfully reach their destination.

This multi-objective problem aims to minimize packet latency while simultaneously maximizing the arrival ratio. Therefore, it may exhibit a Pareto front, which means that optimizing w.r.t. one objective leads to a sub-optimal solution w.r.t. another objective [36]. Although the network performance measurements are well defined, an individual agent does not necessarily have access to their signals. For example, the arrival ratio is a metric that depends on the entire network; due to the network's multi-hop structure, at each time slot an individual agent cannot even obtain a good estimate of this value.

## 5 Existing Routing Algorithms

The following section describes existing solutions to the routing problem. Throughout this section, we explore and modify those algorithms to solve the problem of routing in an IAB network.

### Centralized-Routing

In this policy, the algorithm has access to the full network state. During each time step, the next hop is determined by computing the shortest path to the packet's destination.
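As a minimal illustration of this step (ours, not the paper's implementation), the sketch below computes the next hop with `networkx`, folding the queue delay observed at each node into the edge weights so that queueing along the path is accounted for, as discussed next:

```python
import networkx as nx

def centralized_next_hop(graph, queue_delay, src, dst):
    """Next hop on the shortest path, where traversing an edge into node v costs
    the link delay d((u, v)) plus the queue delay currently observed at v."""
    weighted = nx.DiGraph()
    for u, v, data in graph.edges(data=True):
        weighted.add_edge(u, v, weight=data["delay"] + queue_delay.get(v, 0.0))
        weighted.add_edge(v, u, weight=data["delay"] + queue_delay.get(u, 0.0))
    path = nx.shortest_path(weighted, source=src, target=dst, weight="weight")
    return path[1] if len(path) > 1 else dst
```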
By observing the full state while calculating the shortest path, this algorithm also accounts for delays caused by queues at other base stations along the packet's path. Even though this algorithm has high complexity, it improves network performance thanks to its full knowledge of the network's state, and it serves as a benchmark for comparison with decentralized approaches.

### Minimum-Hop Routing

In this policy, the algorithm has access only to the links' delays [9]. Based solely on those link delays, the next hop is determined at each time step by computing the shortest path to the packet's destination. In addition to serving as a benchmark, this algorithm is intended to isolate the influence of queue delay on the resulting delay.

### Back-Pressure Routing

In this policy [10], each base station stores a queue for each possible destination. The algorithm proceeds as follows. In each time slot, each node calculates the difference between each specific queue length and the corresponding queue length located at each of its neighboring nodes (this calculation occurs for each queue). Utilizing this difference for all available destinations, each base station makes two decisions. First, it makes a greedy decision regarding which packets to extract from the queues based on the size of the difference (in other words, the destinations with the greatest difference will be selected). Second, the node determines the packet's next hop via a greedy decision (the neighbor with the highest difference score).

### Q-Routing

In this policy, each node uses an off-policy iterative method termed Q-learning [7]. Q-learning is first used to learn a representation of the network state in terms of Q-values, and these values are then used to make routing decisions. Given this representation of the state, the action \(a\) at node \(n\) is to find the best neighbor node to which to deliver a packet so that the packet reaches its destination with lower latency. The algorithm proceeds as follows. As soon as node \(n\) extracts a packet destined to node \(d\) from its queue, it selects the next-hop decision based on the \(\epsilon\)-greedy policy w.r.t. its Q-values, i.e., \[y=\begin{cases}UniformRandom(\mathcal{A}_{n})&\text{w.p. }\epsilon,\\ \operatorname*{argmax}_{a\in\mathcal{A}_{n}}\hat{Q}^{(n)}(a,d)&\text{w.p. }1- \epsilon.\end{cases} \tag{3}\] Once the packet has been sent from node \(n\) to neighboring node \(y\), node \(y\) sends its best estimate for destination \(d\), \(\max\limits_{a^{\prime}\in\mathcal{A}_{y}}\hat{Q}^{(y)}(a^{\prime},d)\), back to node \(n\) over the ACK signal. This value essentially estimates the time remaining in the packet's journey. The Q-value is then modified by the following formula: \[\hat{Q}^{(n)}_{new}(y,d)=\hat{Q}^{(n)}_{old}(y,d)+\alpha\cdot\big{(}r+\gamma \cdot\max\limits_{a^{\prime}\in\mathcal{A}_{y}}\hat{Q}^{(y)}(a^{\prime},d)- \hat{Q}^{(n)}_{old}(y,d)\big{)}. \tag{4}\]

### Full Echo Q-Routing

This algorithm is a variant of the Q-Routing algorithm described above and addresses the well-known trade-off between exploration and exploitation [7]; similarly to Q-Routing, it utilizes the iterative off-policy method Q-learning. To sidestep the exploration-exploitation trade-off, this algorithm executes the following procedure.
As soon as node \(n\) extracts a packet destined to node \(d\) from its queue, it selects the next-hop decision based on the greedy policy w.r.t. its Q-values, \(y=\operatorname*{argmax}_{a\in\mathcal{A}_{n}}\hat{Q}^{(n)}(a,d)\). Once the packet has been sent from node \(n\) to neighboring node \(y\), node \(n\) receives information from all of its neighboring nodes, which transmit their respective best estimates for packets destined to node \(d\), i.e., \(\max\limits_{a^{\prime}\in\mathcal{A}_{y}}\hat{Q}^{(y)}(a^{\prime},d),\forall y \in\mathcal{N}_{n}\). Using those estimates, node \(n\) modifies its Q-values for each neighbor by means of the following formula: \[\hat{Q}^{(n)}_{new}(y,d)=\hat{Q}^{(n)}_{old}(y,d)+\alpha\cdot\big{(}r+\gamma \cdot\max\limits_{a^{\prime}\in\mathcal{A}_{y}}\hat{Q}^{(y)}(a^{\prime},d)- \hat{Q}^{(n)}_{old}(y,d)\big{)},\forall y\in\mathcal{N}_{n}. \tag{5}\]

### Hybrid Routing

In this algorithm, a Q-Routing agent is trained simultaneously with an on-policy iterative method [8]. In this case, Q-learning is used to learn a representation of the network state in terms of Q-values, and Hybrid Routing then uses these values to update the agent's policy parameters via the Actor-Critic method. As soon as node \(n\) extracts a packet destined to node \(d\) from its queue, it selects its next action by sampling its policy distribution, \(\pi_{n}(d,\cdot;\mathbf{\theta}_{n})\). Then, it sends the packet to one of its neighboring nodes \(y\). The corresponding Q-value is updated based on the Q-Routing update rule, and the policy parameters \(\mathbf{\theta}_{n}\) are then updated according to the following formula: \[\mathbf{\theta}_{n}\leftarrow\mathbf{\theta}_{n}+\alpha\cdot\nabla_{\mathbf{\theta}_{n}} \log\pi_{n}(y,d;\mathbf{\theta}_{n})\cdot\big{(}r+\gamma\max_{a^{\prime}\in\mathcal{ A}_{y}}\hat{Q}^{(y)}(a^{\prime},d)-\max_{a\in\mathcal{A}_{n}}\hat{Q}^{(n)}(a,d) \big{)}.\]

## 6 Our Proposed Multi-Agent Relational A2C Routing Algorithm

In this study, we are primarily interested in exploring a decentralized solution to the problem of routing in an IAB network. In the following section, we present our novel solutions to this issue. First, we discuss the motivation behind our proposed solution and the challenges it seeks to address. Following this, we describe the main characteristics of our solution. Finally, we present three different training paradigms within our approach, ranging from fully decentralized to centralized training. As a first step, we attempted to solve the above task using traditional RL techniques such as Q-Routing [7], Full Echo Q-Routing [7], and Hybrid Routing [8]; a sketch of their update rules appears below. These methods have not achieved a high degree of generalization due to the challenges posed by the above task, such as partial observability, a large state space, and multi-agent optimization. Essentially, these methods assume that each agent acts independently and does not share its experience with other agents, resulting in performance degradation due to insufficient correlation between reward signals, the network state, and other agents' policies. Our proposed solution addresses these issues by formulating this problem as a Multi-Agent POMDP, as described in Sec. 4.1. We define our algorithm's objective in the same manner as described in Eq. (2) to encourage cooperation among the different agents. Furthermore, we leverage the homogeneity between destinations in order to support a varying number of users, as we might encounter in a real-world scenario.
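As promised above, the following is a minimal tabular sketch of the baseline update rules in Eqs. (3)-(5) and the Hybrid Routing actor step. This is our own simplification: a tabular softmax parametrization over a node's neighbors (for which \(\nabla_{\theta}\log\pi(y)=\mathrm{onehot}(y)-\pi\)) stands in for the original parametrizations.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_routing_select(Q_n, neighbors, d, eps=0.1):
    """Eq. (3): epsilon-greedy next-hop choice at node n for destination d."""
    if rng.random() < eps:
        return neighbors[rng.integers(len(neighbors))]
    return max(neighbors, key=lambda y: Q_n[(y, d)])

def q_routing_update(Q_n, y, d, r, best_estimate_y, alpha=0.1, gamma=0.99):
    """Eq. (4): best_estimate_y = max_{a'} Q^(y)(a', d), returned over the ACK."""
    Q_n[(y, d)] += alpha * (r + gamma * best_estimate_y - Q_n[(y, d)])

def hybrid_actor_update(theta_nd, y_idx, advantage, eta=0.01):
    """Hybrid Routing step: theta <- theta + eta * grad log pi(y) * advantage,
    with advantage = r + gamma * max_a' Q^(y)(a', d) - max_a Q^(n)(a, d)."""
    z = theta_nd - theta_nd.max()
    pi = np.exp(z) / np.exp(z).sum()
    grad_log = -pi
    grad_log[y_idx] += 1.0
    return theta_nd + eta * advantage * grad_log
```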
To leverage this homogeneity, we categorize destinations into groups based on their _relational_ base-station association, i.e., which base station they are currently connected to. Through this categorization, policy and value functions are shared within each group. As part of this process, each agent uses an iterative online on-policy method called Advantage Actor-Critic (A2C) [17, Ch. 13]. According to this scheme, the actor decides which action to take, and the critic informs the actor of how effective the action was and how it should be adjusted. Based on the current observation, the critic produces an estimated representation of the network state, and the actor uses this information to update its policy. Every agent in the network represents its own strategy through its actor. To contend with the large size of the state space, we propose using neural networks to approximate both the actor and the critic. Consequently, since we categorize agents according to their relational association with base stations, we refer to this algorithm as _Relational A2C_. In this section, we present three algorithms based on our method, each in its own subsection. The first algorithm is Relational A2C, in which training is centralized; specifically, all agents use the same global actor and global critic. Figure 4(a) shows how these are updated based on information from all of the IAB stations. The second algorithm is the Dec-Relational A2C algorithm, in which each base station has a local actor and critic that are trained based on the local base-station experience. Figure 4(b) illustrates how the information from the base station is used to update these models. Lastly, the third algorithm is based on federated learning [37], and we refer to it as Fed-Relational A2C. Most of the time, this algorithm operates in a decentralized manner, but periodically it aggregates the local model weights into a global model via a weighted average, which is then broadcast to the base stations.

### Relational A2C

In this section, we present the general idea behind our method and explain our centralized training solution. As a first step, we unify all packets destined to the same destination under the same policy. Next, for any possible graph topology, denoted \(\mathcal{L}^{\star}\triangleq\{\mathcal{L}|\mathcal{L}\textit{ is a set of graph edges that forms a network topology}\}\), we define a categorization mapping \(H:\mathcal{N}\times\mathcal{L}^{\star}\rightarrow\{0,1\}^{K}\) that maps agents destined to nodes \(n\in\mathcal{N}\) to a given group. In this case, \(K\) and \(|\{0,1\}^{K}|\) represent the numbers of available base stations and of different groups into which our agents are divided, respectively. We define the categorization mapping \(H\) as follows: \[H(i,\mathcal{L})[j]\triangleq\begin{cases}1,&\text{if }(i,j)\in\mathcal{L},\\ 0,&\text{if }(i,j)\notin\mathcal{L}.\end{cases}\] It is noteworthy that, according to this mapping, destinations are grouped according to their relational base-station associations; Fig. 5 demonstrates the functionality of this mapping. In addition, the division of our system's agents into different groups is intended to result in a significant reduction in the number of agents. As a result, multiple agents can cooperate under the same goal and share information with similar groups, thereby suppressing the non-stationarity issue associated with multi-agent systems [38].

Figure 4: Illustration of centralized and decentralized training paradigms for our proposed method.
Figure 5: An illustration of the categorization mapping \(H\) for a scenario with \(K=4\).

Next, we present the centralized training paradigm, namely _Relational A2C_ (Fig. 4(a)). In view of the fact that we are utilizing neural networks, we would like to prevent bias in our input and to categorize agents in a meaningful way. Thus, we propose encoding the observation of the \(n^{th}\) agent, under the assumption that it is located at the \(i^{th}\) node, as follows: \[\mathbf{o}_{n}\triangleq[oneHot(i),t,H(destination(n),\mathcal{L}),QueueDelay(n,i)], \tag{6}\] where we define \(oneHot:\mathcal{N}\rightarrow\{0,1\}^{K}\) as follows: \[oneHot(n)[j]\triangleq\begin{cases}1,&j=n,\\ 0,&\text{otherwise}.\end{cases}\] The last variables, \(t\) and \(QueueDelay(n,i)\), represent the TTL and the queue delay of agent \(n\) at node \(i\), respectively. Let \(\Pi(\cdot;\mathbf{\theta}),\hat{V}_{\Pi}(\cdot;\mathbf{w})\), \(\mathbf{\theta}\in\mathbb{R}^{L_{1}},\mathbf{w}\in\mathbb{R}^{L_{2}},L_{1},L_{2}\ll|\mathcal{A}|\cdot|\mathcal{S}|\), be our actor and critic representations, respectively, as depicted in Fig. 6. Although we are only provided with one actor and one critic for all groups, we are still able to differentiate between the groups thanks to the mapping \(H\). Given this representation, and assuming that packet \(n\) is located at the \(i^{th}\) base station, the packet's strategy determines the next-hop decision by sampling the corresponding actor distribution \(\Pi(\mathbf{o}_{n};\mathbf{\theta})\). The \(i^{th}\) base station then forwards the packet to the selected next hop. After that, the base station receives feedback through the ACK signal containing the critic's estimate at the next hop and the agent's instant delay, \(\hat{V}_{\Pi}(\mathbf{o}^{\prime}_{n};\mathbf{w})\) and \(D_{n}\), respectively. The critic's value represents the next node's estimate of the time remaining in the packet's journey to the agent's destination when the agent follows policy \(\Pi\). Next, we regard \(\hat{V}_{\Pi}(\mathbf{o}_{t,n};\mathbf{w})\) as the estimate of the \(n^{th}\) agent's expected discounted delay given the current observation, i.e., \(\mathbb{E}[\sum_{k=0}^{T-t-1}\gamma^{k}\cdot D_{n,t+1+k}|\mathbf{o}_{t,n},\Pi]\). Based on the Bellman equation for a single-agent scenario applied to the \(n^{th}\) agent, we derive the following equality: \[\hat{V}_{\Pi}(\mathbf{o}_{t,n};\mathbf{w})=\mathbb{E}[D_{n,t+1}+\gamma\cdot \hat{V}_{\Pi}(\mathbf{o}_{t+1,n};\mathbf{w})|\mathbf{o}_{t,n},\Pi].\]

Figure 6: Relational A2C neural architecture located at the \(i^{th}\) base station.

Hence, we propose to update the critic by minimizing the estimation error. This is achieved by minimizing the following objective \(\mathcal{T}\) w.r.t. \(\mathbf{w}\): \[\mathcal{T}(\Pi)=\mathbb{E}\bigg{[}\bigg{(}\hat{V}_{\Pi}(\mathbf{O}_{n}; \mathbf{w})-(D_{n}+\gamma\cdot\hat{V}_{\Pi}(\mathbf{O}_{n}^{\prime};\mathbf{w}) )\bigg{)}^{2}\bigg{|}\Pi\bigg{]}, \tag{7}\] where \(\mathbf{O}_{n},D_{n}\) and \(\mathbf{O}_{n}^{\prime}\) represent the random variables of the observation, instant delay and next observation of a random agent. We update the critic's parameters using the gradient descent method: \[\mathbf{w}\leftarrow\mathbf{w}-\alpha\cdot\nabla_{\mathbf{w}}\mathcal{T}(\Pi),\] for some learning rate \(\alpha\in(0,1)\).
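As an illustration of the mapping \(H\) and the encoding in Eq. (6), here is a small NumPy sketch; the function names and the undirected-edge check are our own choices:

```python
import numpy as np

def one_hot(i, K):
    v = np.zeros(K, dtype=np.float32)
    v[i] = 1.0
    return v

def categorize(dest, edges, K):
    """H(dest, L)[j] = 1 iff destination `dest` is currently associated with
    base station j; the graph is undirected, so both orientations are checked."""
    return np.array([1.0 if ((dest, j) in edges or (j, dest) in edges) else 0.0
                     for j in range(K)], dtype=np.float32)

def encode_observation(i, ttl, dest, edges, queue_delay, K):
    """Eq. (6): o_n = [oneHot(i), t, H(destination(n), L), QueueDelay(n, i)]."""
    return np.concatenate([one_hot(i, K), [ttl], categorize(dest, edges, K),
                           [queue_delay]]).astype(np.float32)
```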
Next, we seek to identify a set of policy parameters \(\mathbf{\theta}\) that maximizes the expected cumulative discounted reward (refer to the definition in (2)). To do so, we derive the following lemma.

**Lemma 1**.: _The gradient of the objective \(J(\Pi)\) w.r.t. \(\mathbf{\theta}\) is proportional to the following estimator:_

\[\nabla_{\mathbf{\theta}}J(\Pi)\propto\mathbb{E}\bigg{[}\sum_{n^{\prime}\in \mathcal{I}_{t}}\nabla_{\mathbf{\theta}}\log\Pi(a_{t,n^{\prime}}|\mathbf{o}_{t,n^ {\prime}};\mathbf{\theta})\big{(}Q_{\Pi}(\mathbf{s}_{t},\mathbf{a}_{t})-\frac{1}{| \mathcal{I}_{t}|}\sum_{n\in\mathcal{I}_{t}}\hat{V}_{\Pi}(\mathbf{o}_{t,n};\mathbf{ w})\big{)}\bigg{]},\]

_where \(\mathcal{I}_{t}\) represents the set of agents that were active at timestep \(t\)._

Proof.: The full proof is attached in Appendix B.

According to its definition, the action-value function \(Q_{\Pi}(\mathbf{s}_{t},\mathbf{a}_{t})\) represents the average delay of all packets while the network is at state \(\mathbf{s}_{t}\), takes action \(\mathbf{a}_{t}\), and follows the policy \(\Pi\) thereafter. Our proposal is to estimate the average network delay \(\hat{Q}_{\Pi}(\mathbf{s}_{t},\mathbf{a}_{t})\) using our active agents in order to accommodate partial observability, i.e., \[\hat{Q}_{\Pi}(\mathbf{s}_{t},\mathbf{a}_{t})=\frac{1}{|\mathcal{I}_{t}|}\sum_ {n\in\mathcal{I}_{t}}\hat{Q}_{\Pi}(\mathbf{o}_{t,n},a_{t,n})=\frac{1}{| \mathcal{I}_{t}|}\sum_{n\in\mathcal{I}_{t}}D_{n,t+1}+\gamma\cdot\hat{V}_{\Pi} (\mathbf{o}_{t+1,n};\mathbf{w}).\] Following that, we can estimate the gradient of the objective \(J(\Pi)\) w.r.t. \(\mathbf{\theta}\) at the \(t^{th}\) time-step using the following sample: \[\nabla_{\mathbf{\theta}}\hat{J}(\Pi)=\sum_{i\in\mathcal{I}_{t}}\bigg{(}\frac{1}{| \mathcal{I}_{t}|}\sum_{n\in\mathcal{I}_{t}}D_{n,t+1}+\gamma\cdot\hat{V}_{\Pi} (\mathbf{o}_{t+1,n};\mathbf{w})-\hat{V}_{\Pi}\big{(}\mathbf{o}_{t,n};\mathbf{ w}\big{)}\bigg{)}\cdot\nabla_{\mathbf{\theta}}\text{log}\Pi(a_{t,i}|\mathbf{o}_{t,i}; \mathbf{\theta}). \tag{8}\] This estimate allows us to apply the stochastic gradient ascent method in order to find a locally optimal solution. That is, at each time step \(t\), the parameter \(\mathbf{\theta}\) is updated by \[\mathbf{\theta}\leftarrow\mathbf{\theta}+\eta\nabla_{\mathbf{\theta}}\hat{J}(\Pi),\] for some learning rate \(\eta\in(0,1)\). By using this method, we expect to increase the cooperation between the different agents through the optimization of the joint objective (Fig. 4(a) illustrates this procedure). The steps of the proposed Relational A2C algorithm are summarized in Algorithm 1.

### Dec-Relational A2C

In this section we extend our method to support a decentralized training paradigm, namely _Dec-Relational A2C_. For decentralized training, we propose decoupling the neural network across the different base stations so that each base station uses its own set of weights for both actor and critic, as depicted in Fig. 6. With a dedicated neural network for each base station, the base-station ID in the observation becomes redundant, so we drop it, leading to a new observation \(\mathbf{o}_{n}=[t,H(destination(n),\mathcal{L}),QueueDelay(n)]\), where \(QueueDelay(n)\) represents the amount of time the \(n^{th}\) agent has waited at the current base-station queue.
Let \(\hat{\Pi}_{k}(\cdot;\boldsymbol{\theta}_{k}),\hat{V}_{\Pi}(\cdot;\mathbf{w}_{k})\), \(\boldsymbol{\theta}_{k}\in\mathbb{R}^{L_{1}},\mathbf{w}_{k}\in\mathbb{R}^{L_{2}}\), be our actor and critic representations at the \(k^{th}\) base station, respectively. In addition, let \(k,n,k^{\prime},\mathbf{o}_{n},\mathbf{o}_{n}^{\prime}\) be our current node, agent, next-node decision, observation and next-step observation, respectively. Based on the Bellman equation, we propose to modify the temporal difference estimate as follows: \[\delta_{k,n,k^{\prime}}=D_{n}+\gamma\cdot\hat{V}_{\Pi}(\mathbf{o}_{n}^{\prime} ;\mathbf{w}_{k^{\prime}})-\hat{V}_{\Pi}(\mathbf{o}_{n};\mathbf{w}_{k}), \tag{9}\] such that there is a mutual update between the different base stations through the ACK signal. Essentially, through this modification we expect the next base station's estimate, combined with the instant delay, to represent the estimated path delay from the current base station for this agent. Next, let \(\mathcal{I}_{k}\) represent the set of agents who took a next-hop decision from base station \(k\). The critic is then updated at each base station \(k\) by minimizing the following objective \(\mathcal{T}_{k}\) w.r.t. \(\mathbf{w}_{k}\): \[\mathcal{T}_{k}(\hat{\Pi}_{k})=\mathbb{E}\bigg{[}\sum_{n\in\mathcal{I}_{k}} \delta_{k,n,k^{\prime}_{n}}^{2}\bigg{|}\hat{\Pi}_{k}\bigg{]}, \tag{10}\] where we update the critic's parameters using the gradient descent method: \[\mathbf{w}_{k}\leftarrow\mathbf{w}_{k}-\alpha\cdot\nabla_{\mathbf{w}_{k}} \mathcal{T}_{k}(\hat{\Pi}_{k}),\] for some learning rate \(\alpha\in(0,1)\). As a next step, we aim to approximate the objective gradient w.r.t. \(\boldsymbol{\theta}_{k}\).

**Lemma 2**.: _The gradient of the objective \(J(\Pi)\) w.r.t. \(\boldsymbol{\theta}_{k}\) is proportional to the following estimator:_

\[\nabla_{\boldsymbol{\theta}_{k}}J(\Pi)\propto\mathbb{E}\bigg{[}\sum_{n\in \mathcal{I}_{k,t}}\nabla_{\boldsymbol{\theta}_{k}}\log\Pi(a_{t,n}|\boldsymbol{ o}_{t,n};\boldsymbol{\theta}_{k})\big{(}Q_{\Pi}(\boldsymbol{s}_{t}, \boldsymbol{a}_{t})-\frac{1}{|\mathcal{I}_{k,t}|}\sum_{n^{\prime}\in\mathcal{I }_{k,t}}\hat{V}_{\Pi}(\boldsymbol{o}_{t,n^{\prime}};\mathbf{w}_{k})\big{)}\bigg{]},\]

_where \(\mathcal{I}_{k,t}\) represents the set of agents that were active at the \(k^{th}\) base station at timestep \(t\)._

Proof.: The full proof is attached in Appendix C.

To support decentralized training, we first propose a local estimate of the average network delay, \(\hat{Q}_{\Pi}^{(k)}(\mathbf{s}_{t},\mathbf{a}_{t})=\frac{1}{|\mathcal{I}_{k,t}| }\sum_{n\in\mathcal{I}_{k,t}}D_{n,t+1}+\gamma\cdot\hat{V}_{\Pi}(\mathbf{o}_{t+ 1,n};\mathbf{w}_{a_{t,n}})\). Using this estimate and following Lemma 2, we can estimate the gradient of the objective \(J(\Pi)\) w.r.t. \(\mathbf{\theta}_{k}\) using the following sample: \[\nabla_{\mathbf{\theta}_{k}}\hat{J}(\hat{\Pi}_{k})=\sum_{n\in\mathcal{I}_{k}} \nabla_{\mathbf{\theta}_{k}}\text{log}\big{(}\hat{\Pi}_{k}(a_{n}|\mathbf{o}_{n}; \mathbf{\theta}_{k})\big{)}\big{(}\frac{1}{|\mathcal{I}_{k}|}\sum_{n^{\prime}\in \mathcal{I}_{k}}\delta_{k,n^{\prime},a_{n^{\prime}}}\big{)}, \tag{11}\] where we neglect the time indexing for notational simplicity. Afterwards, we update our agents' policies using the gradient ascent method: \[\mathbf{\theta}_{k}\leftarrow\mathbf{\theta}_{k}+\eta\nabla_{\mathbf{\theta}_{k}}\hat{J}( \hat{\Pi}_{k}),\] for some learning rate \(\eta\in(0,1)\). We term this method _Dec-Relational A2C_.
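For concreteness, the following is a minimal PyTorch sketch of this per-base-station update, i.e., the TD error of Eq. (9) with the critic and actor steps of Eqs. (10)-(11). The module interfaces, tensor shapes, and optimizer plumbing are our own assumptions rather than the authors' implementation:

```python
import torch

def dec_relational_a2c_step(actor_k, critic_k, critics, opt_actor, opt_critic,
                            obs, actions, delays, next_obs, next_bs, gamma=0.99):
    """One update at base station k over its active agents I_k.
    obs/next_obs: [N, obs_dim]; actions: [N] (long); delays: [N] = D_n;
    next_bs: [N] next-hop base-station indices (critics[j] is BS j's critic,
    whose value arrives over the ACK signal)."""
    v = critic_k(obs).squeeze(-1)                         # V(o_n; w_k)
    with torch.no_grad():                                 # ACK feedback, Eq. (9)
        v_next = torch.stack([critics[j](o.unsqueeze(0)).squeeze()
                              for j, o in zip(next_bs.tolist(), next_obs)])
    td = delays + gamma * v_next - v                      # delta_{k,n,k'}
    critic_loss = (td ** 2).sum()                         # Eq. (10)
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
    # Actor: the shared advantage is the TD error averaged over I_k, Eq. (11).
    log_pi = torch.log_softmax(actor_k(obs), dim=-1)
    log_pi_a = log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
    actor_loss = -(log_pi_a.sum() * td.detach().mean())
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```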
By following this method we are able to achieve fully decentralized training of our network (Fig. 4(b) illustrates this procedure). The steps of this algorithm are summarized in Algorithm 2 below:

```
1: Initialize Learning Rates \(\eta,\alpha\).
2: for Base Station \(k=0,1,\ldots,K-1\) do
3:   Initialize Actor and Critic weights \(\mathbf{\theta}_{k},\mathbf{w}_{k}\).
4: end for
5: for time step \(t=1,2,...,T\) do
6:   for Base Station \(k=0,1,\ldots,K-1\) do
7:     Extract active agents \(\mathcal{I}_{k,t}\).
8:     for Agent \(n\in\mathcal{I}_{k,t}\) do
9:       Observe \(\mathbf{o}_{t,n}\).
10:      \(a_{t,n}=a,\text{ w.p. }\hat{\Pi}_{k}(a|\mathbf{o}_{t,n};\mathbf{\theta}_{k}),\forall a\in\mathcal{A}_{n,t}\).
11:    end for
12:  end for
13:  Execute actions.
14:  for Base Station \(k=0,1,\ldots,K-1\) do
15:    for Agent \(n\in\mathcal{I}_{k,t}\) do
16:      Obtain the delay \(D_{n,t+1}\) and next observation \(\mathbf{o}_{t+1,n}\) associated with the \(n^{th}\) agent.
17:      Set the Temporal Difference Error:
18:      \(\delta_{t,n}=D_{n,t+1}+\gamma\cdot\hat{V}(\mathbf{o}_{t+1,n};\mathbf{w}_{a_{t,n}})-\hat{V}\big{(}\mathbf{o}_{t,n};\mathbf{w}_{k}\big{)}\).
19:    end for
20:  end for
21:  Update critics' and actors' parameters based on Eq. 10 and Eq. 11:
22:  for Base Station \(k=0,1,\ldots,K-1\) do
23:    \(\mathbf{\theta}_{t+1,k}\leftarrow\mathbf{\theta}_{t,k}+\eta\cdot\bigg{(}\sum_{n\in\mathcal{I}_{k,t}}\nabla_{\mathbf{\theta}_{k}}\text{log}\big{(}\hat{\Pi}_{k}(a_{t,n}|\mathbf{o}_{t,n};\mathbf{\theta}_{t,k})\big{)}\cdot\big{(}\frac{1}{|\mathcal{I}_{k,t}|}\sum_{n^{\prime}\in\mathcal{I}_{k,t}}\delta_{t,n^{\prime}}\big{)}\bigg{)}\).
24:    \(\mathbf{w}_{t+1,k}\leftarrow\mathbf{w}_{t,k}-\alpha\cdot\nabla_{\mathbf{w}_{k}}\bigg{(}\frac{1}{|\mathcal{I}_{k,t}|}\sum_{n\in\mathcal{I}_{k,t}}(\delta_{t,n})^{2}\bigg{)}\).
25:  end for
26: end for
```
**Algorithm 2** The Dec-Relational A2C Algorithm for Simultaneously Optimizing the Routing Strategy

### Fed-Relational A2C

To conclude, we propose another version that combines the features of the previous two. In this approach, the weights of the network are updated using a federated learning scheme [37]; therefore, we refer to this method as _Fed-Relational A2C_. The agents constantly update their weights in a decentralized manner, similar to the decentralized approach, while periodically reporting their weights to a central controller; based on the weights they submit, the controller calculates a new set of shared weights. The final step is to relay the updated weights back to the base stations. Thus, this method sits somewhere in the middle between the previous centralized and decentralized approaches. Additionally, since weights are shared periodically among the different base stations, the observations are the same as in the relational approach. The steps of this algorithm are summarized in Algorithm 3 below:

```
1: Initialize Actor and Critic weights and learning rates \(\mathbf{\theta},\mathbf{w},\eta,\alpha\).
2: Initialize federated update period \(\tau\).
3: for BS \(k=0,1,\ldots,K-1\) do \(\triangleright\) Broadcast Actor and Critic weights.
4:   \(\mathbf{w}_{k}\leftarrow\mathbf{w}\), \(\mathbf{\theta}_{k}\leftarrow\mathbf{\theta}\).
5: end for
6: for time step \(t=1,2,\ldots,T\) do
7:   if \(t\) MOD \(\tau==0\) then
8:     Collect Actor and Critic weights from each base station, \(\{\mathbf{w}_{k}\}_{k=0}^{K-1},\{\mathbf{\theta}_{k}\}_{k=0}^{K-1}\), together with the number of updates since the last global update, \(\{n_{k}\}_{k=0}^{K-1}\).
9:\(\textsc{FederatedUpdate}(\{\mathbf{w}_{k}\}_{k=0}^{K-1},\{\mathbf{\theta}_{k}\}_{k=0}^{K-1},K,\{n_{k}\}_{k=0}^{K-1})\).
10:endif
11: Follow decision and training phases of Decentralized Algorithm 2. \(\triangleright\) Lines 6-25.
12:endfor
13:procedure FederatedUpdate(CriticWeights,ActorWeights,K,NumberOfUpdates)
14:\(\{\mathbf{w}_{k}\}_{k=0}^{K-1}\leftarrow\textsc{CriticWeights}\), \(\triangleright\) Unpack the Critic Weights.
15:\(\{\mathbf{\theta}_{k}\}_{k=0}^{K-1}\leftarrow\textsc{ActorWeights}\), \(\triangleright\) Unpack the Actor Weights.
16:\(\{n_{k}\}_{k=0}^{K-1}\leftarrow\textsc{NumberOfUpdates}\) \(\triangleright\) Unpack each BS's update number.
17:\(\mathbf{w}\leftarrow\sum_{k=0}^{K-1}\frac{n_{k}}{\sum_{k=0}^{K-1}n_{k}}\mathbf{w}_{k}\), \(\mathbf{\theta}\leftarrow\sum_{k=0}^{K-1}\frac{n_{k}}{\sum_{k=0}^{K-1}n_{k}}\mathbf{\theta}_{k}\), \(\triangleright\) Apply FedAvg Update Rule.
18:for BS \(k=0,1,\ldots,K-1\) do \(\triangleright\) Broadcast the newly calculated weights to the agents
19:\(\mathbf{w}_{k}\leftarrow\mathbf{w}\), \(\mathbf{\theta}_{k}\leftarrow\mathbf{\theta}\).
20:endfor
21:endprocedure
```
**Algorithm 3** The Federated-Relational Advantage Actor Critic (Fed-Relational A2C) Algorithm for Simultaneously Optimizing the Routing Strategy

It should be mentioned that in a general POMDP setting solved using MARL techniques, as considered here, a common problem is the existence of multiple local optima [39] within the joint policy space, which may lead to convergence to a less desirable, locally optimal strategy [39]. As a result, convergence to the optimal policy is not guaranteed theoretically. In practice, however, this class of methods achieves very good performance even in POMDP models with very large state spaces. For example, the work in [40] developed an Actor Critic algorithm for teaching multiple agents how to play StarCraft games directly from screen images, and achieved very good performance at various stages.

## 7 Experiments and Insights

In this section, we describe our main research insights and their associated experiments. First, we evaluate several connectivity scenarios in order to demonstrate the importance of network routing in an IAB network. Next, we study the impact of network load as well as the impact of mobility. This analysis shows that high loads affect network performance significantly in the case of mobility for most of the routing algorithms. We then analyze how changes in online traffic patterns and node failures affect the network. As a result of this experiment, we gained valuable insights into how the proposed routing can adapt to and recover from online changes. Lastly, we analyze the convergence times of the different routing algorithms.

As part of these experiments, we study and evaluate various routing methods within an IAB network, as discussed in Section V. In particular, the performance of Relational A2C is compared to six other algorithms: Centralized Routing, Minimum-Hop Routing, Back-pressure Routing, Q-Routing, Full Echo Q-Routing, and Hybrid Routing. We refer the reader to Section V for a detailed explanation of each algorithm. To conduct these experiments, we have developed a gym-based simulated IAB environment [41]. The simulation takes place over a 2-dimensional grid\({}^{1}\). Tables 1 and 2 describe the network and algorithm hyperparameters, respectively. Furthermore, Relational A2C was implemented as described in Algorithms 1, 2, and 3 in Section VI, with all methods based on the A2C network architecture shown in Fig. 6.
In the following, all the metrics we mentioned in Section IV are used as the figures of merit for evaluating the performance of the different algorithms.

Footnote 1: To reproduce our results, we refer the reader to our code: [https://github.com/Shahaf-Yamin/Routing-In-IAB-Networks](https://github.com/Shahaf-Yamin/Routing-In-IAB-Networks)

### The Importance of Routing in an IAB network

In this section, we examine the importance of routing in an IAB network by measuring network performance in various connectivity scenarios. Our study emphasizes the importance of routing algorithms by illustrating that for high connectivity, where routing is needed, performance is much higher than for low connectivity, where routing is not needed due to the limited number of paths. In order to change the network's connectivity, we modified the constraints that dictate the number of parents each IAB node / user may have and the number of devices (IAB children / users) that each IAB can support.

Using the results illustrated in Fig. 9, we can conclude that using a single-path topology (Fig. 8a) yielded the lowest arrival ratio and the longest delay. We can also deduce that higher network connectivity is highly recommended to support 5G's expected requirements of high load and low latency. Moreover, while higher connectivity may have some benefits, we must also consider that a higher level of connectivity may result in more interference between nearby base stations, which may contradict our assumption regarding interference. In light of this conclusion, the purpose of this study is to examine how using routing can improve network performance, while only taking partial connectivity into account.

Figure 8: Visualization of different connectivity scenarios.

Figure 9: Performance illustration of different network connectivity cases.

### The Influence of Network Load

This section examines how network load affects the performance of the different algorithms in various mobility scenarios. That is, we evaluate each algorithm's resilience under different loads for various mobility scenarios. From these experiments, we conclude that an increase in load or mobility affects network performance significantly for most of the proposed routing algorithms. We now describe our experiments and their corresponding empirical results in detail.

To change the network load, the parameter \(\lambda\) of the Poisson distribution has been modified. This parameter indicates the average number of packets generated in each time slot by the IABs. To modify \(\lambda\), we scanned various loads successively from bottom to top, and then from top to bottom. The results presented are an average of 10 different measurements for each load across five different network topologies.

First, we evaluate the performance of our algorithms over a static topology, which means that users cannot change their base-station association or location. Next, we increase the users' speed to \(3\frac{m}{sec}\), which means that users can change their location and their base-station association at any time. We refer to this scenario as a dynamic topology and evaluate our algorithms' performance. In addition, we investigate how the speed of UEs affects the performance of the different algorithms. Furthermore, since the arrival ratio provides no additional information in these experiments, we present only the average delay metric. The following sections provide results and insights from those experiments.
#### 7.2.1 Static Topology

This experiment evaluated the algorithms' performance under a static network topology and changing load. In Fig. 10(a), we present the performance of the algorithms. Based on these results, we can conclude that all three versions of the Relational A2C algorithm outperformed the other algorithms, despite acting in a decentralized manner. Moreover, these results also revealed that using exploration in conjunction with exploitation (Full-Echo Q-Routing) greatly improved performance at low to medium loads when compared with Q-Routing, while the two achieved similar performance at higher loads.

As a further interesting phenomenon, we observe that Hybrid Routing improves performance when compared with traditional Q-Routing under low loads, but degrades performance under higher loads. We explain this phenomenon by noting that higher loads require better coordination among agents, as making an incorrect routing decision while the network is already congested will probably result in packet loss. Our conclusion is that since each agent learns an independent stochastic policy as part of the algorithm, it is less effective at handling non-stationary problems than traditional Q-Routing algorithms. In our next experiment, we allow UEs to travel and change their base-station association, which will further increase the non-stationarity of the problem. Additionally, as network load increases, we observed performance degradation of the Shortest-Path algorithm. Our explanation for this phenomenon is that queue delay does not have a significant influence on performance at low loads, but as the load increases, it becomes more significant.

Figure 10: Static topology and dynamic topology - average delay performance for different routing algorithms under different loads.

#### 7.2.2 Dynamic Topology

This experiment evaluated the algorithms' performance under a dynamic network topology and changing load. Based on the results illustrated in Fig. 10, we can determine that, although acting in a decentralized manner, all versions of the Relational A2C algorithm managed to achieve superior performance to the other algorithms in this scenario. Further, Hybrid Routing does not perform well in this non-stationary environment, as we observe further degradation in performance when compared to the static scenario; at low load it is even less effective than Q-Routing. Moreover, when compared with the static scenario, Full-Echo Q-Routing exhibits performance degradation at medium to high loads. It performs worse in this range than traditional Q-Routing. Our explanation for this phenomenon is that when using Full Echo Q-Routing, frequent topology changes may cause multiple modifications to the routing policy, which results in increased instability and longer routing times as loads increase. This phenomenon also coincides with the observations made in [7], in which the authors applied those algorithms to a grid topology.

The significant performance gap between all versions of Relational A2C and the traditional hybrid algorithm supports our claim that, by following our proposed algorithms, agents are able to cooperate more effectively and increase the stability of our routing policy, showing that agents that cooperate achieve better outcomes than agents that act selfishly. Thus, these results demonstrate that optimizing the joint goal and solving the coordination problem between agents significantly improves performance for both static and dynamic topologies.
When comparing the performance of static versus dynamic topologies as illustrated in Fig. 10, we observe that all tabular RL baselines suffer from performance degradation when users are permitted to move and change their base-station association. Based on these results, which indicate a significant difference between static and dynamic topologies, we propose the following experiment to examine the effect of UE movement on algorithm performance.

#### 7.2.3 Results from Examining the Influence of Dynamic Topology Changes

In this experiment, we evaluated the algorithms' robustness to dynamic topology changes under constant loads. For this purpose, we scanned various UE speeds and measured the arrival ratio and average delay of the different algorithms. We evaluate the algorithms' performance for two possible loads, medium load (\(\lambda=3\)) and high load (\(\lambda=5\)), presented in Fig. 11(a) and Fig. 11(b), respectively.

Fig. 11(a) and Fig. 11(b) demonstrate that increasing the UE speed does not have a significant impact on the Relational A2C algorithm's performance under varying network loads. Also, at medium loads, Full Echo Q-Routing achieves superior performance to Q-Routing, with both suffering from similar performance degradation as a result of the different UE speeds. Furthermore, we observe that the combination of higher load with increased speed results in more severe performance degradation for Full Echo Q-Routing compared to traditional Q-Routing. This observation is consistent with the results of our previous experiments. Moreover, Hybrid Routing shows the greatest degradation among all Q-Routing algorithms, further demonstrating that it is less able to cope with this non-stationary setting than traditional Q-Routing algorithms. Aside from this, we observe that all other RL-based algorithms suffer from performance degradation when UE speed is increased, which serves as an additional indication of our proposed approach's superiority.

Figure 11: Average network delay for different routing algorithms under different UE speeds.

### Experiment Results for Online Changes

The purpose of this section is to examine how online changes impact algorithm performance in a variety of scenarios. Our study indicates that Relational A2C-based approaches are superior for handling such online changes when compared to the other algorithms. We now describe the online scenarios that we have studied. In the first case, we analyzed the algorithms' response to bursts of traffic. In the second case, we investigate how the algorithms respond to node failure and recovery situations. In these experiments, we capture online changes by measuring the algorithms' performance using a sliding window with a length of 100 time slots.

#### 7.3.1 Experiment Results for Bursts of Traffic

During this experiment, we measure the algorithms' response to bursts of traffic. This was accomplished by evaluating the algorithm performance while changing the network load. Our experiment involves changing the network load rapidly from low (\(\lambda=2\)) to high (\(\lambda=5\)) and then back to low. The results presented here are an average of 10 measurements across five different topologies of the network. Furthermore, we conducted this experiment for both static and dynamic topologies, as depicted in Fig. 12(a) and Fig. 12(b), respectively. Further, since the arrival ratio offers no additional information here, we present only the average delay metric measured through time for this experiment.
In both static and dynamic scenarios, it is evident that the Relational A2C versions achieve superior performance to the traditional RL algorithms. Additionally, all algorithms exhibit similar reaction times when measuring the impact of changes in load from low to high and from high to low. When comparing the static and dynamic scenarios, we can see that the Relational A2C algorithms did not suffer from any performance degradation, while the tabular methods suffered greatly. Further, we observe that Full Echo Q-Routing is superior to traditional Q-Routing in static scenarios, while they achieve similar performance in dynamic scenarios. Additionally, Hybrid Routing suffers from the greatest degradation of all Q-Routing algorithms when working with high loads, emphasizing its inability to cope with such a non-stationary scenario. Our conclusions are consistent with the results of our previous experiments, which further verifies the results.

Figure 12: Static topology and dynamic topology - average delay performance for different routing algorithms under bursts of traffic; low load - \(\lambda=2\), high load - \(\lambda=5\).

#### 7.3.2 Experiment Results for Node Failure

In this experiment, we evaluate how the algorithms respond to a scenario of node failure and recovery. To achieve this, the algorithm performance was evaluated while removing a random base station from the network for a specified period of time. Since we insert dynamic changes through node failures, we now consider only static topologies, in which users cannot move. The following results are based on an average of 30 consecutive experiments with a random base-station failure in each experiment. This experiment was conducted for a high load, as shown in Fig. 13. To illustrate the arrival ratio metric in such an online setting, we measured the number of packets that were dropped within our sliding window.

From these results, it is evident that all algorithms are able to recover from the node failure situation, although we observe that the average delay does not return to its initial value after the node has recovered. In our opinion, this is because the other base stations' queues are already congested at this point, resulting in a more congested network than before the node failure. Combined with the fact that the arrival-rate process does not change over time, performance remains degraded under higher loads.

Figure 13: Performance for different routing algorithms in case of node failure for high load \(\lambda=5\).

A further, but equally significant, conclusion we can draw from these measurements is that while Relational A2C has a lower delay than Centralized Routing in Fig. 13(a), its packet loss rate is higher in Fig. 13(b). This demonstrates the importance of using both the arrival rate and the average delay in our analysis.

### Algorithm Convergence

In this section, we examine the algorithms' convergence time under a constant load, starting from a random initialization point. According to our results, all versions of our algorithm were able to converge to a similar solution, as opposed to the other baselines, which suffer from performance degradation. Thus, we conclude that all Relational A2C-based algorithms were able to achieve a better routing solution than the traditional algorithms. The results presented in Fig. 14 are the average of those measurements.
Additionally, we found that centralized training achieved the fastest convergence among the different training paradigms of our proposed method. This convergence gap arises because the different agents can share their experience implicitly through the mutual updates of both the actor and the critic among all base stations. Furthermore, we conclude that Fed-Relational A2C performs better than Dec-Relational A2C in terms of convergence. In general, this difference can be understood intuitively, since federated learning approaches manage the trade-off between a fully decentralized training paradigm and a centralized training paradigm.

Another interesting phenomenon is that Full Echo Q-Routing appears to be the algorithm that converges fastest to a stable solution (an approximate equilibrium point). This rapid convergence can be explained by the fact that Full Echo Q-Routing receives all of the rewards available to it, regardless of the chosen action. As a result, it requires fewer interactions with the environment to converge.

Figure 14: Illustration of the performance of different routing algorithms through their training procedures.

## 8 Conclusion

In this paper, we examined the problem of routing in an IAB network in which multiple IAB nodes operate simultaneously within the same network in order to route multiple packets towards their destinations. A successful joint routing policy must be developed by the IABs in order to route packets effectively without causing network congestion. Due to physical limitations, each IAB is only able to exchange limited information with its neighbors, and thus does not know the current status of the entire network. This raises the interesting and previously unanswered question of which hops should be selected at each time step so that the resulting routing policies maximize the network arrival ratio and minimize the average packet delay. To identify such a joint routing policy, we developed a novel Relational A2C algorithm, which aims to determine the best joint routing strategy based on observations collected by the IABs via online learning. To support decentralized training, we developed two different approaches that achieve performance similar to the centralized training approach.

For various scenarios, we compared the arrival ratio and average delay of the proposed Relational A2C algorithms with those of six other algorithms. It was found that Relational A2C performed better than the baseline algorithms in all cases and achieved performance similar to that of a centralized approach. Further, we demonstrated that network routing is crucial to reducing congestion and maximizing the utilization of network resources in IAB networks. These results clearly demonstrate the ability of Relational A2C-based algorithms to learn near-centralized policies and the overall superiority of the proposed approach over existing methods.
2305.19243
Unlocking Tuning-free Generalization: Minimizing the PAC-Bayes Bound with Trainable Priors
It is widely recognized that the generalization ability of neural networks can be greatly enhanced through carefully designing the training procedure. The current state-of-the-art training approach involves utilizing stochastic gradient descent (SGD) or Adam optimization algorithms along with a combination of additional regularization techniques such as weight decay, dropout, or noise injection. Optimal generalization can only be achieved by tuning a multitude of hyperparameters through grid search, which can be time-consuming and necessitates additional validation datasets. To address this issue, we introduce a practical PAC-Bayes training framework that is nearly tuning-free and requires no additional regularization while achieving comparable testing performance to that of SGD/Adam after a complete grid search and with extra regularizations. Our proposed algorithm demonstrates the remarkable potential of PAC training to achieve state-of-the-art performance on deep neural networks with enhanced robustness and interpretability.
Xitong Zhang, Avrajit Ghosh, Guangliang Liu, Rongrong Wang
2023-05-30T17:31:25Z
http://arxiv.org/abs/2305.19243v2
# Auto-tune: PAC-Bayes Optimization over Prior and Posterior for Neural Networks

###### Abstract

It is widely recognized that the generalization ability of neural networks can be greatly enhanced through carefully designing the training procedure. The current state-of-the-art training approach involves utilizing stochastic gradient descent (SGD) or Adam optimization algorithms along with a combination of additional regularization techniques such as weight decay, dropout, or noise injection. Optimal generalization can only be achieved by tuning a multitude of hyperparameters through grid search, which can be time-consuming and necessitates additional validation datasets. To address this issue, we introduce a practical PAC-Bayes training framework that is nearly tuning-free and requires no additional regularization while achieving comparable testing performance to that of SGD/Adam after a complete grid search and with extra regularizations. Our proposed algorithm demonstrates the remarkable potential of PAC training to achieve state-of-the-art performance on deep neural networks with enhanced robustness and interpretability.

## 1 Introduction

PAC-Bayes generalization bounds serve as a critical theoretical foundation for contemporary deep learning, by providing quantitative assessments of the ability of trained neural network models to effectively generalize to unobserved test data [49], [40], [41]. Although PAC-Bayes bounds were traditionally used only in the post-training stage for the purpose of quality control [52; 41], the recent work of Dziugaite et al. [16] has opened the door to using these bounds during the training phase. They showed that one can directly train a network via optimizing the PAC-Bayes bounds, a strategy we refer to as PAC-Bayes training, and obtained reasonable performance. This discovery marks the first time PAC-Bayes bounds have been leveraged for training a neural network and has sparked further exploration into various other PAC-Bayes training techniques [33], [47], [45], [6], [44], [18], [15]. However, it is well known that PAC-Bayes bounds suffer from the curse of dimensionality, making the practical use of PAC-Bayes training challenging on highly over-parameterized networks/models (like ResNet18 and VGG13 that outperform shallow ones in many tasks), since the over-parameterization can risk rendering the PAC-Bayes bound vacuous [16]. We summarize this challenge in (Q1).

On the other hand, the current SGD/Adam-based training procedure for neural networks requires the addition of many tricks/regularizations to achieve the best performance. To understand why these tricks/regularizations help, many recent works have focused on examining them individually. For instance, it has been shown that 1) larger learning rates [12; 5], momentum [22], and smaller batch sizes [32] induce higher degrees of regularization on the Hessian or the sharpness of the loss function, subsequently yielding better generalization. 2) Techniques such as dropout [54], parameter noise injection [43], label noise [13], and batch normalization [37] serve as implicit regularizations on the Hessian of the loss function, and the intensity of their implementation can significantly affect generalization. 3) There is evidence to suggest that the concurrent use of dropout and batch normalization may result in conflict [36]. 4) The effects on generalization of the mini-batch in SGD/Adam can be replicated by the injection of noise into the weights [42].
5) The generalization influence of the mini-batch in SGD can also be emulated by full-batch gradient descent, given the explicit inclusion of SGD's implicit gradient regularization and weight clipping [20]. Despite having all these observations and explanations, it is still quite mysterious why a combination of almost all these regularizations remains essential in practice. Additionally, the process of adjusting the strengths of each regularization for different situations can be both time-consuming and heuristic. We summarize this challenge for the current training procedure in (Q2).

Q1 : _Can PAC-Bayes training work on highly over-parameterized deep neural networks?_

Q2 : _Can we make the training procedure less dependent on the choice of hyper-parameters and use as few regularizations/tricks as possible?_

This paper provides affirmative answers to both questions, by proposing a practical PAC-Bayes training framework. Using the framework, we show:

1. PAC-Bayes training can achieve state-of-the-art results for deep neural networks.
2. PAC-Bayes training can achieve state-of-the-art results even when the training data set is small.
3. PAC-Bayes training can be made (almost) tuning-free (only network initialization is needed) and therefore gets rid of the hassle of parameter search and the use of validation data (which slightly increases the amount of available training data).
4. From PAC-Bayes training, we see that among the different regularizations/tricks, only weight decay and noise injection are essential: having these two is usually enough to achieve the best generalization.

## 2 Preliminaries

Throughout the paper, boldface letters denote vectors. We first introduce the basic setting of the PAC-Bayes analysis. For any supervised-learning problem, the goal is to find a good model \(\mathbf{h}\) from some hypothesis space, \(\mathbf{h}\in\mathcal{H}\subseteq\mathbb{R}^{d}\), with the help of the training data \(\mathcal{S}\equiv\{z_{i}\}_{i=1}^{m}\), where \(z_{i}\) is the training pair consisting of a sample \(\mathbf{x}_{i}\) and its label \(y_{i}\). The usual assumption is that the training and test data are i.i.d. sampled from the same unknown distribution \(\mathcal{D}\). For a given model \(\mathbf{h}\in\mathcal{H}\), the empirical and population/generalization errors are defined as:

\[\ell(\mathbf{h};\mathcal{S})=\frac{1}{m}\sum_{i=1}^{m}\ell(\mathbf{h};z_{i}),\qquad\ell(\mathbf{h};\mathcal{D})=\mathbb{E}_{\mathcal{S}\sim\mathcal{D}}(\ell(\mathbf{h};\mathcal{S})),\]

where the loss function \(\ell(\mathbf{h};z_{i}):\mathbf{h}\mapsto\mathbb{R}^{+}\) measures the misfit between the true label \(y_{i}\) and the label predicted by the model \(\mathbf{h}\). In particular, for the neural network setting, we have \(\ell(\mathbf{h},z_{i})=r(f_{\mathbf{h}}(\mathbf{x}_{i}),y_{i})\), where \(f_{\mathbf{h}}\) is the neural network parametrized by \(\mathbf{h}\), and \(r\) is some metric of choice (e.g., MSE, cross-entropy) to measure the misfit. PAC-Bayes bounds include a family of upper bounds on the generalization error of the following type.

**Theorem 2.1**.: _[_38_]_ _Assume the loss function \(\ell\) is **bounded** within the interval \([0,C]\). Given a fixed prior distribution \(\mathcal{P}\) over the model space \(\mathcal{H}\), and given a scalar \(\delta\in(0,1)\), for any choice of i.i.d \(m\)-sized training dataset \(\mathcal{S}\) according to \(\mathcal{D}\), and all posterior distributions \(\mathcal{Q}\) over \(\mathcal{H}\), we have_

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+C\sqrt{\frac{\ln(\frac{\sqrt{2m}}{\delta})+\mathrm{KL}(\mathcal{Q}||\mathcal{P})}{2m}},\]

_which holds with probability at least \(1-\delta\). Here KL stands for the Kullback-Leibler divergence._
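To make Theorem 2.1 concrete, the following minimal sketch evaluates the bound numerically for a diagonal Gaussian prior and posterior; the Gaussian KL used here is the standard closed form for diagonal Gaussian distributions, and all numbers are synthetic, chosen purely for illustration.

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # KL(Q||P) for diagonal Gaussians Q = N(mu_q, diag(var_q)), P = N(mu_p, diag(var_p)).
    return 0.5 * np.sum(np.log(var_p / var_q) - 1.0
                        + (var_q + (mu_q - mu_p) ** 2) / var_p)

def mcallester_bound(emp_loss, kl, m, delta=0.05, C=1.0):
    # Right-hand side of Theorem 2.1 for a loss bounded in [0, C].
    return emp_loss + C * np.sqrt((np.log(np.sqrt(2 * m) / delta) + kl) / (2 * m))

# Synthetic example: a 10k-parameter model whose posterior mean has drifted
# slightly from the prior mean; m = 50k training samples, as for CIFAR10/100.
rng = np.random.default_rng(0)
d, m = 10_000, 50_000
mu_p, var_p = np.zeros(d), np.full(d, 0.1)
mu_q, var_q = 0.01 * rng.standard_normal(d), np.full(d, 0.05)
kl = kl_diag_gauss(mu_q, var_q, mu_p, var_p)
print(mcallester_bound(emp_loss=0.08, kl=kl, m=m))
```

Even in this toy setting, the KL term grows with the parameter dimension \(d\), which illustrates why over-parameterization can render the bound vacuous.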
A PAC-Bayes bound is used to measure the gap between the expected empirical error and the expected generalization error in terms of the KL-divergence between the prior \(\mathcal{P}\) and the posterior \(\mathcal{Q}\). It is worth noting that this bound holds for any data-independent prior \(\mathcal{P}\) and any posterior \(\mathcal{Q}\), which enables one to further optimize the bound by searching for the best posterior (known as the Gibbs posterior). In practice, the posterior can be set to center around the trained model from data, and the prior can be selected to center around the initial model or around \(\mathbf{0}\).

Gap between theory and practice. Various PAC-Bayes bounds have been proposed in the literature, with the goal of making them tight and non-vacuous. However, theoretical tightness does not necessarily equate to numerical tightness, as the former usually refers to good asymptotic rates while the latter refers merely to small values. For PAC-Bayes training, we need the latter. For instance, the bound in Theorem 2.1 is considered theoretically looser than the following one [2],

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+\sqrt{\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})\frac{\ln(\frac{2\sqrt{m}}{\delta})+\operatorname{KL}(\mathcal{Q}||\mathcal{P})}{m}}+\frac{\ln(\frac{2\sqrt{m}}{\delta})+\operatorname{KL}(\mathcal{Q}||\mathcal{P})}{m}, \tag{1}\]

as it has a smaller asymptotic rate in \(m\) when the training error gets down to 0. But if used in PAC training, the looser bound in Theorem 2.1 usually leads to better performance on both the CIFAR10 and CIFAR100 datasets, as it can achieve smaller numerical values in the non-asymptotic regime of interest. This indicates that a theoretically loose PAC-Bayes bound can still be useful in practice.

Some practical challenges faced by PAC-Bayes training. In this section, we list some obstacles to directly using existing PAC-Bayes bounds in the literature to conduct training. As the cross-entropy loss used in classification tasks is unbounded, directly applying the bound for bounded loss in Theorem 2.1 would fail.

_Bounded loss._ To use Theorem 2.1 appropriately, one can begin by converting the cross-entropy loss to a bounded version, and then apply Theorem 2.1. There are many ways to convert it to a bounded loss (clipping, log-transforms), but they all tend to decrease the variance of the loss across the inputs, making the training slow. From our experience with deep neural networks, this will even cause the training accuracy to plateau at a very low level.

_Unbounded loss._ In our next attempt, we tried to use existing PAC-Bayes bounds derived for unbounded losses [23], [30].
In [23], an upper bound was derived for variables that satisfy the so-called hypothesis-dependent range condition, which is stated as \(\sup_{z}l(\mathbf{h},z)\leq K(\mathbf{h}),\;\forall\mathbf{h}\in\mathcal{H}\). However, the cross-entropy loss does not satisfy this condition without putting extra assumptions on the input. In [30], the author proposed a PAC-Bayes bound for unbounded variables using Efron-Stein type inequalities and obtained the following PAC-Bayes bound (adapted to our notations),

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+\sqrt{\frac{1}{m}\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\left[\ell_{1}(\mathbf{h},\mathcal{S})+\mathbb{E}_{z^{\prime}\sim\mathcal{D}}\ell(\mathbf{h},z^{\prime})^{2}\right]\operatorname{KL}(\mathcal{Q}||\mathcal{P})}+\frac{1}{m},\]

where \(z_{k}=(\mathbf{x}_{k},y_{k})\) is the \(k\)th training pair, \(z^{\prime}\sim\mathcal{D}\) is a test sample drawn from the data distribution, and \(\ell_{1}(\mathbf{h},\mathcal{S}):=\frac{1}{m}\sum_{k=1}^{m}\ell(\mathbf{h},z_{k})^{2}\). This bound holds for any unbounded loss with a finite second-order moment. However, it is inconvenient for PAC-Bayes training, as the term \(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\mathbb{E}_{z^{\prime}\sim\mathcal{D}}\ell(\mathbf{h},z^{\prime})^{2}\) in the bound is empirically almost as difficult to estimate as the generalization error itself.

## 3 Prior art on PAC-Bayes training

The theoretical machine learning field underwent substantial evolution with the introduction of the PAC-Bayesian learning framework by McAllester [40], which built upon Shawe-Taylor's earlier work [49]. Several works, including those by McAllester [40, 39, 41], Catoni [9, 10], Maurer [31, 38], and Germain [21], established seminal PAC-Bayes bounds. These bounds, like the one in Theorem 2.1, utilize the KL-divergence between the output posterior distribution and the prior, along with the training error, to explain generalization capabilities. Although the initial goal of creating PAC-Bayes bounds was to predict the performance of classification learning algorithms [48, 11, 21], they were soon extended to regression tasks as well [21, 38].

Recently, in the pioneering work of Dziugaite and Roy [16], a PAC-Bayes bound was first used for the training of neural networks. Specifically, McAllester's PAC-Bayes bound [40] was used for training a shallow stochastic neural network performing binary MNIST classification with a bounded \(0\)-\(1\) loss and was proven to be non-vacuous, i.e., exceeding the empirical test error by only a small margin with high probability. However, the performance of the PAC training proposed in this paper could not yet match that of traditional training. Following this work, many recent studies [33], [47], [45], [6], [44], [55] aimed to enhance the performance of PAC-Bayes training. These works focused on introducing new loss functions and expanding the applicability of PAC-Bayes bounds to a wider range of neural network architectures and datasets. For instance, Rivasplata et al. [47] introduced PBB (PAC-Bayes with BackProp), which is a self-bounding algorithm that utilizes training data to learn a predictor and certify its risk through PAC-Bayes bound minimization. Their method utilizes a tighter generalization bound compared to the approach proposed by Dziugaite and Roy [16] on the MNIST dataset.
However, both studies are limited to training shallow networks with binary labels on the MNIST dataset using a bounded loss, which restricts their broader application to deep network training.

In another line of work, the prior and posterior are trained on separate datasets. Dziugaite et al. [15] suggested that a tighter PAC-Bayes bound could be achieved with a data-dependent prior in McAllester-type linear bounds [40]. They divide the data into two sets, using one to train the prior distribution and the other to train the posterior with the optimized prior, thus making the prior independent of the new training dataset. This, however, reduces the training data available for the posterior. Moreover, due to the use of shallow networks, the reported test accuracy did not match state-of-the-art performance. The original paper by Dziugaite and Roy [16] does not suffer from the issue of dataset splitting, as the same dataset was used for training both the prior and the posterior. Later, Dziugaite et al. [18] and Rivasplata et al. [46] justified this approach by utilizing differential privacy. However, their argument only holds for priors provably satisfying the so-called \(DP(\epsilon)\)-condition in differential privacy, which limits their practical application (for more discussion, please see Remark 4.2.1).

To better align with practical applications, several PAC-Bayes bounds for unbounded losses have been established [4], [3], [26], [30], [23], [46]. However, from the PAC training perspective, whether or not these theoretically tighter bounds can lead to better performance in training is still unclear. In addition, most existing PAC-Bayes training algorithms require hyper-parameter tuning, sometimes even more than vanilla SGD training, making them less feasible in practice. In this work, we take a step forward in PAC-Bayes training, making it more practical and demonstrating its potential to replace the normal training of neural networks in realistic settings.

## 4 Theory preparation

The proposed PAC-training framework has several key features: it includes a derivation of a relatively numerically tight PAC-bound; it includes a general theory that supports training the prior and the posterior simultaneously on the same training data; and it employs a layerwise prior that respects the differences between different types of layers. In this section, we will start with the necessary definitions.

**Definition 1** (Exponential moment on finite intervals).: _Let \(X\) be a random variable. We call any \(K>0\) an exponential moment bound of \(X\) on a fixed interval \([\gamma_{1},\gamma_{2}]\) if the following holds_

\[\mathbb{E}[\exp{(\gamma X)}]\leq\exp{(\gamma^{2}K^{2})},\quad\forall\gamma\in[\gamma_{1},\gamma_{2}]. \tag{2}\]

Note that this definition concurs with that of the sub-Gaussian bound when \(X\) is centered around 0 and the interval \([\gamma_{1},\gamma_{2}]\) is set to \([0,\infty)\). By using a finite interval \([\gamma_{1},\gamma_{2}]\), this definition allows a larger class of random variables to have a finite \(K\). In particular, if \(X\) is non-negative (e.g., cross-entropy), it is known [8] that any \(X\) with a finite second-order moment will satisfy Definition 1 with a finite \(K\). Next, to conduct the PAC-Bayes analysis, we need to extend the above definition to random variables that are parametrized by a hypothesis \(\mathbf{h}\in\mathcal{H}\).
**Definition 2** (Exponential moment on finite intervals for unbounded variables over a hypothesis).: _Let \(X(\mathbf{h})\) be a random variable parameterized by a hypothesis \(\mathbf{h}\in\mathcal{H}\), and fix an interval \([\gamma_{1},\gamma_{2}]\). Let \(\mathcal{P}_{\boldsymbol{\lambda}}\) be some distribution over \(\mathcal{H}\) parameterized by \(\boldsymbol{\lambda}\in\Lambda\subseteq\mathbb{R}^{k}\), where \(\Lambda\) is the set of all possible \(\boldsymbol{\lambda}\)s. Then, we call any non-negative function \(K(\boldsymbol{\lambda})\) a uniform exponential moment bound for \(X(\mathbf{h})\) over the priors \(\{\mathcal{P}_{\boldsymbol{\lambda}},\boldsymbol{\lambda}\in\Lambda\}\) and the interval \([\gamma_{1},\gamma_{2}]\) if the following holds_

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{P}_{\boldsymbol{\lambda}}}\mathbb{E}[\exp{(\gamma X(\mathbf{h}))}]\leq\exp{(\gamma^{2}K^{2}(\boldsymbol{\lambda}))},\quad\forall\gamma\in[\gamma_{1},\gamma_{2}],\;\boldsymbol{\lambda}\in\Lambda\subseteq\mathbb{R}^{k}. \tag{3}\]

Now we can start to establish the PAC-Bayes bound for the proposed PAC-Bayes training.

**Theorem 4.1**.: _Given a prior distribution \(\mathcal{P}_{\mathbf{\lambda}}\) parametrized by \(\mathbf{\lambda}\in\Lambda\) over the hypothesis set \(\mathcal{H}\), fix some \(\mathbf{\lambda}\in\Lambda\) and \(\delta\in(0,1)\). For any \(\gamma\in[\gamma_{1},\gamma_{2}]\), any choice of i.i.d \(m\)-sized training dataset \(\mathcal{S}\) according to \(\mathcal{D}\), and all posterior distributions \(\mathcal{Q}\) over \(\mathcal{H}\), we have that_

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\gamma m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}}))+\gamma K^{2}(\mathbf{\lambda}) \tag{4}\]

_holds with probability at least \(1-\delta\) when \(\ell(\mathbf{h},\cdot)\) satisfies Definition 2 with bound \(K(\mathbf{\lambda})\)._

Optimizing over the prior and posterior. Now we select the posterior distribution as \(\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h}):=\mathbf{h}+\mathcal{Q}_{\mathbf{\sigma}}(0)\), where \(\mathbf{h}\in\mathbb{R}^{d}\) is the current model (i.e., the network parameters) and \(\mathcal{Q}_{\mathbf{\sigma}}(0)\) is a zero-mean distribution parameterized by \(\mathbf{\sigma}\in\mathbb{R}^{d}\). Assume the prior \(\mathcal{P}_{\mathbf{\lambda}}\) is parameterized by \(\mathbf{\lambda}\in\mathbb{R}^{k}\) (\(k\ll m,d\)). Then, for PAC training, we propose to optimize over all four variables \(\mathbf{h}\), \(\gamma\), \(\mathbf{\sigma}\), and \(\mathbf{\lambda}\):

\[(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})=\arg\min_{\begin{subarray}{c}\mathbf{h},\mathbf{\lambda},\mathbf{\sigma},\\ \gamma\in[\gamma_{1},\gamma_{2}]\end{subarray}}\underbrace{\mathbb{E}_{\bar{\mathbf{h}}\sim\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h})}\ell(\bar{\mathbf{h}};\mathcal{S})+\frac{1}{\gamma m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h})||\mathcal{P}_{\mathbf{\lambda}}))+\gamma K^{2}(\mathbf{\lambda})}_{\equiv L_{PAC}(\mathbf{h},\gamma,\mathbf{\sigma},\mathbf{\lambda})}.\]
(P)

Before getting into the details of the training algorithm, let us provide the theoretical error guarantee for (P).
**Assumption 4.1.1** (Continuity of the KL divergence).: _Let \(\mathfrak{Q}\) be a family of posterior distributions, and let \(\mathfrak{P}=\{P_{\mathbf{\lambda}},\mathbf{\lambda}\in\Lambda\subseteq\mathbb{R}^{k}\}\) be a family of prior distributions parameterized by a \(k\)-dimensional variable vector \(\mathbf{\lambda}\in\mathbb{R}^{k}\). We say the posterior family \(\mathfrak{Q}\) is continuous with respect to the parameterization of the prior under a metric \(\|\cdot\|\) defined on \(\Lambda\), if there exist \(\varepsilon_{0}>0\) and a non-decreasing function \(\eta_{1}(x):\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that for any \(0<\varepsilon<\varepsilon_{0}\) and \(\mathbf{\lambda},\tilde{\mathbf{\lambda}}\in\Lambda\), \(||\tilde{\mathbf{\lambda}}-\mathbf{\lambda}||<\varepsilon\), it holds that_

\[|\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})-\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\tilde{\mathbf{\lambda}}})|\leq\eta_{1}(\|\mathbf{\lambda}-\tilde{\mathbf{\lambda}}\|),\]

_for any \(\mathcal{Q}\in\mathfrak{Q}\)._

**Assumption 4.1.2** (Continuity of the moment bound).: _Let \(K(\mathbf{\lambda})\) be as defined in Definition 2. Assume it is continuous with respect to the parameter \(\mathbf{\lambda}\) of the prior in the sense that there exists a non-decreasing function \(\eta_{2}(x):\mathbb{R}_{+}\to\mathbb{R}_{+}\) such that_

\[|K^{2}(\mathbf{\lambda})-K^{2}(\tilde{\mathbf{\lambda}})|\leq\eta_{2}(\|\mathbf{\lambda}-\tilde{\mathbf{\lambda}}\|),\quad\forall\mathbf{\lambda},\tilde{\mathbf{\lambda}}\in\Lambda.\]

**Theorem 4.2**.: _Let \(n(\varepsilon):=\mathcal{N}(\Lambda,\|\cdot\|,\varepsilon)\) be the covering number of the set \(\Lambda\) of the prior parameters. Under Assumption 4.1.1 and Assumption 4.1.2, the following inequality holds for the minimizer \((\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})\) of (P) with probability at least \(1-\epsilon\):_

\[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\hat{\gamma}m}\left[\ln\frac{n(\varepsilon)}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\hat{\mathbf{\lambda}}})\right]+\hat{\gamma}K^{2}(\hat{\mathbf{\lambda}})+\eta=L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})+\eta+\frac{\ln(n(\varepsilon))}{\hat{\gamma}m} \tag{5}\]

_holds for any \(\epsilon,\varepsilon>0\), where \(\eta=(\frac{1}{\gamma_{1}m}+\gamma_{2})(\eta_{1}(\varepsilon)+\eta_{2}(\varepsilon))\)._

Since \((\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})\) is the minimizer of \(L_{PAC}(\mathbf{h},\gamma,\mathbf{\sigma},\mathbf{\lambda})\), the first term on the right-hand side of (5) is guaranteed to be the smallest possible. In applications, we need to ensure that the correction terms (i.e., the second and third terms) are small. This can usually be achieved by selecting a relatively small \(k\), the dimension of the prior parameter \(\mathbf{\lambda}\), compared to the dataset size. More discussions can be found in the next section.

**Remark 4.2.1**.: _Theoretical justification for the use of a data-dependent prior was previously provided in [46] and [18] by using differential privacy. However, the requirement for the prior to be differentially private restricts its practical application._
_In particular, since the prior encountered in this paper is only implicitly defined as the solution to a non-convex optimization problem with no good closed-form expression, the condition for differential privacy is hard to verify. In contrast, Theorem 4.2 holds for arbitrary data-dependent priors._

## 5 PAC-Bayes training algorithm

### Gaussian prior and posterior

In our PAC training, we use Gaussian distributions for both the prior and the posterior. We set the prior distribution to be centered around the initialization of the neural network \(\mathbf{h}_{0}\) (as suggested by [17]), having independent entries and with a scalar variance for all the weights in one layer (different from [17]). For a \(k\)-layer network, the prior is written as \(\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0})\), where \(\boldsymbol{\lambda}\in\mathbb{R}^{k}_{+}\) is the vector containing the variance for each layer. The set of all such priors is denoted by \(\mathfrak{P}:=\{\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0}),\boldsymbol{\lambda}\in\Lambda\}\). We select the posterior distribution to be centered around the trained model \(\mathbf{h}\), with independent anisotropic variance. Specifically, for a network with \(d\) trainable parameters, the posterior is set to \(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h}):=\mathcal{N}(\mathbf{h},\text{diag}(\boldsymbol{\sigma}))\), where \(\mathbf{h}\) is the mean and \(\boldsymbol{\sigma}\in\mathbb{R}^{d}_{+}\) is the vector containing the variance of each trainable parameter. The set of all posteriors is \(\mathfrak{Q}:=\{\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h}),\boldsymbol{\sigma}\in\Sigma,\mathbf{h}\in\mathcal{H}\}\), and the KL divergence between such a prior and posterior is

\[\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h})||\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0}))=\frac{1}{2}\sum_{i=1}^{k}\left[-\mathbf{1}_{d_{i}}^{\top}\ln(\boldsymbol{\sigma}_{i})+d_{i}(\ln(\boldsymbol{\lambda}_{i})-1)+\frac{\|\boldsymbol{\sigma}_{i}\|_{1}+\|(\mathbf{h}-\mathbf{h}_{0})_{i}\|^{2}}{\boldsymbol{\lambda}_{i}}\right], \tag{6}\]

where \(\boldsymbol{\sigma}_{i},(\mathbf{h}-\mathbf{h}_{0})_{i}\) are vectors denoting the variances and weights for the \(i\)-th layer, respectively, and \(\boldsymbol{\lambda}_{i}\) is the scalar variance for the \(i\)-th layer. \(d_{i}=\dim(\boldsymbol{\sigma}_{i})\), and \(\mathbf{1}_{d_{i}}\) denotes an all-ones vector of length \(d_{i}\). Substituting the Gaussian prior and posterior into Theorem 4.2 gives the following corollary.

**Corollary 5.0.1**.: _Assume the parameter sets of the priors and posteriors are both bounded, \(\mathcal{H}:=\{\mathbf{h}\in\mathbb{R}^{d}:\|\mathbf{h}\|_{2}\leq M\}\), \(\Sigma:=\{\boldsymbol{\sigma}\in\mathbb{R}^{d}_{+}:\|\boldsymbol{\sigma}\|_{1}\leq T\}\), \(\Lambda:=\{\boldsymbol{\lambda}\in[e^{-a},e^{b}]^{k}\}\), and assume \(\ell(\mathbf{h},\cdot)\) satisfies Definition 2 with bound \(K(\boldsymbol{\lambda})\) and \(K^{2}(\boldsymbol{\lambda})\) is Lipschitz continuous with Lipschitz constant \(C_{K}\)._
_Then with high probability, the PAC-Bayes bound for the minimizer of (P) has the form_

\[\mathbb{E}_{h\sim\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\leq L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\boldsymbol{\sigma}},\hat{\boldsymbol{\lambda}})+\eta,\]

_where \(\eta=\frac{k}{\gamma_{1}m}\left(1+\ln\frac{C(C_{K}+L(d))(b+a)\gamma_{1}m}{2k}\right)\), \(C=\frac{1}{\gamma_{1}m}+\gamma_{2}\), and \(L(d)=\frac{1}{2}\max\{d,e^{a}(2M+T)\}\)._

**Remark 5.0.1**.: _The boundedness of the parameter sets in the assumption of Corollary 5.0.1 can be guaranteed by clipping the variables during training (usually not needed if the training dataset is large), and the Lipschitz continuity assumption on \(K(\boldsymbol{\lambda})\) can be verified numerically from data._

Again, since \((\hat{\mathbf{h}},\hat{\gamma},\hat{\boldsymbol{\sigma}},\hat{\boldsymbol{\lambda}})\) is the minimizer of \(L_{PAC}(\mathbf{h},\gamma,\boldsymbol{\sigma},\boldsymbol{\lambda})\), the first term in the bound is guaranteed to be small. The correction term \(\eta\) will also be small as long as \(k\) is much smaller than \(m\), which means the number of trainable parameters in the prior distribution has to be much smaller than the number of training samples. For the large CIFAR10/100 dataset [29] with \(m=5\times 10^{4}\), the correction term is very small even with a deep CNN\({}^{1}\). But for small graph datasets, such as PubMed [7] where \(m=60\), the correction term could become large. If this happens, then we need to set a smaller bound for the parameter sets of the prior and the posterior distributions and use clipping to enforce that the parameters fall into this range during training. More detailed discussions are available in the appendix (Sec. C, D).

Footnote 1: In all our experiments for various neural networks, we set \(\gamma_{1}=0.5\) and \(\gamma_{2}=10\).

Scalar prior. The scalar prior is a special case of the layerwise prior obtained by setting all entries of \(\boldsymbol{\lambda}\) equal, for which the KL divergence reduces to

\[\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h})||\mathcal{P}_{\lambda}(\mathbf{h}_{0}))=\frac{1}{2}\left[-\mathbf{1}_{d}^{\top}\ln(\boldsymbol{\sigma})+d(\ln(\lambda)-1)+\frac{1}{\lambda}(\|\boldsymbol{\sigma}\|_{1}+\|\mathbf{h}-\mathbf{h}_{0}\|^{2})\right]. \tag{7}\]

When the data size is small, PAC-Bayes training with the scalar prior may deliver better performance than the layerwise one by limiting the number of trainable parameters in the prior to one.
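For concreteness, here is a minimal sketch of the KL terms in Eqs. (6) and (7). The function and variable names are our own; the scalar prior of Eq. (7) is recovered simply by passing a \(\boldsymbol{\lambda}\) whose \(k\) entries are all equal.

```python
import torch

def kl_layerwise(h, h0, sigma, lam):
    # Eq. (6): h, h0, sigma are lists of per-layer tensors (current weights,
    # initialization, per-parameter posterior variances); lam holds the k
    # scalar prior variances, one per layer. All variances must be positive.
    kl = torch.zeros(())
    for i, (w, w0, s) in enumerate(zip(h, h0, sigma)):
        d_i = s.numel()
        kl = kl + 0.5 * (-torch.log(s).sum()                # -1^T ln(sigma_i)
                         + d_i * (torch.log(lam[i]) - 1.0)  # d_i (ln(lambda_i) - 1)
                         + (s.sum() + (w - w0).pow(2).sum()) / lam[i])
    return kl

# Scalar prior of Eq. (7): all layers share a single prior variance lam0, e.g.
# kl_scalar = kl_layerwise(h, h0, sigma, torch.full((len(h),), lam0))
```

Written this way, the KL term makes the regularization interpretation explicit: a weight decay toward \(\mathbf{h}_{0}\) with per-layer strength \(1/\boldsymbol{\lambda}_{i}\), plus a penalty balancing the posterior variances.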
### Estimating \(K(\boldsymbol{\lambda})\) and \(C_{K}\)

In order to optimize \(L_{PAC}\), we first need to estimate the function \(K(\boldsymbol{\lambda})\). For the PAC-Bayes bound to be effective, the value of \(K(\boldsymbol{\lambda})\) cannot be too large. In fact, this is why we made \(K\) a function of \(\boldsymbol{\lambda}\) and only require the inequality to hold on a finite interval in Definition 2, as otherwise the estimated \(K\) would be too large.

In practice, we estimate \(K(\boldsymbol{\lambda})\) using linear interpolation: first, for a discrete set \(\{\boldsymbol{\lambda}_{1},\ldots,\boldsymbol{\lambda}_{s}\}\subseteq\Lambda\)\({}^{2}\), we estimate the corresponding \(K_{1},\ldots,K_{s}\) using the empirical version of (2); that is, for any \(i=1,...,s\), we solve

Footnote 2: Note that, with a little ambiguity, the \(\boldsymbol{\lambda}_{i}\) here has a different meaning from that in (6): here \(\boldsymbol{\lambda}_{i}\) means the \(i\)th element in the discrete set, whereas in (6), \(\boldsymbol{\lambda}_{i}\) means the \(i\)th entry of \(\boldsymbol{\lambda}\).

\[\begin{split}\hat{K}_{i}&=\arg\min_{K>0}K\\ \text{s.t.}\;&\exp\left(\gamma^{2}K^{2}\right)\geq\frac{1}{nm}\sum_{l=1}^{n}\sum_{j=1}^{m}\exp(\gamma(\ell(\mathbf{h}_{l};\mathcal{S})-\ell(\mathbf{h}_{l};z_{j})))\quad\forall\;\gamma\in[\gamma_{1},\gamma_{2}],\end{split} \tag{8}\]

for \(K(\boldsymbol{\lambda}_{i})\), where \(\mathbf{h}_{l}\sim\mathcal{P}_{\boldsymbol{\lambda}_{i}}(\mathbf{h}_{0}),l=1,...,n\), are samples from the prior distribution and are fixed when solving (8). The minimizer \(\hat{K}_{i}\) is found using a bisection search. From the pairs \((\boldsymbol{\lambda}_{i},\hat{K}_{i})\), we construct \(K(\boldsymbol{\lambda})\) using function interpolation, which then allows us to compute the derivative of \(K(\boldsymbol{\lambda})\) when optimizing the PAC-Bayes loss. Notably, since for each fixed \(\boldsymbol{\lambda}_{i}\) the prior is independent of the data, this procedure of estimating \(K(\boldsymbol{\lambda})\) can be carried out before training. Algorithm 1 summarizes the procedure of computing \(K(\boldsymbol{\lambda})\) before training. Moreover, if one wants to apply Corollary 5.0.1 to estimate the final prediction error of the PAC training, it is sufficient to estimate the Lipschitz constant \(C_{K}\) from the interpolated function as well.

```
0:\(\gamma_{1}\) and \(\gamma_{2}\), \(s\) query prior variances \(\mathcal{V}=\{\boldsymbol{\lambda}_{i}\in\Lambda\subseteq\mathbb{R}^{k},i=1,...,s\}\), the initial neural network weights \(\mathbf{h}_{0}\), the training dataset \(\mathcal{S}=\{z_{i}\}_{i=1}^{m}\), model sampling time \(n=10\)
0:the piece-wise linear interpolation \(\tilde{K}(\boldsymbol{\lambda})\) for \(K(\boldsymbol{\lambda})\)
for \(\boldsymbol{\lambda}_{i}\in\mathcal{V}\) do
 Set up a discrete grid \(\Gamma\) for the interval \([\gamma_{1},\gamma_{2}]\) of \(\gamma\).
 for \(l=1:n\) do
  Sample weights from the Gaussian distribution \(\mathbf{h}_{l}\sim\mathcal{N}(\mathbf{h}_{0},\boldsymbol{\lambda}_{i})\)
  Use \(\mathbf{h}_{l}\), \(\Gamma\) and \(\mathcal{S}\) to compute one term in the sum in (8)
 endfor
 Solve \(\hat{K}_{i}\) using (8)
endfor
Fit a piece-wise linear function \(\tilde{K}(\boldsymbol{\lambda})\) to the data \(\{(\boldsymbol{\lambda}_{i},\hat{K}_{i})\}_{i=1}^{s}\)
```
**Algorithm 1** Compute \(K(\boldsymbol{\lambda})\) given a set of query priors

More details about the \(K\) estimation for the layerwise prior are in the appendix (Sec. A).
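For concreteness, the following is a sketch of the bisection search for \(\hat{K}_{i}\) in Eq. (8) for a single query prior, given a precomputed matrix of losses of the \(n\) prior samples on the \(m\) training points; the array names and the initial upper bracket `K_hi` are our own assumptions.

```python
import numpy as np

def estimate_K(loss_matrix, gamma_grid, tol=1e-4, K_hi=10.0):
    # loss_matrix[l, j] = loss of prior sample h_l on training point z_j.
    # Returns the smallest K on [0, K_hi] satisfying the constraint in
    # Eq. (8) on the discrete grid of gamma values.
    mean_loss = loss_matrix.mean(axis=1, keepdims=True)  # ell(h_l; S)
    dev = mean_loss - loss_matrix                        # ell(h_l;S) - ell(h_l;z_j)

    def feasible(K):
        lhs = np.exp(gamma_grid ** 2 * K ** 2)
        rhs = np.array([np.exp(g * dev).mean() for g in gamma_grid])
        return np.all(lhs >= rhs)

    lo, hi = 0.0, K_hi  # feasibility is monotone non-decreasing in K
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

Repeating this for every \(\boldsymbol{\lambda}_{i}\in\mathcal{V}\) and fitting a piecewise-linear function through the pairs \((\boldsymbol{\lambda}_{i},\hat{K}_{i})\) yields the interpolant \(\tilde{K}(\boldsymbol{\lambda})\) of Algorithm 1.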
### PAC-Bayes bound minimization (Auto-tune)

Like [17], we use the Adam optimizer to optimize the model, the prior, and the posterior, except that 1) we also optimize \(K(\boldsymbol{\lambda})\) over the prior variance; 2) we allow the loss to be unbounded; and 3) we allow the network to be deep by using the layerwise prior.

Tuning-free PAC-Bayes training. Algorithms 2 and 3 describe the PAC-Bayes training procedure with the scalar and layerwise priors, respectively. Here, we directly minimize \(L_{PAC}\) using Adam. Although there are many input parameters to be specified, their values can be largely fixed across very different settings, so not much actual tuning is needed. The only essential input is the initial model \(\mathbf{h}_{0}\), for which we use the default Kaiming initialization [25], as in normal training. The loss function is defined as in Equation (P), with the KL term defined in (6). When everything else is fixed, \(\gamma\in[\gamma_{1},\gamma_{2}]\) has a closed-form solution,

\[\gamma^{*}=\min\left\{\max\left\{\gamma_{1},\frac{1}{K}\sqrt{\frac{\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h})||\mathcal{P}_{\mathbf{\lambda}}(\mathbf{h}_{0}))}{m}}\right\},\gamma_{2}\right\}. \tag{9}\]

So, we only need to perform gradient updates on the other three variables, \(\mathbf{h},\mathbf{\sigma},\mathbf{\lambda}\). We would like to comment that, as PAC-Bayes training involves more trainable parameters, the optimization inevitably becomes harder, but we found that it is still quite manageable. More details about the implementation and further discussions can be found in the appendix (Sec. 8, B).

The second stage of training. In many cases, the PAC-Bayes training does not allow the training accuracy to reach nearly \(100\%\), due to the existence of the KL term. However, for many classification tasks, a high training accuracy is usually essential for good testing accuracy. Therefore, we add a second stage (Stage 2) after the PAC-Bayes training stage (Stage 1) to make the training converge. Specifically, in Stage 2, we continue to update the model by minimizing only \(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}}\ell(\mathbf{h};\mathcal{S})\) over \(\mathbf{h}\) using Adam, keeping all other variables fixed to the solution found by Stage 1; this is essentially a stochastic gradient update with noise injection whose level has been learned in Stage 1.

Evaluation/Prediction. The bound derived in the PAC-Bayes theorems pertains to the Bayesian predictor, which is constructed by averaging predictions over the posterior distribution \(\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}(\mathbf{h})\). In practice, implementing this Bayesian predictor involves evaluating the new input across multiple models sampled from the posterior distribution, and then employing an average or voting procedure to obtain a final prediction. In this paper, however, we propose using a simple deterministic predictor that relies solely on the final trained model for making predictions. This approach offers several advantages. Firstly, it eliminates the need for multiple model evaluations. Additionally, this deterministic predictor has the potential to enhance performance based on the following intuition. Recall that for any \(\mathbf{h}\in\mathbb{R}^{d}\) and \(\boldsymbol{\sigma}\in\mathbb{R}^{d}_{+}\), we used \(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h})\) to denote the multivariate normal distribution with mean \(\mathbf{h}\) and covariance matrix \(\text{diag}(\boldsymbol{\sigma})\).
Now let us Taylor expand the left-hand side of the PAC-Bayes bound: \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D}) =\mathbb{E}_{\Delta\mathbf{h}\sim\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}(0)}\ell(\hat{\mathbf{h}}+\Delta\mathbf{h};\mathcal{D})\] \[\approx\ell(\hat{\mathbf{h}};\mathcal{D})+\mathbb{E}_{\Delta\mathbf{h}\sim\mathcal{Q}_{\boldsymbol{\hat{\sigma}}}(0)}\left(\nabla\ell(\hat{\mathbf{h}};\mathcal{D})^{\top}\Delta\mathbf{h}+\frac{1}{2}\Delta\mathbf{h}^{\top}\nabla^{2}\ell(\hat{\mathbf{h}};\mathcal{D})\Delta\mathbf{h}\right)\] \[=\ell(\hat{\mathbf{h}};\mathcal{D})+\frac{1}{2}\text{Tr}(\text{diag}(\hat{\boldsymbol{\sigma}})\nabla^{2}\ell(\hat{\mathbf{h}};\mathcal{D}))\geq\ell(\hat{\mathbf{h}};\mathcal{D}). \tag{10}\] Recall here \(\hat{\mathbf{h}}\) and \(\hat{\boldsymbol{\sigma}}\) are the minimizers of the PAC-Bayes loss, obtained by solving the optimization problem (P). Equation (10) states that the deterministic predictor has a smaller (or equal) prediction error than the Bayesian predictor. However, note that the last inequality in (10) is derived under the assumption that the term \(\nabla^{2}\ell(\hat{\mathbf{h}};\mathcal{D})\) is positive-semidefinite. This is a reasonable assumption, as \(\hat{\mathbf{h}}\) is a local minimizer of the PAC-Bayes loss and the PAC-Bayes loss is close to the population loss when the number of samples is large. Nevertheless, since this assumption does not hold in all cases, the presented argument can serve only as an intuition that shows the potential benefits of using the deterministic predictor. Regularizations in the PAC-Bayes loss: Plugging the KL divergence formula (6) into (P), we can see that in the case of the Gaussian prior, the PAC-Bayes loss is nothing but the original training loss augmented by a noise injection and a weight decay, except that the weight decay term is now centered at \(\mathbf{h}_{0}\) instead of \(\mathbf{0}\), the coefficients in front of the weight decay change from layer to layer, and the noise injection has anisotropic variances. Since many factors in normal training, such as mini-batching and dropout, enhance generalization by some sort of noise injection, it is not surprising that they can be substituted by just the well-calibrated noise injection mechanism in PAC-Bayes training.

## 6 Experiments

Evaluation on deep convolutional neural networks: We test the proposed method on the CIFAR10 and CIFAR100 datasets with _no data augmentation_ on various popular deep neural networks, including ResNet18, ResNet34 [50], VGG13 and VGG19 [34], and DenseNet121 [27], by comparing its performance with normal training by SGD and Adam with various regularizations (which we call the baseline). The baseline training involves a grid search of hyper-parameters, including the optimizer (SGD/Adam/AdamW), momentum for SGD (\(0.3,0.6,0.9\)), learning rates (\(1e{-3},5e{-3},1e{-2},5e{-2},1e{-1},2e{-1}\)), weight decay (\(1e{-4},5e{-4},1e{-3},5e{-3},1e{-2}\)), and noise injection (\(5e{-4},1e{-3},5e{-3},1e{-2}\)); this search is thus quite time-consuming. To make the search feasible, we adjusted one hyper-parameter at a time while keeping the others fixed. To determine the optimal hyper-parameter for a variable, we used the mean testing accuracy of the last five epochs. We then used this selected hyper-parameter to tune the next one. We only applied noise injection to Adam/AdamW, as it sometimes causes instability with SGD.
The best learning rate for Adam and AdamW is the same, since weight decay is the only difference between the two optimizers. We set the batch size to \(128\), since, from the literature, it usually gives the best performance. Finally, 38 searches were conducted for each baseline model on one dataset. The testing accuracy for the optimally-tuned baseline and a single run of the tuning-free PAC-Bayes training are presented in Table 1. Since there is no published validation dataset for CIFAR10 and CIFAR100, we (unfairly) used the test dataset to tune the hyperparameters for the baseline, which would make the reported performance of the baseline slightly inflated compared to its actual performance. Nevertheless, the testing accuracy of our method with the scalar and layer-wise priors matches the best testing accuracy of the baselines. There is no grid search for our PAC-Bayes training; we use Adam as the optimizer with a learning rate of \(1e{-4}\) for all models. To provide more details, we have plotted all the searched results for the baseline for VGG13, ResNet34, and DenseNet121 on CIFAR100 in Figure 1. The plotted results are sorted in ascending order based on their testing accuracy. The figure shows that our proposed training algorithms are better than most searched settings.

Table 1: Testing accuracy of convolutional neural networks on C10 (CIFAR10) and C100 (CIFAR100). “scalar” and “layer” denote the PAC-Bayes training (Auto-tune) with the scalar and layer-wise priors.

Extra stability with respect to the batch size: Additionally, we find that the result of the tuning-free PAC training is insensitive to the batch size. Table 2 shows that changing the batch size from \(128\) to \(2048\) for VGG13 and ResNet18 does not decrease the performance of the PAC-Bayes training as much as it does for normal training. Despite not requiring exhaustive tuning, our proposed tuning-free algorithms match the best testing accuracy of the baseline with a much larger batch size. This observation enables using large batch sizes for the PAC training to accelerate convergence. Besides the batch size, our proposed method is also insensitive to the learning rate. Please refer to the appendix (Sec. H.1 and H.2) for more details.

Table 2: The results of large batch sizes on the testing accuracy of CNNs on C10 (CIFAR10) and C100 (CIFAR100). The number in \((\cdot)\) indicates how much the results differ from using a small batch size of \(128\). “scalar” and “layer” denote the PAC-Bayes training (Auto-tune) with the scalar and layer-wise priors. The best results are **highlighted**.

Evaluation on graph neural networks: To demonstrate the robustness of the proposed PAC training algorithm across different network architectures, we also test it on graph neural networks. Moreover, the number of training samples for node classification tasks is generally much smaller than for CIFAR10/100, allowing us to examine the performance of the algorithm in the data-scarcity setting. Unlike the CNNs, the GNN baselines find their best performance with the AdamW optimizer and with dropout turned on, while the proposed PAC-Bayes training algorithm stays the same as in the CNN setting. To ensure the best results for the baseline, we fine-tuned the learning rate (\(1e-3,5e-3,1e-2\)), weight decay (\(0,1e-2,1e-3,1e-4\)), noise injection (\(0,1e-3,1e-2,1e-3\)), and dropout (\(0,0.4,0.8\)).
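As an aside, the one-at-a-time baseline search described above can be sketched as a simple coordinate-wise procedure; the code below is our simplification, and `train_and_eval` is a hypothetical helper that trains with a given configuration and returns the mean test accuracy of the last five epochs.

```python
# A sketch of the coordinate-wise baseline search: tune one hyper-parameter at a
# time while keeping the others fixed, then carry the best value forward.
def coordinate_search(grids, train_and_eval):
    config = {name: values[0] for name, values in grids.items()}   # start at defaults
    for name, values in grids.items():
        scores = {v: train_and_eval({**config, name: v}) for v in values}
        config[name] = max(scores, key=scores.get)                 # keep the best value
    return config

grids = {
    "lr": [1e-3, 5e-3, 1e-2, 5e-2, 1e-1, 2e-1],
    "weight_decay": [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "noise": [5e-4, 1e-3, 5e-3, 1e-2],
}
# best = coordinate_search(grids, train_and_eval)  # train_and_eval left to the user
```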
We follow the convention for graph datasets by randomly assigning 20 nodes per class for training, 500 for validation, and the remaining for testing. To obtain the best testing performance of the baseline models, we conducted a thorough grid search of hyperparameters. We evaluated the accuracy on the validation nodes to identify the best hyperparameters and report the corresponding testing accuracy. Since GNNs are faster to train than convolutional neural networks, we tested all possible combinations of the above parameters for the baseline, conducting 144 searches per model on one dataset. We tested GCN [28], GAT [53], SAGE [24], and APPNP [19] on CoraML, Citeseer, PubMed, Cora and DBLP [7] with detailed tuning. There are only two convolution layers in these GNNs, so we only test our algorithm with the scalar prior. We added one dropout layer between the two graph convolution layers for the baselines only, except that we keep the dropout in the attention layers of GAT for both our algorithm and the baselines, since it essentially drops edges of the input graph. In Table 3, for each GNN architecture, the rows of AdamW and scalar, respectively, record the performances of the baseline and the PAC training with early stopping determined by the validation dataset. We see that the results of our algorithm match the best testing accuracy of the baselines. We also did a separate experiment in which we disabled the early stopping and used both the training and validation nodes for training, and then reported the performances of the baseline and the PAC training. For the baseline, we need to first train the model using only the original training data to detect the best hyperparameters, and then train the model again on the combined data. The rows of AdamW+val and scalar+val record the performances of the baseline and the PAC training, respectively, at full convergence with the combined dataset used for training. We can see that the testing accuracy after adding validation nodes increased significantly for both methods, but still the results of our algorithm match the best testing accuracy of the baselines. We use Adam as the optimizer with a learning rate of \(1e{-}2\) for all models using both training and validation nodes. Additional training information and evaluation outcomes are in the appendix (Sec. H.3).

Figure 1: Sorted testing accuracy of CIFAR100. “scalar” and “layer” represent the tuning-free PAC-Bayes training with the scalar and layer-wise priors.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & & CoraML & Citeseer & Pubmed & Cora & DBLP \\ \hline \multirow{4}{*}{GCN} & AdamW & 81.9\(\pm\)1.8 & 84.3\(\pm\)1.2 & 78.3\(\pm\)2.3 & 58.6\(\pm\)0.6 & 71.6\(\pm\)1.2 \\ & scalar & 83.5\(\pm\)1.1 & 84.0\(\pm\)1.4 & 78.8\(\pm\)2.2 & 59.6\(\pm\)1.0 & 74.9\(\pm\)1.6 \\ \cline{2-7} & AdamW+val & 85.7\(\pm\)0.7 & **90.3\(\pm\)0.4** & **85.0\(\pm\)0.6** & 60.7\(\pm\)0.7 & **80.6\(\pm\)1.4** \\ & scalar+val & **86.1\(\pm\)0.7** & 90.0\(\pm\)0.4 & 84.9\(\pm\)0.8 & **62.0\(\pm\)0.4** & 80.5\(\pm\)0.6 \\ \hline \multirow{4}{*}{GAT} & AdamW & 81.7\(\pm\)1.0 & 85.3\(\pm\)0.7 & 78.1\(\pm\)2.1 & 60.1\(\pm\)0.8 & 75.5\(\pm\)2.3 \\ & scalar & 81.6\(\pm\)1.2 & 85.0\(\pm\)1.1 & 77.5\(\pm\)2.4 & 58.9\(\pm\)0.9 & 75.9\(\pm\)1.6 \\ \cline{2-7} & AdamW+val & 85.7\(\pm\)1.0 & **90.8\(\pm\)0.3** & 84.0\(\pm\)0.4 & **63.5\(\pm\)0.4** & **81.8\(\pm\)0.6** \\ & scalar+val & **85.9\(\pm\)0.8** & 90.6\(\pm\)0.5 & **84.4\(\pm\)0.5** & 60.9\(\pm\)0.6 & 81.0\(\pm\)0.5 \\ \hline \multirow{4}{*}{SAGE} & AdamW & 80.6\(\pm\)1.4 & 84.5\(\pm\)1.3 & 75.7\(\pm\)2.4 & 55.6\(\pm\)0.6 & 71.8\(\pm\)1.6 \\ & scalar & 80.8\(\pm\)1.4 & 83.7\(\pm\)1.3 & 74.3\(\pm\)2.5 & 55.5\(\pm\)1.6 & 73.6\(\pm\)1.3 \\ \cline{2-7} & AdamW+val & 85.7\(\pm\)0.5 & **90.5\(\pm\)0.5** & 83.5\(\pm\)0.4 & 60.6\(\pm\)0.5 & **80.7\(\pm\)0.6** \\ & scalar+val & **86.5\(\pm\)0.5** & 90.0\(\pm\)0.5 & **84.4\(\pm\)0.6** & **61.2\(\pm\)0.2** & 79.9\(\pm\)0.5 \\ \hline \multirow{4}{*}{APPNP} & AdamW & 83.5\(\pm\)1.3 & 85.3\(\pm\)1.1 & 80.0\(\pm\)2.4 & 59.8\(\pm\)0.8 & 77.0\(\pm\)2.2 \\ & scalar & 83.7\(\pm\)1.1 & 85.2\(\pm\)1.3 & 79.3\(\pm\)3.5 & 60.0\(\pm\)0.7 & 77.4\(\pm\)2.6 \\ \cline{2-7} & AdamW+val & 86.6\(\pm\)0.7 & **91.0\(\pm\)0.4** & 85.1\(\pm\)0.5 & 62.5\(\pm\)0.4 & 80.6\(\pm\)2.8 \\ & scalar+val & **87.1\(\pm\)0.6** & 90.4\(\pm\)0.5 & **85.7\(\pm\)0.4** & **63.5\(\pm\)0.4** & **81.8\(\pm\)0.5** \\ \hline \hline \end{tabular} \end{table} Table 3: Testing accuracy of GNNs with different training settings. “+val” denotes combining validation nodes into training, where the best hyper-parameters selected from “AdamW” are used in “AdamW+val”. The highest testing accuracy is **highlighted**.

Few-shot text classification with transformers: The proposed method is also observed to work on transformer networks. We conducted experiments on two text classification tasks of the GLUE benchmark, as shown in Table 4. We use classification accuracy as the evaluation metric. The baseline method uses grid search over the hyperparameter choices of the learning rate (\(1e{-}1\), \(1e{-}2\), \(1e{-}3\)), batch size (\(2,8,16,32,80\)), dropout ratio (\(0,0.5\)), optimization algorithms (SGD, AdamW), noise injection (\(0,1e{-}5\), \(1e{-}4\), \(1e{-}3\), \(1e{-}2\), \(1e{-}1\)), and weight decay (\(0,1e{-}1\), \(1e{-}2\), \(1e{-}3\), \(1e{-}4\)). The learning rate and batch size of our method are set to \(1e{-}3\) and \(80\) (i.e., full-batch), respectively.
In this task, the number of training samples is small (80). As a result, the preset \(\gamma_{2}=10\) is a bit large and thus prevents the model from achieving the best performance with PAC-Bayes training. We use the refined procedure described in Sec. H.43. Footnote 3: The refined procedure can also be applied to the CNN and GNN experiments, but with smaller improvements than for the transformers. We adopt BERT [14] as our backbone and add one fully-connected layer as the classification layer. Only the added classification layer is trainable; the pre-trained model is frozen without gradient updates. To simulate a few-shot learning scenario, we randomly sample 100 instances from the original training set and take the whole development set to evaluate the classification performance. We split the training set into 5 splits, taking one split as the validation data and the rest as the training set. Each experiment was conducted five times, and we report the average performance. We used the PAC-Bayes training with the scalar prior in this experiment. According to Table 4, our method is competitive with the baseline method on the SST task; the performance gap is only \(0.4\) points. On the QNLI task, our method outperforms the baseline by a large margin, and the variance of our proposed method is less than that of the baseline method. More background about the SST and QNLI tasks is available in the appendix (Sec. H.5).

## 7 Model analysis

We examined the learning process of PAC-Bayes training by analyzing the posterior variance \(\mathbf{\sigma}\) for different layers in models trained by Algorithm 3. Typically, batch norm layers have smaller \(\mathbf{\sigma}\) values than convolution layers. Additionally, shallow convolution layers and the last few layers have smaller \(\mathbf{\sigma}\) values than the middle layers. We also found that skip-connections in ResNet18 have smaller \(\mathbf{\sigma}\) values than nearby layers, suggesting that important layers with a greater impact on the output have smaller \(\mathbf{\sigma}\) values. In Stage 1, the training loss is higher than the testing loss, which means the adopted PAC bound is able to bound the generalization error throughout the PAC training stage. Additionally, we observed that the final value of \(K\) is usually very close to the minimum of the sampled function values. The average value of \(\mathbf{\sigma}\) experienced a rapid update during the initial 50 warmup epochs but later progressed slowly until Stage 2. The details can be found in Figure 2. More details are available in the appendix (Sec. H.6). \begin{table} \begin{tabular}{c c c} \hline \hline & **SST** & **QNLI** \\ \hline baseline & **72.9\(\pm\)0.99** & 62.6\(\pm\)0.10 \\ scalar & 72.5\(\pm\)0.99 & **64.2\(\pm\)0.02** \\ \hline \hline \end{tabular} \end{table} Table 4: Testing accuracy on the development sets of 2 GLUE benchmarks.

## 8 Limitations of the proposed PAC-Bayes training

In previous sections, we demonstrated the great potential of PAC training. Here we mention some limitations. 1. In conventional training, the weights of the neural network \(\mathbf{h}\) are the only parameters to be stored and updated. In PAC-Bayes training, we have four parameters \(\mathbf{h},\boldsymbol{\lambda},\boldsymbol{\sigma},\gamma\). Among these variables, \(\gamma\) can be computed on the fly or whenever needed. We need to store \(\mathbf{h},\boldsymbol{\lambda},\boldsymbol{\sigma}\), where \(\boldsymbol{\sigma}\) has the same size as \(\mathbf{h}\) and \(\boldsymbol{\lambda}\) is much smaller.
Hence the total storage is approximately doubled. Likewise, we now need to compute the gradient for \(\mathbf{h},\boldsymbol{\lambda},\boldsymbol{\sigma}\), so the cost of automatic differentiation in each iteration is also approximately doubled. In the inference stage, the complexity is the same as in conventional training. 2. The additional parameters to be optimized in PAC-Bayes training inevitably increase the difficulty of the optimization, and the most direct consequence is that there are more parameters to be initialized. In most of the experiments we ran, the recommended initializations of \(\mathbf{v}\) and \(\mathbf{b}\) work well; rarely is it necessary to modify this setup. But if it occurs (i.e., in some settings, the recommended noise initializations are too large or too small), then the convergence of the algorithm is usually affected immediately after training starts. So, if one observes a stall in the training accuracy in the early iterations, it usually indicates that the noise initialization requires adjustment, and a simple adjustment of multiplying the noise level by a global scalar often suffices. The tuning of the initialization for PAC training (most of the time unnecessary) can be performed much more efficiently than the hyper-parameter tuning for the baseline, since inappropriate noise initializations lead to convergence problems appearing right after training starts, whereas for hyperparameter tuning, one needs to wait until the completion of the entire training process to evaluate the effectiveness of the current parameter configuration, which is much more time-consuming.

## 9 Conclusion and future work

In our study, we introduce Auto-tune, a PAC-Bayes method for training deep neural networks, superior to prior methods due to its trainable prior and posterior, its applicability to over-parameterized networks with unbounded loss, and its efficiency without hyper-parameter tuning. It demonstrated competitive performance to grid-search-optimized models in experiments with convolutional and graph neural networks. Enhancements to generalization can be achieved through tighter PAC-Bayes bounds. The learned noise level may be used to defend against gradient inversion attacks and facilitate network quantization.

Figure 2: Training details of ResNet18 on CIFAR10. The red star denotes the final \(K\).

## Appendix A Estimating K in the case of layer-wise prior

When using the layer-wise prior version of our method, the exponential moment bound \(K(\mathbf{\lambda})\) is a \(k\)-dimensional function, and estimating it accurately requires a large number of samples. In this paper, we adopt a simple empirical approach to address this issue. We assume that the original function \(K(\mathbf{\lambda})\) can be well-approximated by a one-dimensional function \(\tilde{K}\) that depends solely on the mean value of the input. That is, we assume there is a one-dimensional function \(\tilde{K}\) such that \(K(\mathbf{\lambda})\approx\tilde{K}(\bar{\mathbf{\lambda}})\), where \(\bar{\mathbf{\lambda}}\) denotes the mean of \(\mathbf{\lambda}\). Then we only need to estimate this one-dimensional function \(\tilde{K}\) through function interpolation. We note that one can always choose to interpolate the \(k\)-dimensional function \(K(\mathbf{\lambda})\) directly, but at the expense of increased sampling complexity and computational time.

## Appendix B Label smoothing

Minimizing the loss and maximizing the accuracy are two separate but related tasks.
In the image or node classification task, the goal is to increase the classification accuracy, while in PAC-Bayes training, the goal is to minimize the population loss. However, in cases when accuracy and loss are perfectly negatively correlated, these two goals coincide. With label smoothing, we observed a larger correlation between the loss and accuracy than without, and therefore we reported results with label smoothing turned on for both the PAC-Bayes training and the baseline. When the label smoothing is turned off, the performances of the two methods decrease by comparable amounts.

## Appendix C Cautions in choosing \(\gamma_{2}\)

Recall that the exponential moment bound \(K(\mathbf{\lambda})\) is estimated over a range \([\gamma_{1},\gamma_{2}]\) of \(\gamma\) as per Definition 2. It means that we need the inequality \[\mathbb{E}_{\mathbf{h}\sim\mathcal{P}_{\mathbf{\lambda}}}\mathbb{E}[\exp{(\gamma X(\mathbf{h}))}]\leq\exp{(\gamma^{2}K^{2}(\mathbf{\lambda}))}\] to hold for any \(\gamma\) in this range. One needs to be a little cautious when choosing the upper bound \(\gamma_{2}\), because if it is too large, then the empirical version of \(\mathbb{E}_{\mathbf{h}\sim\mathcal{P}}\mathbb{E}[\exp{(\gamma X(\mathbf{h}))}]\) would be a very poor approximation to the true expectation, due to the fact that the variable \(X\) is on the exponent, unless a large number of samples \(\mathbf{h}\) are drawn, which would then be extremely time-consuming and unrealistic in the deep neural network setting. Therefore, we recommend \(\gamma_{2}\) to be set to 10, or 20 at most, to avoid this issue. Even though in some cases this means the optimal \(\gamma\) that minimizes the PAC-Bayes bound is ruled out, we found that the more accurate estimation of \(K\) resulting from using the recommended \(\gamma_{2}\) is more crucial to obtaining the best performance of the PAC-Bayes training. The choice of \(\gamma_{1}\) is not as critical as the choice of \(\gamma_{2}\), but a smaller \(\gamma_{1}\) usually means a larger value of \(K(\mathbf{\lambda})\) for any given \(\mathbf{\lambda}\), and therefore a looser PAC-Bayes bound and worse performance.

## Appendix D Initialization of GNNs

Typically, the training dataset for a graph neural network is quite small. As a result, the KL term in the PAC-Bayes loss gets large easily if the initialization of the noise is not sufficiently good. This poses a challenge in initializing the PAC-Bayes training process. Although the proposed initialization in Algorithm 2 works well for most GNN networks and datasets, it may fail on some occasions. To address this issue, we modify the initialization by adding a clipping of the noise levels \(\mathbf{\sigma}\) and \(\lambda\) at a lower bound of \(-\log(10)\). For GNNs, this operation increases the noise level. Please refer to the paragraph under Remark 5.0.1 for the theoretical reason for the clipping, and to Sec. 8 (second item) for how the value \(-\log 10\) is found in practice.

## Appendix E Proof of Theorem 4.1

**Theorem E.1**.: _Given a prior \(\mathcal{P}_{\mathbf{\lambda}}\) parametrized by \(\mathbf{\lambda}\in\Lambda\) over the hypothesis set \(\mathcal{H}\). Fix \(\mathbf{\lambda}\in\Lambda\), \(\delta\in(0,1)\) and \(\gamma\in[\gamma_{1},\gamma_{2}]\).
For any choice of an i.i.d. \(m\)-sized training dataset \(\mathcal{S}\) drawn according to \(\mathcal{D}\), and all posterior distributions \(\mathcal{Q}\) over \(\mathcal{H}\), we have_ \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\gamma m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}}))+\gamma K^{2}(\mathbf{\lambda}) \tag{11}\] _holds with probability at least \(1-\delta\) when \(\ell(\mathbf{h},\cdot)\) satisfies Definition 2 with bound \(K(\mathbf{\lambda})\)._ Proof.: First, on the bounded interval specified, we bound the difference of the expected loss over the posterior distribution evaluated on the training dataset \(\mathcal{S}\) and on \(\mathcal{D}\) by the KL-divergence between the posterior distribution \(\mathcal{Q}\) and the prior distribution \(\mathcal{P}_{\mathbf{\lambda}}\) over the hypothesis space \(\mathcal{H}\). For \(\gamma\in[\gamma_{1},\gamma_{2}]\), \[\mathbb{E}_{\mathcal{S}\sim\mathcal{D}}[\exp\left(\gamma m(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})-\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S}))-\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})\right)]\] \[= \mathbb{E}_{\mathcal{S}\sim\mathcal{D}}[\exp\left(\gamma m(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})-\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S}))-\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\log\frac{\mathrm{d}\mathcal{Q}}{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}(\mathbf{h})\right)] \tag{12}\] \[\leq \mathbb{E}_{\mathcal{S}\sim\mathcal{D}}\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}[\exp\left(\gamma m(\ell(\mathbf{h};\mathcal{D})-\ell(\mathbf{h};\mathcal{S}))-\log\frac{\mathrm{d}\mathcal{Q}}{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}(\mathbf{h})\right)]\] (13) \[= \mathbb{E}_{\mathcal{S}\sim\mathcal{D}}\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}[\exp(\gamma m(\ell(\mathbf{h};\mathcal{D})-\ell(\mathbf{h};\mathcal{S})))\frac{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}{\mathrm{d}\mathcal{Q}}(\mathbf{h})]\] (14) \[= \mathbb{E}_{\mathbf{h}\sim\mathcal{P}_{\mathbf{\lambda}}}\mathbb{E}_{\mathcal{S}\sim\mathcal{D}}[\exp(\gamma m(\ell(\mathbf{h};\mathcal{D})-\ell(\mathbf{h};\mathcal{S})))] \tag{15}\] where \(\mathrm{d}\mathcal{Q}/\mathrm{d}\mathcal{P}\) denotes the Radon–Nikodym derivative. In (12), we use \(\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})=\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\log\frac{\mathrm{d}\mathcal{Q}}{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}(\mathbf{h})\). From (12) to (13), Jensen's inequality is applied to the convex exponential function. In (14), \(\exp(-\log\frac{\mathrm{d}\mathcal{Q}}{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}(\mathbf{h}))\) is rewritten as \(\frac{\mathrm{d}\mathcal{P}_{\mathbf{\lambda}}}{\mathrm{d}\mathcal{Q}}(\mathbf{h})\), and from (14) to (15), the expectation is changed from the posterior to the prior distribution. Let \(X=\ell(\mathbf{h};\mathcal{D})-\ell(\mathbf{h};\mathcal{S})\); then \(X\) is centered with \(\mathbb{E}[X]=0\). Then, by Definition 2, \[\mathbb{E}_{\mathbf{h}\sim\mathcal{P}_{\mathbf{\lambda}}}\mathbb{E}_{\mathcal{S}\sim\mathcal{D}}[\exp\left(\gamma mX\right)]\leq\exp\left(m\gamma^{2}K^{2}(\mathbf{\lambda})\right). \tag{16}\] Applying Markov's inequality to the nonnegative random variable on the left-hand side of (12), whose expectation is bounded by (15) and (16), the following holds with probability at least \(1-\delta\): \[\exp\left(\gamma m(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})-\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S}))-\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})\right)\leq\frac{\exp\left(m\gamma^{2}K^{2}(\mathbf{\lambda})\right)}{\delta}. \tag{17}\] Rearranging (17), the following inequalities hold with probability at least \(1-\delta\).
\[\gamma m(\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})-\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S}))-\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})\leq\ln\frac{1}{\delta}+m\gamma^{2}K^{2}(\mathbf{\lambda})\] \[\Rightarrow\ \mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\gamma m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}}))+\gamma K^{2}(\mathbf{\lambda}) \tag{18}\] The bound (18) is exactly the statement of the theorem.

## Appendix F Proof of Theorem 4.2

**Theorem F.1**.: _Let \(n(\varepsilon):=\mathcal{N}(\Lambda,\|\cdot\|,\varepsilon)\) be the covering number of the set of the prior parameters \(\Lambda\). Under Assumption 4.1.1 and Assumption 4.1.2, the following inequality holds for the minimizer \((\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})\) of the upper bound in (4) with probability at least \(1-\epsilon\):_ \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D}) \leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\hat{\gamma}m}\left[\ln\frac{n(\varepsilon)}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\hat{\mathbf{\lambda}}})\right]+\hat{\gamma}K^{2}(\hat{\mathbf{\lambda}})+\eta\] \[=L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})+\eta+\frac{\ln(n(\varepsilon))}{\hat{\gamma}m} \tag{19}\] _for any \(\epsilon,\varepsilon>0\), where \(\eta=(\frac{1}{\gamma_{1}m}+\gamma_{2})(\eta_{1}(\varepsilon)+\eta_{2}(\varepsilon))\)._

_Proof:_ In this proof, we extend the data-independent PAC-Bayes bound of (4) to a data-dependent one that accommodates the error incurred when the prior distribution \(\mathfrak{P}=\{P_{\mathbf{\lambda}},\mathbf{\lambda}\in\Lambda\subseteq\mathbb{R}^{k}\}\) is parameterized by, and optimized over, a set of parameters with a much smaller dimension than the model itself. Let \(\mathbb{T}(\Lambda,\|\cdot\|,\varepsilon)\) be an \(\varepsilon\)-cover of the set \(\Lambda\); that is, for any \(\mathbf{\lambda}\in\Lambda\), there exists a \(\tilde{\mathbf{\lambda}}\in\mathbb{T}(\Lambda,\|\cdot\|,\varepsilon)\) such that \(||\mathbf{\lambda}-\tilde{\mathbf{\lambda}}||\leq\varepsilon\). Recall Assumptions 4.1.1 and 4.1.2. Now we select the posterior distribution as \(\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h}):=\mathbf{h}+\mathcal{Q}_{\mathbf{\sigma}}\), where \(\mathbf{h}\) is the current model and \(\mathcal{Q}_{\mathbf{\sigma}}\) is a zero-mean distribution parameterized by \(\mathbf{\sigma}\in\mathbb{R}^{d}\). Assume the prior \(\mathcal{P}\) is parameterized by \(\mathbf{\lambda}\in\mathbb{R}^{k}\) \((k\ll m,d)\).
Then the PAC-Bayes bound in (4) already holds for any \((\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\mathbf{\lambda})\), \(\mathbf{\lambda}\in\Lambda\), i.e., \[\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\tilde{\mathbf{h}};\mathcal{D})\leq\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\tilde{\mathbf{h}};\mathcal{S})+\frac{1}{\hat{\gamma}m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\mathbf{\lambda}}))+\hat{\gamma}K^{2}(\mathbf{\lambda}) \tag{20}\] with probability at least \(1-\delta\). Now, for the collection of \(\mathbf{\lambda}\)s in the \(\varepsilon\)-net \(\mathbb{T}(\Lambda,\|\cdot\|,\varepsilon)\), by the union bound, the PAC-Bayes bound uniformly holds with probability at least \(1-|\mathbb{T}|\delta=1-n\delta\). Now, for an arbitrary \(\mathbf{\lambda}\in\Lambda\), its distance to the \(\varepsilon\)-net is at most \(\varepsilon\). Then, under Assumption 4.1.1 and Assumption 4.1.2, we have: \[\min_{\tilde{\mathbf{\lambda}}\in\mathbb{T}}|\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\mathbf{\lambda}})-\mathrm{KL}(\mathcal{Q}||\mathcal{P}_{\tilde{\mathbf{\lambda}}})|\leq\eta_{1}(\|\mathbf{\lambda}-\tilde{\mathbf{\lambda}}\|)\leq\eta_{1}(\varepsilon),\] and \[\min_{\tilde{\mathbf{\lambda}}\in\mathbb{T}}|K^{2}(\mathbf{\lambda})-K^{2}(\tilde{\mathbf{\lambda}})|\leq\eta_{2}(\|\mathbf{\lambda}-\tilde{\mathbf{\lambda}}\|)\leq\eta_{2}(\varepsilon).\] With these two inequalities, we can control the PAC-Bayes loss at the given \(\mathbf{\lambda}\) as follows: \[\min_{\tilde{\mathbf{\lambda}}\in\mathbb{T}}|L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\mathbf{\lambda})-L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\tilde{\mathbf{\lambda}})|\leq\frac{1}{\hat{\gamma}m}\eta_{1}(\varepsilon)+\hat{\gamma}\eta_{2}(\varepsilon)\leq\frac{1}{\gamma_{1}m}\eta_{1}(\varepsilon)+\gamma_{2}\eta_{2}(\varepsilon)\leq C(\eta_{1}(\varepsilon)+\eta_{2}(\varepsilon))\] where \(C=\frac{1}{\gamma_{1}m}+\gamma_{2}\) and \(\hat{\gamma}\in[\gamma_{1},\gamma_{2}]\). Now, since this inequality holds for any \(\mathbf{\lambda}\in\Lambda\), it certainly holds for the optimum \(\hat{\mathbf{\lambda}}\). Combining this with (20), we have \[\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\leq L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})+C(\eta_{1}(\varepsilon)+\eta_{2}(\varepsilon))\] where \(C:=\frac{1}{\gamma_{1}m}+\gamma_{2}\). Now, taking \(\epsilon:=n\delta\), we get that, with probability at least \(1-\epsilon\), \[\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\tilde{\mathbf{h}};\mathcal{D})\leq\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\tilde{\mathbf{h}};\mathcal{S})+\frac{1}{\hat{\gamma}m}(\ln\frac{n(\varepsilon)}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\hat{\mathbf{\lambda}}}))+\hat{\gamma}K^{2}(\hat{\mathbf{\lambda}})+C(\eta_{1}(\varepsilon)+\eta_{2}(\varepsilon))\] and the proof is completed.

## Appendix G Proof of Corollary 5.0.1

Recall that for training, we proposed to optimize over all four variables \(\mathbf{h}\), \(\gamma\), \(\mathbf{\sigma}\), and \(\mathbf{\lambda}\).
\[(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})=\arg\min_{\begin{subarray}{c}\mathbf{h},\mathbf{\lambda},\mathbf{\sigma},\\ \gamma\in[\gamma_{1},\gamma_{2}]\end{subarray}}\underbrace{\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h})}\ell(\tilde{\mathbf{h}};\mathcal{S})+\frac{1}{\gamma m}(\ln\frac{1}{\delta}+\mathrm{KL}(\mathcal{Q}_{\mathbf{\sigma}}(\mathbf{h})||\mathcal{P}_{\mathbf{\lambda}}))+\gamma K^{2}(\mathbf{\lambda})}_{\equiv L_{PAC}(\mathbf{h},\gamma,\mathbf{\sigma},\mathbf{\lambda})}. \tag{21}\] **Corollary G.0.1**.: _Assume the parameter sets of priors and posteriors are both bounded, \(\mathcal{H}:=\{\mathbf{h}\in\mathbb{R}^{d}:\|\mathbf{h}\|_{2}\leq M\}\), \(\Sigma:=\{\mathbf{\sigma}\in\mathbb{R}_{+}^{d}:\|\mathbf{\sigma}\|_{1}\leq T\}\), \(\Lambda:=\{\mathbf{\lambda}\in[e^{-a},e^{b}]^{k}\}\), and assume \(\ell(\mathbf{h},\cdot)\) satisfies Definition 2 with bound \(K(\mathbf{\lambda})\) and that \(K^{2}(\mathbf{\lambda})\) is Lipschitz continuous with Lipschitz constant \(C_{K}\). Then with high probability, the PAC-Bayes bound for the minimizer of (P) has the form_ \[\mathbb{E}_{\tilde{\mathbf{h}}\sim\mathcal{Q}_{\hat{\mathbf{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\leq L_{PAC}(\hat{\mathbf{h}},\hat{\gamma},\hat{\mathbf{\sigma}},\hat{\mathbf{\lambda}})+\eta,\] _where \(\eta=\frac{k}{\gamma_{1}m}\left(1+\ln\frac{C(C_{K}+L(d))(b+a)\gamma_{2}m}{2k}\right)\), \(C=\frac{1}{\gamma_{1}m}+\gamma_{2}\), \(L(d)=\frac{1}{2}\max\{d,e^{a}(2M+T)\}\)._

_Proof:_ The above corollary is an extension of the previous theorem, with a more explicit expression when the bounds for the hypothesis parameter \(\mathbf{h}\), the prior variance parameter \(\boldsymbol{\lambda}\) and the posterior variance parameter \(\boldsymbol{\sigma}\) are known. For a \(k\)-layer network, the prior is written as \(\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0})\), where \(\boldsymbol{\lambda}\in\mathbb{R}_{+}^{k}\) is the vector containing the variance for each layer. The set of all such priors is denoted by \(\mathfrak{P}:=\{\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0}),\boldsymbol{\lambda}\in\Lambda\}\). We select the posterior distribution to be centered around the trained model \(\mathbf{h}\), with independent anisotropic variances. Specifically, for a network with \(d\) trainable parameters, the posterior is set to \(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h}):=\mathcal{N}(\mathbf{h},\text{diag}(\boldsymbol{\sigma}))\), where \(\mathbf{h}\) is the mean and \(\boldsymbol{\sigma}\in\mathbb{R}_{+}^{d}\) is the vector containing the variance for each trainable parameter. The set of all posteriors is \(\mathfrak{Q}:=\{\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h}),\boldsymbol{\sigma}\in\Sigma,\mathbf{h}\in\mathcal{H}\}\), and the KL divergence between such a prior and posterior is \[\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}(\mathbf{h})||\mathcal{P}_{\boldsymbol{\lambda}}(\mathbf{h}_{0}))=\frac{1}{2}\sum_{i=1}^{k}\left[-\mathbf{1}_{d_{i}}^{\top}\ln(\boldsymbol{\sigma}_{i})+d_{i}(\ln(\lambda_{i})-1)+\frac{1}{\lambda_{i}}(\|\boldsymbol{\sigma}_{i}\|_{1}+\|(\mathbf{h}-\mathbf{h}_{0})_{i}\|^{2})\right], \tag{22}\] where \(\boldsymbol{\sigma}_{i},(\mathbf{h}-\mathbf{h}_{0})_{i}\) are vectors denoting the variances and weights for the \(i\)-th layer, respectively, and \(\lambda_{i}\) is the scalar variance for the \(i\)-th layer. \(d_{i}=\dim(\boldsymbol{\sigma}_{i})\), and \(\mathbf{1}_{d_{i}}\) denotes an all-ones vector of length \(d_{i}\).
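As a quick numerical sanity check of (22), the small sketch below evaluates the layer-wise Gaussian KL for illustrative per-layer blocks; the block sizes and values are made up and carry no meaning beyond illustration.

```python
# Evaluate the KL of (22) between N(h, diag(sigma)) and the layer-wise prior
# N(h0, diag(lambda_i * I_{d_i})), with parameters grouped into per-layer blocks.
import numpy as np

def kl_gaussian_layerwise(sigma_blocks, delta_h_blocks, lam):
    """sigma_blocks[i]: posterior variances of layer i; delta_h_blocks[i]: (h - h0)
    restricted to layer i; lam[i]: scalar prior variance of layer i."""
    kl = 0.0
    for s, dh, l in zip(sigma_blocks, delta_h_blocks, lam):
        d = s.size
        kl += 0.5 * (-np.sum(np.log(s)) + d * (np.log(l) - 1.0)
                     + (np.sum(s) + np.sum(dh**2)) / l)
    return kl

rng = np.random.default_rng(0)
sigma = [np.full(20, 1e-3), np.full(10, 1e-3)]        # two illustrative layers
dh = [0.01 * rng.standard_normal(20), 0.01 * rng.standard_normal(10)]
print(kl_gaussian_layerwise(sigma, dh, lam=[0.1, 0.1]))
```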
Let \(v_{i}=\log(1/\lambda_{i})\), \(i=1,...,k\), and do a change of variables \(\tilde{\mathcal{P}}_{\mathbf{v}}:=\mathcal{P}_{\boldsymbol{\lambda}}=\mathcal{N}(\mathbf{h}_{0},\text{diag}(\lambda_{i}I_{d_{i}}))=\mathcal{N}(\mathbf{h}_{0},\text{diag}(e^{-v_{i}}I_{d_{i}}))\), where \(d_{i}\) is the number of trainable parameters in the \(i\)th layer. Then \[\frac{\partial\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}||\tilde{\mathcal{P}}_{\mathbf{v}})}{\partial v_{i}}=\frac{1}{2}[-d_{i}+e^{v_{i}}(\|\boldsymbol{\sigma}_{i}\|_{1}+\|\mathbf{h}_{i}-\mathbf{h}_{0,i}\|^{2})],\] where \(\boldsymbol{\sigma}_{i}\), \(\mathbf{h}_{i}\), \(\mathbf{h}_{0,i}\) are the blocks of \(\boldsymbol{\sigma}\), \(\mathbf{h}\), \(\mathbf{h}_{0}\) corresponding to the \(i\)th layer, respectively. Now, given the assumptions on the bounds in the statement of the corollary, we have: \[\|\nabla_{\mathbf{v}}\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}||\tilde{\mathcal{P}}_{\mathbf{v}})\|_{2}\leq\|\nabla_{\mathbf{v}}\mathrm{KL}(\mathcal{Q}_{\boldsymbol{\sigma}}||\tilde{\mathcal{P}}_{\mathbf{v}})\|_{1}\leq\frac{1}{2}\max\{d,e^{a}(2M+T)\}\equiv L(d), \tag{23}\] where we used the assumptions \(\|\boldsymbol{\sigma}\|_{1}\leq T\) and \(\|\mathbf{h}_{0}\|,\|\mathbf{h}\|\leq M\). Inequality (23) says that \(L(d)\) is a valid Lipschitz bound on the KL divergence, and therefore Assumption 4.1.1 is satisfied by setting \(\eta_{1}(x)=L(d)x\). Then we can apply Theorem 4.2 to get, with probability at least \(1-\epsilon\), \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\hat{\gamma}m}\left[\ln\frac{n}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\boldsymbol{\lambda}})\right]+\hat{\gamma}K^{2}(\hat{\boldsymbol{\lambda}})+C(C_{K}+L(d))\varepsilon. \tag{24}\] Here, we used \(\eta_{1}(x)=L(d)x\) and \(\eta_{2}(x)=C_{K}x\). Note that for the set \([-b,a]^{k}\), the covering number \(n=\mathcal{N}([-b,a]^{k},|\cdot|,\varepsilon)\) is \(\left(\frac{b+a}{2\varepsilon}\right)^{k}\).
Introducing a new variable \(\rho>0\), letting \(\varepsilon=\frac{\rho}{C(C_{K}+L(d))}\) and inserting them into the above, we obtain, with probability at least \(1-\epsilon\), \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\] \[\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\hat{\gamma}m}\left[\ln\frac{1}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\boldsymbol{\lambda}})\right]\] \[+\hat{\gamma}K^{2}(\hat{\boldsymbol{\lambda}})+\rho+\frac{k}{\gamma_{1}m}\ln\frac{C(C_{K}+L(d))(b+a)}{2\rho}.\] Further tightening the upper bound by optimizing over \(\rho\), we obtain \[\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{D})\] \[\leq\mathbb{E}_{\mathbf{h}\sim\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})}\ell(\mathbf{h};\mathcal{S})+\frac{1}{\hat{\gamma}m}\left[\ln\frac{1}{\epsilon}+\mathrm{KL}(\mathcal{Q}_{\hat{\boldsymbol{\sigma}}}(\hat{\mathbf{h}})||\mathcal{P}_{\boldsymbol{\lambda}})\right]\] \[+\hat{\gamma}K^{2}(\hat{\boldsymbol{\lambda}})+\frac{k}{\gamma_{1}m}\left(1+\ln\frac{C(C_{K}+L(d))(b+a)\gamma_{1}m}{2k}\right).\]

## Appendix H More experiment details

We conducted experiments using eight A5000 GPUs powered by four AMD EPYC 7543 32-Core Processors. To speed up the training process for the posterior and prior variances, we utilized a warmup method that updates the noise level in the posterior of each layer as a scalar for the first 50 epochs and then proceeds with normal updates after the warmup period. This method only affects the convergence speed, not the generalization, and it was only used for large models in image classification.

### Image classification

There is no data augmentation in our experiments. The implementation is based on the GitHub repo [1]. For the layer-wise prior, we treated each parameter in the PyTorch object model.parameters() as an independent layer, i.e., the weights and bias of one convolution/batch-norm layer were treated as two different layers. Stage 1 of the PAC-Bayes training runs for \(500\) epochs. Moreover, a learning rate scheduler was added to both our method and the baseline to make the training fully converge. Specifically, the learning rate is reduced by a factor of \(0.1\) whenever the training accuracy does not increase for \(20\) epochs. For PAC-Bayes training, the scheduler is only activated in Stage 2. The training is terminated when the training accuracy stays above \(99.9\%\) for \(20\) epochs, or when the learning rate drops below \(1e{-5}\). We also add label smoothing (\(0.1\)) [51] to all models, as discussed in Sec. B. The testing accuracy from all experiments with batch size \(128\) and learning rate \(1e{-4}\) is shown in Figures 3 and 4. We also visualize the sorted testing accuracy of the baselines and our proposed PAC-Bayes training with large batch sizes and a larger learning rate of \(5e{-4}\) (used only to obtain faster convergence) in Figures 5 and 6. These figures demonstrate that our PAC-Bayes training algorithm achieves better testing accuracy than most searched settings. For the models VGG\(13\) and ResNet\(18\), the large batch size is \(2048\), and for the large models VGG\(19\) and ResNet\(34\), the large batch size is set to \(1280\) due to the GPU memory limitation.
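For reference, the schedule and stopping rules just described can be wired up as follows; this is an assumed PyTorch sketch of the stated rules, not the released training script, and the per-epoch training step is left as a placeholder.

```python
# Cut the learning rate by 10x when training accuracy stalls for 20 epochs, and
# stop once accuracy stays above 99.9% for 20 epochs or the lr falls below 1e-5.
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode="max", factor=0.1, patience=20)

high_acc_epochs = 0
for epoch in range(500):
    train_acc = 0.5  # placeholder: one epoch of training would go here
    sched.step(train_acc)                      # scheduler monitors training accuracy
    high_acc_epochs = high_acc_epochs + 1 if train_acc > 0.999 else 0
    if high_acc_epochs >= 20 or opt.param_groups[0]["lr"] < 1e-5:
        break
```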
### Node classification

We test the PAC-Bayes training algorithm on the following popular GNN models.

* GCN [28]: the number of filters is \(32\).
* SAGE [24]: the number of filters is \(32\).
* GAT [53]: the number of filters is \(8\), the number of heads is \(8\), and the dropout rate of the attention coefficient is \(0.6\).
* APPNP [19]: the number of filters is \(32\), \(K=10\) and \(\alpha=0.1\).

We set the number of layers to 2, which achieves the best performance for the baseline. A ReLU activation and a dropout layer are added between the two convolution layers. All search details are visualized in Figures 7–10. Our proposed PAC-Bayes training with the scalar prior is better than most of the searched settings and achieves comparable testing accuracy when adding validation nodes to training.

### Further improvement of the PAC training through refining the range of \(\gamma\)

As per Definition 2, the function \(K\) depends on the range \([\gamma_{1},\gamma_{2}]\) of \(\gamma\). The preset values of \(\gamma_{1}\) and \(\gamma_{2}\) may not be optimal for a given dataset. If the predefined range is too large, it may result in a large \(K\) and hence a vacuous PAC bound. If the range is too small, it may not contain the optimal \(\gamma\) that minimizes the PAC loss and again result in a vacuous PAC bound. To resolve this issue, we recommend estimating the range of \(\gamma\) using a few iterations and then using the estimated range for the PAC training. Specifically, we use the preset \(\gamma_{1}\) and \(\gamma_{2}\) to train the network for a certain number of epochs and compute \(\gamma^{*}\) according to equation (9). Afterward, we reset the model to its initial weights and noise levels and optimize \(L_{PAC}\) with \(\gamma_{1},\gamma_{2}\) both set to the optimal \(\gamma^{*}\).

### Datasets of few-shot text classification

SST is a sentiment analysis task whose performance is evaluated by classification accuracy. Sentiment analysis is the process of analyzing the sentiment of a given text to determine whether its emotional tone is positive, negative, or neutral. QNLI (Question-answering Natural Language Inference) focuses on determining the logical relationship between a given question and a corresponding sentence. The objective of QNLI is to determine whether the sentence contradicts, entails, or is neutral with respect to the question.

### Model analysis

We analyzed the models ResNet18, ResNet34 and VGG13 on CIFAR10 and CIFAR100. The details can be found in Figures 11–15. Based on the figures, shallow convolution layers and the last few layers have smaller \(\mathbf{\sigma}\) values than the middle layers for all models. We also found that skip-connections in ResNet18 and ResNet34 have smaller \(\mathbf{\sigma}\) values than nearby layers on both datasets, suggesting that important layers with a greater impact on the output have smaller \(\mathbf{\sigma}\) values. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline lr & \(3e{-}5\) & \(5e{-}5\) & \(1e{-}4\) & \(2e{-}4\) & \(3e{-}4\) & \(5e{-}4\) \\ \hline CIFAR10 & 88.6 & 88.9 & 89.7 & 89.6 & 89.6 & 89.5 \\ CIFAR100 & 67.7 & 68.0 & 67.1 & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 6: Testing accuracy of VGG13 trained with different learning rates.
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \(10\) & \(20\) & \(50\) & \(80\) & \(100\) & \(150\) \\ \hline CIFAR10 & 88.5 & 88.5 & 89.3 & 89.5 & 89.5 & 88.9 \\ CIFAR100 & 69.4 & 69.6 & 68.9 & 69.1 & 69.0 & 68.1 \\ \hline \hline \end{tabular} \end{table} Table 7: Testing accuracy of ResNet18 trained with different numbers of warmup epochs for \(\boldsymbol{\sigma}\).

In Stage 1, the training loss is higher than the testing loss, which means the adopted PAC-Bayes bound is able to bound the generalization error throughout the PAC training stage. Moreover, the final value of \(K\) is usually very close to the minimum of the sampled function values. The average value of \(\boldsymbol{\sigma}\) experienced a rapid update during the initial 50 warmup epochs but later progressed slowly until Stage 2.

Figure 3: Sorted testing accuracy of CIFAR10. The x-axis represents the experiment index.
Figure 4: Sorted testing accuracy of CIFAR100. The x-axis represents the experiment index.
Figure 5: Sorted testing accuracy of CIFAR10 with large batch sizes. The x-axis represents the experiment index.
Figure 6: Sorted testing accuracy of CIFAR100 with large batch sizes. The x-axis represents the experiment index.
Figure 7: Testing accuracy of GCN. The interval is constructed by the first and third quartiles over the ten random splits.
Figure 8: Testing accuracy of SAGE. The interval is constructed by the first and third quartiles over the ten random splits.
Figure 9: Testing accuracy of GAT. The interval is constructed by the first and third quartiles over the ten random splits.
Figure 10: Testing accuracy of APPNP. The interval is constructed by the first and third quartiles over the ten random splits.
Figure 11: Training details of ResNet18 on CIFAR100. The red star denotes the final \(K\).
Figure 12: Training details of ResNet34 on CIFAR10. The red star denotes the final \(K\).
Figure 13: Training details of ResNet34 on CIFAR100. The red star denotes the final \(K\).
Figure 14: Training details of VGG13 on CIFAR10. The red star denotes the final \(K\).
Figure 15: Training details of VGG13 on CIFAR100. The red star denotes the final \(K\).
2308.07081
Aesthetics of Sanskrit Poetry from the Perspective of Computational Linguistics: A Case Study Analysis on Siksastaka
Sanskrit poetry has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries. However, not much attention has been devoted to uncovering the hidden beauty of Sanskrit poetry in computational linguistics. This article explores the intersection of Sanskrit poetry and computational linguistics by proposing a roadmap of an interpretable framework to analyze and classify the qualities and characteristics of fine Sanskrit poetry. We discuss the rich tradition of Sanskrit poetry and the significance of computational linguistics in automatically identifying the characteristics of fine poetry. The proposed framework involves a human-in-the-loop approach that combines deterministic aspects delegated to machines and deep semantics left to human experts. We provide a deep analysis of Siksastaka, a Sanskrit poem, from the perspective of 6 prominent kavyashastra schools, to illustrate the proposed framework. Additionally, we provide compound, dependency, anvaya (prose order linearised form), meter, rasa (mood), alankar (figure of speech), and riti (writing style) annotations for Siksastaka and a web application to illustrate the poem's analysis and annotations. Our key contributions include the proposed framework, the analysis of Siksastaka, the annotations and the web application for future research. Link for interactive analysis: https://sanskritshala.github.io/shikshastakam/
Jivnesh Sandhan, Amruta Barbadikar, Malay Maity, Pavankumar Satuluri, Tushar Sandhan, Ravi M. Gupta, Pawan Goyal, Laxmidhar Behera
2023-08-14T11:26:25Z
http://arxiv.org/abs/2308.07081v1
Aesthetics of Sanskrit Poetry from the Perspective of Computational Linguistics: A Case Study Analysis on Siksastaka ###### Abstract Sanskrit poetry has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries. However, not much attention has been devoted to uncovering the hidden beauty of Sanskrit poetry in computational linguistics. This article explores the intersection of Sanskrit poetry and computational linguistics by proposing a roadmap of an interpretable framework to analyze and classify the qualities and characteristics of fine Sanskrit poetry. We discuss the rich tradition of Sanskrit poetry and the significance of computational linguistics in automatically identifying the characteristics of fine poetry. We also identify various computational challenges involved in this process, including subjectivity, rich language use, cultural context and lack of large labeled datasets. The proposed framework involves a human-in-the-loop approach that combines deterministic aspects delegated to machines and deep semantics left to human experts. We provide a deep analysis of _Siksastaka_, a Sanskrit poem, from the perspective of 6 prominent _kavyasastric_ schools, to illustrate the proposed framework. Additionally, we provide compound, dependency, _anvaya_ (prose order linearised form), meter, _rasa_ (mood), _alankara_ (figure of speech), and _riti_ (writing style) annotations for _Siksastaka_ and a web application to illustrate the poem's analysis and annotations. Our key contributions include the proposed framework, the analysis of _Siksastaka_, the annotations and the web application for future research. We aim to bridge the gap between _kavyasastra_ and computational methods and pave the way for future research. Footnote 1: Link for interactive analysis of _Siksastaka_: [https://sanskritshala.github.io/shikshastakam/](https://sanskritshala.github.io/shikshastakam/)

## 1 Introduction

Sanskrit literature has a rich and diverse tradition that has played a significant role in shaping the literary and cultural landscape of the Indian subcontinent for centuries (Pollock, 2006; Jamison and Brereton, 2014). With its complex grammatical rules and nuanced vocabulary, the Sanskrit language has provided a fertile ground for poets to craft intricate and evocative stanzas that capture the key elements of human experience (Pollock, 1996). From the ancient epics, such as the Ramayana and the Mahabharata, to the lyrical works of Kalidasa, Bhartrhari and others, Sanskrit poetry has embodied the key elements of Indian thought and culture, providing inspiration and contemplation for generations of readers and scholars. However, researchers in the computational linguistics community have not devoted much attention to uncovering the hidden beauty in Sanskrit poetry (Scharf et al., 2015). The computational linguistics research community could be interested in identifying fine poetry for various reasons, including the need to evaluate machine-generated poetry (Agarwal and Kann, 2020; Van de Cruys, 2020; Li et al., 2020; Hopkins and Kiela, 2017), to translate poetry (Yang et al., 2019; Ghazvininejad et al., 2018; Krishna et al., 2019), to gain a deeper understanding of poetic language (Haider, 2021; Hamalainen and Alnajjar, 2019; Waltz, 1975), and to develop recommender systems (Zhang et al., 2020; Foley IV, 2019) for readers interested in poetry.
By analyzing large corpora of poetry (Haider, 2021; Haider et al., 2020; Gopidi and Alam, 2019; Fairley, 1969) and identifying the most outstanding examples, researchers can improve evaluation metrics for machine-generated poetry (Yi et al., 2018; Liu et al., 2019; Ghazvininejad et al., 2017; Goncalo Oliveira, 2017), develop new models and techniques for natural language processing (Hu and Sun, 2020), and preserve cultural heritage. Furthermore, personalized recommendations for readers can be developed based on their preferences. Analyzing and identifying the characteristics of fine poetry automatically presents several computational challenges, including: (1) Subjectivity: The perception of what makes a poem "good" is subjective and can vary widely among individuals and cultures (Sheng and Uthus, 2020). Developing a computational model that can accurately capture and evaluate the aesthetic qualities of a poem is therefore challenging. (2) Rich language use: Poetry often employs complex language use, including metaphors, similes, allusions, and wordplay, which can be difficult for computational models to understand and generate (Chakrabarty et al., 2021). (3) Cultural context: Poetry is often deeply rooted in cultural and historical contexts (Gopidi and Alam, 2019), which can be challenging for automated systems to comprehend and incorporate. (4) Lack of large labeled datasets: The development of automated systems to identify the characteristics of fine poetry relies on large, labeled datasets, which can be difficult to create due to the subjective nature of poetry and the diversity of cultural and linguistic contexts (Al-Ghamdi et al., 2021; Horvath et al., 2022). Overall, the computational challenges of identifying the characteristics of fine poetry automatically include the development of sophisticated algorithms that can account for the subjective nature of poetry, understand complex language use, incorporate cultural context and work with limited labeled data. In this article, we aim to address the question of whether we can build an automated and interpretable framework to analyze and classify Sanskrit poetry (_kavya_) (Kesarwani et al., 2017) into levels of the characteristics of fine composition. This framework has the potential to answer interesting questions such as: given two _kavyas_, which one is more beautiful, and why? It could also serve as a valuable tool for pedagogical purposes. The _kavyasastra_ tradition provides various perspectives for the analysis of poetry, and we select the _Siksastaka_ composition2 to analyze from these different perspectives. We also aim to discuss whether computational methods can aid in similar analysis and the limitations of existing state-of-the-art methods. Our proposed framework involves a human-in-the-loop approach (Zhipeng et al., 2019; Ghazvininejad et al., 2017), where deterministic aspects are delegated to machines, and deep semantics are left to human experts. We hope that with further development, this framework can eventually become fully automated, enhancing the appreciation of Sanskrit poetry's inner beauty for neophytes and Sanskrit enthusiasts alike. We believe that this work can serve as a stepping stone, bridging the gap between the tradition of _kavyasastra_ and computational methods, and paving the way for future research. Footnote 2: Refer to the actual piece and its translation in Appendix A. Our key contributions are as follows:
1. We propose a roadmap of a human-in-the-loop and interpretable framework to classify Sanskrit poetry into levels of the characteristics of fine composition (§4). 2. We illustrate the proposed framework by providing a deep analysis of _Siksastaka_ from the perspective of 6 prominent _kavyasastra_ schools (§3). 3. We provide compound, dependency, _anvaya_, meter, _rasa_, _alankara_ and _riti_ annotations for _Siksastaka_ (§5). 4. We also provide a web application3 to illustrate our _Siksastaka_ analysis and annotations (§5). 5. We publicly release our codebase of the framework and web application for future research. Footnote 3: Link for interactive analysis of _Siksastaka_: [https://sanskritshala.github.io/shikshastakam/](https://sanskritshala.github.io/shikshastakam/)

## 2 Background

**The 6 main schools of _kavyasastra_ (Poetics):** _kavyasastra_ is the traditional Indian science of poetics and literary criticism, which has played a significant role in shaping the development of literature and aesthetics in the Indian subcontinent for over two thousand years. The term "_kavya_" refers to poetry or literature, while "_sastra_" means science or knowledge, and _kavyasastra_ is thus the systematic study of the nature, forms, and principles of poetry and literature. The roots of _kavyasastra_ can be traced back to ancient India, where it developed alongside other branches of learning such as philosophy, grammar, and rhetoric. Over time, _kavyasastra_ evolved into a complex and sophisticated system of poetics, encompassing a wide range of concepts and techniques for analyzing and appreciating poetry, such as _rasa_ (mood of poetry), _alankara_ (the use of rhetorical and figurative devices), _dhvani_ (superiority of suggestive meaning), _vakrokti_ (oblique expression), _aucitya_ (appropriateness) and _riti_ (style of writing). _chandassastra_ is the traditional Indian science of meter and versification in poetry, dating back to the Vedic period. It involves the systematic study of the principles, forms, and structures of meter and versification in poetry, including the use of various poetic devices and the study of various types of meters and poetic forms. While closely related to _kavyasastra_, _chandassastra_ is a separate branch of traditional Indian knowledge and is not considered one of the 6 main schools of _kavyasastra_. Key concepts and techniques include the classification of meters, rhyme and alliteration, and the principles of accentuation and stress. _chandassastra_ has played a significant role in the development of Indian poetics and literary theory, and continues to be a vital part of Indian cultural heritage. By mastering the principles of _chandassastra_, poets and writers can create verses that are both aesthetically pleasing and technically precise, resulting in some of the most beautiful and evocative poetry in the world. Efforts in Computational Linguistics: In the field of computational poetry, researchers have tackled various problems using algorithms and computational methods, such as generating new poetry (Agarwal and Kann, 2020; Van de Cruys, 2020; Li et al., 2020; Hopkins and Kiela, 2017; Krishna et al., 2020), translating poetry (Yang et al., 2019; Ghazvininejad et al., 2018), analyzing emotional tone, analyzing rhyme and meter patterns (Kao and Jurafsky, 2012; Greene et al., 2010; Fang et al., 2009), classifying poetry (Baumann et al., 2018; Kesarwani et al., 2017), and recommending poetry (Zhang et al., 2020; Foley IV, 2019).
However, existing work does not focus much on analyzing the characteristics of fine poetry, which is crucial for improving the generation, translation, and recommendation applications. This gap motivates our work, which proposes an interpretable framework to classify Sanskrit poetry into different levels of composition using computational linguistics. We demonstrate the proposed framework by conducting a deep analysis of _Sikasstaka_, a Sanskrit poem, from the perspective of 6 well-known _kavyasastric_ schools. The key contributions of our article include the proposed framework, the analysis of _Sikasstaka_, the annotations, the web application, and the publicly released codebase for future research.

## 3 Basics of schools, Computational Aspect, Open Issues and Analysis

This section presents the basics of the 6 schools of _kavyasastra_ together with _chandassastra_. Each subsection provides a comprehensive overview of the respective school's fundamentals, the feasibility of developing a computational system, open issues and future directions. We provide reasoning for choosing the _Sikasstaka_ and demonstrate its analysis based on the corresponding school. Why _Sikasstaka_: To facilitate a more thorough and nuanced analysis of _kavya_, we set a few criteria to guide our selection of the most appropriate composition for our study. These criteria are as follows: (1) the composition should be sufficiently small to enable comprehensive scrutiny from diverse _kavyasastra_ perspectives; (2) the poet who has authored the composition ought not to possess a significant reputation in the _kavyasastra_ domain, to obviate the potential for biases that may arise from establishing the composition as _uttama kavya_;

\begin{table} \begin{tabular}{|p{42.7pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **School** & **Founder** & **Treatise** & **Objective** & **English meaning** \\ \hline _rasa_ & Bharatamuni & _natyasastra_ & _na hi rasadrte kascidarthah pravartate_ & No meaning proceeds if poetry does not carry _rasa_. \\ \hline _alankara_ & Bhamaha & Kavyalankara & _rupakadiralankarah tasyanyairbahudhoditah_ & _alankaras_ are vital for poetic beauty, just like ornaments for a charming woman's face. \\ \hline _rti_ & Vamana & Kavyalankara-sutravrtti & _ritiratma kavyasya_ & Poetic style (_rti_) is the soul of poetry. \\ \hline _dhvani_ & Anandavardhana & Dhvanyaloka & _kavyasyatma dhvanih_ & _dhvani_ is the soul of poetry. \\ \hline _vakrokti_ & Kuntaka & Vakrokti-jivitam & _vakroktih kavyajivitam_ & _vakrokti_ is the vital element of poetry. \\ \hline _aucitya_ & Ksemendra & aucitya-vicara-carca & _aucityam rasasiddhasya sthiram kavyasya jivitam_ & _aucitya_ is the enduring life of poetry endowed with _rasa_. \\ \hline \end{tabular} \end{table} Table 1: The 6 prominent schools of _kavyasastra_, their founders, treatises, and central maxims.

(3) the composition should have made a substantive contribution to the traditions from a non-literary perspective, to explore the role of _kavyasastra_ aspects in its accomplishment. The _Sikasstaka_ composition is well-suited for our analysis as it satisfies all three criteria that we have established. First, it meets the requirement of being compact enough to allow for in-depth examination from various _kavyasastra_ perspectives, as it consists of only eight stanzas.5 Additionally, not all _astakas_ can be considered _uttama kavya_ (the fine poetry), which further supports its appropriateness as a subject of study. Second, the author of _Sikasstaka_, Caitanya Mahaprabhu, is not primarily known as a poet or literary figure. Thus, he meets the second criterion of not being a well-established poet in the _kavyasastra_ domain, which helps to minimize the potential biases that could arise from claiming the composition as _uttama kavya_. Finally, the _Sikasstaka_ has made significant contributions to the traditions of Gaudiya Vaisnavism from a non-literary perspective, which satisfies the third criterion. This composition has significantly influenced the lifestyle of the religious community adhering to the principles of Gaudiya Vaishnavism, serving as the fundamental backbone of their philosophy for the past 500 years. The enduring prevalence and practice of these teachings within the community underscore the profound impact of this composition on their way of life. In summary, the _Sikasstaka_ composition satisfies all three criteria. Footnote 5: We encourage readers to go through this composition: [https://sanskritishala.github.io/shikashastakam/shloka.html](https://sanskritishala.github.io/shikashastakam/shloka.html) where word-to-word meanings and translations are given.

### Metrical analysis

Basics: The bulk of Sanskrit literature consists of poetic works that conform to the conventions of Sanskrit prosody, or _chandassastra_, which involves the study of Sanskrit meters, or _chandas_.
The primary objective of utilizing _chandas_ is to infuse rhythm into the text to facilitate memorization, while also aiding in the preservation of accuracy to some degree (DEO, 2007; Melnad et al., 2015). Pingala is considered to be the father of _chandassastra_. Prominent scholars and their works in this field include Pingala (_Chandassastra_), Bharata Muni (_natyasastra_), Pingala Bhatta (_Chandomanjari_), Vrttanatha (_Vrttaratnakara_), Jagannatha Pandita (_Rasagangadhara_), etc. Although approximately 1,398 meters are described in the Vrttaratnakara (Rajagopalan, 2018), not all are widely used. Computational aspect: In _chandassastra_, the classification of each syllable (_aksara_) in Sanskrit as either _laghu_ (short) or _guru_ (long) enables the representation of every line of Sanskrit text as a binary sequence comprised of _laghu_ and _guru_ markers. The identification of specific patterns within these sequences leads to the recognition of various types of _chandas_, thus enabling the deterministic identification of a meter. In recent years, commendable efforts have been made by the computational community to develop user-friendly toolkits for meter identification (Neill, 2023; Rajagopalan, 2018; Terdalkar and Bhattacharya, 2023). _Sikasstaka_ analysis: The _astaka_, which comprises a series of 8 stanzas, is typically composed in a single meter. In the case of the _Sikasstaka_, there are 5 meters employed, namely _sardulavikridita_, _vasantatilaka_, _anustup_, _viyogini_ and _upajati_ (_indravamsa_ and _vamsastha_). Further details on how to identify meters can be found in the relevant literature (Neill, 2023; Rajagopalan, 2018; Terdalkar and Bhattacharya, 2023). The identification of meters has aided in the identification of typographical errors (_tanuja_ → _tanja_, _dhali_ → _dhuli_, _gadagada_ → _gadgada_) in some of the existing manuscripts of the _Sikasstaka_. One may wonder about the use of 5 different meters in the _Sikasstaka_. Some scholars, or _acaryas_, have argued that the stanzas were not composed in a single sitting. Rather, they were collected in Rupa Goswami's (disciple of the author of _Sikasstaka_) Padyavali, which is an anthology of compositions by expert poets in the _gaudiya_ Vaisnava tradition. Krsnadas Kaviraj Goswami later compiled and arranged them in a meaningful order in his composition, the Caitanya Caritamrta. We posit that the use of multiple meters serves as evidence supporting this claim. Open issues and future direction: It would be interesting to explore the correlation between the intrinsic mood of the meter and the mood of the poetry. It would also be worthwhile to investigate the relationship between poets and the meters they have employed. This could involve examining patterns in their favorite choices and attempting to identify the poet based on the composition itself. Additionally, there is potential for exploring whether the occurrence of _rasadosa_ (obstruction in the enjoyment of mellow) can be predicted through automatic identification of the composition's _chandas_. These investigations could contribute to a deeper understanding of the relationship between meters, poetry, and emotion in Sanskrit literature.
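Since the scansion step described above is deterministic, a meter-identification module essentially reduces to pattern lookup. The sketch below assumes the syllables have already been marked _laghu_/_guru_; the two signature strings follow the standard gana sequences of these meters, but the table is only an illustrative stub, far smaller than what real toolkits ship.

```python
# A minimal sketch of deterministic meter identification, assuming each
# syllable has already been scanned as laghu ("L") or guru ("G").
# The two signatures below are illustrative; real toolkits ship far larger
# tables and handle pada variants and anustup's looser rules.

METER_PATTERNS = {
    "vasantatilaka": "GGLGLLLGLLGLGG",          # 14 syllables per quarter
    "sardulavikridita": "GGGLLGLGLLLGGGLGGLG",  # 19 syllables per quarter
}

def identify_meter(weights):
    """Return the meter whose laghu/guru signature matches a quarter-verse."""
    for name, pattern in METER_PATTERNS.items():
        if weights == pattern:
            return name
    return None

print(identify_meter("GGLGLLLGLLGLGG"))  # -> vasantatilaka
```

The same exact-match lookup also explains how metrical analysis flags typographical errors: a corrupted word changes the laghu/guru sequence, so the quarter-verse no longer matches any known signature.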
### _alankara_ school

Basics: This school focuses on the use of figurative language or literary devices to enhance the beauty and aesthetic appeal of poetry. It includes the study of metaphors, similes, personification, hyperbole, and other literary devices. The most important exponent of this school is Bhamaha, who wrote the _kavyalankara_, one of the earliest treatises on poetic embellishments. Other notable figures associated with this school include Dandin, Udbhata, Rudrata and so on. There are two broad categories of _alankaras_: (1) _sabdalankara_: refers to the figure of sound that relies on the pleasing sound or choice of words, which loses its effect when the words are substituted with others of similar meaning. (2) _arthalankara_: is the figure of speech that is based on the meaning of words. Computational aspect: The process of identifying _sabdalankara_ is considered deterministic and involves verifying the occurrence of specific patterns of syllables or words. In contrast, the identification of _arthalankara_ presents a significant semantic challenge, even for experienced annotators. The feasibility and difficulty of developing supervised systems for this purpose can be better understood through empirical investigations. There is no system currently available for automated _alankara_ analysis for Sanskrit. To develop a supervised data-driven system, several important questions must be addressed. For instance, it is crucial to determine the amount of data that needs to be annotated, as well as the appropriate methods for marking _alankara_ annotations, such as whether they should be marked at the level of a complete sloka or of a phrase. To address these concerns, it is necessary to develop a standardized scheme for _alankara_ annotation. This would enable researchers to collect and analyze annotated data consistently and systematically, which would facilitate the development of accurate and reliable automated systems for _alankara_ analysis. _Sikasstaka_ analysis: In our analysis of the _Sikasstaka_, we have employed the _alankara_ categorization outlined by Mammata in his influential work "Kavyaprakasa". This text provides a comprehensive overview of 67 _alankaras_ or literary ornaments that can be utilized to embellish and enhance the expression of poetry. According to the _alankara_ school, _alankara_ is considered the soul of poetry, and without its incorporation, a poetic composition lacks vitality. In other words, _kavya_ without _alankara_ is likened to a lifeless entity, deemed by some poets of this school to be comparable to a widow.6 Furthermore, it is worth noting that certain poets from this tradition place greater emphasis on _arthalankara_, or figurative language, over _sabdalankara_, which deals primarily with sound patterns and word repetition. The _Sikasstaka_, a devotional composition in Sanskrit, exhibits the mood of separation (_vipralambha-srngara_) as its primary _rasa_ or mood. The author portrays himself as a devotee, and his beloved is Lord Govinda. The appropriate use of _alankaras_ (figures of speech) in this composition serves to express the _rasa_ without hindrance. The author employs _sabdalankaras_ (_anuprasa_7) and _arthalankaras_ (_rupaka_8, _upama_9, _vyatireka_10, _tulyayogita_11 and _visesokti_12) in _Sikasstaka_. The use of _rupaka_ (metaphor) and _upama_ is frequent, with the former appearing 6 times and the latter 4 times. The metaphors are employed to equate two things directly, while similes use "like" or "as" to make comparisons. Footnote 6: _alankarahina vidhavaiva sarasvati_ 2.334, Agnipurana. Poetry, which is also a form of the goddess Sarasvati, looks like a widow wearing no ornaments when it is not decorated by _alankaras_. Footnote 7: _varnasamyam anuprasah_ (_anuprasa_ is the repetition of similar sounds).
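The deterministic _sabdalankara_ checks mentioned above can be sketched directly. The toy functions below, operating on romanized, whitespace-tokenized text (a simplifying assumption; real systems work on sandhi-split phoneme sequences), test for a shared quarter-final suffix (_antyanuprasa_) and score consonant repetition as a crude _anuprasa_ signal.

```python
# A toy sketch of deterministic sabdalankara checks: antyanuprasa (shared
# quarter-final suffix) and a crude anuprasa (alliteration) score.
# Romanized input without sandhi splitting is a simplifying assumption.
from collections import Counter

def has_antyanuprasa(quarters, suffix_len=2):
    """True if every quarter-verse ends with the same suffix."""
    endings = {q.split()[-1][-suffix_len:] for q in quarters if q.split()}
    return len(endings) == 1

def anuprasa_score(line):
    """Share of consonant letters that belong to a repeated consonant."""
    cons = [c for c in line.lower() if c.isalpha() and c not in "aeiou"]
    counts = Counter(cons)
    return sum(n for n in counts.values() if n > 1) / max(len(cons), 1)

quarters = ["ceto-darpana-marjanam", "sreyah-kairava-candrika-vitaranam"]
print(has_antyanuprasa(quarters), round(anuprasa_score(quarters[0]), 2))
```

_arthalankaras_ such as _rupaka_ or _vyatireka_ admit no such surface test, which is why the text frames their identification as a supervised learning problem.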
In the first stanza, the author employs 5 metaphors to describe the victory of _Sri-krsna-sankirtana_. The use of these metaphors creates a garland-like effect, which serves as an offering to acknowledge the victory of _Sri-krsna-sankirtana_. Additionally, the use of the _antyanuprasa sabdalankara_ (a figure of speech where every word ends with a similar suffix) creates a rhythmic effect. In the second stanza, the author expresses his sorrow at not having an attachment to such a glorious _sankirtana_, where the Lord invests all his potencies without keeping any rules and regulations. Although there is every reason to get attached to such _sankirtana_, the author cannot get attached, which is expressed through the _uktanimitta visesokti alankara_. This _alankara_ is used to denote that all the causes are present for an action to happen, yet the action does not occur. In the third stanza, the author talks about the prerequisites of developing attachment for the _sankirtana_: humility and tolerance. The author employs the _vyatireka alankara_ to embellish this stanza by comparing humility with grass and tolerance with a tree. In this figure of speech, the subject of illustration (upamana) has higher qualities than the subject of comparison (upameya). Thus, the author says that one should be lower than grass and more tolerant than a tree. In the fourth stanza, multiple things (wealth, followers, beautiful women and the flowery language of the Vedas) are coupled with a single property, "_na kamaye_" (I do not desire). This device is called the _tulyayogita alankara_. In the fifth stanza, the author beautifully couples metaphor and simile to express his eternal servitude to Govinda. Just as water drops fall from a lotus into the mud, the author says, he has fallen from Govinda's lotus feet into the ocean of nescience (material existence) as a dust-like servant. The author requests Govinda to place him back at his lotus feet. In the sixth stanza, the _tulyayogita alankara_ is again employed to connect the different symptoms mentioned to a single property. In the seventh stanza, the author expresses his intense separation from Govinda by deploying three similes (_upama alankara_). The author uses these similes to express his love in the mood of separation. He compares a moment to a great millennium, his eyes to the rainy season, and the entire world to a void. This garland of similes (_malopama alankara_) serves as an offering to express the author's love for Govinda. The final stanza employs the _anuktanimitta-visesokti_ device, wherein a condition is expressed but the consequent effect is absent, and no explanation is offered for the lack of effect. The author enumerates several potential causes for being despised, yet the anticipated outcome of animosity does not manifest. Despite these conditions, the author maintains devotion to the Lord; the rationale for the absence of hatred is not mentioned here. In conclusion, the _Sikasstaka_ employs a range of _alankaras_ to express its mood (_rasa_) without hindrance. The appropriate use of metaphors, similes, and other figures of speech serves to create a garland-like effect that acknowledges Govinda's glory and expresses the author's devotion and love. Open issues and future direction: Moving forward, several directions for research on _alankaras_ can be explored. One crucial area is the formulation of the problem of identifying _alankaras_.
Since _alankaras_ can be assigned to complete or partial stanzas, or to sequences of syllables or pairs of words, determining the appropriate approach for identifying them is non-trivial. Different formulations, such as sentence classification, sequence labeling, or structured prediction, can be employed. Another interesting direction for research is to investigate which _alankaras_ are more beautiful and how we can compare two compositions based solely on their use of _alankaras_. Defining a basis for evaluating beauty in poetry is a challenging problem in itself. However, understanding which _alankaras_ contribute more to the aesthetic appeal of a poem and how to measure this appeal can be valuable for poets and scholars alike. Finally, exploring the correlation of the _alankara_ school with other schools of _kavyasastra_, such as _rasa_ and _dhvani_, is a promising area for future research. Understanding how these different aspects of poetry interact can provide insight into the complex mechanisms of poetic expression and perception. Overall, these future directions offer exciting opportunities to deepen our understanding of _alankaras_ in poetry.

### _rasa_ school

Basics: The _rasa_ school of Indian aesthetics places significant emphasis on the emotional impact of poetry on the reader or audience. Bharata Muni, the author of the _natyasastra_, is considered the most influential thinker associated with this school. His treatise on Indian performing arts includes a detailed discussion of _rasa_ theory, which posits that _rasa_ is the soul of poetry. According to Bharata, _rasa_ is the ultimate emotional pleasure that can be derived from a work of art. _rasa_ is the heightened emotional response to the text. Bharata also classified the various emotions depicted in performing arts into eight different _rasas_, or flavors, which are _srngara_ (romance), _hasya_ (comedy), _karuna_ (piteous), _raudra_ (anger), _vira_ (heroism), _bhayanaka_ (fear), _bibhatsa_ (disgust), and _adbhuta_ (wonder). The ninth emotion, _santa_ (peace), is also enlisted by other scholars.13 The _santa rasa_ is not considered compatible with drama, but other kinds of poetry can be composed in this _rasa_. The concept of _rasa_ theory remains an integral part of Indian culture and has greatly influenced the performing arts in India and beyond. Footnote 13: _na yatra duhkham na sukham na dveso napi matsarah, samah sarvesu bhutesu sa santah prathito rasah_, _natyasastra_. _Sikasstaka_ analysis: The _vibhavas_ employed in the composition indicate the _vipralambha srngara_, such as _nanuragah_ (no attachment to Krsna), _a-darsana_ (Krsna not being visible), _govinda-virahena_ (separation from Govinda), and _mat-prana-nathah_ (the master of my life). In conclusion, it is evident from our analysis of _vibhava_ and _anubhava_ in this composition that the _rasa_ evoked is _vipralambha srngara_, which is characterized by intense feelings of separation and longing. The author effectively employs _vibhavas_, such as the names and descriptions of Krsna, to activate the basic emotions of the reader or spectator, while his _anubhavas_, such as his intense feelings of separation and tears, serve to indicate the emotional response experienced. The composition is a powerful example of how the combination of _vibhava_ and _anubhava_ can be used to evoke _rasa_ in Indian classical literature. Open issues and future direction: The future direction of computational linguistics in identifying _rasa_ in poetry requires a comprehensive approach that covers all the crucial dimensions of the problem. Data annotation is one crucial aspect. It involves the manual tagging of texts with information such as the dominant _rasa_, _vibhava_, _anubhava_, and _vyabhicaribhava_. Possible approaches include using supervised machine learning algorithms to identify patterns in the text that are associated with particular emotions. This involves training a model on a labeled dataset of texts annotated with the dominant _rasa_ and the associated _vibhava_, _anubhava_, and _vyabhicaribhava_. Another approach is to use unsupervised machine learning algorithms such as clustering and topic modeling to identify the dominant emotions expressed in the text; this approach requires minimal human annotation. In addition, incorporating contextual information such as the author, genre, and historical period can also be useful in identifying the dominant _rasa_ in a text.
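As a concrete instance of the supervised formulation just sketched, the following minimal pipeline treats _rasa_ identification as 9-way text classification with character n-gram features. The two training verses and their labels are placeholders, not real annotated data.

```python
# A minimal sketch of supervised rasa classification over the 9 classes.
# The training texts and labels are placeholder stubs, not annotated data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

RASAS = ["srngara", "hasya", "karuna", "raudra", "vira",
         "bhayanaka", "bibhatsa", "adbhuta", "santa"]

train_texts = ["yugayitam nimesena caksusa pravrsayitam",
               "mrdanga-vadya-hasya-kolahala"]
train_labels = ["srngara", "hasya"]  # placeholder annotations

model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)
print(model.predict(["govinda-virahena me"]))
```

In a fuller system, the auxiliary _rti_ and meter outputs discussed in §4 would be concatenated to these text features before classification.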
### _rti_ school

Basics: In this study, the focus is on the concept of _rti_, also referred to as _marga_ (way), in poetry. This aspect of poetry has been heavily emphasized by the scholar Vamana, who considers _rti_ to be the soul of poetry. According to Vamana, there are three _rtis_, namely _vaidarbhi_, _gaudi_, and _pancali_, which were named after the geographical areas whose poets composed in these styles. The deterministic elements of _rti_ include the letters, the length and quantity of compounds, and the complexity or simplicity of the composition. The _vaidarbhi_ style is considered the most delightful style of poetry, consisting of all ten _gunas_15 explained by Vamana. The construction of _vaidarbhi_ contains shorter compounds.16 The _gaudi_ style is dominated by the _oja_ and _kanti gunas_,17 and lacks the _sukumarata_ and _madhurya gunas_. _sukumarata_ and _madhurya_ are the qualities which occur in a construction having fewer compounds and soft letter combinations; here, in the _gaudi_, the composition possesses long compounds and too many conjunct consonants. The _pancali_ style has a simple composition, having few or no compounds and a simple construction. Footnote 15: _samagragunopeta vaidarbhi_ 2.11, Kavyalankara-sutravrtti. The _prasada guna_ is present in all the _rasas_ and in all kinds of poetry and _rtis_ equally; in the _prasada guna_, words are comprehended as soon as they are heard. In conclusion, _rti_ plays an essential role in the composition of poetry, as emphasized by Vamana; _vaidarbhi_, _gaudi_, and _pancali_ each lend a distinct character to a composition.
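Because the deterministic elements of _rti_ named above (letters, compound length, softness of combinations) are directly measurable, a first-cut style guesser can be sketched as below. The thresholds are illustrative assumptions, not values derived from _kavyasastra_.

```python
# A toy heuristic over the deterministic riti markers: compound length and
# density of conjunct consonants. Thresholds are illustrative assumptions.
def conjunct_density(text):
    is_cons = [c.isalpha() and c not in "aeiou" for c in text.lower()]
    runs = sum(1 for a, b in zip(is_cons, is_cons[1:]) if a and b)
    return runs / max(len(text), 1)

def guess_riti(words):
    avg_len = sum(map(len, words)) / len(words)
    density = conjunct_density("".join(words))
    if avg_len > 12 or density > 0.35:   # long compounds, heavy conjuncts
        return "gaudi"
    if avg_len < 7 and density < 0.2:    # short words, soft combinations
        return "pancali"
    return "vaidarbhi"

print(guess_riti("kamala kuvalaya mala".split()))  # -> pancali
```

A real module would also score the presence of the individual _gunas_, since those, rather than raw surface statistics, are what Vamana's definitions appeal to.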
### _dhvani_ school

Basics: For this school, the suggested meaning (_vyangyartha_) is the soul of poetry: Anandavardhana, in his Dhvanyaloka, classifies _kavya_ into three levels according to the role of suggestion. In _uttama kavya_ (the fine poetry), the suggested meaning evokes more wonder in the reader than the expressed meaning (_vacyartha_); in _madhyama kavya_, the suggested meaning does not evoke wonder compared to _vacyartha_, while in _adhama kavya_, _dhvani_ is absent. Furthermore, _dhvani_ is divided into three broad categories: (1) _vastu-dhvani_, which implies some rare fact or idea, (2) _alankara-dhvani_, which suggests a figure of speech, and (3) _rasa-dhvani_, which evokes a feeling or mood in the reader. Computational aspect: Identifying _dhvani_ in _kavya_ computationally is a complex and challenging task due to the subjective nature of literary interpretation and the intricacy of the language. _dhvani_, which encompasses various layers of meaning, including literal, indirect, and metaphoric, presents a challenge to computational identification due to the absence of annotated data. Poetic devices such as simile, metaphor, and alliteration add to the complexity of creating a unified set of features for the identification of _dhvani_. Furthermore, identifying _dhvani_ requires additional information, such as the context, the mood of the composer, his biography, and his other compositions. The cultural and historical context further complicates the identification of _dhvani_, which often requires scholars' commentaries to decode the layers of meanings. Even humans may not be able to identify _dhvani_ without these inputs, indicating the level of expertise required for such tasks. At present, to the best of our knowledge, no system is available to provide _dhvani_ analysis for Sanskrit poetry. _Sikasstaka_ analysis: The _Sikasstaka_ is a set of 8 stanzas containing sublime instructions delivered without any specific audience. In contrast to the Bhagavad-gita, which is directed towards a particular audience, the _Sikasstaka_ is meant for inner contemplation and self-instruction. We can draw three possible _dhvanis_ from these 8 stanzas. _The first dhvani deals with the sambandha, abhidheya, and prayojana for a devotee._ _sambandha_ refers to the relation of the subject with the _abhidheya_ of devotion, _abhidheya_ refers to the means of devotion which is being stated, and _prayojana_ refers to the fruit of devotion. The first five stanzas of the _Sikasstaka_ discuss the _sambandha_ aspect, while the entire set of stanzas serves as the _abhidheya_. The last three stanzas focus on _prayojana_, which involves absorption in the remembrance of Krsna and the intense emotions that this experience brings. _The second dhvani presents the higher stages of devotional service._ The first five stanzas describe _sadhana bhakti_, or the devotional service that is practiced through discipline and rules. The last 3 stanzas discuss _prema_, the ultimate perfection of devotion. _The third dhvani is the biography of the author._ In the initial stanza of the _Sikasstaka_, the author's proficiency in Sanskrit grammar and poetry is conveyed by the phrase "_vidya-vadhu-jivanam_." The Caitanya Caritamrta (CC) further provides a detailed description of the author's logical expertise, wherein he is seen to skillfully refute and reestablish his own arguments, as well as his prowess in teaching devotion to Krsna through grammar. Additionally, the CC narrates an incident where the author defeated the great poetry scholar Kesava Kasmiri. The phrase "_param vijayate sri-krsna-sankirtanam_" illustrates the author's victorious preaching of the message of _sankirtana_, or congregational chanting of the holy names of Krsna. The CC even goes on to mention how animals like tigers, deer and elephants were inspired to participate in _sankirtana_, resulting in a blissful embrace between a tiger and a deer. The second stanza of the _Sikasstaka_ reveals the author's egalitarian nature, as he considered everyone to be eligible to participate in _sankirtana_ without any discrimination. The third stanza introduces the author's _bija-mantra_, which urges one to develop a taste for _sankirtana_ and show mercy towards all living entities. The author's adherence to the principle of humility is evident in his desire for devotees to follow the same. The fourth stanza recounts the author's renunciation of wealth, followers, and even his beautiful wife at the age of 24 to embrace the life of a _sannyasi_, or renunciant. The last three stanzas of the _Sikasstaka_ reflect the author's mood of separation in the last 24 years of his life. The CC provides a vivid description of the same.
In summary, the author's teachings in the _Sikasstaka_ are a reflection of his own life experiences and practices. Open issues and future direction: In future research, it is crucial to address the challenges of identifying suggestive meaning in poetry by formulating a well-defined problem statement. One potential direction is to explore the use of machine learning techniques to identify _dhvani_ in _kavya_. This could involve developing annotation schemes that capture different levels of meaning, including literal, indirect, and metaphoric, which can be used to train machine learning models. Evaluation of performance is another important direction for future research. Metrics could be developed to measure the accuracy of the system's predictions of _dhvani_. These metrics could be based on human evaluations or derived automatically by comparing the system's output to expert annotations. One additional aspect that could be explored in future research is the use of multi-lingual and cross-lingual approaches for identifying _dhvani_. Many works of Sanskrit poetry have been translated into other languages, and it would be interesting to investigate whether the same _dhvani_ can be identified across languages, and whether insights from one language can be used to improve the identification of _dhvani_ in another language. Lastly, researchers could explore ways of integrating multiple sources of information to improve the accuracy of _dhvani_ identification. For example, incorporating biographical information about the composer, their other works, and the cultural and historical context of the composition could help to disambiguate multiple possible meanings and identify the intended _dhvani_.

### _vakrokti_ school

Basics: Kuntaka's _vakrokti_ school is a prominent school of Indian poetics that emphasizes the use of oblique expressions in poetry. Kuntaka's seminal work, Vakroktijivita, lays out the principles of this school and classifies the levels of expression of _vakrokti_ into six categories. The first category is phonetic figurativeness, which involves the skillful use of syllables to embellish the sound of the poem. This is closely related to the _rtis_ and to the _anuprasa_ (alliteration) _alankara_, which are types of _sabdalankara_. The second category is lexical figurativeness, which involves the use of oblique expressions at the level of the root word without its suffix. The third category is the obliqueness in the suffixes. The fourth category is sentential figurativeness, which involves the use of oblique expressions at the level of the sentence: one has to rely on a complete sentence to understand the oblique meaning, and the different _arthalankaras_ are categorized under this category. The fifth category is sectional figurativeness, which involves a twist in the chapter-wise arrangement of the poetry. The sixth category is compositional figurativeness, which involves understanding the oblique meaning by relying on the complete composition. Kuntaka considers _vakrokti_ to be the soul of poetry, and argues that it is the only embellishment possible to the word and its meaning. The use of _vakrokti_ is essential to create effective poetry, and mastery of the six levels of expression is therefore fundamental in the _vakrokti_ school.
Computational aspect: Upon analysis of Kuntaka's _vakrokti_ school, it becomes apparent that his theory of oblique expression incorporates various elements of the previously established schools of _rti_, _rasa_, _dhvani_, and _alankara_. Despite this, Kuntaka's contribution to the field lies in his systematic classification of 6 levels of oblique expression. However, the computational challenges posed by the deep semantics of this theory are similar to those discussed for the other schools. Therefore, it is difficult to develop a module capable of accurately identifying _vakrokti_ in a composition. To date, no computational system exists for computing _vakrokti_, as it remains a complex task. _Sikasstaka_ analysis: The present analysis examines the use of oblique expressions in the _Sikasstaka_. The study identifies all 6 categories of oblique expressions deployed in the poem. The first category, phonetic figurativeness, concerns the skillful use of syllables to enhance the poetic sound, and we find that the _Sikasstaka_ employs various types of _anuprasa alankaras_. The second category, word-root-level figurativeness, relates to the use of oblique expressions at the level of individual words, and the poem utilizes the _rupaka alankara_ for this purpose. The third category, sentential figurativeness, focuses on oblique expressions at the level of sentences, and employs several _arthalankaras_. The fourth category, contextual figurativeness, relies on the poem's context and utilizes a series of _rupaka alankaras_ to create a garland-like effect. The fifth category, sectional figurativeness, involves understanding the oblique meaning by examining the poem's different sections: the _Sikasstaka_ uses this category to discuss the stages of _bhakti_ and the concepts of _sambandha_, _abhidheya_, and _prayojana_ in an oblique manner. The final category, compositional figurativeness, refers to the use of the complete composition to convey oblique meaning: we find that the author speaks about his own biography across the entire composition in an oblique manner. Overall, the study reveals that the _Sikasstaka_ employs all six categories of oblique expressions, highlighting the author's mastery of poetic devices and his ability to convey complex spiritual ideas in a highly nuanced and layered manner. Open issues and future direction: As the field of NLP advances, there is growing interest in exploring the use of oblique expressions in computational models. In particular, the study of oblique expressions can contribute to the development of natural language generation, sentiment analysis, and machine translation systems. One potential area of research is the identification and classification of oblique expressions in large corpora of texts. Machine learning algorithms can be trained to recognize patterns of oblique expression usage and to distinguish between different types of oblique expressions, such as the six categories identified in the present analysis of the _Sikasstaka_. This would require the creation of annotated datasets that can be used for training and testing purposes. Another area of research is the development of computational models that can generate oblique expressions in natural language. Such models could be trained to generate oblique expressions based on the context of the text and the intended meaning. This would require a deep understanding of the different types of oblique expressions and their functions, as well as the ability to generate language that is both grammatically correct and semantically appropriate.
Despite the potential benefits of using oblique expressions in computational models, there are also several challenges that need to be addressed. One of the main challenges is the ambiguity of oblique expressions, which can make it difficult for computational models to accurately interpret their meaning. Another challenge is the variability of oblique expression usage across different languages and cultural contexts, which requires a more nuanced and context-specific approach.

### _aucitya_ school

Basics: Ksemendra, a renowned poet of the 11th century, introduced the concept of _aucitya_ in his seminal work _aucitya_-vicara-carca. _aucitya_, meaning appropriateness, suggests that the elements of poetry, such as _rasa_, _alankara_, and _rti_, should be used in an appropriate manner.20 _aucitya_ is considered the soul of a poem, and appropriateness is regarded as the secret of beautiful composition. This theory has been widely accepted by scholars without any argument and is often referred to as the 'Theory of Coordination', since it regulates all aspects of _natyasastra_. _aucitya_ is a tool that aids writers in weighing their poem according to their will and effectively delivering their ideas. Ksemendra categorized _aucitya_ into several classes, namely _pada_ (word form), _vakya_ (sentence), _pradhanartha_ (meaning of the whole composition), _guna_ (qualities), _alankara_ (poetic figures), _rasa_ (sentiments), etc. In conclusion, Ksemendra's concept of _aucitya_ has had a significant impact on the field of poetry. The idea of appropriateness and the use of poetry elements in a balanced manner are essential components of successful composition. The categorization of _aucitya_ into multiple classes provides a comprehensive framework for analyzing and understanding the nature of poetry. Therefore, _aucitya_ is an invaluable tool for writers, enabling them to deliver their ideas efficiently. Footnote 20: _ucitasthanavinyasad alankrtir alankrtih, aucityad acyuta nityam bhavantyeva guna gunah_ (The figures are decorative only when they do not overstep appropriateness in the poetry. The qualities can be counted as qualities in the real sense only when they follow appropriateness.) 1.6, _aucitya_-vicara-carca. Computational aspect: According to the available literature, no computational system currently exists that provides an analysis of _aucitya_. In order to develop a module for _aucitya_ analysis, we propose considering the mutual compatibility of _rti_, _rasa_, and _alankara_ in the initial implementation. For instance, _vaidarbhi_ is considered a compatible _rti_ when deploying the _srngara rasa_, while _gaudi_ is appropriate for the _vira rasa_. Additionally, certain _alankaras_ are suitable for specific _rasas_. At a later stage, we may annotate corpora and build data-driven metrics to automatically measure the _aucitya_ of a composition. However, to accomplish this, it is essential to establish standardized data annotation policies in advance. One major challenge in evaluating poetic appropriateness across various languages, cultures, and time periods is the lack of standardized rules or models. Furthermore, the subjective nature of poetic appropriateness makes it challenging to reach a consensus among scholars and experts on the definition of _aucitya_.
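The mutual-compatibility idea proposed above for an initial _aucitya_ module can be sketched as a simple rule table. The entries below are stubs drawn from the two examples in the text (_srngara_-_vaidarbhi_, _vira_-_gaudi_); a real table would have to be compiled from _kavyasastra_ sources.

```python
# A sketch of the mutual-compatibility check proposed for a first aucitya
# module. The compatibility table is an illustrative stub, not a compiled
# resource from kavyasastra treatises.
COMPATIBLE_RITI = {
    "srngara": {"vaidarbhi"},   # tender mood -> delicate style
    "vira": {"gaudi"},          # heroic mood -> forceful style
}

def aucitya_flags(rasa, riti, alankaras):
    """Return a list of detected incongruities (empty list = appropriate)."""
    flags = []
    if riti not in COMPATIBLE_RITI.get(rasa, {riti}):
        flags.append(f"riti '{riti}' is unusual for rasa '{rasa}'")
    if rasa == "karuna" and "hasya-adi" in alankaras:  # stub rule
        flags.append("comic ornamentation clashes with pathos")
    return flags

print(aucitya_flags("srngara", "vaidarbhi", ["rupaka", "upama"]))  # []
print(aucitya_flags("srngara", "gaudi", []))                       # one flag
```

Data-driven metrics would eventually replace such hand-written rules, but a rule table of this shape is a natural interim representation of the school's prescriptions.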
_Sikasstaka_ analysis: Our analysis suggests that the _Sikasstaka_ is an exemplary composition that demonstrates _aucitya_. The composition employs the _vipralambha srngara rasa_ and the _vaidarbhi rti_ with the _madhurya guna_, which are perfectly compatible. Additionally, the _alankaras_ are deployed in an appropriate manner, without overwhelming the expression of the _rasa_. The composition's _rasa_ remains consistent and does not change abruptly. Instead, it gradually intensifies throughout the composition, reaching its pinnacle in the last stanza. This uniformity in the _rasa_'s expression contributes to the effectiveness of the composition in evoking the desired emotional response. Open issues and future direction: Moving forward, the development of a computational system for _aucitya_ analysis presents an exciting area of research. In terms of future directions, several questions can be explored. Firstly, the effectiveness of the proposed mutual compatibility approach between _rti_, _rasa_, and _alankara_ needs to be evaluated by analyzing the performance of the initial module. Furthermore, additional features could be incorporated to improve the accuracy of _aucitya_ analysis, such as word choice, meter, and rhyme patterns. Moreover, there is a need to establish standardized guidelines for the annotation of corpora in different languages and cultural contexts. This would facilitate the creation of a large, diverse dataset that can be used to train the _aucitya_ analysis system. Additionally, research could focus on the development of data-driven metrics that automatically evaluate poetic appropriateness. Another potential direction for research is to investigate the interplay between language, culture, and poetic appropriateness. This involves analyzing the extent to which poetic conventions vary across different languages and cultures and how these variations affect the evaluation of _aucitya_. The impact of historical and temporal changes on poetic conventions can also be explored to understand the evolution of poetic conventions over time. Finally, collaboration between experts in literature and computer science can help establish a consensus on what constitutes _aucitya_. This could lead to the development of a standardized set of rules and models for evaluating poetic appropriateness, which could be applied across different languages, cultures, and time periods.

## 4 The proposed framework

This section presents the description of the proposed framework for the analysis and classification of poetry. The framework is designed to categorize compositions into 3 classes that demonstrate high, medium, or low conformity with the standards of fine Sanskrit poetry, as articulated by experts in _kavyasastra_. Figure 1 depicts the proposed framework, which employs a hierarchical ensembling architecture consisting of 7 modules. The violet modules are deterministic, while the remaining modules are learning-based and currently supported by human experts. Each module can be trained independently, and the outputs and features of the previous modules are utilized to enhance the performance of the next-phase modules. The _aucitya_ module is responsible for learning the weight of each module by employing the outputs of all modules except the meter module. We illustrate the proposed framework by showing the analysis of the _Sikasstaka_ composition. The first layer of the framework consists of 3 modules, namely the _rti_, meter and _alankara_ modules. The _rti_ module is the first module in the proposed framework, and it is responsible for identifying the _rti_ of a composition. This is deterministic, and the classification is enabled by various clues provided by the _kavyasastra_.
The next module, the meter identification module, is also deterministic, and commendable efforts have been made by the SCL community in recent years to develop user-friendly toolkits for the identification of meter (Neill, 2023; Rajagopalan, 2018; Terdalkar and Bhattacharya, 2023). In the _alankara_ module, the identification of _sabdalankara_ is considered deterministic, and it involves verifying the occurrence of specific patterns of syllables or words. In contrast, the identification of _arthalankara_ presents a significant semantic challenge, even for experienced annotators. The module relies on a supervised paradigm of binary classification to identify whether any _arthalankara_ is present or not. The _rasa_ and _vakrokti_ (oblique) modules are the next two modules in the framework. For the _rasa_ module, a classification problem with 9 classes of _rasas_ is framed. These _rasas_ are _srngara_ (love and beauty), _hasya_ (comedy), _karuna_ (piteous), _raudra_ (anger), _vira_ (heroism), _bhayanaka_ (fear), _bibhatsa_ (disgust), _adbhuta_ (wonder), and _santa_ (peace). It is argued that _rti_ and meter may be helpful and can serve as auxiliary information to identify _rasa_; therefore, they are used as features in the _rasa_ module. The _vakrokti_ module is framed as a binary classification problem to identify the presence of oblique expression. Here, the _alankara_ module has a strong correlation in contributing to oblique identification. The next layer of the framework is the _dhvani_ module, which is also a classification problem with 3 classes: direct meaning, indirect meaning, and suggestive meaning. Suggestive meaning has a strong correlation with _rasa_, _alankara_, and _vakrokti_; therefore, these features are plugged into the _dhvani_ module. Finally, the _aucitya_ module learns the weight of each module using a supervised paradigm. These weighted features of all modules are used for the final classification of the composition into three classes that demonstrate high, medium, or low conformity with the standards of fine Sanskrit poetry, as articulated by experts in _kavyasastra_. In the absence of annotated data, human experts are relied upon for the supervised modules. In the future, it is hoped that all the modules can be automated, and researchers can consider training these modules in an end-to-end fashion. Overall, the proposed framework presents a systematic approach to analyze and classify poetry into different categories based on various features. The framework has the potential to assist poets, scholars, and critics in the evaluation of compositions and provide insights into the nuances of poetry.
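The hierarchical ensembling just described can be summarized in code. The sketch below wires stub modules together in the dependency order given in the text (_rti_ and meter feed _rasa_, _alankara_ feeds _vakrokti_, all three feed _dhvani_, and the _aucitya_ stage weights everything except the meter output); every module body and weight vector is a dummy placeholder, not a trained component.

```python
# A schematic sketch of the hierarchical ensemble. Every module body and
# the weight vector are dummy placeholders standing in for trained models
# or human experts.
import numpy as np

def riti_module(text):          return np.array([1.0, 0.0, 0.0])  # vaidarbhi/gaudi/pancali
def meter_module(text):         return np.array([1.0])            # meter id (stub)
def alankara_module(text):      return np.array([1.0])            # arthalankara present?
def rasa_module(text, aux):     return np.array([0.9] + [0.0] * 8)  # 9 rasa scores
def vakrokti_module(text, aux): return np.array([1.0])
def dhvani_module(text, aux):   return np.array([0.1, 0.2, 0.7])  # direct/indirect/suggestive

def classify(text, weights):
    r, m, a = riti_module(text), meter_module(text), alankara_module(text)
    rs = rasa_module(text, np.concatenate([r, m]))   # riti + meter as auxiliary input
    vk = vakrokti_module(text, a)                    # alankara feeds vakrokti
    dh = dhvani_module(text, np.concatenate([rs, a, vk]))
    feats = np.concatenate([r, a, rs, vk, dh])       # meter excluded, as in the text
    score = float(feats @ weights[: len(feats)])     # aucitya-learned weights
    return ["low", "medium", "high"][int(np.clip(score, 0, 2))]

print(classify("verse", np.full(32, 0.1)))
```

The final thresholding into three conformity classes is one possible reading of the weighted aggregation; the paper leaves the exact form of the _aucitya_ combiner to future supervised training.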
Is _Sikasstaka_ an _uttama kavya_? The criterion for a composition to be qualified as an _uttama kavya_ is the presence of _dhvani_, which evokes wonder in the reader.21 _dhvani_ is the soul of a _kavya_ and is of three types: _vastu-dhvani_, _alankara-dhvani_, and _rasa-dhvani_. _vastu-dhvani_ signifies rare facts or ideas; _alankara-dhvani_ indicates figures of speech, and _rasa-dhvani_ evokes feelings or moods in the reader. In the context of _vastu-dhvani_, the _Sikasstaka_ explores the _sambandha_, _abhidheya_, and _prayojana_ for a devotee, the gradations of devotional service, and the biography of the author. The composition provides an insightful glimpse into the key elements of _bhakti_ and its various stages, making it a source of rare knowledge for the reader. In terms of _alankara-dhvani_, the _Sikasstaka_ employs a range of figures of speech such as metaphors, similes, and other literary devices to express its mood (_rasa_) without hindrance. The appropriate use of these devices creates a garland-like effect that acknowledges Govinda's glory and expresses the author's devotion and love. The composition's poetic beauty is further enhanced by its rhythm and musicality, making it a delight for readers. Moreover, the _Sikasstaka_ is a powerful example of how the combination of _vibhava_ and _anubhava_ can be used to evoke the _vipralambha srngara rasa_, making it an exemplar of _rasa-dhvani_. The composition conveys the feeling of separation from the divine, which is central to the Vaisnava tradition of _bhakti_, and generates a deep emotional response in the reader. In conclusion, the evidence presented above establishes that the _Sikasstaka_ is indeed an _uttama kavya_. Its exploration of rare knowledge, its _alankara_ beauty, and its evocation of deep emotions through _rasa-dhvani_, which renders greater amusement than the other meanings, make it a masterpiece of Sanskrit literature. Footnote 21: _idamuttamamatisayini vyangye vacyad dhvanirbudhaih kathitah_ (When the suggested meaning excels the expressed meaning, the wise call that best poetry _dhvani_.)

## 5 Annotations and web-based application

We employ compound, dependency, meter, _rasa_, _alankara_, _rti_ and _anvaya_ annotations to annotate the _Sikasstaka_. The primary objective of this annotation exercise is to facilitate a literal understanding of the composition. By providing grammatical information, this annotation serves as a baseline to evaluate the _dhvani_, or poetic suggestion, of the composition. The stanzas were annotated by 4 annotators, each an expert in one of the aforementioned tasks. In cases where there was a discrepancy in the annotation process, the annotators discussed and resolved the issue. All annotators hold a minimum academic qualification of a Master of Arts in Sanskrit. The annotations are publicly available via an interactive web-based platform as well as in a standard text annotation format. Web-based application: Figure 2 shows our web application that offers an interactive module to illustrate annotations and poetry analysis. The motivation to build a web-based application to illustrate poetry analysis (Delmonte, 2015; Delmonte and Prati, 2014) is that it can help to make poetry analysis more accessible, engaging, and effective, providing students with a rich learning experience that encourages deeper understanding and appreciation of this art form. This template can serve as a proxy to illustrate how an automated system should produce the poetry analysis. There are 7 dimensions to our analysis, which can be classified into two broad categories, namely, stanza-level (metrical and _alankara_ analysis) and global-level (_rti_, _vakrokti_, _rasa_, _dhvani_ and _aucitya_ analysis). As the names suggest, stanza-level analysis shows the analysis for each stanza, and global analysis considers the complete composition to provide the respective analysis. The stanza-level modules are shown in blue color and the rest in purple. Our platform can provide the analysis in two possible ways.
First, a user can choose to analyze any stanza through all the modules. Otherwise, the user can also opt for choosing any module and analyzing all the stanzas from the perspective of the chosen module. Our web template leverages the flex module to provide interactive features.

## 6 Conclusion

In conclusion, this article presents an innovative approach to bridge the gap between Sanskrit poetry and computational linguistics by proposing a framework that classifies Sanskrit poetry into levels of the characteristics of fine composition. The proposed framework takes a human-in-the-loop approach, combining the deterministic aspects delegated to machines and the deep semantics left to human experts. The article provides a deep analysis of _Sikasstaka_, a Sanskrit poem, from the perspective of the 6 prominent _kavyasastra_ schools, along with _chandassastra_, to illustrate the proposed framework. The analysis establishes that the _Sikasstaka_ is indeed an _uttama kavya_ composition. Moreover, the article provides compound, dependency, _anvaya_, meter, _rasa_, _alankara_, and _rti_ annotations for the _Sikasstaka_ and a web application to illustrate the poem's analysis and annotations. The roadmap of the proposed framework opens new possibilities for the automatic analysis and classification of Sanskrit poetry while preserving the rich tradition of Sanskrit poetry. Figure 2: Web-based interactive modules to illustrate annotations and poetry analysis. Here, we showcase the verse-level analysis for the _alankara_ module.

### Ethics Statement

Our work proposes a roadmap framework to evaluate the aesthetic beauty of Sanskrit poetry, which we believe will be valuable to individuals interested in the analysis of Sanskrit poetry. We have taken great care to ensure that our analysis is conducted ethically and responsibly. We do not anticipate any adverse effects of our framework and analysis on any community.

## Acknowledgement

We would like to express gratitude to several individuals who contributed to the successful completion of this research project. We are grateful to Chaitanya Lakkundi for his helpful discussions on grammar. We would like to extend appreciation to Dr. Anupama Rayali, Dr. Pavankumar Salturi and Dr. Saurabh Dwivedi (Buddh P G College, Kushinagar, Uttar Pradesh) for their assistance in annotating the _Sikasstaka_. We are also thankful to Sugyan Kumar Mahanty for providing valuable inputs on the metrical analysis. We thank Hrishikesh Terdalkar for helping us with the metrical analysis and web design. We acknowledge the support and assistance of Gauranga Darshan Prabhu, Dr. Amba Kulkarni (University of Hyderabad), Leela Govind Prabhu, Sriram Krishnan, Vedantasutra Prabhu and Venu Gopal Prabhu in connecting with different scholars of the poetry schools. We are also grateful to Shatavadhani R. Ganesh for his insightful discussions in clarifying our doubts. We appreciate the anonymous reviewers for their insightful suggestions that helped in enhancing this work. The work of the first author is supported by the TCS Fellowship under Project TCS/EE/2011191P.
2307.08537
A Weierstrass Representation Formula for Discrete Harmonic Surfaces
A discrete harmonic surface is a trivalent graph which satisfies the balancing condition in the 3-dimensional Euclidean space and achieves energy minimizing under local deformations. Given a topological trivalent graph, a holomorphic function, and an associated discrete holomorphic quadratic form, a version of the Weierstrass representation formula for discrete harmonic surfaces in the 3-dimensional Euclidean space is proposed. By using the formula, a smooth converging sequence of discrete harmonic surfaces is constructed, and its limit is a classical minimal surface defined with the same holomorphic data. As an application, we have a discrete approximation of the Enneper surface.
Motoko Kotani, Hisashi Naito
2023-07-17T14:56:12Z
http://arxiv.org/abs/2307.08537v3
# A Weierstrass representation formula for discrete harmonic surfaces

Motoko Kotani1 and Hisashi Naito2 On the occasion of Jean-Pierre Bourguignon's 75th birthday Footnote 1: AIMR, Tohoku University, JAPAN. Footnote 2: Graduate School of Mathematics, Nagoya University, JAPAN. **Abstract** A discrete harmonic surface is a trivalent graph which satisfies the balancing condition in the 3-dimensional Euclidean space and achieves energy minimizing under local deformations. Given a topological trivalent graph, a holomorphic function, and an associated discrete holomorphic quadratic form, a version of the Weierstrass representation formula for discrete harmonic surfaces in the 3-dimensional Euclidean space is proposed. By using the formula, a smooth converging sequence of discrete harmonic surfaces is constructed, and its limit is a classical minimal surface defined with the same holomorphic data. As an application, we have a discrete approximation of the Enneper surface.

## 1 Introduction

The purpose of the present paper is to construct a discrete version of the well-known _Weierstrass representation formula_ for a minimal surface and to show its applications. A minimal surface is defined as a minimizing surface of the area functional under local variations, and it is a central research object in Geometric Analysis. Minimal surfaces are seen everywhere in nature and are also used in engineering products because they enjoy both effectiveness and natural beauty at the same time. In recent years, various notions of _discrete surfaces_ have been proposed. The _triangularizations/polygonalizations_ of a topological surface have traditionally been used both in pure and applied mathematics and have proved useful throughout history. Discretizations of differential-geometric surfaces in geometric analysis by U. Pinkall and K. Polthier [PP, BP], and of integrable systems led by A. Bobenko et al. [BBS, BH, BPW], are among those challenges. The present paper is based on the _discrete surface theory_ introduced in [KNO]. The notion of a harmonic discrete surface and the _balancing condition_ are proposed there. The condition for a discrete surface to be minimal under the area variation formula is also given. It should be noted that isometry is too rigid a property in discrete surface theory, while it is important in classical surface theory. Surfaces with harmonicity form a much richer class to be studied in the discrete case than minimal surfaces, i.e. area-minimizing surfaces among isometric surfaces in the Euclidean space. We call discrete surfaces in the Euclidean space which minimize the energy functional, and which as a consequence satisfy the balancing condition, _discrete harmonic surfaces_. The convergence of a sequence of subdivided discrete surfaces of a given discrete surface is studied in [T, KNT]. Although we proved the energy monotonicity formula in the convergence, the limit _surface_ has singularities in general. We found there that the balancing condition plays an important role in controlling the regularity of the limit surface. An application of the Weierstrass representation formula is to establish a method to analyze the singularities for a good class of discrete surfaces. We expect we should have better control in the case of minimal surfaces or harmonic surfaces, which have a "Weierstrass representation formula" given by holomorphic data.
In order to establish a kind of _Weierstrass representation formula_ for discrete harmonic surfaces, we need a notion of _conformal_ structure on a discrete surface, and holomorphic quadratic differentials with respect to the conformal structure. A notion of conformal structure for triangularizations of a topological surface was introduced by W. Thurston [Th], motivated by the discrete approximation of the Riemann mapping between complex surfaces. He noticed that a map between two surfaces preserving their circle packings can be used as a discrete conformal map, because it preserves circles and contact angles. The notion led to the discrete Riemann mapping theorem [RS] between complex surfaces and to discrete complex function theory (cf. [St]). On the other hand, the notions of holomorphicity and holomorphic differentials were introduced to study discrete surfaces by U. Pinkall and W. Y. Lam in [PL] and have been used for the study of minimal surfaces in [L, L2]. By adapting those notions to discrete harmonic surfaces in our setting, we succeed in constructing a Weierstrass representation formula for discrete harmonic surfaces from the data of their Gauss map and holomorphic quadratic differentials, and in constructing a converging sequence of discrete harmonic surfaces, obtained by iterated subdivision of a given discrete harmonic surface.

**Theorem** (Weierstrass representation formula for a discrete harmonic surface, Theorem 6.2).: _Given a topological planar trivalent graph \(M=(M,V,E)\), where \(V\) is the set of vertices and \(E\) is the set of edges, with respect to a complex coordinate \(z\colon M\to\mathbb{C}\), for a holomorphic function \(g\) on \(\mathbb{C}\) and a holomorphic quadratic differential \(q\) on \(M\) associated with \(g\), let us define a map \(X\colon M\to\mathbb{R}^{3}\)_

\[X=\int\mathrm{Re}\ dF,\]

_with_

\[\begin{split} dF(e_{ab})&=\frac{\boldsymbol{i}q(e_{ab})}{dg(e_{ab})}(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\\ &=-\overline{\tau}(e_{ab})(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\\ &(e_{ab}\in E\text{ which connects the vertices $v_{a}$ and $v_{b}$ in $V$}).\end{split} \tag{1.1}\]

_It defines a discrete harmonic surface in \(\mathbb{R}^{3}\). The above equation is called the_ Weierstrass representation formula for a discrete harmonic surface. As an application of the theorem, we have

**Theorem**.: _A smoothly converging sequence of discrete harmonic surfaces \(X_{n}\colon M_{n}\to\mathbb{R}^{3}\) with a given initial discrete harmonic surface \(X_{0}\colon M_{0}\to\mathbb{R}^{3}\) with the holomorphic data \(g\) and \(q\) is constructed, where the \(M_{n}\) are iterated subdivisions of \(M_{0}\) in \(\mathbb{C}\). The limit is a classical minimal surface given by the classical Weierstrass representation with the same holomorphic data._

The limit surface can be considered as a hidden continuous object in a given discrete surface. We expect the method to be useful for drawing minimal surfaces in computer graphics. Actually, in the last section, we show as an example a sequence of discrete harmonic surfaces which converges to the Enneper surface, a famous classical minimal surface.
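To make formula (1.1) concrete, the following sketch accumulates \(\operatorname{Re}dF\) over a spanning tree of a small trivalent graph. The graph (the complete graph \(K_{4}\), which is 3-regular), the choice \(g(z)=z\), and the constant edge values \(q\) are illustrative placeholders; with arbitrary \(q\) the form \(\operatorname{Re}dF\) need not be closed, which is why a fixed spanning tree is used, and the paper's actual construction of \(q\) associated with \(g\) and its subdivision scheme are not reproduced here.

```python
# A schematic numerical sketch of formula (1.1). Vertex coordinates z[v]
# lie in C, g(z) = z is an illustrative choice, and the edge values q are
# constant placeholders; X = Re(sum of dF) is accumulated along a BFS tree.
from collections import deque
import numpy as np

def dF(za, zb, q):
    ga, gb = za, zb                      # g(z) = z
    factor = 1j * q / (gb - ga)          # i q(e_ab) / dg(e_ab)
    return factor * np.array([1 - ga * gb, 1j * (1 + ga * gb), ga + gb])

def realize(z, edges, q, root=0):
    """Accumulate X = Re(integral of dF) from the root along a BFS tree."""
    adj = {v: [] for v in z}
    for a, b in edges:
        adj[a].append(b); adj[b].append(a)
    X, queue = {root: np.zeros(3)}, deque([root])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in X:
                X[b] = X[a] + dF(z[a], z[b], q[frozenset((a, b))]).real
                queue.append(b)
    return X

z = {0: 0 + 0j, 1: 1 + 0j, 2: 1j, 3: 1 + 1j}
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(realize(z, edges, {frozenset(e): 0.1 for e in edges}))
```

Note that reversing an edge flips the sign of \(dg\) while the symmetric vector factor is unchanged, so \(dF(e_{ba})=-dF(e_{ab})\) holds automatically and the accumulation is orientation-consistent.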
To be more precise, for a given isometric immersion \(X\colon M\to\mathbb{R}^{3}\) of a smooth surface \(M\) with the unit normal vector field \(N\colon M\to S^{2}\), consider its smooth deformation \[X_{t}=X+tN\quad\text{for }t\in(-\varepsilon,\varepsilon).\] For \(A(t)=\text{Area}(X_{t})\), the _area variation formula_ is expressed by the _Steiner formula_ \[dA(t)=\left(1-2tH+4t^{2}K+O(t^{3})\right)dA\] for a local variation \(X_{t}\) of a given surface \(X_{0}\), with \(H\) and \(K\) its _mean curvature_ and _Gauss curvature_. Hence a minimal surface is characterized as a surface with mean curvature \(H=0\). It is known that a differentiable surface admits an isothermal coordinate \((u,v)\) with which the first fundamental form is expressed as \[I=e^{2\lambda}(du^{2}+dv^{2}).\] With respect to the complex coordinate \(z=u+\mathbf{i}v\), the Laplacian is given as \[\Delta=e^{-2\lambda}\left(\frac{\partial^{2}}{\partial u^{2}}+\frac{\partial^{2}}{\partial v^{2}}\right).\] The structure equations for a smooth surface \(X\colon M\to\mathbb{R}^{3}\), with \(N\) the unit normal vector field and \(Q=\langle X_{zz},N\rangle\), are given as follows. **Theorem 2.1** (Structure equations for a smooth surface).: \[X_{z\overline{z}} =\frac{1}{4}e^{2\lambda}HN,\] \[X_{zz} =2\lambda_{z}X_{z}+QN,\] \[N_{z} =-2e^{-2\lambda}QX_{\overline{z}}-\frac{1}{2}HX_{z},\] _where \(H\) is the mean curvature of \(X\). The integrability conditions are given for \(Q\), \(H\) and \(\lambda\) by_ \[\lambda_{z\overline{z}}-e^{-2\lambda}Q\overline{Q}+\frac{1}{16}e^{2\lambda}H^{2}=0,\] \[4Q_{\overline{z}}=e^{2\lambda}H_{z}.\] When \(M\) is a minimal surface, the above equations become \[X_{z\overline{z}} =0,\] \[X_{zz} =2\lambda_{z}X_{z}+QN,\] \[N_{z} =-2e^{-2\lambda}QX_{\overline{z}}.\] That indicates \(X\) is a vector-valued harmonic function and \(X_{z}\) is a vector-valued holomorphic function satisfying \[\langle X_{z},X_{z}\rangle=0.\] Additionally, \(Q\,dz\,dz\) is a holomorphic quadratic differential, due to \(X_{z\overline{z}}=0\). The Weierstrass representation for a minimal surface is given as **Theorem 2.2** (Weierstrass representation formula for a minimal surface).: _With a holomorphic quadratic differential \(Q\) and a meromorphic function \(g\), let us define a vector-valued function \(X\colon M\to\mathbb{R}^{3}\)_ \[X=\operatorname{Re}\int\frac{Q}{g_{z}}\,dz\left(\frac{1}{2}(1-g^{2}),\frac{\boldsymbol{i}}{2}(1+g^{2}),g\right)\in\mathbb{R}^{3}.\] _It is a minimal surface with the Gauss map_ \[N=\frac{1}{1+|g|^{2}}\left(g+\overline{g},\boldsymbol{i}(\overline{g}-g),|g|^{2}-1\right)\in S^{2}.\] Moreover, the 1-parameter family \[X(\theta)=\operatorname{Re}\int e^{\boldsymbol{i}\theta}\frac{Q}{g_{z}}\,dz\left(\frac{1}{2}(1-g^{2}),\frac{\boldsymbol{i}}{2}(1+g^{2}),g\right)\in\mathbb{R}^{3}\] is proved to be a family of minimal surfaces and is called an _associated family_ of \(X_{0}\). The minimal surface \(X(\pi/2)\) is called the conjugate minimal surface of \(X_{0}\). ## 3 Basic knowledge of discrete surfaces Let us review some results on discrete surfaces in [KNO], [T] and [KNT] that we use in the present paper. **Definition 3.1** (Discrete surface [KNO]).: Consider a trivalent graph \(M=(M,V,E)\) with the set \(V=V(M)\) of vertices and the set \(E=E(M)\) of oriented edges of \(M\), and a non-degenerate vector field \(N\colon M\to S^{2}\). Namely, \(N\) takes different values at vertices next to each other. 
Its realization \(X\colon M\to\mathbb{R}^{3}\) is called a _discrete surface_ in \(\mathbb{R}^{3}\) with \(N\) as its Gauss map, when it has a non-degenerate tangent vector space at each vertex, determined by the three nearest vertices of the vertex, whose unit normal vector is equal to \(N\). Because a discrete surface does not have a Riemannian metric, we define the first fundamental form and the second fundamental form at each vertex as the weighted average over those of the triangles surrounding the vertex. To be more precise, for each vertex \(v\in V\), let us denote by \(E_{v}\) the set of edges emerging at \(v\), \[E_{v}=\{e\in E\mid o(e)=v\},\] and by \(v_{1}\), \(v_{2}\), \(v_{3}\) the adjacent vertices of \(v\). Then in \(\mathbb{R}^{3}\) \[X_{1}=X(v_{1}),\quad X_{2}=X(v_{2}),\quad X_{3}=X(v_{3})\] are the adjacent vertices of \(X_{0}=X(v_{0})\), and the unit normal vectors at \(X_{0}\), \(X_{1}\), \(X_{2}\), and \(X_{3}\) are denoted by \(N_{0}\), \(N_{1}\), \(N_{2}\) and \(N_{3}\). Consider the triangle \(\triangle_{0,a,b}\) formed by the three vertices \(X_{0}\), \(X_{a}\), and \(X_{b}\), for \(a\), \(b\in\{1,2,3\}\) with \(a\neq b\), and define its first and second fundamental forms \[I_{ab} =\begin{pmatrix}\langle X_{a}-X_{0},X_{a}-X_{0}\rangle&\langle X_{a}-X_{0},X_{b}-X_{0}\rangle\\ \langle X_{b}-X_{0},X_{a}-X_{0}\rangle&\langle X_{b}-X_{0},X_{b}-X_{0}\rangle\end{pmatrix},\] \[I\!I_{ab} =\begin{pmatrix}\langle X_{a}-X_{0},N_{a}-N_{0}\rangle&\langle X_{a}-X_{0},N_{b}-N_{0}\rangle\\ \langle X_{b}-X_{0},N_{a}-N_{0}\rangle&\langle X_{b}-X_{0},N_{b}-N_{0}\rangle\end{pmatrix},\] and \(H_{ab}\) and \(K_{ab}\) by the trace and the determinant of the matrix \(I_{ab}^{-1}I\!I_{ab}\), as in the classical case. Now we define the mean curvature \(H(v_{0})\) and the Gauss curvature \(K(v_{0})\) at \(v_{0}\) by their weighted averages by the areas of the triangles, i.e. \[H(v_{0}) :=\frac{1}{A(v_{0})}\sum_{a,b}\sqrt{\det I_{ab}}H_{ab},\] \[K(v_{0}) :=\frac{1}{A(v_{0})}\sum_{a,b}\sqrt{\det I_{ab}}K_{ab},\] where \(A(v_{0})\) is the sum of the areas of the three triangles. See [KNO] for the details. It should be noted that isometry of triangularizations is rigid, and [PL] pointed out that isometry is too strong a requirement to be satisfied to develop a rich discrete surface theory. Actually, in our setting as well, we have **Theorem 3.2** (Conditions for discrete minimal surfaces, [KNO]).: _The conditions for being a minimal surface are_ \[\langle X_{a}-X_{0},X_{b}-X_{0}\rangle=\langle X_{b}-X_{0},X_{c}-X_{0}\rangle=\langle X_{c}-X_{0},X_{a}-X_{0}\rangle,\] _where \(\{a,b,c\}=\{1,2,3\}\)._ That indicates the local structure of minimal surfaces at each vertex is identical, i.e. trivalent with equal lengths and the same angle \(2\pi/3\) between two of them. The class is rather too rigid. On the other hand, harmonic realizations of a discrete surface seem to be richer and more useful for developing geometry. **Definition 3.3** (Harmonic surface).: When \(X\colon M\to\mathbb{R}^{3}\) satisfies the balancing condition at each vertex, \[\sum_{e\in E_{v}}dX(e)=(X_{1}-X_{0})+(X_{2}-X_{0})+(X_{3}-X_{0})=0\quad(v\in V),\] where \(X_{a}=X(v_{a})\) for \(a=1,\,2,\,3\) are the adjacent vertices, we call \(X\) a _harmonic realization_ and \(X(M)\) a _harmonic surface_. We have proved that a harmonic surface minimizes the energy functional \[\mathbb{E}=\frac{1}{2}\sum_{u\sim v}\|X_{u}-X_{v}\|^{2}\] under local deformations [KNO]. 
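As an illustration of the definitions above, the following is a minimal Python sketch (illustrative, not part of [KNO]) of the pointwise curvatures \(H(v_{0})\), \(K(v_{0})\) and of the balancing condition at a single trivalent vertex. The conventions \(H_{ab}=\frac{1}{2}\operatorname{tr}(I_{ab}^{-1}I\!I_{ab})\), \(K_{ab}=\det(I_{ab}^{-1}I\!I_{ab})\) and the weight \(\sqrt{\det I_{ab}}\) are assumptions modeled on the classical shape-operator formulas.

```python
import numpy as np

def vertex_curvatures(X0, Xs, N0, Ns):
    """Discrete mean/Gauss curvature at X0 from its three neighbors Xs = [X1, X2, X3]
    and unit normals Ns = [N1, N2, N3], as the area-weighted averages of Section 3.
    Convention (an assumption): H_ab = tr(I^{-1} II)/2, K_ab = det(I^{-1} II)."""
    H_sum, K_sum, A = 0.0, 0.0, 0.0
    for a, b in [(0, 1), (1, 2), (2, 0)]:
        dXa, dXb = Xs[a] - X0, Xs[b] - X0
        dNa, dNb = Ns[a] - N0, Ns[b] - N0
        I = np.array([[dXa @ dXa, dXa @ dXb],
                      [dXb @ dXa, dXb @ dXb]])
        II = np.array([[dXa @ dNa, dXa @ dNb],
                       [dXb @ dNa, dXb @ dNb]])
        w = np.sqrt(np.linalg.det(I))     # weight proportional to the triangle area
        S = np.linalg.solve(I, II)        # shape-operator analogue I^{-1} II
        H_sum += w * 0.5 * np.trace(S)
        K_sum += w * np.linalg.det(S)
        A += w
    return H_sum / A, K_sum / A

def balancing_defect(X0, Xs):
    # sum_e dX(e) = (X1 - X0) + (X2 - X0) + (X3 - X0); zero for a harmonic realization
    return sum(Xs) - 3 * X0

# Smoke test: a flat trivalent vertex with 2*pi/3 angles is balanced and flat.
X0 = np.zeros(3)
Xs = [np.array([np.cos(t), np.sin(t), 0.0]) for t in (0, 2*np.pi/3, 4*np.pi/3)]
N0 = np.array([0.0, 0.0, 1.0]); Ns = [N0.copy() for _ in range(3)]
print(balancing_defect(X0, Xs))             # ~ [0, 0, 0]
print(vertex_curvatures(X0, Xs, N0, Ns))    # (0.0, 0.0)
```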
## 4 Construction of a sequence of iterated subdivisions of a given discrete surface One of the major interests of discrete surface theory is to discover a continuum object hidden in a discrete surface and to study the relation between them. A candidate for the continuum object is a converging limit, if it exists, of a sequence of discrete surfaces obtained by iterated subdivisions of a discrete surface. In [T, KNT], for a given discrete surface, its subdivision is constructed. Let us quickly review that. Triangulations of a smooth surface have been useful in many aspects of mathematical study and its applications to problems in the real world. In that case, a triangle lies on a plane or a surface, and is subdivided by using the metric on it. In our case, a discrete surface does not have a face which lies on a plane or a surface. So the subdivision is not trivial. We take two steps in the subdivision process. Namely, first we construct a topological subdivision and then realize it in \(\mathbb{R}^{3}\) to have a subdivision of a discrete surface in \(\mathbb{R}^{3}\). A planar trivalent graph has a triangulation of the plane as its dual graph in the plane. The triangulation is subdivided in the canonical way, and then we take its dual to obtain a trivalent graph. This is called the _Goldberg-Coxeter subdivision_. We take it as a _topological subdivision_, and have a sequence of topological discrete surfaces \(M_{n}\) obtained by the iteration of this process for a given discrete surface \(M_{0}\). Now we take the second step to construct a sequence of discrete surfaces in \(\mathbb{R}^{3}\). For a given discrete surface \(X_{n}\) in \(\mathbb{R}^{3}\), we take a part of it to have a planar graph \(M_{n}\) in the plane, and apply the Goldberg-Coxeter subdivision to have a new planar trivalent graph \(M_{n+1}\). We understand this process to be topological. Next we realize \(M_{n+1}\) in \(\mathbb{R}^{3}\). When \(X_{n}\colon M_{n}\to\mathbb{R}^{3}\) is given, we consider it as a boundary condition for the equations whose solution should satisfy the balancing condition and is taken to be \(X_{n+1}\). When the convergence of a sequence of iteratively subdivided discrete surfaces is discussed, it is found that a naive subdivision obtained by solving the Dirichlet problem with the original discrete surface as its boundary condition does not work, and its modification making the subdivision discrete harmonic, i.e. satisfying the balancing condition, is proved to be appropriate in the discussion of convergence [T]. We have the following theorem for the convergence; **Theorem 4.1** (Tao [T], Kotani-Naito-Tao [KNT]).: _Let \(\{X_{i}\}\) be a sequence of discrete harmonic surfaces iteratively constructed from \(X_{0}=X\) as above._ 1. _The sequence_ \(\{X_{n}\}\) _forms a Cauchy sequence in the Hausdorff topology and satisfies the energy monotonicity formula._ 2. _The limit of the Cauchy sequence_ \(\mathcal{M}_{\infty}=\overline{\cup M_{i}}\) _is divided into three kinds of sets:_ \[\mathcal{M}_{\infty}=\mathcal{M}_{\mathcal{F}}\cup\mathcal{M}_{\mathcal{V}}\cup\mathcal{M}_{\mathcal{S}}.\] _The first one_ \(\mathcal{M}_{\mathcal{F}}\) _is the set of accumulating points associated with each face in_ \(M_{i}\)_. The second one is the set of all vertices, replaced as in the above step, i.e.,_ \(\mathcal{M}_{\mathcal{V}}=\cup_{i}M_{i}.\) _The third one_ \(\mathcal{M}_{\mathcal{S}}\) _is the set of the rest of the accumulating points. We know little about_ \(\mathcal{M}_{\mathcal{S}}\) _in general; however, we prove that an un-branched discrete surface does not have such_ \(\mathcal{M}_{\mathcal{S}}\)_._
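The realization step above amounts to a linear problem: at each interior vertex the balancing condition \(X_{1}+X_{2}+X_{3}-3X_{0}=0\) must hold, with the positions of the boundary vertices prescribed. A minimal Python sketch of this Dirichlet-type solve follows (the graph and the interior/boundary splitting are illustrative assumptions, not the actual construction in [T, KNT]):

```python
import numpy as np

def harmonic_realization(neighbors, boundary_pos, interior):
    """Solve the balancing condition sum_{u ~ v}(X_u - X_v) = 0 at every interior
    vertex v of a trivalent graph, with boundary positions fixed (Dirichlet data).
    neighbors: dict vertex -> list of 3 adjacent vertices
    boundary_pos: dict boundary vertex -> np.array of shape (3,)
    interior: list of interior vertices"""
    idx = {v: i for i, v in enumerate(interior)}
    n = len(interior)
    A = np.zeros((n, n))
    b = np.zeros((n, 3))
    for v in interior:
        i = idx[v]
        A[i, i] = len(neighbors[v])          # = 3 for a trivalent graph
        for u in neighbors[v]:
            if u in idx:
                A[i, idx[u]] -= 1.0
            else:
                b[i] += boundary_pos[u]      # contribution of fixed boundary vertices
    X = np.linalg.solve(A, b)                # graph-Laplacian system, per coordinate
    return {v: X[idx[v]] for v in interior}

# Toy example: one interior vertex 0 joined to three fixed boundary vertices.
nbrs = {0: [1, 2, 3]}
bpos = {1: np.array([1.0, 0, 0]), 2: np.array([0, 1.0, 0]), 3: np.array([0, 0, 1.0])}
print(harmonic_realization(nbrs, bpos, [0])[0])  # the barycenter [1/3, 1/3, 1/3]
```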
The regularity of the limit set is not trivial at all, although we have the energy monotonicity formula. That gives a motivation for the present paper. When we have a representation formula with holomorphic data, we can expect to develop a finer analysis of singularities. ## 5 Discrete holomorphic quadratic differentials Let \(M=(M,V,E)\) be a discrete surface with the set \(V\) of vertices and the set \(E\) of oriented edges of \(M\). By definition, \(M\) is a trivalent graph. It has a non-degenerate unit normal vector field \(N\colon V\to S^{2}\), namely, the normal vector \(N_{v}\) at \(v\) is different from the normal vector at any of its nearest neighboring vertices. Let us take a complex coordinate \(z\colon U_{v}\to U\) from a neighborhood \(U_{v}\) of a vertex \(v\) of a discrete surface \(M\) to a domain \(U\subset\mathbb{C}\).

Figure 1: Goldberg–Coxeter subdivision. a) A part of a trivalent graph consisting of hexagons and an octagon. b) Take an \(n\)-gon inside each \(n\)-gon and then connect each new vertex to the corresponding original vertex. c) Removing the original edges, we obtain the subdivision of a).

The notion of holomorphic quadratic differentials is introduced in [PL] for a triangulation of a topological surface. We imitate theirs to define a holomorphic quadratic differential for a discrete surface in our setting. Let \(q\colon E\to\mathbb{C}\) be a discrete 1-form, i.e. \(q(e_{ab})=-q(e_{ba})\) for each oriented edge \(e_{ab}\) which connects a vertex \(v_{a}\) to a vertex \(v_{b}\), with \(e_{ba}\) its reverse edge. The function \(\tau\colon E\to\mathbb{C}\) is uniquely defined by \[q(e_{ab}) =\langle\tau(e_{ab}),\boldsymbol{i}\,dz(e_{ab})\rangle \tag{5.1}\] \[0 =\langle\tau(e_{ab}),dz(e_{ab})\rangle, \tag{5.2}\] where \(dz(e_{ab})=z(v_{b})-z(v_{a})\in\mathbb{C}\), and \(\langle\,,\ \rangle\) is a complex inner product in \(\mathbb{C}\). From the definition, we see \(\tau(e_{ab})=\tau(e_{ba})\). Actually, \(\tau\) is explicitly characterized by \[q(e)=\boldsymbol{i}\overline{\tau(e)}\,dz(e).\] **Definition 5.1** (Discrete holomorphic quadratic differential).: When a discrete 1-form \(q\) satisfies \[\sum_{e\in E_{v}}q(e)=0\quad\text{for all }v\in V\] and the \(\tau\) defined as above satisfies \[\sum_{e\in E_{v}}\tau(e)=0\quad\text{for all }v\in V,\] it is called a _holomorphic quadratic differential_. It is easily checked that the notion is preserved under the Möbius transformations in the coordinate \(z\). **Remark 5.2**.: The holomorphic quadratic differential \(q\) can be written by using a harmonic function with respect to the cotangent Laplacian \(\Delta_{\text{cot}}\) (cf. [PP]). More precisely, for a function \(u\colon V\to\mathbb{R}\), define a \(\mathbb{C}\)-valued function \(\lambda_{ijk}\) for every triple \((i,j,k)\) such that \[\langle\lambda_{ijk},dz(e_{ab})\rangle=u_{b}-u_{a},\text{ where }(a,b)=(i,j),(j,k),(k,i),\] and take \(\tau\) such that, for the triples \(T_{\text{left}}\) and \(T_{\text{right}}\) of vertices which make the triangle to the left of the edge \(e\) and the triangle to the right of the edge \(e\), \[\tau(e)=\lambda_{T_{\text{left}}}-\lambda_{T_{\text{right}}}.\] Then the \(q\) defined from this \(\tau\) satisfies \[\sum_{e\in E_{v}}q(e)=\Delta_{\text{cot}}u.\] The \(q\) is holomorphic if and only if \(u\) is harmonic with respect to \(\Delta_{\text{cot}}\). 
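A minimal numerical check of Definition 5.1 at a single trivalent vertex (a Python sketch; the explicit solution used for \(q\) anticipates formula (8.1) of Section 8 and is otherwise an assumption): from \(q(e)=\boldsymbol{i}\overline{\tau(e)}\,dz(e)\) one has \(\tau(e)=\overline{-\boldsymbol{i}\,q(e)/dz(e)}\), so the two vertex conditions become \(\sum_{j}q_{j}=0\) and \(\sum_{j}q_{j}/c_{j}=0\) with \(c_{j}=dz(e_{j})\).

```python
import numpy as np

def tau_from_q(q, dz):
    # q(e) = i * conj(tau(e)) * dz(e)  =>  tau(e) = conj(-1j * q(e) / dz(e))
    return np.conj(-1j * q / dz)

rng = np.random.default_rng(0)
c = rng.normal(size=3) + 1j * rng.normal(size=3)   # dz on the three edges at v

# A solution of sum q_j = 0 and sum q_j / c_j = 0 (cf. formula (8.1)):
q = np.array([c[0] * (c[2] - c[1]),
              c[1] * (c[0] - c[2]),
              c[2] * (c[1] - c[0])])
tau = tau_from_q(q, c)

print(abs(q.sum()), abs(tau.sum()))   # both ~ 0: q is holomorphic at this vertex
```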
## 6 Weierstrass representation formula for a discrete harmonic surface Given a topological trivalent graph \(M=(V,E)\) embedded in \(\mathbb{C}\), with a complex coordinate \(z\) of \(U\subset M\subset\mathbb{C}\). Now we are ready to give a Weierstrass representation with a pair of a holomorphic function \(g\) in \(z\) and a holomorphic quadratic differential \(q\) on \(M\) associated with \(g\). A _holomorphic quadratic differential associated with \(g\)_ is an extended notion, i.e. a 1-form of the form \[q(e)=\boldsymbol{i}\overline{\tau}(e)\,dg(e)\] satisfying \[\sum_{e\in E_{v}}q(e)=0,\quad\sum_{e\in E_{v}}\tau(e)=0\quad\text{for all }v\in V.\] This notion is preserved when \(g\) is replaced by its linear fractional transformation. Let us define \(F\colon V\to\mathbb{C}^{3}\) by \[dF(e_{ab}) =\frac{\boldsymbol{i}q(e_{ab})}{dg(e_{ab})}(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\] \[=-\overline{\tau}(e)(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\] \[(e_{ab}\in E\text{ which connects the vertices }v_{a}\text{ and }v_{b}\text{ in }V).\] Then we have **Lemma 6.1**.: \[\frac{2\,dF(e_{ab})}{q(e_{ab})}=c_{ab}\left(N_{a}\times N_{b}+\boldsymbol{i}(N_{b}-N_{a})\right), \tag{6.1}\] _where_ \[c_{ab}=\frac{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}{|g_{b}-g_{a}|^{2}},\] _and_ \[N=\frac{1}{1+|g|^{2}}(g+\overline{g},\boldsymbol{i}(\overline{g}-g),|g|^{2}-1).\] Proof.: The statement is obtained by simple calculations; below we abbreviate \(F=dF(e_{ab})\) and \(q=q(e_{ab})\). The first term of the RHS of (6.1), \(N_{a}\times N_{b}\), is, up to the common factor \(\frac{1}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}\), the vector with the following three components. The first one is \[\boldsymbol{i}[(\overline{g}_{a}-g_{a})(|g_{b}|^{2}-1)-(\overline{g}_{b}-g_{b})(|g_{a}|^{2}-1)]=\boldsymbol{i}[(\overline{g}_{a}\overline{g}_{b}-1)(g_{b}-g_{a})-(g_{a}g_{b}-1)(\overline{g}_{b}-\overline{g}_{a})],\] the second one follows \[(\overline{g}_{b}+g_{b})(|g_{a}|^{2}-1)-(\overline{g}_{a}+g_{a})(|g_{b}|^{2}-1)=-(\overline{g}_{a}\overline{g}_{b}+1)(g_{b}-g_{a})-(g_{a}g_{b}+1)(\overline{g}_{b}-\overline{g}_{a}),\] and the third one follows \[\boldsymbol{i}[(\overline{g}_{a}+g_{a})(\overline{g}_{b}-g_{b})-(\overline{g}_{b}+g_{b})(\overline{g}_{a}-g_{a})]=\boldsymbol{i}[(g_{a}+g_{b})(\overline{g}_{b}-\overline{g}_{a})-(\overline{g}_{a}+\overline{g}_{b})(g_{b}-g_{a})].\] Thus we obtain \[N_{a}\times N_{b} =\frac{-\boldsymbol{i}}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}\left((g_{b}-g_{a})(1-\overline{g}_{a}\overline{g}_{b},\boldsymbol{i}(1+\overline{g}_{a}\overline{g}_{b}),\overline{g}_{a}+\overline{g}_{b})\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.-(\overline{g}_{b}-\overline{g}_{a})(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\right)\] \[=\frac{1}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}\left((g_{b}-g_{a})\frac{\overline{F}d\overline{g}(e_{ab})}{\overline{q}}+(\overline{g}_{b}-\overline{g}_{a})\frac{Fdg(e_{ab})}{q}\right)\] \[=c_{ab}^{-1}[\overline{F}/\overline{q}+F/q],\] where \[c_{ab}=\frac{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}{|g_{b}-g_{a}|^{2}}.\] Similarly, we compute \(N_{b}-N_{a}\), with three components. The first one is \[\frac{(g_{b}+\overline{g}_{b})(1+|g_{a}|^{2})-(g_{a}+\overline{g}_{a})(1+|g_{b}|^{2})}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}= \frac{(1-\overline{g}_{a}\overline{g}_{b})(g_{b}-g_{a})+(1-g_{a}g_{b})(\overline{g}_{b}-\overline{g}_{a})}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})},\] the second one follows \[\frac{\boldsymbol{i}[(\overline{g}_{b}-g_{b})(1+|g_{a}|^{2})-(\overline{g}_{a}-g_{a})(1+|g_{b}|^{2})]}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}= \frac{\boldsymbol{i}[-(1+\overline{g}_{a}\overline{g}_{b})(g_{b}-g_{a})+(1+g_{a}g_{b})(\overline{g}_{b}-\overline{g}_{a})]}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})},\] and the third one follows \[\frac{(|g_{b}|^{2}-1)(|g_{a}|^{2}+1)-(|g_{a}|^{2}-1)(|g_{b}|^{2}+1)}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}= \frac{(g_{a}+g_{b})(\overline{g}_{b}-\overline{g}_{a})+(\overline{g}_{a}+\overline{g}_{b})(g_{b}-g_{a})}{(1+|g_{a}|^{2})(1+|g_{b}|^{2})}.\] Namely, we have \[c_{ab}(N_{b}-N_{a})=\boldsymbol{i}(\overline{F}/\overline{q}-F/q),\] and we conclude \[c_{ab}[N_{a}\times N_{b}+\boldsymbol{i}(N_{b}-N_{a})]=(\overline{F}/\overline{q}+F/q)-(\overline{F}/\overline{q}-F/q)=2F/q.\] Because \[\langle N_{a},dF(e_{ab})\rangle =\frac{\overline{\tau}(e_{ab})}{1+|z_{a}|^{2}}[(1-z_{a}z_{b})(z_{a}+\overline{z}_{a})-(1+z_{a}z_{b})(\overline{z}_{a}-z_{a})+(z_{a}+z_{b})(|z_{a}|^{2}-1)]\] \[=\frac{\overline{\tau}(e_{ab})}{1+|z_{a}|^{2}}(z_{a}-z_{b})(1+|z_{a}|^{2})=\overline{\tau}(e_{ab})(z_{a}-z_{b}),\] we have \[\langle N_{a},\text{Re}(dF(e_{ab}))\rangle=\overline{\tau}(e_{ab})(z_{a}-z_{b})+\tau(e_{ab})(\overline{z}_{a}-\overline{z}_{b})=\boldsymbol{i}(\overline{q}(e_{ab})-q(e_{ab})).\] The balancing condition \[\left\langle N_{a},\sum_{b\sim a}\text{Re}(dF(e_{ab}))\right\rangle=0\] follows from \[\sum_{e}q(e)=0.\] **Theorem 6.2** (Weierstrass representation formula for a discrete harmonic surface).: _Given a trivalent graph \(M=(M,V,E)\) and its complex coordinate \(z:U\subset M\to\mathbb{C}\), with a planar part \(U\) of \(M\). For simplicity, we consider \(M\subset\mathbb{C}\). For a holomorphic function \(g\) in \(z\) on \(\mathbb{C}\) and a holomorphic quadratic differential \(q\) on \(M\) associated with \(g\), let us define a map \(X\colon M\to\mathbb{R}^{3}\)_ \[X=\int\text{Re}\ dF,\] _with_ \[\begin{split} dF(e_{ab})&=\frac{\boldsymbol{i}q(e_{ab})}{dg(e_{ab})}(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\\ &=-\overline{\tau}(e)(1-g_{a}g_{b},\boldsymbol{i}(1+g_{a}g_{b}),g_{a}+g_{b})\\ &\quad(e_{ab}\in E\text{ which connects the vertices $v_{a}$ and $v_{b}$ in $V$}).\end{split} \tag{6.2}\] _It defines a discrete harmonic surface in \(\mathbb{R}^{3}\). The above equation is called Weierstrass representation formula for a discrete harmonic surface._ It should be noted that the vector field \(N\) defined by (6.1) is not a normal vector field unless \(q\) is real valued. We call it _pseudo normal_ to the discrete harmonic surface. ## 7 Convergence of discrete harmonic surfaces to a minimal surface In this section, we construct a sequence of discrete harmonic surfaces expressed with the Weierstrass representation formula (6.2) with two holomorphic data \(g\) and \(q\), and show its smooth convergence to a classical minimal surface with its Weierstrass data \(g\) and \(q\). Key to the proof is to construct such a sequence with the "common" \(g\) and \(q\) on \(\mathbb{C}\). To illustrate our idea, let us consider a sequence of discrete harmonic surfaces \(X_{\lambda_{n}}\colon M_{\lambda_{n}}\to\mathbb{R}^{3}\) via the Weierstrass representation formula with a common holomorphic function \(g\) and holomorphic quadratic differential \(q\) associated with \(g\), where \(M_{\lambda_{n}}\) is a regular hexagonal lattice on \(\mathbb{C}\) with edge length \(\lambda_{n}>0\) obtained by homothety with multiplier \(\lambda_{n}\) of the initial one. 
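Lemma 6.1 and the formula for \(dF\) are easy to check numerically. The following Python sketch (an illustration, not from the paper) evaluates \(dF(e_{ab})\) and the pseudo-normals and verifies the identity \(2\,dF/q=c_{ab}(N_{a}\times N_{b}+\boldsymbol{i}(N_{b}-N_{a}))\) for random data on one edge:

```python
import numpy as np

def pseudo_normal(g):
    # N = (g + conj(g), i(conj(g) - g), |g|^2 - 1) / (1 + |g|^2), a real 3-vector
    return np.array([2 * g.real, 2 * g.imag, abs(g) ** 2 - 1]) / (1 + abs(g) ** 2)

def dF(ga, gb, tau):
    # dF(e_ab) = -conj(tau(e)) * (1 - ga*gb, i(1 + ga*gb), ga + gb)
    return -np.conj(tau) * np.array([1 - ga * gb, 1j * (1 + ga * gb), ga + gb])

rng = np.random.default_rng(1)
ga = complex(rng.normal(), rng.normal())
gb = complex(rng.normal(), rng.normal())
tau = complex(rng.normal(), rng.normal())
q = 1j * np.conj(tau) * (gb - ga)            # q(e) = i conj(tau(e)) dg(e)

Na, Nb = pseudo_normal(ga), pseudo_normal(gb)
c_ab = (1 + abs(ga) ** 2) * (1 + abs(gb) ** 2) / abs(gb - ga) ** 2

lhs = 2 * dF(ga, gb, tau) / q
rhs = c_ab * (np.cross(Na, Nb) + 1j * (Nb - Na))
print(np.allclose(lhs, rhs))                  # True
```

Note that \(dF/q=\boldsymbol{i}V/dg(e_{ab})\), with \(V\) the vector appearing above, is independent of the choice of \(q\), so the check exercises only the algebra of the pseudo Gauss map.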
Because the Weierstrass representation formula is a discrete approximation of the classical one, it is straightforward to show the following; **Theorem 7.1**.: _Let \(\{X_{\lambda_{n}}\}\) with \(\lambda_{n}\to 0\) be the sequence of discrete harmonic surfaces constructed in the above way. The sequence converges to a (classical) minimal surface \(X_{\infty}\colon M_{\infty}\to\mathbb{R}^{3}\), and the mean curvature and the Gauss curvature of \(X_{n}\) smoothly converge to those of the limit surface._ Proof.: It is easy to see that the sequence is a Cauchy sequence in the Hausdorff topology and has a minimal surface \(X_{\infty}\colon M_{\infty}\to\mathbb{R}^{3}\) as its limit, which is defined by the classical Weierstrass representation formula with \(g\) and \(q\). Due to \[\langle N_{a},dF(e_{ab})\rangle=\overline{\tau}(e_{ab})(z_{a}-z_{b})\quad\text{(for two adjacent vertices $v_{a}$ and $v_{b}$)},\] the pseudo Gauss map \(N\) converges to the Gauss map (unit normal vector field) of the limit minimal surface, because the distance between two adjacent vertices in \(\mathbb{C}\) goes to zero as \(n\) goes to infinity. Now, recalling that the definitions of the first fundamental form and the second fundamental form for a discrete surface \(X_{n}\) and a unit vector field \(N\) (not necessarily the normal vector) are difference analogs of the classical ones, they converge to those of \(X_{\infty}\) with the unit normal vector \(N\), and therefore the mean curvature and the Gauss curvature converge to the mean curvature and the Gauss curvature of the limit surface. Now let us construct a sequence of trivalent discrete harmonic surfaces \(X_{n}\colon M_{n}\to\mathbb{R}^{3}\) with \(M_{0}=M\), given by the Weierstrass representation formula with common \(g\) and \(q\), and show its convergence. For this purpose, let \(M=(M,V,E)\) be a planar trivalent graph with a complex coordinate \(z\in\mathbb{C}\). For simplicity, we consider \(M\) embedded in \(\mathbb{C}\). For a given discrete harmonic surface \(X\colon M\to\mathbb{R}^{3}\), we have a sequence of topological discrete surfaces \(M_{n}\) by the Goldberg-Coxeter subdivisions in \(\mathbb{C}\). In this section, we apply this process twice to construct \(M_{n+1}\) from \(M_{n}\), so that it looks like a homothety of multiplier \(1/4\) in each subdivision step and all faces (polygons) in \(M_{n}\) become the homothetic polygons multiplied by \(1/4\) in \(M_{n+1}\). Given a holomorphic function \(g\) on \(\mathbb{C}\), a holomorphic quadratic differential \(q_{n}\) of each \(M_{n}\) is uniquely determined by \(g\) up to scalar multiple. Because the Goldberg-Coxeter subdivisions are homotheties and \(M_{n}\) converges to \(\mathbb{C}\) in the Hausdorff topology (i.e. \(\cup M_{n}\) is dense in \(\mathbb{C}\)), we can construct \(q_{n}\) consistently and a differential form \(q\) on \(\mathbb{C}\) so that \(q_{n}\) is a restriction of \(q\). The classical holomorphicity of \(q\) follows from the discrete holomorphicity. When we take the Goldberg-Coxeter subdivision, then \(\lambda_{n}=4^{-n}\lambda\), and then by a similar argument to Theorem 7.1, we show **Corollary 7.2**.: _For a given discrete harmonic surface with holomorphic data \(g\) and \(q\), we construct a sequence of discrete harmonic surfaces which smoothly converges to a classical minimal surface with the holomorphic data \(g\) and \(q\)._ ## 8 Trivalent Enneper surface as numerical examples Consider a regular hexagonal lattice in \(\mathbb{C}=(\mathbb{C},z)\). 
The Weierstrass representation with a pair of a holomorphic function \(g\) on \(\mathbb{C}\) and a holomorphic quadratic differential \(q\) defined on \(E\) associated with \(g\) is given by Theorem 6.2. In this section, as an application, we study the convergence of the sequence of discrete surfaces \(X_{n}\) which are obtained by subdivision of a given discrete surface \(X_{0}\) with the Weierstrass data \(q\) and \(g\). **Lemma 8.1**.: _Let \(X\colon(M,V,E)\to\mathbb{R}^{3}\) be a trivalent graph in \(\mathbb{R}^{3}\). The holomorphic quadratic differential \(q\) with respect to \(g\) is determined uniquely up to scalar multiple._ Proof.: Take a vertex \(v_{0}\in V\) and its nearest neighbor vertices \(v_{1},v_{2},v_{3}\). Their images in \(\mathbb{R}^{3}\) are denoted by \(x_{0}=X(v_{0})\) and \(x_{a}=X(v_{a})\) with \(a=1,2,3\). Let \(q_{a}=q(e_{0a})\), and \(c_{a}=g(x_{a})-g(x_{0})\). Then the conditions for \(q\) to be a holomorphic quadratic differential are given as two linear equations, \[q_{1}+q_{2}+q_{3} =0,\] \[\frac{q_{1}}{c_{1}}+\frac{q_{2}}{c_{2}}+\frac{q_{3}}{c_{3}} =0.\] The solution to them can be expressed as \[q_{1}=\nu c_{1}(c_{3}-c_{2}),\quad q_{2}=\nu c_{2}(c_{1}-c_{3}),\quad q_{3}=\nu c_{3}(c_{2}-c_{1}), \tag{8.1}\] with an arbitrary scalar \(\nu\in\mathbb{C}\). Now we compute the _trivalent Enneper surface_ and its convergence. Let \(X=(V,E)\) be a regular hexagonal lattice with edge length \(\lambda>0\) in \(B_{R}(0)\subset\mathbb{C}\), and let the holomorphic function \(g\) be \(g(z)=z\). Then by Lemma 8.1, we may calculate the holomorphic quadratic differential \(q\) as \[q_{1} =\nu(g(z_{1})-g(z))(g(z_{2})-g(z_{3})),\] \[q_{2} =\nu(g(z_{2})-g(z))(g(z_{3})-g(z_{1})), \tag{8.2}\] \[q_{3} =\nu(g(z_{3})-g(z))(g(z_{1})-g(z_{2})),\] for an arbitrary scalar \(\nu\in\mathbb{C}\). By using the Weierstrass formula (6.2) in Theorem 6.2, we obtain the local formula \[F(z_{a})=F(z)+\frac{\mathbf{i}q_{a}}{g(z_{a})-g(z)}\begin{bmatrix}1-g(z_{a})g(z)\\ \mathbf{i}(1+g(z_{a})g(z))\\ g(z_{a})+g(z)\end{bmatrix}\in\mathbb{C}^{3}. \tag{8.3}\] Taking \(F(z_{0})=w_{0}\in\mathbb{C}^{3}\) for a fixed vertex \(z_{0}\in V\subset\mathbb{C}\) and a fixed \(w_{0}\in\mathbb{C}^{3}\), we may calculate \(F(z)\) using (8.3) along a path from \(z_{0}\) to \(z\in V\subset\mathbb{C}\). By defining \(X(z)=\text{Re}(F(z))\), we call the surface \(X\) a _trivalent Enneper surface_ (see Fig. 5). For the well-definedness of \(X\), we must show that the formula (8.3) is closed along any closed path in the hexagonal lattice. **Lemma 8.2**.: _For any closed path \(z_{0},\dots,z_{5},z_{0}\) in a regular hexagonal lattice in \(\mathbb{C}\), the formula (8.3) is closed along the path._ Proof.: Let the coordinates of \(z_{1},\dots,z_{6}\) and \(z_{7},\dots,z_{12}\) be \[z_{j} =\lambda e^{(j-1)\pi\mathbf{i}/3}+z_{0},\qquad\ j=1,\dots,6,\] \[z_{k} =2\lambda e^{(k-7)\pi\mathbf{i}/3}+z_{0},\qquad k=7,\dots,12,\] (see Fig. 2). By (8.2) and (8.3), we obtain \[dF(e_{j,j+1})=F(z_{j+1})-F(z_{j})=\mathbf{i}(z_{j+6}-z_{j-1})\begin{bmatrix}1-z_{j}z_{j+1}\\ \mathbf{i}(1+z_{j}z_{j+1})\\ z_{j}+z_{j+1}\end{bmatrix}, \tag{8.4}\] for \(j=1,\ldots,6\) (in case of \(j=1\), set \(j-1\) to be \(6\)). Substituting explicit complex coordinates into (8.4), we obtain \[dF(e_{j,j+1}) =\mathbf{i}\lambda(2e^{(j-1)\pi\mathbf{i}/3}-e^{(j-2)\pi\mathbf{i}/3})\begin{bmatrix}1-\lambda^{2}(e^{(j-1)\pi\mathbf{i}/3}+z_{0})(e^{j\pi\mathbf{i}/3}+z_{0})\\ \mathbf{i}(1+\lambda^{2}(e^{(j-1)\pi\mathbf{i}/3}+z_{0})(e^{j\pi\mathbf{i}/3}+z_{0}))\\ \lambda(e^{(j-1)\pi\mathbf{i}/3}+e^{j\pi\mathbf{i}/3}+2z_{0})\end{bmatrix}\] \[=\sqrt{3}e^{(j+1)\pi\mathbf{i}/3}\begin{bmatrix}\left(1-\lambda^{2}\left(z_{0}+e^{\mathbf{i}\pi(j-1)/3}\right)\left(z_{0}+e^{j\pi\mathbf{i}/3}\right)\right)\\ \mathbf{i}\left(1+\lambda^{2}\left(z_{0}+e^{\mathbf{i}\pi(j-1)/3}\right)\left(z_{0}+e^{j\pi\mathbf{i}/3}\right)\right)\\ \lambda\left(2z_{0}-\sqrt{3}e^{-\pi\mathbf{i}/6}e^{j\pi\mathbf{i}/3}\right)\end{bmatrix},\] and \[dF(e_{1,2})+dF(e_{2,3})+dF(e_{3,4})+dF(e_{4,5})+dF(e_{5,6})+dF(e_{6,1})=0\in\mathbb{C}^{3}.\]
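The closure in Lemma 8.2 can also be confirmed numerically. A short Python sketch (illustrative; the vertex labeling follows the proof) sums \(dF\) from (8.4) around one hexagon:

```python
import numpy as np

def dF(zj, zj1, w):
    # (8.4): dF(e_{j,j+1}) = i * (z_{j+6} - z_{j-1})
    #        * (1 - z_j z_{j+1}, i(1 + z_j z_{j+1}), z_j + z_{j+1}); here w = z_{j+6} - z_{j-1}
    return 1j * w * np.array([1 - zj * zj1, 1j * (1 + zj * zj1), zj + zj1])

lam, z0 = 0.5, 0.3 + 0.2j
z = {j: lam * np.exp((j - 1) * np.pi * 1j / 3) + z0 for j in range(1, 7)}            # hexagon
z.update({k: 2 * lam * np.exp((k - 7) * np.pi * 1j / 3) + z0 for k in range(7, 13)})  # outer ring

total = sum(dF(z[j], z[j % 6 + 1], z[j + 6] - z[(j - 2) % 6 + 1]) for j in range(1, 7))
print(np.abs(total).max())   # ~ 1e-16: the formula is closed around the hexagon
```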
By taking the Goldberg-Coxeter subdivision of the regular hexagonal lattice \(M\), we obtain a regular hexagonal lattice \(M_{1}\). Constructing in the same manner from \(M_{1}\), we may obtain a trivalent Enneper surface \(X_{1}\). Continuing this procedure, we obtain a sequence \(\{X_{j}\}_{j\geq 0}\) of trivalent Enneper surfaces (see Fig. 5). We compute that the sequence \(\{X_{j}\}\) converges to a classical Enneper surface \(X_{\text{classical}}\), and the Gauss maps and mean curvatures of \(\{X_{j}\}\) converge to the Gauss map and the mean curvature of \(X_{\text{classical}}\), respectively (see Figs. 3, 4). In fact, the Enneper surface \(X_{\text{classical}}\) appearing in the limit is \[X =\sqrt{3}\nu(u^{3}/3-u^{2}v+u),\] \[Y =\sqrt{3}\nu(-v^{3}/3+v^{2}u-v),\] \[Z =\sqrt{3}\nu(u^{2}-v^{2}),\] where \(\nu\) is the scalar used in (8.2). We show numerical results of the convergence \[X_{k}(z_{j})-X_{\rm classical}(z_{j}),\quad N_{k}(z_{j})-N(z_{j}),\quad H_{k}(z_{j})-H(z_{j}),\] in Fig. 6. ### Acknowledgements M.K. acknowledges the JSPS Grant-in-Aid for Scientific Research (B) under Grant No. JP23H01072. H.N. acknowledges the JSPS Grant-in-Aid for Scientific Research (C) JP19K03488 and the JSPS Grant-in-Aid for Scientific Research (B) under Grant No. JP23H01072.

Figure 2: Labeling of the closed path \(z_{0},\dots,z_{6}\) and the surrounding vertices \(z_{7},\dots,z_{12}\) used in the proof of Lemma 8.2.

Figure 3: Regular hexagonal lattices \(M_{k}\) to construct trivalent Enneper surfaces \(X_{k}\). Gray disks express \(B_{\sqrt{3}}(0)\subset\mathbb{C}\). Note that the classical Enneper surface from \(B_{R}(0)\) has self-intersections exactly if \(R\geq\sqrt{3}\).

Figure 4: Left: Goldberg–Coxeter construction from \(M_{0}\) (black) to \(M_{1}\) (blue). Right: Goldberg–Coxeter construction from \(M_{1}\) (black) to \(M_{2}\) (blue).

Figure 5: Upper row: Trivalent Enneper surfaces \(X_{k}\) (with \(\nu=1\) in the holomorphic quadratic differential on \(B_{\sqrt{3}}(0)\subset\mathbb{C}\)), where \(X_{k}\) is the \((k-1)\)-times Goldberg–Coxeter subdivision of \(X_{1}\). Vertex colors are proportional to the mean curvature (the larger the absolute value, the darker the color). Lower row: Their pseudo normal vectors (pseudo Gauss map).
2301.12839
CMS Tracker Alignment: Legacy results from LHC Run 2 and Run 3 prospects
The inner tracking system of the CMS experiment, which comprises Silicon Pixel and Silicon Strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, require regular updates of the detector geometry in order to accurately describe the position, orientation, and curvature of the tracker modules. The procedure in which new parameters of the tracker geometry are determined is known as alignment of the tracker. The alignment procedure is performed several times during data taking using reconstructed tracks from collisions and cosmic rays data, and later on, further refined after the data taking period is finished. The tracker alignment performance corresponding to the ultimate accuracy of the alignment calibration for the legacy reprocessing of the CMS Run 2 data is presented. The data-driven methods used to derive the alignment parameters and the set of validations that monitor the performance of physics observables after the alignment are reviewed. Finally, the prospects for the alignment calibration during the upcoming run of the LHC, where more challenging operation conditions are expected, will be addressed.
Sandra Consuegra Rodríguez
2023-01-30T12:46:41Z
http://arxiv.org/abs/2301.12839v1
# CMS Tracker Alignment: Legacy results from LHC Run 2 and Run 3 prospects ###### Abstract: The inner tracking system of the CMS experiment, which comprises Silicon Pixel and Silicon Strip detectors, is designed to provide a precise measurement of the momentum of charged particles and to reconstruct the primary and secondary vertices. The movements of the different substructures of the tracker detectors, driven by the operating conditions during data taking, require regular updates of the detector geometry in order to accurately describe the position, orientation, and curvature of the tracker modules. The procedure in which new parameters of the tracker geometry are determined is known as alignment of the tracker. The alignment procedure is performed several times during data taking using reconstructed tracks from collisions and cosmic rays data, and later on, further refined after the data taking period is finished. The tracker alignment performance corresponding to the ultimate accuracy of the alignment calibration for the legacy reprocessing of the CMS Run 2 data is presented. The data-driven methods used to derive the alignment parameters and the set of validations that monitor the performance of physics observables after the alignment are reviewed. Finally, the prospects for the alignment calibration during the upcoming run of the LHC, where more challenging operation conditions are expected, will be addressed. ## 1 The CMS tracker The CMS Tracker comprises both the silicon pixel and strip detectors within a length of 5.8 m and a diameter of 2.5 m [1]. The silicon pixel detector is the closest detector in proximity to the interaction point and consists of a barrel region (BPIX) and two forward endcaps (FPIX). In the configuration operating until the end of the 2016 data-taking period (Phase-0 pixel detector), the BPIX consisted of three layers and the FPIX of two disks in each of the two endcaps, for a total of 1440 modules. The upgraded pixel detector in operation since 2017 (Phase-1 pixel detector) has an additional barrel layer in BPIX and an additional disk on each of the two endcaps of the FPIX, which results in a total of 1856 modules [2]. The silicon strip detector consists of 15 148 modules and is composed of four subsystems: the Tracker Inner Barrel and Disks (TIB and TID), the Tracker Outer Barrel (TOB), and the Tracker EndCaps (TEC). The determination of the trajectory of charged particles (tracks) from signals (hits) in the tracker, referred to as tracking, is instrumental for the process of event reconstruction in CMS, with a good tracking performance of particular relevance for many physics analyses. The measurement of the momentum of the tracks relies on an accurate determination of the track curvature induced by the magnetic field, which in turn requires a carefully calibrated detector. The resolution of the momentum measurement is affected by multiple scattering and by the limited knowledge of the position of each of the track hits. The latter has contributions from two main components, the spatial resolution of the modules, known as intrinsic hit resolution (\(\approx\) 10 to 30 \(\upmu\)m), and the precision that comes from the limited knowledge of the position and orientation of the modules, known as alignment position errors. The reduction of this last component well below the intrinsic hit resolution is needed in order to fully exploit the remarkable resolution of the CMS silicon modules. 
Starting from a precision after installation of \(\mathcal{O}\)(100 \(\upmu\)m), this is accomplished through a track-based alignment. ## 2 Alignment of CMS tracker The alignment of the CMS tracker using reconstructed charged particles, known as track-based alignment, constitutes a major challenge due to the enormous number of degrees of freedom involved. A measured hit position is assigned to every hit registered in the detector and a set of tracks is formed from the combination of these hits. To each of these tracks, a set of track parameters (e.g., parameters related to the track curvature and deflection angles due to multiple scattering) is associated. The fitted tracks depend on so-called alignment or global parameters as well, with nine of these parameters needed per sensor to set the coordinates of the sensor center, rotational angles, and surface deformations [3]. The track-based alignment method follows a least-square approach, minimizing the sum of squares of the normalized track-hit residuals from a set of tracks. The track hit-residuals are obtained by subtracting the projection of the track prediction which depends on the track and alignment parameters from the measured hit position. The resulting system of equations for the alignment parameters obtained from the minimization of the track-hit residuals is solved through a global fit [4]. The tracks are then re-fitted assuming the new geometry defined by the updated set of alignment parameters and the measurement of the track momentum is also updated. The validation of the new alignment conditions follows. ## 3 Tracker alignment strategy for data The hierarchical structure of the CMS tracker allows aligning high-level mechanical structures up to single modules. Each year, before the beginning of the data taking for physics analysis, at least the high-level structures of the tracker are aligned using the available cosmic ray data from the commissioning of the detector. This initial alignment allows a first non-refined determination of the alignment constants, which at this point can show large movements with respect to the initially assumed geometry, due to temperature and magnetic field changes, as well as the reinstallation of detector components during the detector shutdown. These alignment constants are then used online for the express processing of the data. An automated alignment continuously monitors the high-level structure movements of the pixel detector and automatically corrects the geometry if the alignment corrections exceed certain thresholds. A track-based alignment at a higher granularity level is also periodically run offline. A heterogeneous sample of tracks crossing the detector at different angles and covering their full active area to relate different detector components is used. The automated alignment is then refined with periodic updates from these campaigns happening in parallel. After the data-taking period is finished, the full statistics of the dataset collected during the year is used to provide the alignment conditions for the reprocessing of the data. The ultimate accuracy of the alignment calibration is derived and used for the final or legacy reprocessing of the data. For the 2016-2018 period, up to \(\approx\) 700k parameters and 220 geometries for different intervals of validity were determined, covering the significant changes of the alignment conditions over time. 
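As a toy illustration of the global-fit idea (not the actual CMS global-fit software; the geometry, track model, and numbers below are invented for the sketch), one can fit module offsets and track parameters simultaneously by linear least squares:

```python
import numpy as np

rng = np.random.default_rng(42)
n_layers, n_tracks, sigma = 6, 200, 0.01
z = np.arange(n_layers, dtype=float)            # layer positions along the beam axis
true_d = rng.normal(scale=0.1, size=n_layers)   # unknown module offsets per layer
true_d[:2] = 0.0                                # gauge fixing: two reference layers

# Straight tracks y = a + b*z; measured hits pick up the module offsets plus noise.
a = rng.normal(size=n_tracks)
b = rng.normal(scale=0.2, size=n_tracks)
y_meas = (a[:, None] + b[:, None] * z + true_d
          + rng.normal(scale=sigma, size=(n_tracks, n_layers)))

# Global fit: unknowns = (a_t, b_t) per track plus offsets d_l (d_0 = d_1 = 0 fixed).
n_par = 2 * n_tracks + (n_layers - 2)
rows, rhs = [], []
for t in range(n_tracks):
    for l in range(n_layers):
        row = np.zeros(n_par)
        row[2 * t], row[2 * t + 1] = 1.0, z[l]   # track parameters
        if l >= 2:
            row[2 * n_tracks + l - 2] = 1.0      # alignment parameter of layer l
        rows.append(row)
        rhs.append(y_meas[t, l])
sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
d_fit = sol[2 * n_tracks:]
print(np.abs(d_fit - true_d[2:]).max())   # small residual misalignment
```

The two gauge-fixed layers remove the exactly degenerate global shift and shear of this toy; in the full problem such undetermined combinations are the systematic distortions ("weak modes") monitored below.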
## 4 Legacy results The tracker geometry obtained from each alignment fit can be compared to the starting geometry for the particular fit, to identify unusual movements or systematic distortions artificially introduced by the \(\chi^{2}\) minimization of the track-hit residuals. Further validations are carried out, inspecting observables that are particularly sensitive to the quality of the alignment. The alignment constants for the modeling of the data in simulation were derived separately for each data-taking year and tuned to emulate the effects of the residual misalignment left in data after the legacy reprocessing. The comparison of the performance of the different sets of alignment constants used during data taking, for the end-of-year reconstruction, and the legacy reprocessing is illustrated in this section. ### Tracking performance The distribution of the median of the track-hit residuals per module (DMRs) constitutes a measure of the tracking performance. Each track is refitted without the hit under consideration. Therefore, the computation of the residuals is said to be unbiased. The distribution of the median should result in a narrow and centered distribution around zero since the track-based alignment performs the minimization of the sum of the track-hit residuals. The width of the distribution constitutes a measure of the local precision of the alignment calibration, while deviations of the mean from zero indicate possible biases due to change of conditions in the detector. The DMRs are monitored for all the tracker substructures, as shown for the barrel and forward pixel detector in Figure 1, for the local x-coordinate of the modules. A significant improvement for the legacy reprocessing over the alignment during data taking and the end-of-year reconstruction is observed. ### Vertexing performance The effect of the alignment calibration on the reconstruction of physics objects is also studied. The distance between tracks and the vertex reconstructed without the track under scrutiny (unbiased track-vertex residuals) is studied to evaluate the performance of the alignment in the pixel detector, searching for potential biases in the primary vertex reconstruction. The mean value of the unbiased track-vertex residuals is shown in Figure 2 for the transverse plane as a function of the processed luminosity. The large alignment corrections that were needed at the beginning of each year of data taking can be observed. ### Monitoring of systematic distortions Systematic distortions of the tracker geometry, known as weak modes, can cause biases in the track reconstruction, affecting the performance of physics observables. In an ideally aligned tracker, the reconstructed invariant mass of resonances such as the \(\mathrm{Z}\rightarrow\mu\mu\) decay should minimally depend on the azimuthal angle or the pseudorapidity of the muons. Therefore, the reconstructed mass of the resonance serves as a good observable to inspect the geometry and detect the presence of weak modes. The improvement for the legacy reprocessing in the uniformity of the reconstructed mass of the \(\mathrm{Z}\rightarrow\mu\mu\) decay is demonstrated in Figure 3. The overlap validation constitutes another tool to monitor the alignment looking for systematic distortions, in this case by using hits from tracks that have two hits in separate modules within the same layer of the tracker. 
Due to the small distance between the two hits, there is a small uncertainty in the propagation of the track parameters, and the double difference of the track hit-residuals is very sensitive to systematic distortions. The mean overlap residuals are shown in Figure 4 as a function of the processed luminosity. The overall reduction of the residual values can be observed for the legacy reprocessing. Figure 1: The distribution of median residuals is shown for the local-x coordinate in the barrel pixel (left) and forward pixel (right). The alignment constants used for the Run 2 legacy reprocessing (green) are compared with the ones used during data taking (blue) and for the end-of-year reconstruction (red). The quoted means \(\mu\) and standard deviations \(\sigma\) correspond to the parameters of a Gaussian fit to the distributions [5]. ## 5 Run 3 prospects The integrated luminosity collected by the CMS experiment is expected to be doubled during Run 3 with respect to Run 2. Stronger variations of the Lorentz drift of charged carriers released by charged particles passing through the silicon sensors are foreseen due to larger irradiation doses. The alignment procedure is sensitive to Lorentz drift changes induced by accumulated radiation after \(\approx 1\) fb\({}^{-1}\), while the pixel local reconstruction calibration which corrects for this effect is only performed after \(\approx 10\) fb\({}^{-1}\). If the alignment is performed at a high enough granularity, inward and outward-pointing modules are free to move separately and the bias coming from Lorentz angle miscalibration can be absorbed. Therefore, a finer granularity for the automated alignment is expected to be deployed during Run 3. A finer granularity will be used at earlier stages with respect to Run 2 for the alignment run offline as well, to better cope with radiation effects. In addition, more geometries will be derived to cover the significant changes over time. Figure 3: Reconstructed Z \(\rightarrow\mu\mu\) mass as a function of the difference in pseudorapidity \(\eta\) between positively and negatively charged muons. The full sample of dimuon events in the 2016-2018 period was used [5]. Figure 2: The mean track-vertex impact parameter in the transverse plane \(d_{xy}\) is shown as a function of the processed luminosity. The vertical black lines indicate the first processed run for each data-taking year, and the vertical dotted lines indicate a change in the pixel tracker calibration [6]. ## 6 Summary The performance of the tracker alignment corresponding to the final set of alignment conditions used for the legacy reprocessing of the CMS Run 2 data has been presented. The data-driven methods used to derive the alignment parameters and the set of validations that monitor the tracking and vertexing performance, as well as the presence of systematic distortions in the detector geometry, were reviewed. Finally, the prospects for the alignment calibration during Run 3, in particular, the challenges related to the radiation damage and the rapidly changing conditions were discussed.
2303.04822
Extracting higher central charge from a single wave function
A (2+1)D topologically ordered phase may or may not have a gappable edge, even if its chiral central charge $c_-$ is vanishing. Recently, it was discovered that a quantity regarded as a "higher" version of chiral central charge gives a further obstruction beyond $c_-$ to gapping out the edge. In this Letter, we show that the higher central charges can be characterized by the expectation value of the \textit{partial rotation} operator acting on the wavefunction of the topologically ordered state. This allows us to extract the higher central charge from a single wavefunction, which can be evaluated on a quantum computer. Our characterization of the higher central charge is analytically derived from the modular properties of edge conformal field theory, as well as the numerical results with the $\nu=1/2$ bosonic Laughlin state and the non-Abelian gapped phase of the Kitaev honeycomb model, which corresponds to $\mathrm{U}(1)_2$ and Ising topological order respectively. The letter establishes a numerical method to obtain a set of obstructions to the gappable edge of (2+1)D bosonic topological order beyond $c_-$, which enables us to completely determine if a (2+1)D bosonic Abelian topological order has a gappable edge or not. We also point out that the expectation values of the partial rotation on a single wavefunction put a constraint on the low-energy spectrum of the bulk-boundary system of (2+1)D bosonic topological order, reminiscent of the Lieb-Schultz-Mattis type theorems.
Ryohei Kobayashi, Taige Wang, Tomohiro Soejima, Roger S. K. Mong, Shinsei Ryu
2023-03-08T19:00:02Z
http://arxiv.org/abs/2303.04822v4
# Extracting higher central charge from a single wave function ###### Abstract A (2+1)D topologically ordered phase may or may not have a gappable edge, even if its chiral central charge \(c_{-}\) is vanishing. Recently, it was discovered that a quantity regarded as a "higher" version of chiral central charge gives a further obstruction beyond \(c_{-}\) to gapping out the edge. In this Letter, we show that the higher central charges can be characterized by the expectation value of the _partial rotation_ operator acting on the wavefunction of the topologically ordered state. This allows us to extract the higher central charge from a single wavefunction, which can be evaluated on a quantum computer. Our characterization of the higher central charge is analytically derived from the modular properties of edge conformal field theory, as well as the numerical results with the \(\nu=1/2\) bosonic Laughlin state and the non-Abelian gapped phase of the Kitaev honeycomb model, which corresponds to U(1)\({}_{2}\) and Ising topological order respectively. The letter establishes a numerical method to obtain a set of obstructions to the gappable edge of (2+1)D bosonic topological order beyond \(c_{-}\). We also point out that the expectation values of the partial rotation on a single wavefunction put a constraint on the low-energy spectrum of the bulk-boundary system of (2+1)D bosonic topological order, reminiscent of the Lieb-Schultz-Mattis type theorems. _Introduction -_ (2+1)D topological phases with bulk energy gap host various intriguing physical phenomena [1]. One of the most striking is the phenomenon of bulk-edge correspondence, where the property of the bulk heavily constrains what can happen at the boundary of the system. The most celebrated example is the case of the Integer Quantum Hall effect (IQHE), where a nonzero bulk Chern number implies the presence of gapless charged edge modes [2]. Even in the absence of charge conservation, systems with nonzero chiral central charge, which signals nonzero _thermal_ Hall conductance, have gapless edge modes [3]. We have a good theoretical understanding of these quantities by way of coarse-grained Chern-Simons theory, and we can extract them from microscopic wavefunctions [4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In the presence of anyonic excitations, there are properties beyond chiral central charge \(c_{-}\) that enforce the presence of gapless edge modes. In many cases, non-trivial braiding statistics between anyons can present an obstruction to gapping out all anyonic degrees of freedom simultaneously at the edge of the system [14; 15]. Such phases of matter are said to have an _ungappable_ edge. Recently, it was discovered that a quantity called _higher central charge_ can partially capture "ungappability" of the edge [16; 17]. In particular, the higher central charges of an Abelian topological order completely determine whether it has an ungappable edge [18]. However, so far the quantity has been characterized purely through the topological quantum field theory (TQFT) framework, and a microscopic understanding of the higher central charge has been lacking. In this Letter, we show that the expectation value of the _partial rotation_ operator - rotation operator that acts only on a part of the system - can be used to reliably extract the higher central charge of topologically ordered systems. 
This is the first proposal that relates the wavefunction of a topologically ordered state to its higher central charges, and our operational definition even allows its evaluation on a quantum computer. Our finding is supported by an analytical conformal field theory (CFT) calculation, as well as numerical simulation of the non-Abelian phase of the Kitaev honeycomb model and the \(\nu=1/2\) bosonic Laughlin state. This Letter establishes a general numerical method to obtain a set of obstructions to a gappable edge of a bosonic topological order beyond \(c_{-}\). _Definition and properties of higher central charge -_ The higher central charges \(\zeta_{n}\) are complex numbers characterizing a topologically ordered state, labeled by a positive integer \(n\).

Figure 1: (a) The setup for which we considered the gappability problem. The obstruction can be captured by the chiral central charge \(c_{-}\) and the higher central charge \(\zeta_{n}\). (b) Schematics of the partial rotation of a cylinder bisected into A and B subsystems.

The higher central charge can be easily computed from the properties of anyons in the topological order. Concretely, for a given bosonic (2+1)D topologically ordered state, the higher central charge is defined as the following phase \[\zeta_{n}=\frac{\sum_{a}d_{a}^{2}\theta_{a}^{n}}{\left|\sum_{a}d_{a}^{2}\theta_{a}^{n}\right|}\, \tag{1}\] where the sum is over all anyons in the topological order, \(d_{a}\) is the quantum dimension, and \(\theta_{a}\) is the topological twist (i.e., self-statistics) of an anyon \(a\). When \(n=1\), \(\zeta_{1}\) reduces to the Gauss sum formula for the chiral central charge of the bosonic topological phase modulo 8, \(\zeta_{1}=e^{\frac{2\pi i}{8}c_{-}}\), hence \(\zeta_{n}\) formally provides a generalization of \(c_{-}\). Higher central charges put a constraint on the gappability of the edge; it was proven in [16] that \(\zeta_{n}=1\) for all \(n\) such that \(\gcd(n,N_{\rm FS})=1\) gives necessary conditions for admitting a gapped boundary. Here, \(N_{\rm FS}\) is called the Frobenius-Schur exponent, which is defined as the smallest positive integer such that \(\theta_{a}^{N_{\rm FS}}=1\) for all anyons \(a\). For example, \(\mathrm{U}(1)_{2}\times\mathrm{U}(1)_{-4}\) Chern-Simons theory has \(\zeta_{3}=-1\), which shows that the topological order does not admit a gapped boundary even though \(c_{-}=0\). For (2+1)D bosonic Abelian topological phases one can also derive sufficient conditions: the higher central charges \(\{\zeta_{n}\}\) for \(\gcd(n,\frac{N_{\rm FS}}{\gcd(n,N_{\rm FS})})=1\) give both necessary and sufficient conditions for a gappable boundary [18]. _Main result_ - To extract the higher central charges from a single wavefunction, we consider a (2+1)D topologically ordered phase located on a cylinder. The state on the cylinder is labeled by the anyon \(a\), which corresponds to a quasiparticle obtained by shrinking the puncture at the end of the cylinder. Suppose that we have realized a ground state \(\ket{\Psi}\) on the cylinder labeled by the trivial anyon 1. Let us then take a bipartition of the cylinder into the two subsystems labeled by A and B, and write the translation operator for the A subsystem by the angle \(\theta\) along the circumference as \(T_{\rm A;\theta}\) (see Fig. 1). 
We then find that the following quantity extracts the higher central charge \(\zeta_{n}\), \[\mathcal{T}_{1}\left(\frac{2\pi}{n}\right):=\bra{\Psi}T_{\rm A;\frac{2\pi}{n}}\ket{\Psi}\propto e^{-2\pi i(\frac{2}{n}+n)\frac{c_{-}}{24}}\times\sum_{a}d_{a}^{2}\theta_{a}^{n}\, \tag{2}\] where \(\propto\) in this Letter always means being proportional up to a positive real number. In the special case where \(n=1\), the rhs becomes 1 since \(\sum_{a}d_{a}^{2}\theta_{a}\propto e^{\frac{2\pi i}{8}c_{-}}\), which is consistent with the fact that the rotation by \(2\pi\) acting on the cylinder A gives an identity operator. For \(n>1\) and \(\gcd(n,N_{\rm FS})=1\), the above rhs becomes proportional to the higher central charge \(\zeta_{n}\) and gives information about a non-trivial obstruction to a gapped boundary beyond \(c_{-}\). Since \(c_{-}\) can be extracted from a single wavefunction [6; 7], our method allows a complete characterization of all higher central charges. We also note that the partial rotation, which is a unitary operator, can be easily evaluated on a quantum computer using methods such as the Hadamard test. _Analytic derivation -_ Eq. (2) can be derived by employing the cut-and-glue approach established in [19; 20], which describes the entanglement spectrum of the A subsystem in the long wavelength limit by that of the (1+1)D CFT on its edges. Namely, the reduced density matrix for the A subsystem is effectively given by \(\rho_{\rm A}=\rho_{\rm A;\bar{l}}\otimes\rho_{\rm A;\bar{r}}\), where \(\rho_{\rm A;\bar{l}},\rho_{\rm A;\bar{r}}\) denote the CFTs on the left and right edges respectively. The left edge lies at the end of the whole cylinder and realizes the ground state of the CFT; the right edge of the A subsystem, entangled with the B subsystem, is described by a thermal density matrix of a perturbed edge CFT within the fixed topological sector [21]. The form of the perturbation in the entanglement Hamiltonian is not universal. In the calculations to follow, we assume that the entanglement Hamiltonian is that of the unperturbed CFT: \(\rho_{\rm A;\bar{r}}=e^{-\beta_{r}H_{r}}\), and check the validity of this assumption with our numerics. Since the operator \(T_{\rm A,\theta}\) acts as the translation of the CFT on the boundary, the expectation value of \(T_{\rm A;2\pi/n}\) for the A subsystem is evaluated in terms of the translation of edge CFTs. The partial rotation is then expressed in terms of CFT partition functions as \[\mathcal{T}_{1}\left(\frac{2\pi}{n}\right) =\frac{\mathrm{Tr}\big{[}e^{iP_{l}\frac{L}{n}}e^{-\frac{\xi_{l}}{v}H_{l}}\big{]}\mathrm{Tr}\big{[}e^{iP_{r}\frac{L}{n}}e^{-\frac{\xi_{r}}{v}H_{r}}\big{]}}{\mathrm{Tr}\big{[}e^{-\frac{\xi_{l}}{v}H_{l}}\big{]}\mathrm{Tr}\big{[}e^{-\frac{\xi_{r}}{v}H_{r}}\big{]}} \tag{3}\] \[=\frac{\chi_{1}(\frac{i\xi_{l}}{L}+\frac{1}{n})\,\chi_{1}(\frac{i\xi_{r}}{L}-\frac{1}{n})}{\chi_{1}(\frac{i\xi_{l}}{L})\,\chi_{1}(\frac{i\xi_{r}}{L})},\] where we introduced the velocity \(v\) of the CFT, the correlation lengths \(\xi_{l}=v\beta_{l}\), \(\xi_{r}=v\beta_{r}\), and the circumference of the cylinder \(L\). \(P_{l}\) and \(P_{r}\) are translation operators on the left and right edge, \(P_{l}=-\frac{1}{v}H_{l}\), \(P_{r}=\frac{1}{v}H_{r}\). \(\chi_{1}(\tau)\) is the CFT character of the trivial topological sector with modular parameter \(\tau\). 
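Before the character computation, note that the target value in Eq. (2) is fixed by the anyon data alone. A short Python sketch (illustrative; the Ising and \(\mathrm{U}(1)_{2}\) data \(\{d_{a},\theta_{a}\}\) and \(c_{-}\) used below are the standard TQFT values, stated here as assumptions) tabulates the predicted phase of \(\mathcal{T}_{1}(2\pi/n)\):

```python
import numpy as np

def T1_phase(n, d, theta, c_minus):
    # Predicted phase of Eq. (2):
    # exp(-2 pi i (2/n + n) c_-/24) * (sum_a d_a^2 theta_a^n) / |sum_a d_a^2 theta_a^n|
    s = np.sum(d**2 * theta**n)
    if abs(s) < 1e-12:
        return None   # magnitude vanishes; possible only when gcd(n, N_FS) != 1
    return np.exp(-2j * np.pi * (2 / n + n) * c_minus / 24) * s / abs(s)

# Standard anyon data: Ising (1, sigma, psi) and U(1)_2 (1, semion).
theories = {
    "Ising":  (np.array([1.0, np.sqrt(2.0), 1.0]),
               np.array([1.0, np.exp(2j * np.pi / 16), -1.0]), 0.5),
    "U(1)_2": (np.array([1.0, 1.0]), np.array([1.0, 1j]), 1.0),
}
for name, (d, th, c) in theories.items():
    print(name, [T1_phase(n, d, th, c) for n in (1, 2, 3)])
# n = 1 returns 1 in both cases, consistent with the 2*pi rotation acting as the
# identity; for U(1)_2 and n = 2 the sum vanishes (gcd(2, N_FS) = 2 with N_FS = 4).
```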
In our setup where \(L\ll\xi_{l}\), the characters for the left edge are approximated as \[\chi_{1}\left(\frac{i\xi_{l}}{L}\right)\approx e^{\frac{2\pi\xi_{l}}{L}\frac{c_{-}}{24}},\qquad\chi_{1}\left(\frac{i\xi_{l}}{L}+\frac{1}{n}\right)\approx e^{\frac{2\pi\xi_{l}}{L}\frac{c_{-}}{24}}e^{-\frac{2\pi i}{n}\frac{c_{-}}{24}}. \tag{4}\] Meanwhile, the edge CFT at the right edge cutting the system has high temperature \(L\gg\xi_{r}\). The characters for the right edge can be approximately computed by performing proper modular \(S,T\) transformations as [22] \[\chi_{1}\left(\frac{i\xi_{r}}{L}\right)=\sum_{a}S_{1,a}\chi_{a}\left(\frac{iL}{\xi_{r}}\right)\approx\frac{1}{\mathcal{D}}e^{\frac{2\pi L}{\xi_{r}}\frac{c_{-}}{24}}, \tag{5}\] \[\chi_{1}\left(\frac{i\xi_{r}}{L}-\frac{1}{n}\right)=\sum_{a}(ST^{n}S)_{1,a}\chi_{a}\left(\frac{iL}{n^{2}\xi_{r}}+\frac{1}{n}\right)\approx(ST^{n}S)_{1,1}e^{-\frac{2\pi i}{n}\frac{c_{-}}{24}}e^{\frac{2\pi L}{n^{2}\xi_{r}}\frac{c_{-}}{24}}=\frac{1}{\mathcal{D}^{2}}e^{-2\pi i(n+\frac{1}{n})\frac{c_{-}}{24}}e^{\frac{2\pi L}{n^{2}\xi_{r}}\frac{c_{-}}{24}}\sum_{a}d_{a}^{2}\theta_{a}^{n}, \tag{6}\] where \(n\) is assumed to be a small positive integer satisfying \(n^{2}\ll L/\xi_{r}\). The sum is over the anyons \(a\) that label the conformal blocks of the edge CFT, and \(\mathcal{D}\) is the total quantum dimension. By combining the above approximations of the characters, \(\mathcal{T}_{1}\left(2\pi/n\right)\) in Eq. (3) is expressed in the form of Eq. (2).

A similar computation can be performed when the ground state \(\ket{\Psi}\) lives in a generic topological sector labeled by an anyon of the topological order, \[\mathcal{T}_{a}\left(\frac{2\pi}{n}\right)\propto e^{\frac{2\pi i}{n}h_{a}-2\pi i\left(\frac{2}{n}+n\right)\frac{c_{-}}{24}}\times\sum_{b}S_{ab}d_{b}\theta_{b}^{n}\, \tag{7}\] where \(\mathcal{T}_{a}\left(2\pi/n\right):=\bra{\Psi_{a}}T_{{\rm A};2\pi/n}\ket{\Psi_{a}}\) with \(\ket{\Psi_{a}}\) being the ground state in the topological sector labeled by an anyon \(a\). We define the twisted higher central charge \[\zeta_{n,a}:=\sum_{b}S_{ab}d_{b}\theta_{b}^{n}\, \tag{8}\] which is proportional to the higher central charge when \(a=1\), \(\zeta_{n,1}=(\sum_{b}d_{b}^{2}\theta_{b}^{n})/\mathcal{D}\). The derivation of Eq. (7) is relegated to Supplemental Materials.

While the definition of the quantity Eq. (2) is akin to that of the momentum polarization in the large \(n\) limit introduced in Refs. [23; 24], we emphasize that the partial rotation by the finite angle \(\mathcal{T}_{a}(2\pi/n)\) extracts a completely different universal quantity from the momentum polarization. Indeed, the momentum polarization with \(n\to\infty\) does not depend on the higher central charge, and is expressed as \[\lim_{n\to\infty}\mathcal{T}_{a}\left(\frac{2\pi}{n}\right)\propto\exp\left[\frac{2\pi i}{n}\left(h_{a}-\frac{c_{-}}{24}-\frac{c_{-}}{24}\frac{L^{2}}{\xi_{r}^{2}}\right)\right]. \tag{9}\] Remarkably, while the momentum polarization Eq. (9) depends on the circumference \(L\) and the non-universal correlation length \(\xi_{r}\), Eq. (2) gives a constant universal value determined by the combination of \(c_{-}\) and the higher central charge. In Supplemental Materials, we describe how the behavior of the partial rotation interpolates between higher central charge and momentum polarization by increasing \(n\) from a small positive integer to infinity.
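The right-edge approximation in Eq. (6) hinges on the matrix element \((ST^{n}S)_{1,1}\). With the standard conventions \(T_{ab}=\delta_{ab}\,\theta_{a}e^{-2\pi i\frac{c_{-}}{24}}\) and \(S_{1a}=d_{a}/\mathcal{D}\), one has \((ST^{n}S)_{1,1}=\mathcal{D}^{-2}e^{-2\pi in\frac{c_{-}}{24}}\sum_{a}d_{a}^{2}\theta_{a}^{n}\). A quick numerical sanity check of this identity for the Ising modular data (ours, not from the Letter; the \(S\)-matrix entries are the standard ones):

```python
import numpy as np

c_minus = 0.5
d = np.array([1.0, np.sqrt(2.0), 1.0])        # quantum dimensions of 1, sigma, psi
theta = np.array([1, np.exp(2j * np.pi / 16), -1])
D = np.sqrt(np.sum(d ** 2))                    # total quantum dimension, D = 2
S = np.outer(d, d) / D                         # rows/columns touching 1 are d_a / D
S[1, 1] = 0.0                                  # S_{sigma,sigma}
S[1, 2] = S[2, 1] = -np.sqrt(2.0) / D          # S_{sigma,psi} = S_{psi,sigma}
T = np.diag(theta * np.exp(-2j * np.pi * c_minus / 24))

for n in range(1, 6):
    lhs = (S @ np.linalg.matrix_power(T, n) @ S)[0, 0]
    rhs = np.exp(-2j * np.pi * n * c_minus / 24) * np.sum(d**2 * theta**n) / D**2
    assert np.isclose(lhs, rhs)
print("modular identity verified for n = 1..5")
```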
_Numerical results -_ We demonstrate the validity of the formula Eq. (2) for two examples: the Ising topological order realized by the Kitaev honeycomb model, and the U(1)\({}_{2}\) topological order realized by the \(\nu=1/2\) bosonic Laughlin state. Their \(\zeta_{n,a}\) and expected values of the partial rotation \(\mathcal{T}_{a}(2\pi/n)\) are summarized in Table 1. For some of the \(n\)'s in a given topological sector, the magnitude of \(\mathcal{T}_{a}\) might vanish. However, this could only occur when \(\text{gcd}(n,N_{\text{FS}})\neq 1\), which therefore does not obscure the examination of whether the topological order has a gappable boundary.

The Kitaev honeycomb model is defined on a honeycomb lattice with a qubit on each vertex, with the Hamiltonian given by \[\begin{split}H=&J_{x}\sum_{\langle ij\rangle\in\text{R edge}}X_{i}X_{j}+J_{y}\sum_{\langle ij\rangle\in\text{B edge}}Y_{i}Y_{j}\\ &+J_{z}\sum_{\langle ij\rangle\in\text{Y edge}}Z_{i}Z_{j}+\kappa\sum_{\langle ijk\rangle}X_{i}Y_{j}Z_{k},\end{split} \tag{10}\] where the last term is a time-reversal breaking term introduced by turning on a magnetic field, which realizes the non-Abelian gapped phase [4]. The non-Abelian phase is known to host Ising topological order with anyons \(1\), \(\sigma\), \(\psi\) with topological twists \(\theta_{1}=1\), \(\theta_{\sigma}=e^{2\pi i/16}\), \(\theta_{\psi}=-1\).

To compute the partial rotation, we employ a cylinder geometry. The cylinder is terminated with the zigzag boundary condition on both ends as depicted in Fig. 2, and we act on the left half of the system with the partial rotation. The model is equivalent to a system of free Majorana fermions coupled to a dynamical \(\mathbb{Z}_{2}\) gauge field, by rewriting the spin degrees of freedom using Majorana fermion operators \(c\), which act as dynamical free Majorana fermions, and \(b\), which describes the \(\mathbb{Z}_{2}\) gauge field. As demonstrated in Supplemental Materials, the partial rotation for the state on the cylinder lying in the trivial sector can be expressed as \[\mathcal{T}_{1}\left(\frac{2\pi}{n}\right)\propto\text{Tr}\left(\frac{1+(-1)^{F}}{2}e^{-H_{E}}T_{{\rm A};\frac{2\pi}{n}}\right), \tag{11}\] where \(H_{E}\) is the entanglement Hamiltonian for the free fermion state in the A subsystem with the fixed flat \(\mathbb{Z}_{2}\) gauge field, with the boundary condition in the \(y\) direction taken to be anti-periodic. The operator \(\frac{1+(-1)^{F}}{2}\) gives a projector onto the Hilbert space with even fermion parity.

Figure 2: (a) Geometry of the Kitaev model on a cylinder. Red solid lines, blue dotted-dashed lines, and yellow dotted lines correspond to \(X\), \(Y\) and \(Z\) type Ising interactions, respectively. The lattice is periodic in the \(y\) direction, and has the zigzag boundary condition in the \(x\) direction. (b) The partial rotations \(\mathcal{T}_{a}(2\pi/n)\) evaluated in the Ising topological phase of the Kitaev model at \(n=3,4\). The \(\sigma\) sector at \(n=4\) is not shown since the partial rotation results in zero expectation value. We used \(J_{x}=J_{y}=J_{z}=1,\kappa=0.1\) for the computation.
Following Ref. [23], one can further evaluate it from the entanglement spectrum of the free Majorana fermions: \[\mathcal{T}_{1}\left(\frac{2\pi}{n}\right)\propto\prod_{m,k_{y}}\left[\frac{1+e^{ik_{y}L_{y}/n}}{2}+\frac{1-e^{ik_{y}L_{y}/n}}{2}\tanh\frac{\xi_{mk_{y}}}{2}\right]+\prod_{m,k_{y}}\left[\frac{1-e^{ik_{y}L_{y}/n}}{2}+\frac{1+e^{ik_{y}L_{y}/n}}{2}\tanh\frac{\xi_{mk_{y}}}{2}\right] \tag{12}\] where \(\xi_{mk_{y}}\) is the entanglement spectrum for \(H_{E}\), carried by a quasiparticle with momentum \(k_{y}\) in the \(y\) direction. Analogously, the partial rotation for the \(\sigma\) sector is expressed in terms of the entanglement Hamiltonian \(H_{E}^{\sigma}\) given by setting the periodic boundary condition in the \(y\) direction, \(\mathcal{T}_{\sigma}(2\pi/n)\propto\text{Tr}(e^{-H_{E}^{\sigma}}T_{{\rm A};2\pi/n})\), which can also be computed from the entanglement spectrum of \(H_{E}^{\sigma}\). We show the result of this evaluation for the \(1,\sigma\) sectors in Fig. 2. We see that \(\text{Arg}\left(\mathcal{T}_{a}\left(2\pi/n\right)\right)\) converges to the predicted values. We only present results for \(n\geq 3\) with \(|\mathcal{T}_{a}\left(2\pi/n\right)|>0\); \(\mathcal{T}_{a}(2\pi/n)\) is always real (no phase) for \(n=1\) and \(2\) since the phase part exactly cancels.

The second example we consider is the \(\nu=1/2\) bosonic Laughlin state, which realizes \(\text{U}(1)_{2}\) Chern-Simons theory. Its only non-trivial anyon is the semion \(s\) with topological twist \(\theta_{s}=i\). The model we study is a two-dimensional boson gas projected to the lowest Landau level (LLL), interacting through a contact interaction \(V_{0}=1\) plus a small perturbation \(\delta V_{2}=0.1\), where \(V_{m}\) are the Haldane pseudopotentials [25]. The ground state of the system at \(\nu=1/2\) filling is known to be the \(\nu=1/2\) Laughlin state [26]. To compute the partial rotation, we consider an infinite cylinder geometry, as shown in Fig. 3 (a), and use infinite density matrix renormalization group (iDMRG) calculations [27] to obtain the infinite matrix product state (iMPS) representation of the ground state \(|\Psi\rangle\).

Compared to other numerical methods, the MPS representation is advantageous for evaluating the action of the partial rotation. If rotation is a good symmetry, the Schmidt states \(|\alpha\rangle_{\rm A/B}\) across subsystems A and B have definite momentum \(k_{y}^{\alpha}\) along the circumference. Thus, the action of the partial rotation can be evaluated by \[\mathcal{T}_{a}(\theta)=\sum_{\alpha}\lambda_{\alpha}^{2}e^{ik_{y}^{\alpha}L_{y}\theta/2\pi}, \tag{13}\] where \(\lambda_{\alpha}\) is the corresponding Schmidt value. We can easily obtain both \(k_{y}^{\alpha}\) and \(\lambda_{\alpha}\) from the momentum label \(\bar{K}_{\bar{n}_{\rm B};\alpha}\) and the Schmidt value \(\lambda_{\bar{n}_{\rm B};\alpha}\) of the auxiliary bond \(\bar{n}_{\rm B}\) across subsystems A and B. For the \(\nu=1/2\) bosonic Laughlin state, we work in the Landau gauge and the corresponding LLL orbital basis. To accelerate the calculation and obtain the momentum label mentioned above, we incorporate both particle number \(\hat{C}=\sum_{n}\hat{C}_{n}\equiv\sum_{n}(\hat{N}_{n}-\nu)\) and momentum \(\hat{K}=\sum_{n}\hat{K}_{n}\equiv\sum_{n}n(\hat{N}_{n}-\nu)\) conservation, where \(\hat{N}_{n}\) is the number operator at site \(n\). We find that \(\mathcal{T}_{a}(2\pi/n)\) converges at bond dimension \(\chi=3200\), cylinder circumference \(L_{y}=40\ell_{B}\) and onsite boson number cutoff \(N_{\text{boson}}=5\).
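Eq. (13) reduces the partial rotation to a weighted sum of phases over the entanglement spectrum. A minimal sketch (ours; the Schmidt values and momenta below are hypothetical placeholders standing in for the iMPS bond data):

```python
import numpy as np

def partial_rotation(schmidt_vals, k_y, L_y, theta):
    """T_a(theta) = sum_alpha lambda_alpha^2 exp(i k_y^alpha L_y theta / 2pi), Eq. (13)."""
    p = np.asarray(schmidt_vals, dtype=float) ** 2
    p /= p.sum()                                  # normalise the Schmidt weights
    return np.sum(p * np.exp(1j * np.asarray(k_y) * L_y * theta / (2 * np.pi)))

# Hypothetical bond data, for illustration only (not from the simulations):
L_y = 10.0
lam = np.array([0.9, 0.3, 0.3, 0.1])              # Schmidt values on the A|B bond
k_y = (2 * np.pi / L_y) * np.array([0, 1, -1, 2])  # quantized edge momenta
print(np.angle(partial_rotation(lam, k_y, L_y, theta=2 * np.pi / 3)))
```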
We note there are a few technical complications in applying Eq. (13) to compute \(\mathcal{T}_{a}(\theta)\), which we will sketch here; readers can find more details in Supplemental Materials. First of all, there are a few ambiguities in extracting the physical momentum \(k_{y}^{\alpha}\) from the momentum label \(\bar{K}_{\bar{n}_{\text{A}};\alpha}\). For iMPS, there is an overall ambiguity of momentum labels on auxiliary bonds. The magnetic translation symmetry in quantum Hall systems further mixes the momentum label \(\bar{K}_{\bar{n};\alpha}=\langle\sum_{n<\bar{n}}\hat{K}_{n}\rangle_{\alpha}\) with the charge label \(\bar{C}_{\bar{n};\alpha}=\langle\sum_{n<\bar{n}}\hat{C}_{n}\rangle_{\alpha}\) [28]. These ambiguities can be fixed by matching the entanglement spectrum and the edge CFT spectrum, as elaborated in Supplemental Materials. Secondly, which topological sector subsystems A and B belong to depends on the cut. The \(\nu=1/2\) bosonic Laughlin state has a two-fold ground state degeneracy, characterized by the root configurations (patterns of zeros) [01] and [10] [29; 30]. It turns out that cutting through the LLL orbital center that corresponds to the 0's (1's) bisects the system into two trivial (semion) sectors. Finally, when we work in the LLL orbital basis, an auxiliary bond divides the system into two sets of LLL orbitals instead of two regions of physical space. This problem can be resolved using the real-space entanglement spectrum (RSES) algorithm developed in Ref. [28]. We note that many of the technicalities discussed here are not specific to the \(\nu=1/2\) bosonic Laughlin state, but provide a general procedure for computing the higher central charge of an arbitrary wavefunction in MPS form.

Finally, we present the result of \(\mathcal{T}_{a}(2\pi/n)\) in both the trivial and the semion sector. As shown in Fig. 3 (b), \(\mathcal{T}_{a}(2\pi/n)\) always converges to the expected phase shown in Table 1 at sufficiently large \(L_{y}\).

Figure 3: (a) A schematic of the infinite cylinder geometry and the LLL orbital basis of the MPS. Partial rotation along a real-space cut can be accomplished by acting with a unitary operator on the auxiliary bond of the MPS obtained by the RSES algorithm. (b) \(\text{Arg}\,\mathcal{T}_{a}(2\pi/n)\) of the \(\nu=1/2\) bosonic Laughlin state extracted using Eq. (13). The dotted lines are the CFT predictions given in Table 1.

_Discussion -_ In this Letter, we provide a characterization of the higher central charges \(\{\zeta_{n}\}\) in terms of the partial rotation evaluated on a wavefunction of the (2+1)D bosonic topological order, and confirmed the prediction using the Kitaev honeycomb model and the \(\nu=1/2\) bosonic Laughlin state. Partial rotation can be implemented easily in quantum computing architectures with cheap SWAP gates, such as Rydberg atom arrays, which opens up another avenue to studying topological order directly on a quantum computer. For future work, it would be interesting to find a way to extract the Frobenius-Schur exponent \(N_{\text{FS}}\) for a given single wavefunction. Together with the extraction of higher central charge in this Letter, knowing \(N_{\text{FS}}\) would enable us to completely determine whether a bosonic Abelian topological order has a gappable edge or not.
In particular, once we figure out \(N_{\text{FS}}\), one can immediately extract the chiral central charge from the partial rotation \[\mathcal{T}_{1}\left(\frac{2\pi}{N_{\text{FS}}}\right)\propto e^{-2\pi i(\frac{2}{N_{\text{FS}}}+N_{\text{FS}})\frac{c_{-}}{24}}\, \tag{14}\] and accordingly all other higher central charges \(\{\zeta_{n}\}\) for \(1<n<N_{\text{FS}}\) by the partial rotations. Even when we do not know \(N_{\text{FS}}\) precisely, numerical results of \(\{\mathcal{T}_{1}(2\pi/n)\}\) can put a tight constraint on the possible low-energy spectrum of the bulk-boundary system. For instance, suppose that we observed that \(\{\mathcal{T}_{1}(2\pi/p_{j})\}\) has a non-trivial phase for a set of distinct prime numbers \(\{p_{j}\}\). One can see that this leaves us two possibilities: 1. the edge is ungappable, or 2. the edge is gappable, where \(N_{\text{FS}}\) must be divisible by \(\prod_{j}p_{j}\). If the minimal \(N_{\text{FS}}\) required for a gappable edge is large and physically unrealistic, one can essentially determine that the boundary must be ungappable. Remarkably, the lower bound \(N_{\text{FS}}\geq\prod_{j}p_{j}\) for a gappable edge implies the lower bound for the number of anyons \(r\) given by \(r\geq r_{0}\), with \(r_{0}\) the smallest integer satisfying \(2^{2r_{0}/3+8}3^{2r_{0}/3}\geq\prod_{j}p_{j}\). This can be derived from the fact that \(N_{\text{FS}}\) of a bosonic topological order with \(r\) distinct anyons has the upper bound \(N_{\text{FS}}\leq 2^{2r/3+8}3^{2r/3}\) [31]. It implies that the ground state on a torus must carry at least \(r_{0}\)-fold degeneracy in order to realize a gappable edge. This argument is reminiscent of the Lieb-Schultz-Mattis type theorems [32; 33; 34], which constrain the low-energy spectrum for a given input of the symmetry action on the ground state. Also, it would be interesting to extract the higher version of the electric Hall conductivity proposed in Ref. [35], which gives an obstruction to a U(1) symmetry-preserving gapped boundary of a fermionic topological order with U(1) symmetry beyond the electric Hall conductivity and \(c_{-}\).

_Acknowledgements -_ We thank Tianle Wang and Mike Zaletel for helpful discussions. RK is supported by the JQI postdoctoral fellowship at the University of Maryland. TW is supported by the U.S. DOE, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05-CH11231, within the van der Waals Heterostructures Program (KCWF16). TS is supported by a fellowship from the Masson foundation. RM is supported by the National Science Foundation under Award No. DMR-1848336. SR is supported by the National Science Foundation under Award No. DMR-2001181, and by a Simons Investigator Grant from the Simons Foundation (Award No. 566116). This work is supported by the Gordon and Betty Moore Foundation through Grant GBMF8685 toward the Princeton theory program. This research used the Lawrencium computational cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory (Supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231).
2302.05815
E-unification for Second-Order Abstract Syntax
Higher-order unification (HOU) concerns unification of (extensions of) $\lambda$-calculus and can be seen as an instance of equational unification ($E$-unification) modulo $\beta\eta$-equivalence of $\lambda$-terms. We study equational unification of terms in languages with arbitrary variable binding constructions modulo arbitrary second-order equational theories. Abstract syntax with general variable binding and parametrised metavariables allows us to work with arbitrary binders without committing to $\lambda$-calculus or use inconvenient and error-prone term encodings, leading to a more flexible framework. In this paper, we introduce $E$-unification for second-order abstract syntax and describe a unification procedure for such problems, merging ideas from both full HOU and general $E$-unification. We prove that the procedure is sound and complete.
Nikolai Kudasov
2023-02-11T23:29:56Z
http://arxiv.org/abs/2302.05815v2
# \(E\)-unification for Second-Order Abstract Syntax

Nikolai Kudasov

Innopolis University, Universitetskaya 1, Innopolis, Tatarstan Republic, Russia

###### Abstract

Higher-order unification (HOU) concerns unification of (extensions of) \(\lambda\)-calculus and can be seen as an instance of equational unification (\(E\)-unification) modulo \(\beta\eta\)-equivalence of \(\lambda\)-terms. We study equational unification of terms in languages with arbitrary variable binding constructions modulo arbitrary second-order equational theories. Abstract syntax with general variable binding and parametrised metavariables allows us to work with arbitrary binders without committing to \(\lambda\)-calculus or use inconvenient and error-prone term encodings, leading to a more flexible framework. In this paper, we introduce \(E\)-unification for second-order abstract syntax and describe a unification procedure for such problems, merging ideas from both full HOU and general \(E\)-unification. We prove that the procedure is sound and complete.

E-unification, higher-order unification, second-order syntax
## 1 Introduction

[...] at 'the expense of inconvenient and error-prone term encodings and lack of formal foundations'. Instead, they suggest to consider _second-order abstract syntax_ [10], that is, abstract syntax with variable binding and parametrised metavariables. Indeed, Fiore and Szamozvancev [9] use second-order abstract syntax to generate metatheory in Agda for languages with variable bindings. In this paper, we develop a mechanisation of equational reasoning for second-order abstract syntax. We take inspiration from existing developments for HOU and \(E\)-unification. Although we cannot directly reuse all HOU ideas that rely heavily on the syntax of \(\lambda\)-calculus, we are still able to adapt many of them, since second-order abstract syntax provides _parametrised metavariables_ which are similar to _flex_ terms in HOU.

### Related Work

To the best of our knowledge, there does not exist a mechanisation of equational reasoning for second-order abstract syntax. Thus, we compare our approach with existing HOU algorithms that encompass equational reasoning. Snyder's higher-order \(E\)-unification [29] extends HOU with first-order equational theories. Nipkow and Prehofer [25] study higher-order rewriting and (higher-order) equational reasoning. As mentioned, these rely on \(\lambda\)-abstraction and a HOAS-like encoding to work with other binding constructions. In contrast, we work with arbitrary binding constructions modulo a second-order equational theory.

Dowek, Hardin, and Kirchner [8] present higher-order unification as first-order \(E\)-unification in \(\lambda\sigma\)-calculus (a variant of \(\lambda\)-calculus with explicit substitutions) modulo \(\beta\eta\)-reduction. Their idea is to use explicit substitutions and de Bruijn indices so that metavariable substitution cannot result in name capture and reduces to _grafting_ (first-order substitution). In this way, algorithms for first-order \(E\)-unification (such as _narrowing_) can be applied. Kirchner and Ringeissen [17] develop that approach for higher-order equational unification with first-order axioms. In our work, parametrised metavariables act in a similar way to metavariables with explicit substitutions in \(\lambda\sigma\)-calculus.
While it should be possible to encode second-order equations as first-order equations in \(\sigma\)-calculus (with explicit substitution, but without \(\lambda\)-abstraction and application), it appears that this approach requires us to also encode rules of our unification procedure anyway. As some equational theories can be formulated as term rewriting systems, a line of research combining rewrite systems and type systems exists, stemming from the work of Tannen [34], which extends simply-typed \(\lambda\)-calculus with higher-order rewrite rules. Similar extensions exist for the Calculus of Constructions [1, 36, 30, 31] and \(\lambda\Pi\)-calculus [7]. Cockx, Tabareau, and Winterhalter [6] introduce Rewriting Type Theory (RTT), which is an extension of Martin-Löf Type Theory with (first-order) rewrite rules. Chrząszcz and Walukiewicz-Chrząszcz [3] discuss how to extend Coq with rewrite rules. Cockx [4] reports on a practical extension of Agda with higher-order non-linear rewrite rules, based on the same ideas as RTT [6]. Rewriting is especially useful in proof assistants that compare types (and terms) through _normalisation by evaluation_ (NbE) rather than higher-order unification. Contrary to type theories extended with rewrite rules, our approach relies on simply-typed syntax, but allows for an arbitrary second-order equational theory, enabling unification even in the absence of a confluent rewriting system.

Kudasov [20] implements higher-order (pre)unification and dependent type inference in Haskell for an intrinsically scoped syntax using so-called _free scoped monads_ to generate the syntax of the object language from a data type describing node types. Such a definition is essentially a simplified presentation of second-order abstract syntax. Kudasov's pre-unification procedure contains several heuristics; however, no soundness or completeness results are given in the preprint.

### Contribution

The primary contribution of this paper is the introduction of \(E\)-unification for second-order abstract syntax and a sound and complete unification procedure. The rest of the paper is structured as follows:

* In Section 2, we briefly revisit second-order abstract syntax, equational logic, and term rewriting à la Fiore and Hur [10].
* In Section 3, we generalise traditional \(E\)-unification concepts of an \(E\)-unification problem and an \(E\)-unifier for a set of second-order equations \(E\).
* In Section 4, we define the unification procedure that enumerates solutions for any given \(E\)-unification problem and prove it sound.
* In Section 5, we prove completeness of our unification procedure, taking inspiration from existing research on \(E\)-unification and HOU.
* Finally, we discuss some potential pragmatic changes for a practical implementation as well as limitations of our approach in Section 6.

## 2 Second-Order Abstract Syntax

In this section, we recall second-order abstract syntax, second-order equational logic, and second-order term rewriting of Fiore and Hur [10]. More details and examples are presented in Appendix B.

### Second-Order Terms

We start by recalling the notion of a second-order signature, which essentially contains information about the syntactic constructions (potentially with variable bindings) of the object language. A _second-order signature_ [10, Section 2] \(\Sigma=(T,O,|-|)\) is specified by a set of types \(T\), a set of operators\({}^{1}\) \(O\), and an arity function \(|-|:O\to(T^{*}\times T)^{*}\times T\).
For an operator \(\mathsf{F}\in O\), we write \(\mathsf{F}:(\overline{\sigma_{1}}.\tau_{1},\ldots,\overline{\sigma_{n}}.\tau_{n})\to\tau\) when \(|\mathsf{F}|=((\overline{\sigma_{1}},\tau_{1}),\ldots,(\overline{\sigma_{n}},\tau_{n}),\tau)\). Intuitively, this means that an operator \(\mathsf{F}\) takes \(n\) arguments, each of which binds \(n_{i}=|\overline{\sigma_{i}}|\) variables of types \(\sigma_{i,1},\ldots,\sigma_{i,n_{i}}\) in a term of type \(\tau_{i}\).

Footnote 1: In literature on \(E\)-unification, authors use the term _functional symbol_ instead.

For the rest of the paper, we assume an ambient signature \(\Sigma\), unless otherwise stated. A _typing context_ [10, Section 2] \(\Theta\mid\Gamma\) consists of metavariable typings \(\Theta\) and variable typings \(\Gamma\). Metavariable typings are parametrised types: a metavariable of type \([\sigma_{1},\ldots,\sigma_{n}]\tau\), when parametrised by terms of type \(\sigma_{1},\ldots,\sigma_{n}\), will yield a term of type \(\tau\). We will write a centered dot \((\cdot)\) for the empty (meta)variable context. For example, this context has a metavariable M with two parameters and variables \(x,y\): \(\Theta\mid\Gamma=(\textsc{M}:[\sigma,\sigma\Rightarrow\tau]\tau\mid x:\sigma\Rightarrow\tau,y:\sigma)\).

[10, Section 2] A judgement for typed **terms** in context \(\Theta\mid\Gamma\vdash-:\tau\) is defined by the rules in Figure 1. Variable substitution on terms is defined in the usual way, see [10, Section 2] for details.

_Let \(\Theta=(\texttt{M}_{i}:[\overline{\sigma_{i}}]\tau_{i})^{i\in\{1,\ldots,n\}}\), consider a term \(\Theta\mid\Gamma\vdash t:\tau\), and for all \(i\in\{1,\ldots,n\}\) a term in extended2 context \(\Xi\mid\Gamma,\Delta,\overline{z_{i}}:\overline{\sigma_{i}}\vdash t_{i}:\tau_{i}\). Then,_ **metavariable substitution** \(t[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}\) _is defined recursively on the structure of \(t\):_

Footnote 2: Here we slightly generalise the definition of Fiore and Hur by allowing arbitrary extension of context to \(\Gamma,\Delta\) in the resulting term. This is useful in particular when \(\Gamma\) is empty. See Definition 26.

\[x[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}=x\]
\[\texttt{M}_{k}[\overline{s}][\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}=t_{k}[\overline{z_{k}}\mapsto\overline{s[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}}]\qquad\text{when $k\in\{1,\ldots,n\}$ and $|\overline{s}|=|\overline{z_{k}}|$}\]
\[\texttt{N}[\overline{s}][\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}=\texttt{N}[\overline{s[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}}]\qquad\text{when $\texttt{N}\not\in\{\texttt{M}_{1},\ldots,\texttt{M}_{n}\}$}\]
\[\mathsf{F}(\overline{\overline{x}.s})[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}=\mathsf{F}(\overline{\overline{x}.s[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}})\]

_We write \(\theta:\Theta\mid\Gamma\to\Xi\mid\Gamma,\Delta\) for a substitution \(\theta=[\texttt{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}\). When both \(\Gamma\) and \(\Delta\) are empty, we write \(\theta:\Theta\to\Xi\) as a shorthand for \(\theta:\Theta\mid\cdot\to\Xi\mid\cdot\). We write \([\texttt{M}_{k}[\overline{z}]\mapsto t_{k}]:\Theta\mid\Gamma\to\Xi\mid\Gamma,\Delta\) to mean that \(t_{i}=\texttt{M}_{i}[\overline{z_{i}}]\) for all \(i\neq k\)._
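To make these clauses concrete, here is a small illustrative sketch in Python (ours, not part of the paper's formal development): terms are variables, parametrised metavariables, or operators whose arguments carry binders, and `msubst` follows the recursive clauses above, instantiating a metavariable's body with its (recursively substituted) parameters.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class MetaVar:                 # M[t_1, ..., t_n]: a parametrised metavariable
    name: str
    params: tuple

@dataclass(frozen=True)
class Op:                      # F(xs_1.s_1, ..., xs_n.s_n): operator with binders
    name: str
    args: tuple                # each argument is a pair (bound_var_names, subterm)

def subst_vars(t, env):
    """Variable substitution; binders are assumed fresh (no capture handling)."""
    if isinstance(t, Var):
        return env.get(t.name, t)
    if isinstance(t, MetaVar):
        return MetaVar(t.name, tuple(subst_vars(p, env) for p in t.params))
    return Op(t.name, tuple(
        (xs, subst_vars(s, {k: v for k, v in env.items() if k not in xs}))
        for xs, s in t.args))

def msubst(t, theta):
    """Metavariable substitution t[M |-> (zs, body)], following the clauses above."""
    if isinstance(t, Var):
        return t                                   # x[...] = x
    if isinstance(t, MetaVar):
        params = tuple(msubst(p, theta) for p in t.params)
        if t.name in theta:                        # M_k[s] -> t_k[z_k := s[...]]
            zs, body = theta[t.name]
            return subst_vars(body, dict(zip(zs, params)))
        return MetaVar(t.name, params)             # N not among the M_i
    return Op(t.name, tuple((xs, msubst(s, theta)) for xs, s in t.args))

# M[g, abs(x.app(x, y))] under [M[z1, z2] |-> app(z2, z1)]:
abs_term = Op("abs", ((("x",), Op("app", (((), Var("x")), ((), Var("y"))))),))
t = MetaVar("M", (Var("g"), abs_term))
print(msubst(t, {"M": (("z1", "z2"), Op("app", (((), Var("z2")), ((), Var("z1")))))}))
# => app(abs(x.app(x, y)), g), which is beta-equivalent to app(g, y)
```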
### Second-Order Equational Logic

We now define second-order equational presentations and rules of second-order logic, following Fiore and Hur [10, Section 5]. This provides us with tools for reasoning modulo second-order equational theories, such as \(\beta\eta\)-equivalence of \(\lambda\)-calculus.

An _equational presentation_ [10, Section 5] is a set of axioms, each of which is a pair of terms in context. Terms of simply-typed \(\lambda\)-calculus are generated with a family of operators for all \(\sigma,\tau\): \(\texttt{abs}^{\sigma,\tau}:\sigma.\tau\to(\sigma\Rightarrow\tau)\) and \(\texttt{app}^{\sigma,\tau}:(\sigma\Rightarrow\tau,\sigma)\to\tau\). An equational presentation for simply-typed \(\lambda\)-calculus is given by a family of axioms:

\[(\beta)\qquad\textsc{M}:[\sigma]\tau,\textsc{N}:[]\sigma\mid\cdot\vdash\texttt{app}(\texttt{abs}(x.\textsc{M}[x]),\textsc{N}[])\equiv\textsc{M}[\textsc{N}[]]:\tau\]
\[(\eta)\qquad\textsc{M}:[]\,\sigma\Rightarrow\tau\mid\cdot\vdash\texttt{abs}(x.\texttt{app}(\textsc{M}[],x))\equiv\textsc{M}[]:\sigma\Rightarrow\tau\]

In their paper, Fiore and Hur note that metavariables with zero parameters are equivalent to regular variables. Indeed, we can _parametrise_ every term \(\Theta\mid\Gamma\vdash t:\tau\) to yield a term \(\Theta,\widehat{\Gamma}\mid\cdot\vdash\widehat{t}:\tau\) where for \(\Gamma=(x_{1}:\sigma_{1},\ldots,x_{n}:\sigma_{n})\) we have \[\widehat{\Gamma}=(\hat{x}_{1}:[]\sigma_{1},\ldots,\hat{x}_{n}:[]\sigma_{n})\qquad\widehat{t}=t[x_{1}\mapsto\hat{x}_{1}[],\ldots,x_{n}\mapsto\hat{x}_{n}[]]\] Applying parametrisation to an equational presentation \(E\) yields a set of parametrised equations \(\widehat{E}\). Note that the following are equivalent: \[\Theta\mid\Gamma\vdash s\equiv_{E}t:\tau\qquad\text{iff}\qquad\Theta,\widehat{\Gamma}\mid\cdot\vdash\widehat{s}\equiv_{\widehat{E}}\widehat{t}:\tau\] Thus, from now on, we assume that axioms have empty variable context.

### Second-Order Term Rewriting

Finally, for the proof of completeness in Section 5, it will be helpful to rely on chains of term rewrites rather than derivation trees of equality modulo \(E\). Fiore and Hur introduce the _second-order term rewriting_ relation [10, Section 8]. An equational presentation \(E\) generates a _second-order term rewriting relation_ \(\longrightarrow_{E}\) [10, Fig. 4]. We write \(s\overset{*}{\longrightarrow}_{E}t\) if there is a sequence of terms \(u_{1},\ldots,u_{n}\) such that \(s=u_{1}\longrightarrow_{E}\ldots\longrightarrow_{E}u_{n}=t\). We write \(s\longleftrightarrow_{E}t\) if either \(s\longrightarrow_{E}t\) or \(t\longrightarrow_{E}s\). We write \(s\stackrel{{*}}{{\longleftrightarrow}}_{E}t\) if there is a sequence of terms \(u_{1},\ldots,u_{n}\) such that \(s=u_{1}\longleftrightarrow_{E}\ldots\longleftrightarrow_{E}u_{n}=t\).
Since we only care about substitutions of metavariables in axioms (variable context is empty), a simplified version of the rules is given in Figure 3. An important result of Fiore and Hur is that of completeness of second-order term rewriting [10, Section 8]: \(\Theta\mid\Gamma\vdash s\equiv_{E}t:\tau\) iff \(\Theta\mid\Gamma\vdash s\stackrel{{*}}{{\longleftrightarrow}}_{E}t:\tau\).

## 3 \(E\)-unification with Second-Order Equations

In this section, we formulate the equational unification problem for second-order abstract syntax and describe what constitutes a solution to such a problem and whether it is complete. We also recognise a subclass of problems in solved form, i.e. problems that have an immediate solution. For the most part, this is a straightforward generalisation of standard concepts of \(E\)-unification [11].

A **second-order constraint** \(\Theta\mid\Gamma_{\exists},\Gamma_{\forall}\vdash s\overset{?}{=}t:\tau\) is a pair of terms in a context, where the variable context is split into two components: \(\Gamma=(\Gamma_{\exists},\Gamma_{\forall})\). The idea is that \(\Gamma_{\exists}\) contains variables that we need to solve for (free variables), while \(\Gamma_{\forall}\) contains variables that we cannot substitute (bound variables). Metavariables are always treated existentially, so we do not split the metavariable context. Similarly to equational presentations, we can parametrise (a set of) constraints, yielding \(\Theta,\widehat{\Gamma}_{\exists}\mid\Gamma_{\forall}\vdash s\overset{?}{=}t:\tau\). Thus, from now on, we will assume \(\Gamma=\Gamma_{\forall}\) (i.e. \(\Gamma_{\exists}\) is empty) for all constraints.

Assume \(\alpha=\sigma\Rightarrow\tau,\beta=(\sigma\Rightarrow\tau)\Rightarrow\tau\). The following are equivalent:

1. For all \(g:\alpha,a:\sigma\), find \(m:\alpha\Rightarrow\beta\Rightarrow\tau\) such that \(m\;g\;(\lambda z.z\;a)=g\;a\).
2. \(\cdot\mid m:\alpha\Rightarrow\beta\Rightarrow\tau,g:\alpha,a:\sigma\vdash\texttt{app}(\texttt{app}(m,g),\texttt{abs}(z.\texttt{app}(z,a)))\overset{?}{=}\texttt{app}(g,a):\tau\)
3. \(\textsc{M}:[]\,\alpha\Rightarrow\beta\Rightarrow\tau\mid g:\alpha,a:\sigma\vdash\texttt{app}(\texttt{app}(\textsc{M}[],g),\texttt{abs}(z.\texttt{app}(z,a)))\overset{?}{=}\texttt{app}(g,a):\tau\)
4. \(\textsc{M}:[\alpha,\beta]\tau\mid g:\alpha,a:\sigma\vdash\textsc{M}[g,\texttt{abs}(z.\texttt{app}(z,a))]\overset{?}{=}\texttt{app}(g,a):\tau\)

Here, Item 2 is a direct encoding of Item 1 as a second-order constraint. Item 3 is a parametrised version of Item 2. Item 4 is equivalent to Item 3 modulo \(\beta\)-equality, witnessed by the metasubstitutions \([\textsc{M}[]\mapsto\texttt{abs}(x.\texttt{abs}(y.\textsc{M}[x,y]))]\) and \([\textsc{M}[x,y]\mapsto\texttt{app}(\texttt{app}(\textsc{M}[],x),y)]\).

Given an equational presentation \(E\), an \(E\)**-unification problem** \(\langle\Theta,S\rangle\) is a finite set \(S\) of \(E\)-unification constraints in a shared metavariable context \(\Theta\).
We present an \(E\)-unification problem as a formula of the following form: \[\exists(\textsc{M}_{1}:[\overline{\sigma_{1}}]\tau_{1},\ldots,\textsc{M}_{n}:[\overline{\sigma_{n}}]\tau_{n}).(\forall(\overline{z_{1}}:\overline{\rho_{1}}).s_{1}\overset{?}{=}t_{1}:\tau_{1})\wedge\ldots\wedge(\forall(\overline{z_{k}}:\overline{\rho_{k}}).s_{k}\overset{?}{=}t_{k}:\tau_{k})\]

A metavariable substitution \(\xi:\Theta\to\Xi\) is called an \(E\)**-unifier** for an \(E\)-unification problem \(\langle\Theta,S\rangle\) if for all constraints \((\Theta\mid\Gamma_{\forall}\vdash s\overset{?}{=}t:\tau)\in S\) we have \[\Xi\mid\Gamma_{\forall}\vdash\xi s\equiv_{E}\xi t:\tau\] We write \(U_{E}(S)\) for the set of all \(E\)-unifiers for \(\langle\Theta,S\rangle\).

Consider the unification problem \(\langle\Theta,S\rangle\) for the simply-typed \(\lambda\)-calculus: \[\Theta=\textsc{M}:[\sigma\Rightarrow\tau,(\sigma\Rightarrow\tau)\Rightarrow\tau]\tau\] \[S=\{\Theta\mid g:\sigma\Rightarrow\tau,y:\sigma\vdash\textsc{M}[g,\texttt{abs}(x.\texttt{app}(x,y))]\overset{?}{=}\texttt{app}(g,y):\tau\}\] The substitution \([\textsc{M}[z_{1},z_{2}]\mapsto\texttt{app}(z_{2},z_{1})]:\Theta\to\cdot\) is an \(E\)-unifier for \(\langle\Theta,S\rangle\).

### Unification Problems in Solved Form

Here, we recognise a class of trivial unification problems. The idea is that a constraint that looks like a metavariable substitution can be uniquely unified, and a unification problem can be unified as long as the substitutions for its constraints are sufficiently disjoint. More precisely:

An \(E\)-unification problem \(\langle\Theta,S\rangle\) is in **solved form** when \(S\) consists only of constraints of the form \(\Theta,\textsc{M}:[\overline{\sigma}]\tau\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{z}]\overset{?}{=}t:\tau\) such that

1. \(\overline{z}:\overline{\sigma}\subseteq\Gamma_{\forall}\) (parameters of M are _distinct_ variables from \(\Gamma_{\forall}\));
2. \(\Theta\mid\overline{z}:\overline{\sigma}\vdash t:\tau\) (M and variables not occurring in \(\overline{z}\) do not occur in \(t\));
3. all constraints have distinct metavariables on the left hand side.

Let \(\Theta=(\textsc{M}:[\sigma,\sigma]\sigma)\). Then

1. \(\{\Theta\mid x:\sigma,y:\sigma\vdash\textsc{M}[y,x]\overset{?}{=}\texttt{app}(\texttt{abs}(z.x),y):\sigma\}\) is in solved form;
2. \(\{\Theta\mid x:\sigma,y:\sigma\vdash\textsc{M}[x,x]\overset{?}{=}\texttt{app}(\texttt{abs}(z.x),y):\sigma\}\) is not in solved form, since the parameters of M are not _distinct_ variables, and also since the variable \(y\) occurs on the right hand side but does not occur in the parameters of M;
3. \(\{\Theta\mid f:\sigma\Rightarrow\sigma,y:\sigma\vdash\textsc{M}[y,\texttt{app}(f,y)]\overset{?}{=}\texttt{app}(f,y):\sigma\}\) is not in solved form, since the second parameter of M is not a variable.

An \(E\)-unification problem \(\langle\Theta,S\rangle\) in solved form has an \(E\)-unifier. Assume \(S=\{\Theta\mid\Gamma_{i}\vdash\textsc{M}_{i}[\overline{z_{i}}]\overset{?}{=}t_{i}:\tau_{i}\}^{i\in\{1,\ldots,n\}}\). Let \(\xi_{S}=[\textsc{M}_{i}[\overline{z_{i}}]\mapsto t_{i}]^{i\in\{1,\ldots,n\}}\). Note that \(\xi_{S}\) is a well-formed metasubstitution since, by assumption, each \(\overline{z_{i}}\) is a sequence of distinct variables, \(t_{i}\) does not reference other variables or \(\textsc{M}_{i}\), and each metavariable \(\textsc{M}_{i}\) is mapped only once in \(\xi_{S}\).
Applying \(\xi_{S}\) to each constraint we get trivial constraints, which are satisfied by reflexivity: \(\Theta\mid\Gamma_{i}\vdash t_{i}\equiv_{E}t_{i}:\tau_{i}\). Thus, \(\xi_{S}\) is an \(E\)-unifier for \(\langle\Theta,S\rangle\). Later, we will refer to the \(E\)-unifier constructed in the proof of Proposition 3 as \(\xi_{S}\).

### Comparing \(E\)-unifiers

In general, a unification problem may have multiple unifiers. Here, we generalise the usual notion of comparing \(E\)-unifiers [11] to second-order abstract syntax using the _subsumption_ order, leading to a straightforward generalisation of the ideas of _the most general unifier_ and _a complete set of unifiers_. We do not consider generalising _essential unifiers_ [14, 33] or _homeomorphic embedding_ [32], although these might constitute prospective future work.

Two metavariable substitutions \(\theta,\xi:\Theta\to\Xi\) are said to be **equal modulo** \(E\) (notated \(\theta\equiv_{E}\xi\)) if for all metavariables \(\textsc{M}:[\overline{\sigma}]\tau\in\Theta\), any context \(\Gamma\), and any terms \(\Theta\mid\Gamma\vdash t_{i}:\sigma_{i}\) (for all \(i\in\{1,\ldots,n\}\)) we have \[\Xi\mid\Gamma\vdash\theta\,\textsc{M}[t_{1},\ldots,t_{n}]\equiv_{E}\xi\,\textsc{M}[t_{1},\ldots,t_{n}]:\tau\]

A metavariable substitution \(\theta:\Theta\to\Xi\) **subsumes** a substitution \(\xi:\Theta\to\Xi^{\prime}\) modulo \(E\) (notated \(\theta\preccurlyeq_{E}\xi\)) if there exists a substitution \(\eta:\Xi\to\Xi^{\prime}\) such that \(\eta\circ\theta\equiv_{E}\xi\). A set of \(E\)-unifiers \(\mathsf{CSU}_{E}(S)\subseteq U_{E}(S)\) is a **complete set of \(E\)-unifiers** for \(\langle\Theta,S\rangle\) when

1. _(completeness) for any \(E\)-unifier \(\theta\in U_{E}(S)\) there exists \(\xi\in\mathsf{CSU}_{E}(S)\) such that \(\xi\preccurlyeq_{E}\theta\);_
2. _(minimality) for any \(\theta,\xi\in\mathsf{CSU}_{E}(S)\) if \(\theta\preccurlyeq_{E}\xi\) then \(\theta=\xi\)._

We reserve the notation \(\mathsf{CSU}_{E}(S)\) to refer to minimal complete sets of \(E\)-unifiers (i.e. satisfying both conditions). The \(E\)-unification problem \(\langle\Theta,S\rangle\) in untyped \(\lambda\)-calculus has an infinite \(\mathsf{CSU}_{E}(S)\): \[\langle\Theta,S\rangle=\exists(\textsc{M}:[\star,\star]\star).\,\forall(g:\star,y:\star).\,\textsc{M}[g,\texttt{abs}(x.\texttt{app}(x,y))]\overset{?}{=}\texttt{app}(g,y):\star\] \[\mathsf{CSU}_{E}(S)=\{[\textsc{M}[z_{1},z_{2}]\mapsto\texttt{app}(z_{2},z_{1})],\] \[[\textsc{M}[z_{1},z_{2}]\mapsto\texttt{app}(z_{1},\texttt{app}(z_{2},\texttt{abs}(x.x)))],\] \[[\textsc{M}[z_{1},z_{2}]\mapsto\texttt{app}(\texttt{app}(z_{2},\texttt{abs}(x.\texttt{abs}(f.\texttt{app}(f,x)))),z_{1})],\ldots\}\]
For any two minimal complete sets of \(E\)-unifiers \(\mathsf{CSU}^{1}_{E}(S)\) and \(\mathsf{CSU}^{2}_{E}(S)\), there exists a bijection \(f:\mathsf{CSU}^{1}_{E}(S)\longleftrightarrow\mathsf{CSU}^{2}_{E}(S)\) such that \[\forall\theta\in\mathsf{CSU}^{1}_{E}(S).\quad\theta\equiv_{E}f(\theta)\] Thus, \(\mathsf{CSU}_{E}(S)\) is unique up to a bijection modulo \(E\), so from now on we will refer to _the_ complete set of \(E\)-unifiers. When the complete set of \(E\)-unifiers \(\mathsf{CSU}_{E}(S)\) is a singleton set, we refer to its element as the most general \(E\)-unifier of \(S\) (notated \(\mathsf{mgu}_{E}(S)\)).

Consider this \(E\)-unification problem \(\langle\Theta,S\rangle\) in simply-typed \(\lambda\)-calculus: \[\exists(\textsc{M}:[\sigma\Rightarrow\tau,(\sigma\Rightarrow\tau)\Rightarrow\tau]\tau).\,\forall(g:\sigma\Rightarrow\tau,y:\sigma).\,\textsc{M}[g,\texttt{abs}(x.\texttt{app}(x,y))]\overset{?}{=}\texttt{app}(g,y):\tau\] For this problem the most general \(E\)-unifier exists: \(\mathsf{mgu}_{E}(S)=[\textsc{M}[z_{1},z_{2}]\mapsto\texttt{app}(z_{2},z_{1})]\).

If \(\langle\Theta,S\rangle\) is an \(E\)-unification problem in solved form, then \(\mathsf{mgu}_{E}(S)\equiv_{E}\xi_{S}\). It is enough to check that for any \(E\)-unifier \(\theta\in U_{E}(S)\) we have \(\xi_{S}\preccurlyeq_{E}\theta\). Observe that \(\theta\equiv_{E}\theta\circ\xi_{S}\) since for any constraint \((\Theta\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{z}]\overset{?}{=}t:\tau)\in S\) such that \(\textsc{M}:[\overline{\sigma}]\tau\in\Theta\), any context \(\Gamma\), and any terms \(\Theta\mid\Gamma\vdash t_{i}:\sigma_{i}\) (for all \(i\in\{1,\ldots,n\}\)) we have \[\Xi\mid\Gamma\vdash\theta\,\textsc{M}[\overline{t}]\equiv_{E}\theta t[\overline{z}\mapsto\overline{t}]\equiv_{E}\theta(\xi_{S}\,\textsc{M}[\overline{z}])[\overline{z}\mapsto\overline{t}]\equiv_{E}\theta(\xi_{S}\,\textsc{M}[\overline{t}]):\tau\]

## 4 Unification Procedure

In this section, we introduce a unification procedure to solve arbitrary \(E\)-unification problems over second-order abstract syntax. We show that the procedure is sound at the end of this section, and we devote Section 5 to the completeness result. Our unification procedure has features inspired by classical \(E\)-unification and HOU algorithms. For the equational part, we took inspiration from the complete sets of transformations for general (first-order) \(E\)-unification of Gallier and Snyder [11]. For unification of metavariables, we took inspiration from Huet's higher-order pre-unification [15] and Jensen and Pietrzykowski's procedure [16]. Some key insights from the recent work by Vukmirovic, Bentkamp, and Nummelin [35] give us the opportunity to improve the algorithm further; however, we are not attempting to achieve an _efficient_ \(E\)-unification for second-order abstract syntax in this paper.

Note that we cannot directly reuse HOU ideas in our procedure, since we do not have full \(\lambda\)-calculus at our disposal. Instead we only have parametrised metavariables \(\textsc{M}[t_{1},\ldots,t_{n}]\), which are analogous to applications of variables in HOU (\(m\ t_{1}\ \ldots\ t_{n}\)). Still, we can adapt some ideas if they do not rely on normalisation or specific syntax of \(\lambda\)-calculus. For other ideas, we introduce simpler, yet more general versions.
This allows us to preserve completeness, perhaps sacrificing some efficiency by making the search space larger. While we believe it is possible to optimise our procedure to have virtually the same running time for unification problems in \(\lambda\)-calculus as the HOU algorithms mentioned above, we leave such optimisations for future work. To produce the unification procedure we follow and generalise some of the common steps that can be found in the literature on HOU and first-order \(E\)-unification:

1. Classify substitutions that will constitute partial solutions for certain classes of constraints. The idea is that an overall solution will emerge as a composition of partial solutions.
2. Define transition rules that make small steps towards a solution.
3. Determine when to stop (succeed or fail).
4. If possible, organize rules in a proper order, yielding a unification procedure.

### Bindings

Now we define different elementary substitutions that will serve as partial solutions for some constraints in our unification procedure. Here, we generalise a list of bindings collected by Vukmirovic, Bentkamp, and Nummelin [35]. From that list, Huet-style projection (also known as _partial binding_ in HOU literature) is not used. Instead, imitation for axioms and JP-style projection bindings cover all substitutions that can be generated by Huet-style projection bindings3. We also use a simplified version of iteration binding here, again, since it generates all necessary bindings when considered together with the generalised imitation binding.

Footnote 3: Note that Huet-style projection cannot be formulated in pure second-order abstract syntax as it explicitly relies on abs and app. Thus, in \(E\)-unification we can recover such projections only by using axioms in some form. Kudasov [20] implements a heuristic that resembles a generalisation of Huet-style projections. We leave proper generalisations for future work.

_JP-style projection for M. If \(\textsc{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\) and \(\sigma_{i}=\tau\), then \(\zeta=[\textsc{M}[\overline{z}]\mapsto z_{i}]\) is a JP-style projection binding._

_Imitation for M. If \(\textsc{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\), \(\mathsf{F}:(\overline{\alpha_{1}}.\beta_{1},\ldots,\overline{\alpha_{n}}.\beta_{n})\to\tau\) and \(\textsc{M}_{i}:[\sigma_{1},\ldots,\sigma_{k},\overline{\alpha_{i}}]\beta_{i}\) for all \(i\), then \(\zeta=[\textsc{M}[\overline{z}]\mapsto\mathsf{F}(\overline{x_{1}}.\textsc{M}_{1}[\overline{z},\overline{x_{1}}],\ldots,\overline{x_{n}}.\textsc{M}_{n}[\overline{z},\overline{x_{n}}])]\) is an imitation binding._

_Elimination for M. If \(\textsc{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\) and \(1\leq j_{1}<\ldots<j_{n}\leq k\) such that \(\textsc{E}:[\sigma_{j_{1}},\ldots,\sigma_{j_{n}}]\tau\), then \(\zeta=[\textsc{M}[\overline{z}]\mapsto\textsc{E}[z_{j_{1}},\ldots,z_{j_{n}}]]\) is a (parameter) elimination binding._

_Identification of M and N._
If \(\texttt{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\), \(\texttt{N}:[\nu_{1},\ldots,\nu_{l}]\tau\), \(\texttt{I}:[\sigma_{1},\ldots,\sigma_{k},\nu_{1},\ldots,\nu_{l}]\tau\), \(\texttt{M}_{i}:[\sigma_{1},\ldots,\sigma_{k}]\nu_{i}\) for all \(i\in\{1,\ldots,l\}\), and \(\texttt{N}_{j}:[\nu_{1},\ldots,\nu_{l}]\sigma_{j}\) for all \(j\in\{1,\ldots,k\}\), then

\([\texttt{M}[\overline{z}]\mapsto\texttt{I}[\overline{z},\texttt{M}_{1}[\overline{z}],\ldots,\texttt{M}_{l}[\overline{z}]],\ \ \texttt{N}[\overline{y}]\mapsto\texttt{I}[\texttt{N}_{1}[\overline{y}],\ldots,\texttt{N}_{k}[\overline{y}],\overline{y}]]\) _is an identification binding_

_Iteration for \(\texttt{M}\)._ If \(\texttt{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\), \(\texttt{F}:(\overline{\alpha_{1}}.\beta_{1},\ldots,\overline{\alpha_{n}}.\beta_{n})\rightarrow\gamma\), \(\texttt{H}:[\sigma_{1},\ldots,\sigma_{k},\gamma]\tau\), and \(\texttt{M}_{i}:[\sigma_{1},\ldots,\sigma_{k},\overline{\alpha_{i}}]\beta_{i}\) for all \(i\), then

\([\texttt{M}[\overline{z}]\mapsto\texttt{H}[\overline{z},\texttt{F}(\overline{x_{1}}.\texttt{M}_{1}[\overline{z},\overline{x_{1}}],\ldots,\overline{x_{n}}.\texttt{M}_{n}[\overline{z},\overline{x_{n}}])]]\) _is an iteration binding_

The iteration bindings allow combining parameters of a metavariable in arbitrary ways. This is also particularly influenced by the fact that the type \(\gamma\) used in the bindings may be arbitrary. This type of binding introduces arbitrary branching in the procedure below, so it should be used with caution in pragmatic implementations. Intuitively, we emphasize two distinct use cases for the iteration bindings (see also the sketch after this list):

1. To extract a new term from one or more parameters by application of an axiom. In this case, we use iteration, where the root of one of the sides of an axiom is used as an operator \(\texttt{F}\).
2. To introduce new variables in scope. In this case, any operator that introduces at least one variable into scope is used in an iteration. This use case is important for the completeness of the procedure. See Example 49.
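For illustration only, here is how two of these bindings can be sketched as substitution constructors in Python; the tuple encoding of terms, the parameter names, and the `fresh` helper are our own assumptions, not part of the paper.

```python
# A binding maps a metavariable name to (parameter names, replacement term),
# with terms encoded as in the earlier sketch.
def jp_projection(m, arity, i):
    """M[z1,...,zk] |-> z_i  (only type-correct when sigma_i = tau)."""
    zs = tuple(f'z{j}' for j in range(1, arity + 1))
    return {m: (zs, ('var', zs[i - 1]))}

def imitation(m, arity, f, scopes, fresh):
    """M[z...] |-> F(x1... . M1[z..., x1...], ..., xn... . Mn[z..., xn...])."""
    zs = tuple(f'z{j}' for j in range(1, arity + 1))
    slots = []
    for i, xs in enumerate(scopes, start=1):       # xs: bound variables of slot i
        mi = fresh(f'{m}{i}')                      # a fresh metavariable M_i
        params = tuple(('var', z) for z in zs) + tuple(('var', x) for x in xs)
        slots.append((tuple(xs), ('meta', mi, params)))
    return {m: (zs, ('op', f, tuple(slots)))}

counter = iter(range(10**9))
fresh = lambda base: f'{base}#{next(counter)}'     # freshness via a global counter
print(imitation('M', 2, 'app', [(), ()], fresh))   # imitate a binary operator
```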
### Transition Rules

We will write each transition rule of the unification procedure in the form \((\Theta\mid\Gamma_{\forall}\vdash s\stackrel{?}{=}t:\tau)\stackrel{\theta}{\longrightarrow}\langle\Xi,S\rangle\), where \(\theta:\Theta\rightarrow\Xi\) is a metavariable substitution and \(S\) is a new set of constraints that is supposed to replace \(s\stackrel{?}{=}t\). We will often write \(S\) instead of \(\langle\Xi,S\rangle\) when \(\Xi\) is understood from context. We will now go over the rules that will constitute the \(E\)-unification procedure when put in proper order. The first two rules are straightforward.

**(delete)** If a constraint has the same term on both sides, we can **delete** it:

\[(\Theta\mid\Gamma_{\forall}\vdash t\stackrel{?}{=}t:\tau)\stackrel{\mathsf{id}}{\longrightarrow}\varnothing\]

**(decompose)** We define two variants of this rule:

1. Let \(\mathsf{F}:(\overline{\sigma_{1}}.\tau_{1},\ldots,\overline{\sigma_{n}}.\tau_{n})\rightarrow\tau\), then we can **decompose** a constraint with \(\mathsf{F}\) on both sides into a set of constraints for each pair of (scoped) subterms:

\[(\Theta\mid\Gamma_{\forall}\vdash\mathsf{F}(\overline{\overline{x}.t})\stackrel{?}{=}\mathsf{F}(\overline{\overline{x}.s}):\tau)\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma_{\forall},\overline{x_{i}}:\overline{\sigma_{i}}\vdash t_{i}\stackrel{?}{=}s_{i}:\tau_{i}\}^{i\in\{1,\ldots,n\}}\]

2. Let \(\textsc{M}:[\sigma_{1},\ldots,\sigma_{n}]\tau\), then we can **decompose** a constraint with \(\textsc{M}\) on both sides into a set of constraints for each pair of parameters:

\[(\Theta\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{t}]\stackrel{?}{=}\textsc{M}[\overline{s}]:\tau)\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash t_{i}\stackrel{?}{=}s_{i}:\sigma_{i}\}^{i\in\{1,\ldots,n\}}\]

**Example** (decompose). Let \(\Theta\mid\Gamma=(\textsc{M}:[\sigma]\sigma\Rightarrow\sigma\mid f:\sigma\Rightarrow\sigma)\). Then:

\[\{\Theta\mid\Gamma\vdash\mathsf{abs}(x.\mathsf{app}(\textsc{M}[x],x))\stackrel{?}{=}\mathsf{abs}(x.\mathsf{app}(f,x))\}\]
\[\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma,x:\sigma\vdash\mathsf{app}(\textsc{M}[x],x)\stackrel{?}{=}\mathsf{app}(f,x)\}\quad\text{(decompose)}\]
\[\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma,x:\sigma\vdash\textsc{M}[x]\stackrel{?}{=}f,\quad\Theta\mid\Gamma,x:\sigma\vdash x\stackrel{?}{=}x\}\quad\text{(decompose)}\]
\[\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma,x:\sigma\vdash\textsc{M}[x]\stackrel{?}{=}f\}\quad\text{(delete)}\]

The next two rules are second-order versions of the _imitate_ and _project_ rules used in many HOU algorithms. The idea is that a metavariable can either imitate the other side of the constraint, or simply project one of its parameters:

**Definition 24** (imitate). _For flex-rigid constraints with a metavariable \(\textsc{M}:[\overline{\sigma}]\tau\) and an operator \(\mathsf{F}:(\overline{\sigma_{1}}.\tau_{1},\ldots,\overline{\sigma_{n}}.\tau_{n})\to\tau\) we can_ **imitate** _the rigid side using an imitation binding \(\zeta=[\textsc{M}[\overline{z}]\mapsto\mathsf{F}(\overline{\overline{x}.\textsc{M}'[\overline{z},\overline{x}]})]\) with fresh metavariables \(\overline{\textsc{M}'}\):_

\[(\Theta\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{s}]\stackrel{?}{=}\mathsf{F}(\overline{\overline{x}.t}):\tau)\stackrel{\zeta}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash\mathsf{F}(\overline{\overline{x}.\textsc{M}'[\overline{s},\overline{x}]})\stackrel{?}{=}\mathsf{F}(\overline{\overline{x}.t}):\tau\}\]

Note that **(imitate)** can be followed up by an application of the **(decompose)** rule.

**Definition 25** (project). _For constraints with a metavariable \(\textsc{M}:[\overline{\sigma}]\tau\) such that \(\sigma_{i}=\tau\) and a term \(u:\tau\) we can produce a JP-style projection binding:_

\[(\Theta\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{s}]\stackrel{?}{=}u:\tau)\stackrel{[\textsc{M}[\overline{z}]\mapsto z_{i}]}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash s_{i}\stackrel{?}{=}u:\tau\}\]
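The branching these two rules create can be illustrated as follows, reusing the hypothetical `jp_projection` and `imitation` constructors from the sketch above; this is our own illustration of the search, not the paper's algorithm.

```python
def flex_rigid_candidates(m, arity, param_types, result_type, f, scopes, fresh):
    """For a flex-rigid constraint M[s...] ?= F(...), the procedure branches
    over one (imitate) binding plus every (project) binding whose parameter
    type agrees with the result type."""
    candidates = [imitation(m, arity, f, scopes, fresh)]
    candidates += [jp_projection(m, arity, i)
                   for i in range(1, arity + 1)
                   if param_types[i - 1] == result_type]
    return candidates
```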
The next rule is concerned with matching one side of a constraint against one side of an axiom. When matching with an axiom, we need to _instantiate_ it for the particular use (indeed, an axiom serves as a schema!). However, it is not sufficient to simply map metavariables of the axiom to fresh metavariables of corresponding types. Since we are instantiating the axiom for a particular constraint, which may have a non-empty \(\Gamma_{\forall}\), it is important to add all those variables to each of the fresh metavariables4:

Footnote 4: This is different from \(E\)-unification with first-order axioms, where metavariables do not carry their own context and can be unified with an arbitrary variable later.

**Definition 26**. _Let \(\Gamma_{\forall}=(\overline{x}:\overline{\alpha})\) and \(\xi:\Xi\mid\cdot\to\Theta\mid\Gamma_{\forall}\). We say \(\xi\)_ **instantiates the axiom \(\Xi\mid\cdot\vdash l\equiv r:\tau\) in context \(\Theta\mid\Gamma_{\forall}\)** _if_

1. _for any \((\textsc{M}_{i}:[\overline{\sigma}]\tau)\in\Xi\), \(\xi\) maps \(\textsc{M}_{i}[\overline{t}]\) to \(\textsc{M}'_{i}[\overline{t},\overline{x}]\) for a fresh metavariable \(\textsc{M}'_{i}\);_
2. _\(\textsc{M}'_{i}=\textsc{M}'_{j}\) iff \(i=j\) for all \(i,j\)._

**Example 27**. Let \(\xi=[\textsc{M}[z]\mapsto\textsc{M}_{1}[z,g,y],\textsc{N}[]\mapsto\textsc{N}_{1}[g,y]]\). Then, \(\xi\) instantiates the axiom

\[\textsc{M}:[\sigma]\tau,\textsc{N}:[]\sigma\mid\cdot\vdash\mathsf{app}(\mathsf{abs}(x.\textsc{M}[x]),\textsc{N}[])\equiv\textsc{M}[\textsc{N}[]]:\tau\]

in context \(\textsc{M}_{1}:[\sigma,\sigma\Rightarrow\tau,\sigma]\tau,\textsc{N}_{1}:[\sigma\Rightarrow\tau,\sigma]\sigma\mid g:\sigma\Rightarrow\tau,y:\sigma\).

**Definition 28** (mutate). _For constraints where one of the sides matches an axiom in \(E\):_

\[\Xi\mid\cdot\vdash l\equiv r:\tau\]

_we rewrite the corresponding side (here, \(\xi\) instantiates the axiom in context \(\Theta\mid\Gamma_{\forall}\)):_

\[(\Theta\mid\Gamma_{\forall}\vdash t\stackrel{?}{=}s:\tau)\stackrel{\mathsf{id}}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash t\stackrel{?}{=}\xi l:\tau\}\uplus\{\Theta\mid\Gamma_{\forall}\vdash\xi r\stackrel{?}{=}s:\tau\}\]

In general, we may rewrite in both directions. However, it may be pragmatic to choose a single direction for some of the axioms (e.g. \(\beta\eta\)-reductions), while keeping others bidirectional (e.g. commutativity and associativity axioms).
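A sketch of how such an instantiation can drive the **(mutate)** rule is shown below; the representation of constraints and axioms is again our own hypothetical encoding.

```python
def mutate(constraint, axiom, fresh):
    """Split  forall ctx . s ?= t  into  s ?= xi(l)  and  xi(r) ?= t,  where xi
    renames each axiom metavariable to a fresh one and appends the forall-bound
    context variables to its parameter list (as in the definition above)."""
    ctx_vars, s, t = constraint                  # (('x', 'y', ...), s, t)
    axiom_metas, l, r = axiom                    # (metavariable names, l, r)
    ren = {m: fresh(m) for m in axiom_metas}     # Mi |-> fresh Mi'
    def inst(term):
        kind = term[0]
        if kind == 'var':
            return term
        if kind == 'meta':
            _, m, args = term
            return ('meta', ren[m], tuple(inst(a) for a in args)
                                    + tuple(('var', x) for x in ctx_vars))
        _, f, scopes = term
        return ('op', f, tuple((xs, inst(body)) for xs, body in scopes))
    return [(ctx_vars, s, inst(l)), (ctx_vars, inst(r), t)]
```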
The remaining rules deal with constraints with metavariables on both sides. One rule attempts to unify distinct metavariables:

**Definition 29** (identify). _When a constraint consists of a pair of distinct metavariables \(\textsc{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\) and \(\textsc{N}:[\gamma_{1},\ldots,\gamma_{l}]\tau\), we can use an identification binding \(\zeta\) (with fresh \(\textsc{I}\), \(\textsc{M}_{i}\), \(\textsc{N}_{j}\) as in the identification binding above):_

\[(\Theta\mid\Gamma_{\forall}\vdash\textsc{M}[\overline{s}]\stackrel{?}{=}\textsc{N}[\overline{u}])\stackrel{\zeta}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash\textsc{I}[\overline{s},\textsc{M}_{1}[\overline{s}],\ldots,\textsc{M}_{l}[\overline{s}]]\stackrel{?}{=}\textsc{I}[\textsc{N}_{1}[\overline{u}],\ldots,\textsc{N}_{k}[\overline{u}],\overline{u}]\}\]

Another rule attempts to unify identical metavariables with distinct lists of parameters:

**Definition 30** (eliminate). _When a constraint has the same metavariable \(\mathtt{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\) on both sides and there is a sequence \(1\leq j_{1}<\ldots<j_{n}\leq k\) such that \(s_{j_{i}}=t_{j_{i}}\) for all \(i\in\{1,\ldots,n\}\), then we can **eliminate** every other parameter and leave the remaining terms identical:_

\[(\Theta\mid\Gamma_{\forall}\vdash\mathtt{M}[\overline{s}]\stackrel{?}{=}\mathtt{M}[\overline{t}])\stackrel{[\mathtt{M}[\overline{z}]\mapsto\mathtt{E}[z_{j_{1}},\ldots,z_{j_{n}}]]}{\longrightarrow}\varnothing\]

The idea of the final rule is to extend a list of parameters with some combination of those that exist already. For example, consider the constraint \(\forall x,y,z.\mathtt{M}[\mathtt{pair}(x,y),z]\stackrel{?}{=}\mathtt{N}[x,z]\). It is clear that if we can work with a pair of \(x\) and \(y\), then we can work with them individually, since we can extract \(x\) using \(\mathtt{fst}\) and \(y\) using \(\mathtt{snd}\). Thus, a substitution \([\mathtt{M}[p,z]\mapsto\mathtt{M}_{1}[p,z,\mathtt{fst}(p)]]\) would result in a new constraint \(\forall x,y,z.\mathtt{M}_{1}[\mathtt{pair}(x,y),z,\mathtt{fst}(\mathtt{pair}(x,y))]\stackrel{?}{=}\mathtt{N}[x,z]\). This one can now be solved by applying the **(identify)**, **(eliminate)**, and **(decompose)** rules, which will lead us to \(\forall x,y,z.\mathtt{fst}(\mathtt{pair}(x,y))\stackrel{?}{=}x:\sigma\), which will be processed using the **(mutate)** rule.

**Definition 31** (iterate). _When a constraint consists of a pair of (possibly identical) metavariables \(\mathtt{M}:[\sigma_{1},\ldots,\sigma_{k}]\tau\) and \(\mathtt{N}:[\gamma_{1},\ldots,\gamma_{l}]\tau\), we can use an iteration binding:_

\[(\Theta\mid\Gamma_{\forall}\vdash\mathtt{M}[\overline{s}]\stackrel{?}{=}\mathtt{N}[\overline{t}])\stackrel{[\mathtt{M}[\overline{z}]\mapsto\mathtt{H}[\overline{z},\mathtt{F}(\overline{\overline{x}.\mathtt{K}[\overline{z},\overline{x}]})]]}{\longrightarrow}\{\Theta\mid\Gamma_{\forall}\vdash\mathtt{H}[\overline{s},\mathtt{F}(\overline{\overline{x}.\mathtt{K}[\overline{s},\overline{x}]})]\stackrel{?}{=}\mathtt{N}[\overline{t}]\}\]

**Definition 32**. The **\(E\)-unification procedure** over an equational presentation \(E\) is defined by repeatedly applying the following transitions (non-deterministically) until a stop:

1. If no constraints are left, then stop (_succeed_).
2. If possible, apply the **(delete)** rule.
3. If possible, apply the **(mutate)** or **(decompose)** rule.
4. If there is a constraint consisting of two non-metavariables and none of the above transitions apply, stop (_fail_).
5. If there is a constraint \(\mathtt{M}[\ldots]\stackrel{?}{=}\mathtt{F}(\ldots)\), apply **(imitate)** or **(project)**.
6. If there is a constraint \(\mathtt{M}[\ldots]\stackrel{?}{=}x\), apply **(project)**.
7. If possible, apply **(identify)**, **(eliminate)**, or **(iterate)**.
8. If none of the rules above are applicable, then stop (_fail_).
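The control flow of this procedure can be sketched as a small generic search driver; the state representation and rule interface below are our own assumptions, and a real implementation would also track the metavariable contexts.

```python
def unify(initial_state, rules, limit=100_000):
    """Depth-first search over the non-deterministic procedure. A state is
    (constraints, substitution_so_far); `rules` is the priority-ordered list of
    rule classes, each returning a list of successor states, or None when it
    does not apply."""
    stack, solutions = [initial_state], []
    while stack and limit > 0:
        limit -= 1
        constraints, subst = stack.pop()
        if not constraints:
            solutions.append(subst)          # a success leaf: subst unifies S
            continue
        for rule in rules:                   # first applicable rule class wins
            successors = rule(constraints, subst)
            if successors is not None:
                stack.extend(successors)     # branch over the rule's options
                break
        # if no rule applied, nothing was pushed and this path fails
    return solutions
```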
Many HOU algorithms [23, 21] implement a rule (typically called _eliminate_) that allows eliminating metavariables when a corresponding constraint is in solved form. Such a rule is not necessary here, as it is covered by a combination of the **(imitate)**, **(decompose)**, **(delete)**, **(identify)**, and **(eliminate)** rules. However, it simplifies the presentation of examples and also serves as a practical optimisation, so we include it as an optional rule:

**Definition 33** (eliminate*). _When a constraint \(C=(\Theta\mid\Gamma_{\forall}\vdash\mathtt{M}[\overline{z}]\stackrel{?}{=}u)\) is in solved form, we can eliminate it with a corresponding unifier \(\xi_{\{C\}}=[\mathtt{M}[\overline{z}]\mapsto u]\):_

\[(\Theta\mid\Gamma_{\forall}\vdash\mathtt{M}[\overline{z}]\stackrel{?}{=}u)\stackrel{[\mathtt{M}[\overline{z}]\mapsto u]}{\longrightarrow}\varnothing\]

The **(eliminate*)** rule should have the same priority as **(delete)** in the procedure.

**Lemma 34**. _In the procedure defined in Definition 32, each step is sound. That is, if \(S\stackrel{\theta}{\longrightarrow}S^{\prime}\) is a single-step transition that the procedure takes and \(\xi\in U_{E}(S^{\prime})\), then \(\xi\circ\theta\in U_{E}(S)\)._

Proof.: It is sufficient to show that each step is sound with respect to the constraint it acts upon. That is, we consider the step \(\{C\}\stackrel{\theta}{\longrightarrow}S^{\prime\prime}\) such that \(C\in S\) and \(S^{\prime\prime}\subseteq S^{\prime}\). By assumption \(\xi\in U_{E}(S^{\prime})\) and thus also \(\xi\in U_{E}(S^{\prime\prime})\). Note that for any constraint \(D\in(S-\{C\})\) we have a corresponding constraint \(D^{\prime}\in(S^{\prime}-S^{\prime\prime})\) such that \(D^{\prime}=\theta D\). Since \(\xi\) unifies \(D^{\prime}\), it follows that \(\xi\circ\theta\) unifies \(D\). Thus, it is enough for us to show that \(\xi\circ\theta\) unifies \(C\). We now go over the list of possible steps:

* **(delete)**: it is clear that any substitution unifies \(C\);
* **(decompose)**: since \(\xi\) unifies all subterm pairs in \(S^{\prime\prime}\), it also unifies \(C\);
* **(imitate)**, **(project)**, **(identify)**, **(eliminate)**, **(iterate)**: all of these rules simply make a decision on how to substitute some metavariables (choose \(\theta\)) and immediately apply that substitution. So \(S^{\prime\prime}=\{\theta C\}\), and since \(\xi\) unifies \(\theta C\), \(\xi\circ\theta\) unifies \(C\);
* **(mutate)**: let \(C=(\Theta\mid\Gamma_{\forall}\vdash s\stackrel{?}{=}t:\tau)\) and suppose we mutate according to an axiom \((\Xi\mid\cdot\vdash l\equiv r:\tau)\in E\) with a substitution \(\zeta\) instantiating this axiom. By assumption, \(\xi\) unifies both \(s\stackrel{?}{=}\zeta l\) and \(\zeta r\stackrel{?}{=}t\). Also, \(\Theta\mid\Gamma_{\forall}\vdash\zeta l\equiv_{E}\zeta r:\tau\). In this rule, \(\theta=\mathsf{id}\), and so we can show that \(\xi\circ\theta=\xi\) unifies \(s\stackrel{?}{=}t\): \(\xi s\equiv_{E}\xi(\zeta l)\equiv_{E}\xi(\zeta r)\equiv_{E}\xi t\).
**Theorem 35**. _The procedure defined in Definition 32 is sound. That is, if \(S\stackrel{\theta_{1}}{\longrightarrow}S_{1}\stackrel{\theta_{2}}{\longrightarrow}\cdots\stackrel{\theta_{n}}{\longrightarrow}\varnothing\) is a path produced by the procedure, then \(\theta_{n}\circ\ldots\circ\theta_{2}\circ\theta_{1}\in U_{E}(S)\)._

Proof.: Direct corollary of Lemma 34.

## 5 Proof of Completeness

In this section we prove our main theorem, showing that our unification procedure is complete. We present a compact version of the proof here, while the full proof is written out in Appendix C. We start with a definition of mixed operators:

**Definition 36**. _We say that an operator \(\mathsf{F}:(\overline{\alpha_{1}}.\beta_{1},\ldots,\overline{\alpha_{n}}.\beta_{n})\to\gamma\) is **mixed** iff \(\alpha_{i}\) is empty and \(\alpha_{j}\) is not empty for some \(i\) and \(j\)._

In the following theorem, we assume that all operators either introduce scopes in all subterms, or in none. The assumption is justified since we can always encode a mixed operator as a combination of non-mixed operators. For example, \(\mathsf{let}(t_{1},x.t_{2})\) can be encoded as \(\mathsf{let}(t_{1},\mathsf{block}(x.t_{2}))\).

**Theorem 37**. _Assuming no mixed operators are used, the procedure described in Definition 32 is complete, meaning that all paths from the root to the (success) leaves in the search tree constructed by the procedure form a complete5 set of \(E\)-unifiers. More specifically, let \(E\) be an equational presentation and \(\langle\Theta,S\rangle\) be an \(E\)-unification problem. Then for any \(E\)-unifier \(\theta\in U_{E}(S)\) there exists a path \(S\stackrel{\xi}{\longrightarrow}\varnothing\) such that \(\xi\preccurlyeq_{E}\theta\)._

Footnote 5: but not minimal, in general

The main idea of the proof is to take the unification problem \(S\) together with its \(E\)-unifier \(\theta\) and then choose one of the rules of the procedure guided by \(\theta\). Applying a rule updates the constraints, and the remaining substitution is also updated. To show that this process terminates, we introduce a measure that strictly decreases with each rule application.

**Definition 38**. _Let \(\theta\in U_{E}(S)\). Then, define the_ **measure** _on pairs \(\langle S,\theta\rangle\) as the lexicographic comparison of_

1. _the sum of lengths of the rewriting sequences \(\theta s\xleftrightarrow{*}_{E}\theta t\) for all \(s\stackrel{?}{=}t\) of \(S\);_
2. _the total number of operators used in \(\theta\);_
3. _the total number of metavariables used in \(\theta\);_
4. _the sum of sizes of terms in \(S\)._

_We denote the quadruple above as \(\mathsf{ord}(S,\theta)\)._
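Since Python tuples compare lexicographically, the measure can be sketched directly; the four helper callables are hypothetical stand-ins for the corresponding notions in the paper.

```python
def ord_measure(S, theta, rewrite_len, ops_in, metas_in, size):
    """The quadruple ord(S, theta): rewrite_len(s, t) is the length of the
    rewriting sequence between theta(s) and theta(t); ops_in/metas_in count
    symbols in theta; size measures terms of S (pairs (s, t))."""
    return (
        sum(rewrite_len(s, t) for (s, t) in S),     # 1. total rewriting length
        ops_in(theta),                              # 2. operators used in theta
        metas_in(theta),                            # 3. metavariables in theta
        sum(size(u) for c in S for u in c),         # 4. sizes of terms in S
    )
# ord_measure(S1, th1, ...) < ord_measure(S0, th0, ...) is exactly the
# lexicographic decrease used to show termination of the guided derivation.
```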
One of the crucial points in the proof is to understand whether we can apply the **(identify)** or **(eliminate)** rules for constraints with two metavariables on both sides. The following lemma provides precise conditions for this, allowing for **(identify)** when the metavariables are distinct or **(eliminate)** when they are equal.

**Lemma 39** (Lemma 46). _Let \(s=\textsc{M}[\overline{u}]\) and \(t=\textsc{N}[\overline{v}]\) such that \(\zeta s\xleftrightarrow{*}_{E}\zeta t\). Let \(s_{1},\ldots,s_{n}\) be the subterms of \(\zeta s\) and \(t_{1},\ldots,t_{n}\) the subterms of \(\zeta t\) such that the rewriting sequence \(\zeta s\xleftrightarrow{*}_{E}\zeta t\) corresponds to the union of independent rewritings \(s_{i}\xleftrightarrow{*}_{E}t_{i}\) for all \(i\in\{1,\ldots,n\}\). If for all \(i\) we have either that \(s_{i}\) is a subterm of an occurrence of \(\zeta u_{j_{i}}\) or that \(t_{i}\) is a subterm of an occurrence of \(\zeta v_{j_{i}}\), then there exist terms \(w\), \(\overline{u^{\prime}}\), \(\overline{v^{\prime}}\) such that_

1. _\(\overline{u^{\prime}}[\overline{y}\mapsto\zeta\overline{v}]\equiv_{E}\zeta\overline{u}\) and \(\overline{v^{\prime}}[\overline{z}\mapsto\zeta\overline{u}]\equiv_{E}\zeta\overline{v}\),_
2. _\(\zeta\textsc{M}[\overline{z}]=w[\overline{y}\mapsto\overline{v^{\prime}}]\) and \(\zeta\textsc{N}[\overline{y}]=w[\overline{z}\mapsto\overline{u^{\prime}}]\)._

We are now ready to prove Theorem 37.

Proof.: Let \(S_{0}=S\) and \(\theta_{0}=\rho\circ\theta\), where \(\rho\) is some renaming substitution such that every metavariable occurring in \(\theta_{0}S_{0}\) does not occur in \(S_{0}\). Note that \(\theta_{0}\) is an \(E\)-unifier of \(S\), since \(\theta\) is by assumption. We now inductively define \(S_{i},\xi_{i}\), and \(\theta_{i}\) until we reach some \(i\) such that \(S_{i}=\varnothing\). We ensure that \(\mathsf{ord}(S_{i},\theta_{i})\) decreases with every step, so that such a sequence of steps always terminates. We maintain the following invariants for each step:

1. \(\langle S_{i},\theta_{i}\rangle\stackrel{\xi_{i}}{\longrightarrow}\langle S_{i+1},\theta_{i+1}\rangle\) where \(S_{i}\stackrel{\xi_{i}}{\longrightarrow}S_{i+1}\) by some rule of the unification procedure;
2. \(\mathsf{ord}(S_{i+1},\theta_{i+1})<\mathsf{ord}(S_{i},\theta_{i})\);
3. \(\theta_{i}\in U_{E}(S_{i})\);
4. \(\theta_{0}\equiv_{E}\theta_{i}\circ\xi_{i-1}\circ\ldots\circ\xi_{0}\);
5. every metavariable occurring in \(\theta_{i}S_{i}\) does not occur in \(S_{i}\).

If \(S_{i}\neq\varnothing\), then let \(\forall\overline{x}:\overline{\sigma}.s\stackrel{?}{=}t:\tau\) be a constraint in \(S_{i}\). We consider cases with respect to the rewriting sequence \(\Theta_{i}\mid\overline{x}:\overline{\sigma}\vdash\theta_{i}s\xleftrightarrow{*}_{E}\theta_{i}t:\tau\) and the structure of the terms \(s\) and \(t\). In each case, one of the transitions of the unification procedure can be applied to advance towards the solution \(\theta\) while reducing the measure. We now have a sequence \(\langle S_{0},\theta_{0}\rangle\stackrel{\xi_{0}}{\longrightarrow}\langle S_{1},\theta_{1}\rangle\stackrel{\xi_{1}}{\longrightarrow}\ldots\). The sequence is finite since the measure \(\mathsf{ord}(S_{i},\theta_{i})\) strictly decreases with every step. Therefore, \(\langle S,\theta_{0}\rangle\stackrel{\xi_{0}}{\longrightarrow}\ldots\stackrel{\xi_{n}}{\longrightarrow}\langle\varnothing,\mathsf{id}\rangle\) and \(\theta\equiv_{E}\rho^{-1}\circ\theta_{i}\circ\xi_{i-1}\circ\ldots\circ\xi_{0}\equiv_{E}\rho^{-1}\circ\xi_{n}\circ\ldots\circ\xi_{0}\preccurlyeq_{E}\xi_{n}\circ\ldots\circ\xi_{0}\), completing the proof.

## 6 Discussion

A pragmatic implementation of our procedure may enjoy the following changes. We find that these help make a reasonable compromise between completeness and performance:

1. remove the **(iterate)** rule;
2. implement the **(eliminate*)** rule;
3. split the axioms \(E=B\uplus R\) such that \(R\) constitutes a confluent and terminating term rewriting system, and introduce a **(normalize)** rule to normalize terms (lazily) before applying any other rules except **(delete)** and **(eliminate*)**;
4. introduce a limit on the number of applications of the **(mutate)** rule;
5. introduce a limit on the number of bindings that do not decrease the problem size;
6. introduce a limit on the total number of bindings.

When adapting ideas from HOU algorithms, we had to simplify whenever those ideas relied on normalisation, \(\eta\)-expansion, or the specific syntax of \(\lambda\)-terms. Many HOU algorithms look at syntactic properties of terms to determine which rules to apply. In particular, HOU algorithms often distinguish _flex_ and _rigid_ terms [15, 23]. Jensen and Pietrzykowski introduce a notion of _\(\omega\)-simple_ terms [16]. Vukmirovic, Bentkamp, and Nummelin [35] introduce notions of _base-simple_ and _solid_ terms. These properties crucially depend on specific normalisation properties of \(\lambda\)-calculus, which might be inaccessible in an arbitrary second-order equational theory. Thus, our procedure contains more non-determinism than might be necessary.

In HOU algorithms, it is also common to have substitutions of the form

\[[\![\,\mathrm{M}\mapsto\lambda x_{1},\ldots,x_{n}.\,f\;(\mathrm{H}_{1}\;x_{1}\;\ldots\;x_{n})\;\ldots\;(\mathrm{H}_{k}\;x_{1}\;\ldots\;x_{n})\,]\!]\]

where \(f\) can be a bound variable (one of \(x_{1},\ldots,x_{n}\)) or a constant of type \(\sigma_{1}\Rightarrow\ldots\Rightarrow\sigma_{k}\Rightarrow\tau\). These are called Huet-style projection or imitation bindings [16, 35] or partial bindings [15, 23]. Huet-style projections (and the conditions prompting their use) are non-trivial to generalise well to arbitrary second-order abstract syntax, so we skipped them in this paper, opting for simpler rules at the cost of a larger search space.

## 7 Conclusion and Future Work

We have formulated the equational unification problem for second-order abstract syntax, allowing us to reason naturally about unification of terms in languages with binding constructions. Such languages include, but are not limited to, higher-order systems such as \(\lambda\)-calculus, which expands potential applications to more languages. We also presented a procedure to solve such problems, and our main result shows the completeness of this procedure.

In future work, we will focus on optimisations and recognition of decidable fragments of \(E\)-unification over second-order equations. One notable optimisation is splitting \(E\) into two sets \(R\uplus B\), where \(R\) is a set of directed equations, forming a confluent second-order term rewriting system, and \(B\) is a set of undirected equations (such as associativity and commutativity axioms). Another potential optimisation stems from a generalisation of Huet-style binding (also known as _partial binding_), which can lead to more informed decisions on which rule to apply in the procedure, introduce a Huet-style version of the **(project)** rule, and improve the **(iterate)** rule, significantly reducing the search space. A version of such an optimisation has been implemented in the form of a heuristic combining the **(imitate)** and **(project)** rules by Kudasov [20].

There are several well-studied fragments both for \(E\)-unification and higher-order unification. For example, unification in monoidal theories is essentially solving linear equations over semirings [26]. In higher-order unification, there are several well-known decidable fragments such as pattern unification [23]. Vukmirovic, Bentkamp, and Nummelin have identified some of the practically important decidable fragments, as well as a new one, in their recent work on efficient full higher-order unification [35].
It would be interesting to see whether these fragments can be generalised to second-order abstract syntax and used as oracles, possibly yielding an efficient \(E\)-unification for second-order abstract syntax as a strict generalisation of their procedure.
2307.09629
Gravitational Waves and Pommaret Bases
The first finite length differential sequence, now called {\it Janet sequence}, has been introduced by Janet in 1920. Thanks to the first book of Pommaret in 1978, this algorithmic approach has been extended by Gerdt, Blinkov, Zharkov, Seiler and others who introduced Janet and Pommaret bases in computer algebra. After 1990, new intrinsic tools have been developed in homological algebra with the definition of {\it extension differential modules} through the systematic use of {\it double differential duality} (Zbl 1079.93001). If an operator ${\cal{D}}_1$ generates the compatibility conditions (CC) of an operator ${\cal{D}}$, then the {\it adjoint operator} $ad( {\cal{D}})$ may not generate the CC of $ad({\cal{D}}_1)$. Equivalently, an operator ${\cal{D}}$ with coefficients in a differential field $K$ can be parametrized by an operator ${\cal{D}}_{-1}$ iff the differential module $M$ defined by ${\cal{D}}$ is torsion-free, that is $t(M)={ext}^1_D(N,D) = 0$ when $N$ is the differential module defined by $ad( {\cal{D}})$ and $D$ is the ring of differential operators with coefficients in $K$. Also $R = hom_K(M,K)$ is a differential module for the Spencer operator $d:R \rightarrow T^* \otimes R$, first introduced by Macaulay in 1916 with {\it inverse systems}. When ${\cal{D}}$ is the self-adjoint Einstein operator, it is not evident that $t(M)\neq 0$ is generated by the Weyl tensor having only to do with the group of conformal transformations. Gravitational waves are not coherent with these results because the stress-functions parametrizing the Cauchy = ad (Killing) operator have nothing to do with the metric, {\it exactly like the Airy or Maxwell functions in elasticity}. Similarly, the Cauchy operator has nothing to do with any contraction of the Bianchi operator.
J. -F. Pommaret
2023-06-15T16:01:15Z
http://arxiv.org/abs/2307.09629v1
# Gravitational Waves and Pommaret Bases

###### Abstract

The first finite length differential sequence, now called _Janet sequence_, has been introduced by Janet in 1920. Thanks to the first book of Pommaret in 1978, this algorithmic approach has been extended by Gerdt, Blinkov, Zharkov, Seiler and others who introduced Janet and Pommaret bases in computer algebra. Between 1920 and 1990, new intrinsic tools have been developed by Spencer and successors in homological algebra, culminating with the definition of _extension differential modules_ through the systematic use of _double differential duality_. If an operator \({\cal D}_{1}\) generates the compatibility conditions (CC) of an operator \({\cal D}\), then the _adjoint operator_ \(ad({\cal D})\) may not generate the CC of \(ad({\cal D}_{1})\). Equivalently, an operator \({\cal D}\) with coefficients in a differential field \(K\) can be parametrized by an operator \({\cal D}_{-1}\) iff the differential module \(M\) defined by \({\cal D}\) is torsion-free, that is \(t(M)=ext_{D}^{1}(N,D)=0\) when \(N\) is the differential module defined by \(ad({\cal D})\) and \(D\) is the ring of differential operators with coefficients in \(K\). Also \(R=hom_{K}(M,K)\) is a differential module for the Spencer operator \(d:R\to T^{*}\otimes R\), first introduced by Macaulay in 1916 with _inverse systems_. When \({\cal D}\) is the self-adjoint Einstein operator, it is not evident that \(t(M)\neq 0\) is generated by the components of the Weyl tensor having only to do with the pseudogroup of conformal transformations. Gravitational waves are thus not coherent with these results because the stress-functions parametrizing the Cauchy = ad (Killing) operator have nothing to do with the metric in general, _exactly like the Airy or Maxwell functions in elasticity_. Similarly, the Cauchy operator has nothing to do with any contraction of the Bianchi operator. The main difficulty is that \(ad({\cal D})\) may not be involutive _at all_ when \({\cal D}\) is involutive, a well known fact in OD control theory leading to the Kalman test (Zbl 1079.93001).

Differential sequence; Spencer operator; Differential modules; Differential duality; Extension modules; Control theory.
2303.03195
Query Learning Algorithm for Ordered Multi-Terminal Binary Decision Diagrams
We propose a query learning algorithm for ordered multi-terminal binary decision diagrams (OMTBDDs) using at most n equivalence and 2n(\lceil\log_2 m\rceil + 3n) membership queries by extending the algorithm for ordered binary decision diagrams (OBDDs). Tightness of our upper bounds is checked in our experiments using synthetically generated target OMTBDDs. Possibility of applying our algorithm to classification problems is also indicated in our other experiments using datasets of UCI Machine Learning Repository.
Atsuyoshi Nakamura
2023-03-03T02:59:29Z
http://arxiv.org/abs/2303.03195v1
# Query Learning Algorithm for Ordered Multi-Terminal Binary Decision Diagrams

###### Abstract

We propose a query learning algorithm for ordered multi-terminal binary decision diagrams (OMTBDDs) using at most \(n\) equivalence and \(2n(\lceil\log_{2}m\rceil+3n)\) membership queries by extending the algorithm for ordered binary decision diagrams (OBDDs). Tightness of our upper bounds is checked in our experiments using synthetically generated target OMTBDDs. Possibility of applying our algorithm to classification problems is also indicated in our other experiments using datasets of the UCI Machine Learning Repository.

keywords: query learning, sample complexity, OMTBDD

## 1 Introduction

A _binary decision diagram (BDD)_ is a representation of a Boolean function, and it is known to be compact for many functions and easy to manipulate [1]. It is a kind of directed acyclic graph (DAG) that has a root and two sinks labeled \(0\) and \(1\). Each non-sink node is labeled with a Boolean variable \(x_{i}\), and an assignment \(x_{i}=a_{i}\) for the variables decides a path from the root to one of the sinks by selecting the \(a_{i}\)-labeled outgoing edge at the node labeled \(x_{i}\). If the sequence of variables labeling the nodes on any path from the root to one of the sinks, taken in the passing order, is consistent with some variable order, then it is called an _ordered BDD (OBDD)_. An OBDD is very popular due to its good property that any Boolean function can be represented by a unique _reduced OBDD_ for any fixed variable order.

A _multi-terminal binary decision diagram (MTBDD)_ [2] is an extension of a BDD that represents a multi-valued function of Boolean variables. The structural difference is the number of sinks only; an MTBDD can have more than two sinks. An _ordered MTBDD (OMTBDD)_ is an MTBDD with the variable labels of non-sink nodes restricted as in an OBDD. An OMTBDD is known to inherit good properties such as the unique reduced form from an OBDD.

In this paper, we extend the query learning algorithm for an OBDD to one for an OMTBDD. Query learning that we adopt here is learning using equivalence and membership queries [3]. In the study of query learning for a function class \(\mathcal{F}\), we develop an efficient algorithm that can identify an unknown target function \(f\) in \(\mathcal{F}\) using the queries it is allowed to ask. An equivalence query EQ(\(h\)) for a hypothesis \(h\in\mathcal{F}\) of the learner's choice is a query asking whether \(h=f\) or not, and the answer to the query is 'YES' if \(h=f\) and 'NO' otherwise. In the case of answer 'NO', the learner also obtains a counterexample \(e\) for which \(h(e)\neq f(e)\). A membership query is a query asking the function value \(f(a)\) for an assignment \(a\) of the learner's choice, and the value \(f(a)\) is answered. The oracle that answers membership queries can be realized as a black-box function, but the oracle that answers equivalence queries cannot be realized easily. Stochastic testing of \(h(x)=f(x)\) for randomly sampled \(x\) is known to be enough for probably approximately correct (PAC) learning instead of identification [3].
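The two oracles can be pictured as follows; this is a toy sketch under our own naming (the paper does not fix an implementation), with an exhaustive equivalence check that only makes sense for small \(m\) and would be replaced by random sampling in the PAC setting.

```python
from typing import Callable, Optional, Tuple

class Oracles:
    """Wraps a hidden target f: {0,1}^m -> {0,...,K-1} given as a black box."""
    def __init__(self, f: Callable[[str], int], m: int):
        self.f, self.m = f, m

    def membership(self, a: str) -> int:
        return self.f(a)                       # MQ: the value f(a)

    def equivalence(self, h: Callable[[str], int]) -> Tuple[bool, Optional[str]]:
        for i in range(2 ** self.m):           # EQ by exhaustive search
            a = format(i, f'0{self.m}b')
            if h(a) != self.f(a):
                return False, a                # 'NO' with a counterexample
        return True, None                      # 'YES'
```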
Our query learning algorithm for an OMTBDD, called QLearn-OMTBDD, is an extension of QLearn-\(\pi\)-OBDD [4], which identifies a target OBDD using equivalence and membership queries. In the algorithms, we use a (node) classification tree \(T_{i}\) to classify a partial assignment \(x_{1}=a_{1},x_{2}=a_{2},\ldots,x_{i-1}=a_{i-1}\) into an internal node labeled \(x_{i}\) in the hypothesis OMTBDD or into \(\mu\), in order to judge which node the partial assignment reaches, or whether there is no reachable node (the case of \(\mu\)). The judgment by the classification tree is done depending on the answers to membership queries. Since the number of possible answers to membership queries increases from two to \(K\) in the case of an OMTBDD with \(K\) sinks, the structure of the trees must be modified, which forces some modifications of their update procedure. In particular, there is a case in which no existing edge label corresponds to the answer to a membership query at a node in a classification tree, which never occurs for OBDDs. In such a case, the trees must be updated so as to include such an edge.

We prove that an arbitrary target reduced OMTBDD can be learned by Algorithm QLearn-OMTBDD using at most \(2n(\lceil\log_{2}m\rceil+3n)\) membership queries and at most \(n\) equivalence queries, which are exactly the same upper bounds as those shown for OBDDs in [4]. The tightness of these upper bounds on the query complexities is checked in our experiments using synthetically generated OMTBDDs.

We also check the applicability of our algorithm to classification problems. Our multi-terminal extension enables OMTBDDs to represent multi-class classifiers. Even real-valued features can be converted to binary features whose values represent being above or below some fixed threshold. OMTBDD representations of classifiers are useful in that various operations with condition-represented OMTBDDs are possible. Through our experiments using 12 real-valued-feature datasets from the UCI Machine Learning Repository, we show a way of constructing an OMTBDD by query learning from a tree-based classifier using the training data that are correctly predicted by the classifier, where the classifier is used for answering membership queries and consistency with the training data replaces identification by equivalence queries. As tree-based classifiers, we use decision trees and random forests. On decision tree classifiers, significant accuracy deterioration cannot be observed except for one dataset, and the number of nodes is kept to at most the same number for 5 among the 12 datasets. For two datasets, the number of nodes in the learned OMTBDDs is smaller than that in the original decision trees, even when compared with trees whose leaves are shared among those with the same label. On random forest classifiers, accuracy deterioration is observed except for two datasets, but a significant reduction in the number of nodes is achieved for 5 of 10 datasets1. As for two of the 5 successfully reduced datasets, the accuracy of the OMTBDD learned from a random forest is better than that of the decision tree. Our results on classification problems for these benchmark datasets indicate the possibility of applying our query learning algorithm.

Footnote 1: As for two datasets, the epileptic seizure and magic datasets, OMTBDDs could not be learned from random forests due to their large numbers of nodes.

**Related Work.** Query learning was proposed by Angluin [3], and her algorithm for learning deterministic finite automata (DFAs) is one of the most famous query learning algorithms using equivalence and membership queries.
In query learning for DFAs, Kearns and Vazirani [5] used a classification tree instead of the observation table used in Angluin's algorithm. Gavalda and Guijarro [6] extended Angluin's query learning algorithm for DFAs to an algorithm for OBDDs. Nakamura [4] reduced the number of membership queries used for learning an OBDD by a factor of \(O(m)\) by using classification trees, where \(m\) is the number of variables. The ZDD-version of Nakamura's algorithm was developed by Mizumoto et al. [7].

## 2 Preliminaries

A _multi-terminal binary decision diagram_ (MTBDD) is an extension of a binary decision diagram (BDD) that can represent a function with more than two values over the domain \(\{0,1\}^{m}\). Let \(\{0,\ldots,K-1\}\) for \(K\geq 2\) represent the set of values of \(K\)-valued functions. An MTBDD representing a \(K\)-valued function is a directed acyclic graph with one root and at most \(K\) sinks, which are nodes labeled \(0,\ldots,K-1\). The simplest MTBDD is composed of one sink that is also the root, and it represents a constant function. Other MTBDDs have at least two sinks and one internal (non-sink) node. Each internal node is labeled with a Boolean variable and has two outgoing edges, a 0-labeled and a 1-labeled edge. An _ordered MTBDD (OMTBDD)_ denotes an MTBDD in which the sequence of variables labeling the nodes on any path from the root to one of the sinks must be consistent with a certain preset order. The variable order is fixed to \(x_{1},x_{2},...,x_{m}\) in this paper.

Given an assignment \(x_{1}=a_{1},\ldots,x_{m}=a_{m}\), the value of the function represented by an OMTBDD at the assignment is calculated as follows: starting from its root, select the \(a_{i}\)-labeled outgoing edge at a node labeled \(x_{i}\), and take the value \(j\in\{0,\ldots,K-1\}\) that is the label of the finally reached sink.

An assignment \(x_{1}=a_{1},\ldots,x_{m}=a_{m}\) is also represented by the binary string \(a_{1}a_{2}\ldots a_{m}\). We use bold letters like \(\mathbf{a},\mathbf{b}\) to represent strings, and the length of any string \(\mathbf{a}\) is denoted as \(|\mathbf{a}|\). The concatenated string of \(\mathbf{a}\) and \(\mathbf{b}\) is denoted as \(\mathbf{a}\cdot\mathbf{b}\), which is sometimes abbreviated as \(\mathbf{a}\mathbf{b}\). For a string \(\mathbf{a}\), \(\operatorname{pre}(\mathbf{a},i)\) and \(\operatorname{suf}(\mathbf{a},i)\) are the prefix and suffix strings of \(\mathbf{a}\) with length \(i\), respectively. For two length-\(i\) strings \(\mathbf{a},\mathbf{b}\) and a natural number \(j<i\), \(\operatorname{cro}(\mathbf{a},\mathbf{b},j)\) denotes the length-\(i\) string constructed by concatenating \(\operatorname{pre}(\mathbf{a},i-j)\) and \(\operatorname{suf}(\mathbf{b},j)\). We abuse the notation, and the function represented by an OMTBDD \(D\) is also denoted as \(D\), so \(D(a_{1},a_{2},\ldots,a_{m})\) is the value of \(D\) for the assignment \(x_{1}=a_{1},\ldots,x_{m}=a_{m}\), and it is also written as \(D(\mathbf{a})\) for \(\mathbf{a}=a_{1}a_{2}\cdots a_{m}\).
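The evaluation rule can be sketched in a few lines of Python; the node encoding is our own illustrative choice, not the paper's data structure.

```python
class Sink:
    def __init__(self, label): self.label = label        # label in {0,...,K-1}

class Node:
    def __init__(self, i, low, high):                    # tests variable x_i
        self.i, self.low, self.high = i, low, high

def evaluate(root, a: str) -> int:
    """Follow the a_i-labeled edge at every node labeled x_i (skipped
    variables are simply never consulted)."""
    node = root
    while isinstance(node, Node):
        node = node.high if a[node.i - 1] == '1' else node.low
    return node.label

# A 3-valued OMTBDD for D(x1, x2) = x1 + x2:
root = Node(1, Node(2, Sink(0), Sink(1)), Node(2, Sink(1), Sink(2)))
assert evaluate(root, "11") == 2 and evaluate(root, "01") == 1
```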
For a node \(N\) in an OMTBDD \(D\), an _access string of node \(N\)_ is a string \(a_{1}a_{2}\cdots a_{i-1}\) such that the path in \(D\) for the assignment \(x_{1}=a_{1},x_{2}=a_{2},\ldots,x_{i-1}=a_{i-1}\) just reaches node \(N\). The length of the access string is \(i-1\) for nodes labeled \(x_{i}\) and \(m\) for sinks. We let \(\operatorname{nodes}_{i}(D)\) denote the set of length-\((i-1)\) access strings for nodes in \(D\) and let \(\operatorname{nodes}(D)=\bigcup_{i=1}^{m+1}\operatorname{nodes}_{i}(D)\). Then, for \(\mathbf{a},\mathbf{b}\in\operatorname{nodes}(D)\), an equivalence relation '\(\stackrel{D}{=}\)' is defined as follows:

\[\mathbf{a}\stackrel{D}{=}\mathbf{b}\ \stackrel{\mathrm{def}}{\Leftrightarrow}\ \mathbf{a}\text{ and }\mathbf{b}\text{ are access strings to the same node in }D.\]

Let \([\mathbf{a}]\) denote the equivalence class of \(\mathbf{a}\). We call a subset \(V\) of \(\operatorname{nodes}(D)\) a _node id set of \(D\)_ if \(V\) has just one access string per node in \(D\). For a node id set \(V\) of \(D\), let \(V_{i}\) denote the set of access strings in \(V\) whose length is \(i-1\). Then \(V=\bigcup_{i=1}^{m+1}V_{i}\), \(\operatorname{nodes}_{i}(D)\) can be partitioned into \(\{[\mathbf{v}]\mid\mathbf{v}\in V_{i}\}\), and \(\operatorname{nodes}(D)\) can also be partitioned into \(\{[\mathbf{v}]\mid\mathbf{v}\in V\}\). We use \(\mathbf{v}\in V\) as a node id, and let 'node \(\mathbf{v}\)' mean the node with access string \(\mathbf{v}\).

The learning framework we consider is the _query learning_ framework proposed by Angluin [3]. In this framework, an unknown target function \(f\) is identified using _equivalence queries_ and _membership queries_. An _equivalence query_ asks whether \(f\) is equivalent to a hypothesis \(h\); if so, 'YES' is returned, and if not, 'NO' and a counterexample \(e\) (with \(f(e)\neq h(e)\)) are returned. We treat an equivalence query as a function EQ from a hypothesis \(h\) to a pair of an answer and a counterexample \((\text{Ans},e)\) for \(\text{Ans}\in\{\text{'YES'},\text{'NO'}\}\), and let EQ(\(h\)) represent \((\text{Ans},e)\). A _membership query_ asks for the value \(f(a)\) of the target function \(f\) for an assignment \(a\).

## 3 Node Classification Trees

For each \(i=1,2,\ldots,m+1\), let \(\mathcal{S}_{i}\) be the set of \(\{0,1\}\)-strings with length \(i-1\). Let \(D\) be an OMTBDD and let \(V\) be a node id set of \(D\). Then, \(\mathcal{S}_{i}\) can be partitioned as \(\mathcal{S}_{i}=\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\cup\big{(}\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\big{)}\), where \(\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\) is the set of length-\((i-1)\) strings that reach none of the nodes in \(D\). If an OMTBDD \(D\) and its node id set \(V\) are given, we can easily answer whether a given \(\mathbf{a}=a_{1}a_{2}\cdots a_{i-1}\in\mathcal{S}_{i}\) reaches node \(\mathbf{v}\in V_{i}\) or does not reach any node \(\mathbf{v}\in V_{i}\) from the path in \(D\) for the assignment \(x_{1}=a_{1},x_{2}=a_{2},\ldots,x_{i-1}=a_{i-1}\). Then, can we answer the same question when \(D\) is not given but we can ask membership queries for \(D\)? If \(D\) is reduced, the answer is yes. The reason is as follows. Consider the OMTBDD \(D_{\mathbf{v}}\), which is the subgraph reachable from node \(\mathbf{v}\in V_{i}\). The OMTBDD \(D_{\mathbf{v}}\) can be seen as a \(K\)-valued function over \((x_{i},x_{i+1},\ldots,x_{m})\in\{0,1\}^{m-i+1}\), that is, \(D_{\mathbf{v}}(x_{i},x_{i+1},\ldots,x_{m})=D(v_{1},v_{2},\ldots,v_{i-1},x_{i},x_{i+1},\ldots,x_{m})\) for \(\mathbf{v}=v_{1}v_{2}\cdots v_{i-1}\). Let node \(\mathbf{v}_{a}\in V\) be the node into which the \(a\)-labeled edge outgoing from node \(\mathbf{v}\) comes. Then, \(D_{\mathbf{v}}(0,x_{i+1},\ldots,x_{m})\) and \(D_{\mathbf{v}}(1,x_{i+1},\ldots,x_{m})\) are the functions represented by \(D_{\mathbf{v}_{0}}\) and \(D_{\mathbf{v}_{1}}\), respectively.
If \(D\) is reduced, \(D_{\mathbf{v}}(0,x_{i+1},\ldots,x_{m})\) and \(D_{\mathbf{v}}(1,x_{i+1},\ldots,x_{m})\) must be different because, otherwise, further reduction would be possible: node \(\mathbf{v}\) could be removed, and its incoming edge could directly come into node \(\mathbf{v}_{0}\) (or \(\mathbf{v}_{1}\)). Thus, there must be a string \(a_{i+1}a_{i+2}\cdots a_{m}\) for which \(D_{\mathbf{v}}(0,a_{i+1},\ldots,a_{m})\neq D_{\mathbf{v}}(1,a_{i+1},\ldots,a_{m})\). Let \(\mathbf{r}^{(\mathbf{v})}\) be one of the strings \(0a_{i+1}a_{i+2}\cdots a_{m}\) and \(1a_{i+1}a_{i+2}\cdots a_{m}\), and let \(\dot{\mathbf{r}}^{(\mathbf{v})}\) denote the string that is made by flipping the first bit of \(\mathbf{r}^{(\mathbf{v})}\), that is, the other one of them. If \(\mathbf{a}\in\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\), then \(D(\mathbf{a}\mathbf{r}^{(\mathbf{v})})=D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})})\) holds for all \(\mathbf{v}\in V_{i}\), thus \(D(\mathbf{a}\mathbf{r}^{(\mathbf{v})})\neq D_{\mathbf{v}}(\mathbf{r}^{(\mathbf{v})})\) or \(D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})})\neq D_{\mathbf{v}}(\dot{\mathbf{r}}^{(\mathbf{v})})\) holds for all \(\mathbf{v}\in V_{i}\). From the above fact,

\[\mathbf{a}\in\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\Leftrightarrow D(\mathbf{a}\mathbf{r}^{(\mathbf{v})})\neq D(\mathbf{v}\mathbf{r}^{(\mathbf{v})})\text{ or }D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})})\neq D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})})\text{ for all }\mathbf{v}\in V_{i} \tag{1}\]

holds. This means that, if we know \(\mathbf{r}^{(\mathbf{v})}\), \(D(\mathbf{v}\mathbf{r}^{(\mathbf{v})})\) and \(D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})})\) for all \(\mathbf{v}\in V_{i}\), we can check whether \(\mathbf{a}\in\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\) holds or not by asking membership queries for \(\mathbf{a}\mathbf{r}^{(\mathbf{v})}\) and \(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})}\) for all \(\mathbf{v}\in V_{i}\). Note that \(D(\mathbf{a}\mathbf{r}^{(\mathbf{v})})=D(\mathbf{v}\mathbf{r}^{(\mathbf{v})})\) and \(D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})})=D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})})\) might hold for \(\mathbf{a}\in[\mathbf{v}^{\prime}]\) with \(\mathbf{v}^{\prime}\stackrel{D}{\neq}\mathbf{v}\), and \(\mathbf{r}^{(\mathbf{v}^{\prime})}\) for \(\mathbf{v}^{\prime}\stackrel{D}{\neq}\mathbf{v}\) can be equal to \(\mathbf{r}^{(\mathbf{v})}\). Even though \(D(\mathbf{a}\mathbf{r}^{(\mathbf{v})})=D(\mathbf{v}\mathbf{r}^{(\mathbf{v})})\) and \(D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})})=D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})})\) hold for \(\mathbf{a}\in[\mathbf{v}^{\prime}]\) with \(\mathbf{v}^{\prime}\stackrel{D}{\neq}\mathbf{v}\), we can know whether \(\mathbf{a}\in[\mathbf{v}]\) or not by asking membership queries if \(D\) is reduced. If \(D\) is reduced, then for any two nodes \(\mathbf{v},\mathbf{v}^{\prime}\in V_{i}\), there is a string \(a_{i}a_{i+1}\cdots a_{m}\) such that \(D_{\mathbf{v}}(a_{i},a_{i+1},\ldots,a_{m})\neq D_{\mathbf{v}^{\prime}}(a_{i},a_{i+1},\ldots,a_{m})\) because, if not, \(D_{\mathbf{v}}=D_{\mathbf{v}^{\prime}}\) holds and further reduction is possible: node \(\mathbf{v}^{\prime}\) can be removed and all its incoming edges can come into node \(\mathbf{v}\).
Let \(\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\) denote a string \(a_{i}a_{i+1}\cdots a_{m}\) with \(D_{\mathbf{v}}(a_{i},a_{i+1},\ldots,a_{m})\neq D_{\mathbf{v}^{\prime}}(a_{i},a_{i+1},\ldots,a_{m})\). Then, we can check whether \(\mathbf{a}\in[\mathbf{v}]\) or not by asking membership queries at \(\mathbf{a}\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\) for all \(\mathbf{v}^{\prime}\stackrel{D}{\neq}\mathbf{v}\) that satisfy \(D(\mathbf{v}^{\prime}\mathbf{r}^{(\mathbf{v})})=D(\mathbf{v}\mathbf{r}^{(\mathbf{v})})\) and \(D(\mathbf{v}^{\prime}\dot{\mathbf{r}}^{(\mathbf{v})})=D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})})\).

From the above discussion, we can construct _node classification trees_ \(T_{i}\) (\(i=1,\ldots,m\)) that classify \(\mathbf{a}\in\mathcal{S}_{i}\) into \(\mathbf{v}\in V_{i}\) or \(\mu\), where \(\mu\) means \(\mathbf{a}\in\mathcal{S}_{i}\setminus\bigcup_{\mathbf{v}\in V_{i}}[\mathbf{v}]\). The leftmost figure in Figure 1 is the form of a node classification tree \(T_{i}\). It is a rooted tree composed of two types of internal nodes and leaf nodes. One type of internal node is a _twin-test node_ and the other type is a _single-test node_. A twin-test node has a label \(\mathbf{r}^{(\mathbf{v})}\) for some \(\mathbf{v}\in V_{i}\), and two membership queries for \(\mathbf{a}\mathbf{r}^{(\mathbf{v})}\) and \(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})}\) are asked to classify a string \(\mathbf{a}\in\mathcal{S}_{i}\). Each twin-test node has two outgoing edges: one is labeled \((D(\mathbf{v}\mathbf{r}^{(\mathbf{v})}),D(\mathbf{v}\dot{\mathbf{r}}^{(\mathbf{v})}))\) and comes into a single-test node or a leaf, and the other is non-labeled and comes into a twin-test node or a leaf labeled \(\mu\). A single-test node has a label \(\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\) for some \(\mathbf{v},\mathbf{v}^{\prime}\in V_{i}\), and one membership query for \(\mathbf{a}\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\) is asked to classify a string \(\mathbf{a}\in\mathcal{S}_{i}\). Each single-test node has at most \(K\) outgoing edges labeled \(\ell\in\{0,1,\ldots,K-1\}\) that come into a single-test node or a leaf labeled some \(\mathbf{v}^{\prime\prime}\in V_{i}\). If the edge comes into a leaf labeled \(\mathbf{v}^{\prime\prime}\), the label of the edge must be \(D(\mathbf{v}^{\prime\prime}\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})})\). Tree \(T_{i}\) has at most \(|V_{i}|+1\) leaves, and each of them is labeled \(\mathbf{v}\in V_{i}\) or \(\mu\). Distinct leaves must have different labels. If a \(\mu\)-labeled leaf exists, all the nodes on the path from the root to the \(\mu\)-labeled leaf must be twin-test nodes and all the twin-test nodes must appear on the path, which guarantees the satisfaction of the right-hand side condition of (1).

Classification of \(\mathbf{a}\in\mathcal{S}_{i}\) into \(V_{i}\cup\{\mu\}\) can be done by using the node classification tree \(T_{i}\) as follows. Start from the root. At a twin-test node labeled \(\mathbf{r}^{(\mathbf{v})}\), ask two membership queries for \(\mathbf{a}\mathbf{r}^{(\mathbf{v})}\) and \(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})}\), and select the edge labeled \((D(\mathbf{a}\mathbf{r}^{(\mathbf{v})}),D(\mathbf{a}\dot{\mathbf{r}}^{(\mathbf{v})}))\) if such a labeled edge exists; otherwise select the non-labeled edge. At a single-test node labeled \(\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\), ask one membership query for \(\mathbf{a}\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})}\), and select the edge labeled \(D(\mathbf{a}\mathbf{r}^{(\mathbf{v},\mathbf{v}^{\prime})})\). The label of \(\mathbf{a}\) classified by \(T_{i}\), denoted as \(T_{i}(\mathbf{a})\), is the label of the leaf that is finally reached by repeating the above operations.
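Putting the traversal together, a classification step can be sketched as below (an illustrative encoding of the trees, not the paper's); `mq` answers membership queries for the target \(D\), and the incomplete-tree fallback anticipates the convention introduced in Section 4.

```python
def flip_first(r: str) -> str:
    return ('1' if r[0] == '0' else '0') + r[1:]

def classify(tree, a: str, mq) -> str:
    """Tree nodes: ('twin', r, {(j, jp): child}, fallback_child),
    ('single', r, {j: child}), or ('leaf', label) with label in V_i or 'mu'."""
    node = tree
    while node[0] != 'leaf':
        if node[0] == 'twin':
            _, r, labeled, other = node
            key = (mq(a + r), mq(a + flip_first(r)))    # two MQs per twin test
            node = labeled.get(key, other)              # no match: unlabeled edge
        else:
            _, r, children = node
            child = children.get(mq(a + r))             # one MQ per single test
            if child is None:                           # hypothesis tree may be
                return r                                # incomplete (Section 4)
            node = child
    return node[1]
```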
**Example 1**. _Consider the OMTBDD \(D\) shown in the center of Figure 1. The underlined string beside each node is its access string in some node id set \(V\) of \(D\). Node classification trees \(T_{4}\) and \(T_{6}\) of this OMTBDD are shown on the right of the figure. \(T_{4}\) is composed of one twin-test node, one single-test node and three leaves including a \(\mu\)-labeled leaf. \(T_{6}\) is composed of two twin-test nodes and three leaves. String \(1100\in\mathcal{S}_{4}\) reaches node \(0101\) in \(D\). Since \(D(11000011)=2\), \(D(11001011)=0\) and \(D(11000001)=1\), it is classified into \(0101\) by \(T_{4}\), which coincides with the node reached in \(D\) by the assignment \((x_{1},x_{2},x_{3},x_{4})=(1,1,0,0)\). String \(101001\in\mathcal{S}_{6}\) does not reach any node in \(D\). Since \(D(10100101)=D(10100111)=2\), it is classified into \(\mu\) by \(T_{6}\), which also coincides with the nonexistence of a node reached by the assignment \((x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})=(1,0,1,0,0,1)\)._

Figure 1: The form of a node classification tree (left) and its instance (right) for an example OMTBDD with the access strings of nodes (center).

**Remark 1**. _Node classification trees for an OMTBDD differ from the classification trees [4] for an OBDD in their edge labels. In a classification tree \(T_{i}\) for an OBDD, the label of an edge outgoing from a twin-test node labeled \(\mathbf{r}\) is \(0\) or \(1\), and the \(1\)-labeled edge is selected for a test string \(\mathbf{a}\in\{0,1\}^{i-1}\) if and only if \(D(\mathbf{a}\mathbf{r})=1\) and \(D(\mathbf{a}\dot{\mathbf{r}})=0\). In the case of an OMTBDD, the number of combinations of two different function values can be more than one, so we adopt the label \((j,j^{\prime})\in\{0,\ldots,K-1\}^{2}\) and no label instead of \(1\) and \(0\), respectively; the \((j,j^{\prime})\)-labeled edge is selected for \(\mathbf{a}\) if and only if \((D(\mathbf{a}\mathbf{r}),D(\mathbf{a}\dot{\mathbf{r}}))=(j,j^{\prime})\). The label of an edge outgoing from a single-test node is naturally extended from \(j\in\{0,1\}\) to \(j\in\{0,\ldots,K-1\}\), and the \(j\)-labeled edge is selected if and only if \(D(\mathbf{a}\mathbf{r})=j\)._

## 4 Algorithm

We extend the algorithm QLearn-\(\pi\)-OBDD [4] for OBDDs to an algorithm for OMTBDDs. Starting from a simple hypothesis OBDD, QLearn-\(\pi\)-OBDD repeatedly asks an equivalence query for the current hypothesis OBDD and updates it using the counterexample obtained from the query until 'YES' is returned to the equivalence query. The basic structure of the extended algorithm is the same as that of QLearn-\(\pi\)-OBDD. In this section, we explain how to extend QLearn-\(\pi\)-OBDD.

### Hypothesis Data Structure

In QLearn-\(\pi\)-OBDD, the current hypothesis is stored with additional information so as to be updated easily. It keeps the current hypothesis as an _OBDD with access strings (OBDDAS)_ and (node) classification trees. In order to deal with more than two function values, the node classification trees must be modified, and they have already been extended in the previous section. In the following, we describe the extension of OBDDASs for OBDDs to those for OMTBDDs, which needs no modification except for the number of sinks. An _OMTBDD with access strings (OMTBDDAS)_ \(S\) represents some OMTBDD, which is denoted by \(\mathcal{D}(S)\), and stores additional information at edges and nodes.
Differences from an OMTBDD are the following:

* Its root is always a node labeled \(x_{1}\), which may be a dummy node having only one outgoing edge.
* Labels of edges are binary strings instead of 0 or 1. The length of the edge label string between nodes labeled \(x_{i}\) and \(x_{j}\) is \(|i-j|\). If an edge goes out from a node labeled \(x_{i}\) to a sink, then its length is \(m+1-i\). The first bits of two edges going out from the same node must be different.
* Each node has an id string which is a member of some node id set \(V(S)\) of the represented OMTBDD \(\mathcal{D}(S)\) if the node is not dummy, and \(\lambda\) (the null string) if the node is dummy.

The represented OMTBDD \(\mathcal{D}(S)\) is obtained from an OMTBDDAS \(S\) by removing the dummy root, its outgoing edge, and all bits except the first bit from the label strings of all edges (and throwing away the id strings possessed by all nodes). We define \(E(S)\) as the set of edges in \(S\), which is represented as a subset of \(V(S)\times V(S)\). The label string of edge \((\mathbf{u},\mathbf{v})\in E(S)\) in \(S\) is denoted as \(l(\mathbf{u},\mathbf{v})\).

**Example 2**. _An example of an OMTBDDAS \(S\) is shown above. The root of \(S\) is a dummy, and the id of each node is the underlined string written beside the node, which is a member of the node id set \(V(S)\) of \(\mathcal{D}(S)\) except \(\lambda\), the node id of the dummy node. Here, \(\lambda\) denotes the null string. The OMTBDD \(\mathcal{D}(S)\) with node id set \(V(S)\) represented by the OMTBDDAS \(S\) is shown in the center of Figure 1._
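The relationship between \(S\) and \(\mathcal{D}(S)\) can be sketched as follows; the dictionary encoding is our own hypothetical representation of an OMTBDDAS.

```python
def represented_omtbdd(edges, dummy_root=''):
    """edges: {u: [(label_string, v), ...]} keyed by node id (access) strings,
    with '' standing for the dummy root (id lambda). D(S) is obtained by
    dropping the dummy root and its edge and keeping only the first bit of
    every label string l(u, v)."""
    D = {}
    for u, outs in edges.items():
        if u == dummy_root:
            continue
        D[u] = {l[0]: v for (l, v) in outs}    # first bit of l(u, v) decides
    return D

# e.g. an edge labeled '01' from node '1' skips one variable on the way down:
print(represented_omtbdd({'': [('1', '1')], '1': [('01', '101'), ('1', '11')]}))
```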
```
Output: the updated OMTBDDAS \(S\) and node classification trees \(T_{1},T_{2},...,T_{m}\)
1:\((\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{k})\leftarrow\) the id sequence of the nodes in \(S\) passed by the counterexample \(\mathbf{e}\), in passing order.
2:\(i\leftarrow\) the index \(i\) s.t. \(1\leq i<k\) and \(D(\mathbf{p}_{i}\text{suf}(\mathbf{e},m-|\mathbf{p}_{i}|))\neq D(\mathbf{p}_{i+1}\text{suf}(\mathbf{e},m-|\mathbf{p}_{i+1}|))=D(\mathbf{p}_{k})\)
3:if\(D(\mathbf{p}_{i}l(\mathbf{p}_{i},\mathbf{p}_{i+1})\text{suf}(\mathbf{e},m-|\mathbf{p}_{i+1}|))\neq D(\mathbf{p}_{k})\)then
4:\((S,T_{1},T_{2},...,T_{m})\leftarrow\text{NodeSplit}(S,T_{1},T_{2},...,T_{m},\mathbf{e})\)
5:else
6:\((S,T_{1},T_{2},...,T_{m})\leftarrow\text{NewBranchingNode}(S,T_{1},T_{2},...,T_{m},\mathbf{e},\mathbf{p}_{i},\mathbf{p}_{i+1})\)
7:return\((S,T_{1},T_{2},...,T_{m})\)
```
**Algorithm 2** Update-Hypothesis\((S,T_{1},T_{2},...,T_{m},\mathbf{e})\)

### Pseudocode

Our OMTBDD-version extension of algorithm QLearn-\(\pi\)-OBDD is called QLearn-OMTBDD, and its pseudocode is shown in Algorithms 1-5. The main extensions are the following:

* It is not easy to find all the sink nodes initially as QLearn-\(\pi\)-OBDD does, so the extended algorithm finds only two sink nodes initially and adds the necessary sink nodes later, at Line 10 of AddEdge (Algorithm 5).
* When the algorithm adds a single-test node to some node classification tree, it seems inefficient to find and add all its child leaves at that time as QLearn-\(\pi\)-OBDD does, so the extended algorithm adds only two child leaves to the new single-test node at Line 3 of NodeSplit (Algorithm 3), and adds the other child leaves when they are first accessed (Line 13 of NodeSplit and Line 8 of AddEdge).
* Each child leaf addition to some single-test node in a classification tree is accompanied by the discovery and addition of a node in \(D\) (Line 12 of NodeSplit and Line 12 of AddEdge). As a result, more than one node in \(D\) can be added to the hypothesis OMTBDDAS \(S\) during one execution of Update-Hypothesis, which never happens in QLearn-\(\pi\)-OBDD.
```
Output: the updated OMTBDDAS \(S\) and node classification trees \(T_{1},T_{2},...,T_{m}\)
1:\(\mathbf{e}_{i+1}\leftarrow\text{suf}(\mathbf{e},m-|\mathbf{p}_{i+1}|)\), \(\mathbf{v}\leftarrow\mathbf{p}_{i}l(\mathbf{p}_{i},\mathbf{p}_{i+1})\)
2: Add a node \(\mathbf{v}\) labeled \(x_{|\mathbf{v}|+1}\) and an edge \((\mathbf{p}_{i},\mathbf{v})\) labeled \(l(\mathbf{p}_{i},\mathbf{p}_{i+1})\) to \(S\), and remove the edge \((\mathbf{p}_{i},\mathbf{p}_{i+1})\) from \(S\).
3: Create a tree \(T_{|\mathbf{v}|}^{\mathbf{e}_{i+1}}\) composed of a single-test root node labeled \(\mathbf{e}_{i+1}\) and its two child nodes labeled \(\mathbf{p}_{i+1}\) and \(\mathbf{v}\), which are connected by edges labeled \(D(\mathbf{p}_{k})\) and \(D(\mathbf{v}\mathbf{e}_{i+1})\), respectively.
4:\(\mathbf{t}\leftarrow\) the label of the last twin-test node in \(T_{|\mathbf{v}|}\) on the path from the root to the \(\mathbf{p}_{i+1}\)-labeled leaf.
5: Add two new edges outgoing from \(\mathbf{v}\) by executing AddEdge\((\mathbf{v},\mathbf{t})\) and AddEdge\((\mathbf{v},\dot{\mathbf{t}})\), where \(\dot{\mathbf{t}}\) denotes the string obtained by flipping the first bit of \(\mathbf{t}\).
6:for all \(\mathbf{v}_{1}\in V(S)\) s.t. \((\mathbf{v}_{1},\mathbf{p}_{i+1})\in E(S)\)do
7: Ask a membership query for \(\mathbf{v}_{1}\mathbf{q}^{\prime}\mathbf{e}_{i+1}\) and obtain \(D(\mathbf{v}_{1}\mathbf{q}^{\prime}\mathbf{e}_{i+1})\), where \(\mathbf{q}^{\prime}=l(\mathbf{v}_{1},\mathbf{p}_{i+1})\).
8:if\(T_{|\mathbf{v}|}^{\mathbf{e}_{i+1}}\) has an edge labeled \(D(\mathbf{v}_{1}l(\mathbf{v}_{1},\mathbf{p}_{i+1})\mathbf{e}_{i+1})\)then
9:if\(D(\mathbf{v}_{1}\mathbf{q}^{\prime}\mathbf{e}_{i+1})\neq D(\mathbf{p}_{k})\)then
10: Remove edge \((\mathbf{v}_{1},\mathbf{p}_{i+1})\) and add edge \((\mathbf{v}_{1},\mathbf{u})\), where \(\mathbf{u}\) is the label of the leaf with incoming edge labeled \(D(\mathbf{v}_{1}\mathbf{q}^{\prime}\mathbf{e}_{i+1})\) in \(T_{|\mathbf{v}|}^{\mathbf{e}_{i+1}}\).
11:else
12: Add a node \(\mathbf{v}^{\prime}=\mathbf{v}_{1}\mathbf{q}^{\prime}\) labeled \(x_{|\mathbf{v}^{\prime}|+1}\) and an edge \((\mathbf{v}_{1},\mathbf{v}^{\prime})\) labeled \(\mathbf{q}^{\prime}\) to \(S\), and remove the edge \((\mathbf{v}_{1},\mathbf{p}_{i+1})\) from \(S\).
13: Add a leaf node labeled \(\mathbf{v}^{\prime}\) to \(T_{|\mathbf{v}|}^{\mathbf{e}_{i+1}}\) as a child of the root with an edge labeled \(D(\mathbf{v}_{1}\mathbf{q}^{\prime}\mathbf{e}_{i+1})\).
14: Add two new edges outgoing from \(\mathbf{v}^{\prime}\) by executing AddEdge\((\mathbf{v}^{\prime},\mathbf{t})\) and AddEdge\((\mathbf{v}^{\prime},\dot{\mathbf{t}})\).
15: Replace the leaf node labeled \(\mathbf{p}_{i+1}\) of the tree \(T_{|\mathbf{v}|}\) with the tree \(T_{|\mathbf{v}|}^{\mathbf{e}_{i+1}}\).
```
**Algorithm 3** NodeSplit\((S,T_{1},T_{2},...,T_{m},\mathbf{e},\mathbf{p}_{i},\mathbf{p}_{i+1})\)

First, QLearn-OMTBDD asks an equivalence query (EQ) for a trivial OMTBDD, denoted by \(\mathbf{0}\), the OMTBDD composed of only one sink labeled \(0\). If 'YES' is returned to the query, the algorithm outputs the hypothesis \(\mathbf{0}\) and stops. If the answer is 'NO' and a counterexample \(\mathbf{e}^{\prime}\) is returned to the equivalence query, then the algorithm asks a membership query for the assignment \(\mathbf{e}^{\prime}\) and obtains \(\ell^{\prime}=D(\mathbf{e}^{\prime})\), where \(D\) is the target OMTBDD. Next, the algorithm asks an equivalence query for another trivial OMTBDD \(\mathbf{\ell}^{\prime}\), which is composed of one sink labeled \(\ell^{\prime}\) alone. If 'YES' is returned to the query, the algorithm outputs the hypothesis \(\mathbf{\ell}^{\prime}\) and stops. If the answer is 'NO' and a counterexample \(\mathbf{e}\) is returned, then the algorithm obtains \(\ell=D(\mathbf{e})\) by asking a membership query.
Then, the algorithm builds an initial OMTBDDAS and node classification trees by the algorithm Initial-Hypothesis from the two counterexamples \(\mathbf{e}^{\prime}\) and \(\mathbf{e}\) with \(D(\mathbf{e}^{\prime})=\ell^{\prime}\) and \(D(\mathbf{e})=\ell\). The algorithm Initial-Hypothesis works as follows. Since \(\mathbf{e}^{\prime}=\operatorname{cro}(\mathbf{e}^{\prime},\mathbf{e},0)\), \(\mathbf{e}=\operatorname{cro}(\mathbf{e}^{\prime},\mathbf{e},m)\) and \(D(\mathbf{e}^{\prime})=\ell^{\prime}\neq D(\mathbf{e})\), there exists \(i\) with \(0<i\leq m\) such that \(D(\operatorname{cro}(\mathbf{e}^{\prime},\mathbf{e},i-1))=\ell^{\prime}\neq D(\operatorname{cro}(\mathbf{e}^{\prime},\mathbf{e},i))\), and such an \(i\) can be found by a binary search using \(\lceil\log_{2}m\rceil\) membership queries. See Figure 2 for the initial OMTBDDAS \(S^{0}\) and initial node classification trees \(T^{0}_{j}\) for \(j=1,...,m\) constructed by the procedure. Note that \(S^{0}\) and \(\{T^{0}_{1},...,T^{0}_{m}\}\) satisfy the conditions CN, CT and CE.

Figure 2: Initial OMTBDDAS \(S^{0}\) and initial classification trees \(T^{0}_{j}\) for \(j=1,...,m\): \(T^{0}_{m-i}\) has one twin-test node, \(T^{0}_{m}\) has one single-test node, and the other \(T^{0}_{j}\)s are composed of only one leaf node labeled \(\mu\).
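As an illustration, the disagreement search of Initial-Hypothesis can be sketched as follows. This is our own sketch; we assume \(\operatorname{cro}(a,b,i)\) takes its last \(i\) bits from the second argument, the reading consistent with the worked example in the next subsection.

```python
# Sketch of Initial-Hypothesis' binary search for the index i with
# D(cro(e1, e2, i-1)) = l1 != D(cro(e1, e2, i)); mq is the membership oracle.

def cro(a: str, b: str, i: int) -> str:
    """Crossover: cro(a, b, 0) == a and cro(a, b, len(a)) == b."""
    m = len(a)
    return a[:m - i] + b[m - i:]

def find_disagreement(e1: str, e2: str, l1: int, mq) -> int:
    """Assumes mq(e1) == l1 != mq(e2); uses ceil(log2 m) membership queries."""
    lo, hi = 0, len(e1)  # invariant: value at lo is l1, value at hi is not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mq(cro(e1, e2, mid)) == l1:
            lo = mid
        else:
            hi = mid
    return hi
```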
Assume that the algorithm has a current OMTBDDAS \(S\) and current node classification trees \(T_{i}\) for \(i=1,...,m\). Let \(\boldsymbol{e}\) be the last counterexample and let \(\ell=D(\boldsymbol{e})\). If \(S(\boldsymbol{e})\neq\ell\), then \(\boldsymbol{e}\) is still a counterexample for the current hypothesis OMTBDDAS \(S\). Otherwise, the algorithm asks an equivalence query for \(\mathcal{D}(S)\) and outputs it if 'YES' is returned. When 'NO' is returned, a new counterexample \(\boldsymbol{e}\) is obtained, and a membership query for \(\boldsymbol{e}\) is then asked to obtain \(\ell=D(\boldsymbol{e})\). Using the counterexample \(\boldsymbol{e}\), it executes the algorithm Update-Hypothesis shown in Algorithm 2. This process is repeated until 'YES' is returned to the equivalence query.

Each execution of the procedure Update-Hypothesis finds at least one node of the reduced target OMTBDD and updates the current hypothesis. Consider the path in \(S\) made by a given counterexample \(\boldsymbol{e}\). Assume that there are \(k\) nodes on the path and let \(\boldsymbol{p}_{i}\) be the id string of the \(i\)th node on the path from the root. Since \(\boldsymbol{e}\) is a counterexample for \(S\), the leaf node \(\boldsymbol{p}_{k}\) reached by the path is not correct, that is, \(D(\boldsymbol{p}_{k})\neq D(\boldsymbol{e})\). Let \(\boldsymbol{e}_{i}=\text{suf}(\boldsymbol{e},m-|\boldsymbol{p}_{i}|)\). Since \(\boldsymbol{p}_{k}=\boldsymbol{p}_{k}\boldsymbol{e}_{k}\) and \(\boldsymbol{e}=\boldsymbol{p}_{1}\boldsymbol{e}_{1}\), there must exist \(i\) such that \(1\leq i<k\) and \(D(\boldsymbol{p}_{i}\boldsymbol{e}_{i})\neq D(\boldsymbol{p}_{i+1}\boldsymbol{e}_{i+1})=D(\boldsymbol{p}_{k})\). Such an \(i\) is calculated at Line 2 by using membership queries. For this \(i\), let \(\boldsymbol{q}\) denote the label of the edge \((\boldsymbol{p}_{i},\boldsymbol{p}_{i+1})\), that is, \(\boldsymbol{q}=l(\boldsymbol{p}_{i},\boldsymbol{p}_{i+1})\). There are two cases depending on the value of \(D(\boldsymbol{p}_{i}\boldsymbol{q}\boldsymbol{e}_{i+1})\). When \(D(\boldsymbol{p}_{i}\boldsymbol{q}\boldsymbol{e}_{i+1})\neq D(\boldsymbol{p}_{i+1}\boldsymbol{e}_{i+1})\), \(\boldsymbol{p}_{i}\boldsymbol{q}\) and \(\boldsymbol{p}_{i+1}\) must reach different nodes in \(D\); this case is dealt with by the algorithm NodeSplit (Algorithm 3), where at least one single-test node is added to one of the classification trees. When \(D(\boldsymbol{p}_{i}\boldsymbol{q}\boldsymbol{e}_{i+1})=D(\boldsymbol{p}_{i+1}\boldsymbol{e}_{i+1})=D(\boldsymbol{p}_{k})\), there must exist a node between \(\boldsymbol{p}_{i}\) and \(\boldsymbol{p}_{i+1}\) at which the path for \(\boldsymbol{p}_{i}\boldsymbol{e}_{i}\) and the path for \(\boldsymbol{p}_{i}\boldsymbol{q}\boldsymbol{e}_{i+1}\) branch; this case is dealt with by the algorithm NewBranchingNode (Algorithm 4), where at least one twin-test node is added to one of the classification trees. Both algorithms add a new node \(\boldsymbol{v}\) to the current OMTBDDAS, update \(T_{|\boldsymbol{v}|}\), redirect all edges that must enter \(\boldsymbol{v}\), and add edges going out from \(\boldsymbol{v}\).
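The dispatch just described can be sketched compactly in Python (our paraphrase of Algorithm 2; `path_ids`, `edge_label`, `node_split` and `new_branching_node` are assumed helpers over the OMTBDDAS):

```python
# Sketch of the Update-Hypothesis dispatch: find the first disagreeing position
# along the counterexample's path, then choose NodeSplit or NewBranchingNode.

def suf(w: str, n: int) -> str:
    return w[len(w) - n:] if n > 0 else ""

def update_hypothesis(S, trees, e, mq):
    m = len(e)
    p = path_ids(S, e)   # assumed helper: ids (p_1, ..., p_k) passed by e
    target = mq(p[-1])   # D(p_k); wrong for e, since e is a counterexample
    # Line 2: a linear scan for clarity (the chained comparison matches
    # D(p_i e_i) != D(p_{i+1} e_{i+1}) == D(p_k))
    i = next(i for i in range(len(p) - 1)
             if mq(p[i] + suf(e, m - len(p[i])))
             != mq(p[i + 1] + suf(e, m - len(p[i + 1]))) == target)
    q = edge_label(S, p[i], p[i + 1])  # assumed helper: l(p_i, p_{i+1})
    if mq(p[i] + q + suf(e, m - len(p[i + 1]))) != target:   # Line 3
        node_split(S, trees, e)                              # Line 4
    else:
        new_branching_node(S, trees, e, p[i], p[i + 1])      # Line 6
```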
### Example of Algorithm Execution

An example of a QLearn-OMTBDD execution is shown in Figure 3. The OMTBDDAS and node classification trees \((S,T_{1},\ldots,T_{8})\) constructed by Initial-Hypothesis\(((10100100,1),01111100)\) are shown in (1). Since \(S(01111100)=0\), an equivalence query is asked for \(S\) at Line 8 in QLearn-OMTBDD, and a counterexample \(01000101\) with \(D(01000101)=2\) is assumed to be obtained. As the sequence of access strings passed by \(01000101\), \((\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3})=(\lambda,1010,10100100)\) is obtained at Line 1 in Update-Hypothesis. At Line 2 in Update-Hypothesis, \(i\) is set to 1 because \(2=D(01000101)=D(\mathbf{p}_{1}\text{suf}(01000101,8))\neq D(\mathbf{p}_{2}\text{suf}(01000101,4))=D(10100101)=1\) holds. Since \(1=D(10100101)=D(\mathbf{p}_{1}l(\mathbf{p}_{1},\mathbf{p}_{2})\text{suf}(01000101,4))=D(\mathbf{p}_{3})\) holds, the algorithm NewBranchingNode is executed at Line 6. At Line 2 of NewBranchingNode, \(j\) is set to \(4=|\mathbf{q}|\) because \(2=D(01000101)=D(\lambda\text{cro}(1010,0100,4)0101)\neq D(\lambda\text{cro}(1010,0100,3)0101)=D(11000101)=1\). Thus, Line 1 is executed, and the dummy root becomes a non-dummy root. Since \(\mathbf{r}\) is set to \(01000101\) at Line 3, an edge outgoing from the root \(\lambda\) is added by executing algorithm AddEdge\((\lambda,01000101)\) (Algorithm 5). (See (2).) In algorithm AddEdge, \(T_{|\lambda|+j}(\lambda\text{pre}(01000101,j))=T_{j}(\text{pre}(01000101,j))=\mu\) for all \(j=1,\ldots,7\), and \(T_{8}(01000101)=\lambda\) because no edge outgoing from the root of \(T_{8}\) is labeled 2. Then, a leaf node labeled \(01000101\) is added as a child of the node labeled \(\lambda\) in \(T_{8}\) (Line 8 in AddEdge), and a sink labeled 2 and an edge \((\lambda,01000101)\) labeled \(01000101\) are added to \(S\) (Line 10 in AddEdge). Algorithm NodeSplit is executed in (4). For the \(S\) shown in (3), a counterexample \(01111010\) is assumed to be obtained.

The sequence of access strings passed by \(01111010\) is \((\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4})=(\lambda,01,1010,10101100)\), and \(i\) is set to 2 because \(1=D(01111010)=D(\mathbf{p}_{2}\text{suf}(01111010,6))\neq D(\mathbf{p}_{3}\text{suf}(01111010,4))=D(10101010)=0\) at Line 2 in Update-Hypothesis. In this case, \(1=D(01101010)=D(\mathbf{p}_{2}l(\mathbf{p}_{2},\mathbf{p}_{3})\text{suf}(01111010,4))\neq D(\mathbf{p}_{4})=0\) holds, so algorithm NodeSplit is executed at Line 4. In algorithm NodeSplit, node \(0110\) is split from node \(1010\) (Line 2), and the leaf node labeled \(1010\) in \(T_{4}\) is replaced (Line 15) with a single-test node labeled \(1010\) that has a child node labeled \(0110\) connected by an edge labeled \(1\) and a child node labeled \(1010\) connected by an edge labeled \(0\) (Line 3). After this node split, the last counterexample \(01111010\) is still a counterexample for the updated \(S\) shown in (4), that is, \(0=S(01111010)\neq\ell=D(01111010)=1\), so Lines 8-10 in QLearn-OMTBDD are not executed and Update-Hypothesis is executed for the same \((\mathbf{e},\ell)\). In the update from (5) to (6), node \(101001\) labeled \(x_{7}\) is added to \(S\) by algorithm NewBranchingNode, in which the edge \((1010,10100100)\) is replaced with the edge \((1010,101001)\). This is done during the execution of the for-loop of Lines 7-10. Since the OMTBDD corresponding to the OMTBDDAS \(S\) shown in (7) is equivalent to \(D\), the answer to the equivalence query for \(S\) is 'YES' at Line 9 in QLearn-OMTBDD, and \(\mathcal{D}(S)(=D)\) is output.

Figure 3: An example of an OMTBDDAS construction by algorithm QLearn-OMTBDD.

### Correctness and Efficiency

**Theorem 1**.: _For an arbitrary reduced target OMTBDD \(D\), algorithm QLearn-OMTBDD exactly learns \(D\) using at most \(n\) equivalence queries and at most \(2n(\lceil\log_{2}m\rceil+3n)\) membership queries, where \(m(\geq 1)\) is the number of variables and \(n\) is the number of nodes in \(D\)._

Proof.: See Appendix A.

Assuming that all operations on strings of length \(m\) take at most \(O(m)\) steps, it can easily be shown that the running time is at most \(O(nm(\log m+n))\), that is, a factor of \(O(m)\) larger than the number of queries.

## 5 Experiments

We conducted experiments to show the effectiveness of our algorithm using a synthetic dataset and several benchmark datasets from the UCI Machine Learning Repository.

### Synthetic Dataset

We investigated the empirical sample complexity of our algorithm using a synthetic dataset. We randomly generated OMTBDDs with the number of nodes \(n\), the number of variables \(m\), and the number of leaves \(K\) for various \((n,m,K)\). Ten OMTBDDs were generated for each \((n,m,K)\): (1) with \(n=1,2,4,8,16,32,64,128,256,512(\times 10^{2})\) and fixed \(m=3200\) and \(K=32\), (2) with \(m=1,2,4,8,16,32,64,128,256,512(\times 10^{2})\) and fixed \(n=3200\) and \(K=32\), and (3) with \(K=2,4,8,16,32,64,128,256,512\) and fixed \(n=m=3200\), using a procedure similar to the OBDD generation procedure [4] (see Appendix B). The numbers of queries for OMTBDDs with parameters \((n,m,K)\) are averaged over the ten OMTBDDs. The results are shown in Figure 4. The tables and line graphs in the left column are the results for membership queries, and those in the right column are for equivalence queries. Both axes are log-scaled. Numerical values in the tables are rounded to three significant digits.

Figure 4: The number of membership and equivalence queries (1) for various numbers of nodes, (2) for various numbers of variables, and (3) for various numbers of leaves.
Theoretical upper bounds (\(2n(\lceil\log_{2}m\rceil+3n)\) for membership queries and \(n\) for equivalence queries, where \(n,m\) are the numbers of nodes and variables, respectively) are also shown for comparison. As for the number of nodes, the theoretical upper bounds on the numbers of membership and equivalence queries are \(O(n^{2})\) and \(O(n)\), respectively, and those growth orders are observable in row (1) of the figure. With respect to the number of variables, the growth orders for membership and equivalence queries are \(O(\log m)\) and \(O(1)\), respectively, but in row (2) of the figure not only the number of equivalence queries but also the number of membership queries looks almost constant. The upper bound on the number of membership queries is \(6400\lceil\log_{2}m\rceil+6\times 6400^{2}\), so doubling the number of variables adds only \(6400\) queries, which is small compared to \(6\times 6400^{2}\). The numbers of both kinds of queries are not affected by the number of leaves, as shown in row (3) of the figure, though the number of membership queries looks reduced when the number of leaves is very small (\(K=2,4\)). The number of distinct functions becomes larger as the number of leaves increases, which means the complexity of the function class increases. That might be the reason for this phenomenon.

### Benchmark Datasets

One application is the transformation of learned classifiers into OMTBDDs. If compact OMTBDD representations of learned classifiers are obtained, they are more appropriate for hardware implementation on resource-limited devices and more tractable, because various operations between functions are available for OMTBDDs. We used 12 datasets registered in the UCI Machine Learning Repository [8] that are composed of real-valued features only (see Table 1). The process of our experiment for each dataset is as follows.

1. A tree-based classifier is learned using a given dataset.
2. Branching conditions of the component decision trees are reduced by the branching condition sharing algorithm Min_DBN [9]. The tree-based classifier is converted to a simpler classifier by sharing the branching conditions.
3. Each training data point is converted to a binary feature vector using each distinct branching condition of the converted classifier as a binary feature.
4. The variable ordering of the binary features is decided as follows (see the sketch after this list). Consider a directed graph composed of vertices corresponding to the binary features. For each pair of binary features \((x_{i},x_{j})\), count the number of occurrences in which a node labeled \(x_{i}\) is an ancestor of a node labeled \(x_{j}\), and the number of occurrences in which the opposite relation holds. Direct the edge between the vertices corresponding to \(x_{i}\) and \(x_{j}\) away from the vertex corresponding to the feature with the larger number of ancestor occurrences, and define the weight of the directed edge as the difference between their numbers of ancestor occurrences. Remove edges, starting from those with the smallest weights, until topological sorting of all the vertices succeeds. Define the binary feature order as the order of the corresponding vertices.
5. An OMTBDD is learned by QLearn-OMTBDD from the set of converted labeled binary feature data, using the converted tree-based classifier as the membership oracle; for an equivalence query, consistency with all the given data serves as equivalence to the target function, and an inconsistent given data point is returned as a counterexample.
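Step 4 can be sketched as follows. This is our own illustration: `ancestor_counts` is an assumed precomputed dictionary, and any topological-sort routine could replace networkx.

```python
# Sketch of the variable-ordering heuristic in step 4, assuming
# ancestor_counts[(i, j)] gives how often a node testing feature i appears
# as an ancestor of a node testing feature j in the converted classifier.

import networkx as nx

def order_features(n_features, ancestor_counts):
    g = nx.DiGraph()
    g.add_nodes_from(range(n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            a = ancestor_counts.get((i, j), 0)
            b = ancestor_counts.get((j, i), 0)
            if a != b:  # direct the edge away from the more frequent ancestor
                u, v = (i, j) if a > b else (j, i)
                g.add_edge(u, v, weight=abs(a - b))
    # drop the lightest edges until the graph is acyclic
    edges = sorted(g.edges(data="weight"), key=lambda t: t[2])
    k = 0
    while not nx.is_directed_acyclic_graph(g):
        u, v, _ = edges[k]
        k += 1
        g.remove_edge(u, v)
    return list(nx.topological_sort(g))
```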
Note that we only use the data whose labels are predicted correctly by the converted tree-based classifier. As tree-based classifiers, a decision tree and a random forest are used in our experiment.

\begin{table} \begin{tabular}{l r r r l} \hline \hline dataset & \#data & \#feature & \#class & details \\ \hline iris & 150 & 4 & 3 & Iris \\ parkinsons & 195 & 22 & 2 & Parkinsons \\ breast cancer & 569 & 30 & 2 & Breast Cancer Wisconsin (Diagnostic) \\ blood & 748 & 4 & 2 & Blood Transfusion Service Center \\ RNA-Seq PANCAN & 801 & 20531 & 5 & gene expression cancer RNA-Seq \\ wine quality red & 1599 & 11 & 11 & Wine Quality \\ wine quality white & 4898 & 11 & 11 & Wine Quality \\ waveform & 5000 & 40 & 3 & Waveform Database Generator (Version 2) \\ robot & 5456 & 24 & 4 & Wall-Following Robot Navigation \\ musk & 6598 & 166 & 2 & Musk (Version 2) \\ epileptic seizure & 11500 & 178 & 5 & Epileptic Seizure Recognition \\ magic & 19020 & 10 & 2 & MAGIC Gamma Telescope \\ \hline \hline \end{tabular} \end{table} Table 1: Datasets of the UCI Machine Learning Repository [8] used in our experiment.

We adopt DecisionTreeClassifier and RandomForestClassifier of scikit-learn version 1.1.dev0. The number of component trees (n_estimators) in RandomForestClassifier is set to 100. All parameters but n_jobs and random_state are set to the defaults in RandomForestClassifier: n_jobs\(=-1\) (the number of jobs to run in parallel is set to the number of processors) and random_state\(=0\) (the seed of randomized selections is set to 0). Parameter random_state (the seed of the random permutation of features at each split) is set to 0 in DecisionTreeClassifier, and the other parameters are set to the defaults. We evaluated the compactness and accuracy of the learned OMTBDDs by the number of nodes and the accuracy on test datasets separated from the training datasets by 5-fold cross-validation. The two largest datasets, epileptic seizure and magic, are too large for our query learning algorithm to learn an OMTBDD from the random forest classifier trained on them, so we exclude them from our experiments with random forest classifiers. Results are shown in Table 2 for decision trees and in Table 3 for random forests. In the tables, '#node' is the number of nodes and '(l.share)' means that of the leaf-shared tree-based classifier, whose same-labeled leaves share a single leaf. An OMTBDD has only one leaf per label, so comparisons in the number of nodes with the original tree-based classifier should be made against its leaf-shared form. '#DC' is the number of distinct branching conditions in an original tree-based classifier, and '#RDC' means that in the classifier reduced by the branching condition sharing algorithm Min_DBN. All numbers are rounded to three significant digits.

\begin{table} \begin{tabular}{|l|c c c c c|c c|} \hline \multirow{2}{*}{dataset} & \multicolumn{5}{c|}{decision tree} & \multicolumn{2}{c|}{OMTBDD} \\ & \#node & (l.share) & accuracy & \#DC & \#RDC & \#node & accuracy \\ \hline iris & 14.6 & (9.8) & 0.947 & 6.8 & 6.8 & 9.8 & 0.947 \\ parkinson & 25.4 & (14.2) & 0.856 & 12.2 & 11.8 & 14.2 & 0.856 \\ breast cancer & 37.4 & (20.2) & 0.924 & 18.2 & 18.2 & 19.8 & 0.924 \\ blood & 320 & (162) & 0.713 & 106 & 72.2 & 342 & 0.722 \\ RNA-Seq PANCAN & 14.6 & (11.8) & 0.974 & 6.8 & 6.8 & 11.8 & 0.974 \\ wine quality red & 668 & (345) & 0.650 & 300 & 179 & 3811 & 0.652 \\ wine quality white & 2088 & (1050) & 0.604 & 769 & 370 & 29700 & 0.598 \\ waveform & 797 & (401) & 0.735 & 397 & 304 & 3883 & 0.743 \\ robot & 57.4 & (32.7) & 0.995 & 28.0 & 22.4 & 49.0 & 0.996 \\ musk & 252 & (128) & 0.965 & 126 & 121 & 123 & 0.964 \\ epileptic seizure & 3690 & (1850) & 0.472 & 1830 & 1290 & 722000 & 0.452 \\ magic & 3200 & (1600) & 0.818 & 1600 & 771 & 49800 & 0.820 \\ \hline \end{tabular} \end{table} Table 2: Number of nodes and accuracy of OMTBDDs learned from decision trees.

For simple classification problems, in which the original decision tree has fewer than 30 nodes, the leaf-shared decision trees are already OMTBDDs, and the query learning algorithm learns them exactly. (See the results for the iris, parkinson and RNA-Seq PANCAN datasets.) Interesting results are those for the breast cancer and musk datasets; for these, the OMTBDD size is smaller than the original leaf-shared decision tree size. What happened in one execution of our 5-fold cross-validation for the breast cancer dataset is shown in Figure 5. The leaf-shared form of the decision tree in the upper figure is already an OMTBDD, and the OMTBDD in the lower figure is the one learned from it.

Figure 5: Decision tree for the breast cancer dataset and the OMTBDD learned from it.
The reason why such deterioration occurred for the 5 datasets seems the same as the above-mentioned \begin{table} \begin{tabular}{|l|c c c c|c c c|} \hline \multirow{2}{*}{dataset} & \multicolumn{4}{c|}{random forest} & \multicolumn{2}{c|}{OMTBDD} \\ & \#node(l. share) & accuracy & \#DC & \#RDC & \#node & accuracy \\ \hline iris & 1430 & (965) & 0.947 & 104 & 43.6 & 18.2 & 0.947 \\ parkinson & 2660 & (1480) & 0.892 & 1010 & 404 & 147 & 0.872 \\ breast cancer & 3570 & (1940) & 0.961 & 1420 & 594 & 203 & 0.944 \\ blood & 23200 & (11800) & 0.749 & 322 & 145 & 1520 & 0.723 \\ RNA-Seq PANCAN & 3960 & (2430) & 0.996 & 1920 & 1840 & 3540 & 0.885 \\ wine quality red & 55900 & (29000) & 0.692 & 4120 & 900 & 271000 & 0.598 \\ wine quality white & 181000 & (91600) & 0.679 & 7290 & 1400 & 2080000 & 0.594 \\ waveform & 83800 & (42200) & 0.854 & 32500 & 5870 & 2010000 & 0.689 \\ robot & 25000 & (12900) & 0.995 & 9900 & 3140 & 1010 & 0.996 \\ musk & 34600 & (17500) & 0.975 & 12200 & 6240 & 574000 & 0.908 \\ \hline \end{tabular} \end{table} Table 3: Number of nodes and accuracy of OMTBDDs learned from random forests. reason for decision tree accuracy deterioration using epileptic seizure dataset. ## 6 Conclusions and Future Work We developed a query learning algorithm QLearn-OMTBDD for OMTBDDs by extending QLearn-\(\pi\)-OBDD [4] for OBDDs. In our algorithm, the length-\((i-1)\) prefix of an assignment is classified into a node with label \(x_{i}\) or \(\mu\) (no node corresponds to the assignment prefix) by the node classification tree for length \(i-1\) using membership queries, and the fact that the number of possible answers is more than two for each membership query, prevents straightforward extension of the classification trees and their operating procedure. In the experiments using benchmark datasets, we showed possibility of our algorithm's application to a classification problem by constructing compact and accurate OMTBDDs for some datasets. On the other hand, there are other datasets for which learned OMTBDDs have a lot of nodes and low accuracy. we would like to clarify whether there are compact and accurate OMTBDDs for such datasets or not. ## Acknowledgments This work was supported by JST CREST Grant Number JPMJCR18K3, Japan.
2304.07354
NEV-NCD: Negative Learning, Entropy, and Variance regularization based novel action categories discovery
Novel Categories Discovery (NCD) facilitates learning from a partially annotated label space and enables deep learning (DL) models to operate in an open-world setting by identifying and differentiating instances of novel classes based on the labeled data notions. One of the primary assumptions of NCD is that the novel label space is perfectly disjoint and can be equipartitioned, but it is rarely realized by most NCD approaches in practice. To better align with this assumption, we propose a novel single-stage joint optimization-based NCD method, Negative learning, Entropy, and Variance regularization NCD (NEV-NCD). We demonstrate the efficacy of NEV-NCD in previously unexplored NCD applications of video action recognition (VAR) with the public UCF101 dataset and a curated in-house partial action-space annotated multi-view video dataset. We perform a thorough ablation study by varying the composition of final joint loss and associated hyper-parameters. During our experiments with UCF101 and multi-view action dataset, NEV-NCD achieves ~ 83% classification accuracy in test instances of labeled data. NEV-NCD achieves ~ 70% clustering accuracy over unlabeled data outperforming both naive baselines (by ~ 40%) and state-of-the-art pseudo-labeling-based approaches (by ~ 3.5%) over both datasets. Further, we propose to incorporate optional view-invariant feature learning with the multiview dataset to identify novel categories from novel viewpoints. Our additional view-invariance constraint improves the discriminative accuracy for both known and unknown categories by ~ 10% for novel viewpoints.
Zahid Hasan, Masud Ahmed, Abu Zaher Md Faridee, Sanjay Purushotham, Heesung Kwon, Hyungtae Lee, Nirmalya Roy
2023-04-14T19:20:26Z
http://arxiv.org/abs/2304.07354v1
NEV-NCD: Negative Learning, Entropy, and Variance Regularization Based Novel Action Categories Discovery

###### Abstract

_Novel Categories Discovery_ (NCD) facilitates learning from a partially annotated label space and enables deep learning (DL) models to operate in an open-world setting by identifying and differentiating instances of novel classes based on the labeled data notions. One of the primary assumptions of NCD is that the novel label space is perfectly disjoint and can be equipartitioned, but this is rarely realized by most NCD approaches in practice. To better align with this assumption, we propose a novel single-stage joint optimization-based NCD method, **N**egative learning, **E**ntropy, and **V**ariance regularization NCD (**NEV**-NCD). We demonstrate the efficacy of NEV-NCD in the previously unexplored NCD application of video action recognition (VAR) with the public UCF101 dataset and a curated in-house partially action-space-annotated multi-view video dataset. We perform a thorough ablation study by varying the composition of the final joint loss and the associated hyper-parameters. In our experiments with UCF101 and the multi-view action dataset, NEV-NCD achieves \(\approx~{}83\%\) classification accuracy on test instances of the labeled data. NEV-NCD achieves \(\approx 70\%\) clustering accuracy over the unlabeled data, outperforming both naive baselines (by \(\approx 40\%\)) and state-of-the-art pseudo-labeling-based approaches (by \(\approx 3.5\%\)) over both datasets. Further, we propose to incorporate optional view-invariant feature learning with the multi-view dataset to identify novel categories from novel viewpoints. Our additional view-invariance constraint improves the discriminative accuracy for both known and unknown categories by \(\approx 10\%\) for novel viewpoints.

Zahid Hasan\({}^{*}\), Masud Ahmed\({}^{*}\), Abu Zaher Md Faridee\({}^{*}\), Sanjay Purushotham\({}^{*}\), Heesung Kwon\({}^{\$}\), Hyungtae Lee\({}^{\$}\), Nirmalya Roy\({}^{*}\)+

Footnote †: This research is supported by the NSF CAREER grant #1750936, REU Site grant #2050999 and U.S. Army grant #W911NF2120076.

\({}^{*}\)Department of Information Systems, University of Maryland, Baltimore County, USA

\({}^{*}\)Amazon, USA; \({}^{\$}\)DEVCOM Army Research Laboratory (ARL), USA

NCD, Negative Learning, Variance Loss.

## 1 Introduction

Deep learning (DL) based approaches form the backbone of modern video understanding, video action recognition (VAR), surveillance, and human-computer interaction. They learn to extract relevant spatiotemporal action features from videos and map them to predefined action classes. Traditional VAR models often assume the existence of exact labels (often laboriously annotated) for all action classes and fail to comprehend novel unlabeled action categories. Besides, these models often fail to identify known classes from different viewpoints due to learning view-dependent features. _Novel Categories Discovery_ (NCD) [1, 2, 3], an emerging field of DL, aims at more generalized model development under partially labeled class spaces and alleviates the need for class-specific exact labels for all classes. NCD methods utilize the semantic knowledge of class notions from the labeled data to detect and structure unlabeled novel classes while retaining classification performance (Figure 1).
In this research, we study a novel NCD method in VAR to extract generalized spatio-temporal action features from videos based on labeled class notions in order to identify and cluster unlabeled actions. Current NCD approaches rely on either a two-stage or a single-stage joint optimization [4] with various data-specific assumptions (simultaneous/sequential data availability, disjoint classes (mutually exclusive sets) between labeled and unlabeled data, knowledge about the novel class quantity and their uniform distribution). These approaches optimize the classification loss of the labeled data and cluster the unlabeled data based on the class notions and semantics of the labeled data. The clustering constraints for unlabeled data are the primary key to NCD and can be enforced by pairwise similarity losses [5], pseudo-labeling [6], prototype learning [7], Sinkhorn-Knopp clustering [8], and self-training [9], using the uniformity property. These NCD clustering objectives aim to achieve low entropy and equal variance among the unlabeled classes. However, these approaches fail to explicitly capitalize on NCD's assumption of unique, uniform, and disjoint labels for the unlabeled instances. Hence, we formulate NCD's disjointness property as learning from complementary labels for the unlabeled data, and we observe that the statistics of a fair \(k\)-face die match the unique and uniform conditions of the unlabeled data.

Figure 1: The NCD objective aims to detect and structure the unlabeled novel categories based on the semantics of the disjoint labeled data.

We propose an intuitive solution that achieves low entropy and equal class variance, in line with the equipartition constraint for the novel classes, in embedding spaces separate from the labeled classes, taking inspiration from negative learning [10] and VICreg [11]. We hypothesize that negative learning (disjoint labels), explicit entropy (unique label), and variance (equipartition) regularization cluster the unlabeled instances in line with the semantics of the partially labeled data. Further, we study the viewpoint dependency of model learning, i.e., the model fails to detect learned categories from complementary viewpoints for both the known and novel classes due to distribution shift. We hypothesize that learning view-invariant features enhances model robustness for decision-making from diverse viewpoints and addresses the research question of _learning novel classes from novel viewpoints_. We propose to learn view-invariant spatiotemporal feature embeddings for robust action recognition by incorporating adversarial learning to remove view information from the embedded representation and contrastive learning to merge representations of different views of the same activities. Our contributions, incorporating this complementary information and \(k\)-face dice statistics into a novel NCD methodology with view-invariant features, are summarized below:

(1) We propose a novel joint optimization-based single-stage NCD method, NEV-NCD (_N_egative learning, _E_ntropy, and _V_ariance regularization _NCD_), by integrating negative learning, entropy, and variance constraints. Further, we adapt supervised contrastive learning [12] to enhance the cluster discrimination.

(2) We validate NEV-NCD in VAR by experimenting with the public UCF101 dataset [13] and our in-house multi-view video dataset. We thoroughly investigate the impacts of the defined class notions, individual loss components, backbone networks, and viewpoints.
NEV-NCD achieves \(92\%\) and \(84\%\) test accuracy on the labeled data and \(82\%\) and \(70\%\) average clustering accuracy on the unlabeled data for UCF101 and the in-house dataset, respectively. Our approach outperforms naive and single-stage NCD baselines on both datasets. We open-source our codes and data1 for further research.

Footnote 1: Codes and Dataset

(3) We demonstrate the importance of view-invariant feature learning, via contrastive and adversarial learning, for recognizing novel views of novel actions, yielding robust NCD performance for both novel and known classes from diverse viewpoints, by experimenting with a multi-view VAR dataset. Our thorough experiments provide empirical insight regarding optimal viewpoints and data selection settings for developing view-invariant NCD models with computational and labeling efficiency.

## 2 Methodology

### Problem Formulation and Notations

The dataset consists of labeled data \(\mathcal{D}^{l}=\{(x_{i}^{l},y_{i})\}_{i=1}^{N^{l}}\), where \(y_{i}\in C_{L}=\{c_{1},c_{2},...,c_{L}\}\), from \(L=|C_{L}|\) unique ordinary categories (the instance belongs to the class), and unlabeled data \(\mathcal{D}^{u}=\{(x_{i}^{u})\}_{i=N^{l}+1}^{N^{u}}\) from \(U=|C_{U}|\) latent novel classes \(y_{i}\in C_{U}=\{c_{L+1},c_{L+2},...,c_{L+U}\}\) with non-overlapping categories \(C_{L}\cap C_{U}=\emptyset\). Alternatively, we can rewrite \(\mathcal{D}^{u}\) with multiple complementary labels (the instance does not belong to the complementary label classes) as \(\{(x_{i}^{u},\hat{y}_{i})\}_{i=N^{l}+1}^{N^{u}}\), where \(x_{i}^{u}\notin\hat{y}_{i}\) and \(\hat{y}_{i}\in C_{L}\). Further, the individual class distributions of \(\mathcal{D}^{l}\) and \(\mathcal{D}^{u}\) depend on the captured viewpoint \(v_{j}\), where \(j\in 1,2,...,K\) denotes one of the \(K\) camera view angle positions. The union of these individual view distributions, \(\mathcal{D}_{c_{i},v_{j}}\) for \(c_{i}\in C_{L}\bigcup C_{U}\), forms the complete data distribution \(\mathcal{D}_{c_{i}}\) for each class, as in equation 1.

\[\mathcal{D}_{c_{i}}=\bigcup_{j=1}^{K}\mathcal{D}_{c_{i},v_{j}} \tag{1}\]

We denote by \(f_{\theta}\) a parameterized 3D convolutional neural network (CNN) video encoder, by \(h_{\theta}\) a fully connected representation layer, and by \(g_{\theta}\) a fully connected classification head. The head \(g_{\theta}\) consists of the concatenation of a labeled head \(l_{\theta}\) (the \(0\) to \(L-1\) labeled categories) and an unlabeled head \(u_{\theta}\) (the \(L\) to \(L+U-1\) unlabeled categories) representing the probabilities over all dataset categories. The NCD objective is to develop DL models that classify \(\mathcal{D}^{l}\) by comprehending the labeling semantics and cluster the \(\mathcal{D}^{u}\) counterpart based on its underlying latent class distribution.

### NEV-NCD

Our proposed NEV-NCD method falls under the single-stage approach, requiring the availability of both \(\mathcal{D}^{l}\) and \(\mathcal{D}^{u}\) concurrently. NEV-NCD jointly optimizes both the supervised and clustering objectives (Figure 2), as discussed in the following.

Figure 2: Overview of our NCD approach, based on negative learning and variance constraints, to identify novel clusters.

**Ordinary Label Learning:** The supervised training covers learning from both ordinary and multiple complementary labels. The ordinary labeled data \(\mathcal{D}^{l}\) are utilized to optimize the categorical cross-entropy loss (Equation 2) of \(y_{i}\in C_{L}\) in the classification head.
We denote the network prediction by \(\hat{\mathbf{y}}_{i}=\sigma_{k}(g_{\theta}(h_{\theta}(f_{\theta}(x_{i}))))\), where \(\sigma_{k}\) represents the SoftMax operation over the \(L+U\) prediction neurons of \(g_{\theta}\).

\[\mathcal{L}_{ce}=-\mathop{\mathbb{E}}_{(x_{i},y_{i})\in\mathcal{D}^{l}}\sum_{j=0}^{L-1}\!\!y_{ij}\mathrm{log}\hat{y}_{ij} \tag{2}\]

\(y_{ij}\) represents the one-hot encoding vector of the target \(y_{i}\).

**Negative Learning:** We utilize the disjointness condition between the latent classes of \(\mathcal{D}^{u}\) and \(\mathcal{D}^{l}\) to formulate the problem as learning from multiple complementary labels [14] for \(\mathcal{D}^{u}\), and we propose to incorporate a negative-learning-based solution [10]. Negative learning utilizes the complementary labels to reduce the confidence on \(l_{\theta}\) for the corresponding instances, contrary to ordinary labeled supervised learning. In our approach, we enforce zero (low) confidence for the complementary labels by applying the loss defined in Equation 3.

\[\mathcal{L}_{nl}=-\operatorname*{\mathbb{E}}_{x_{i}\in\mathcal{D}^{u}}[\sum_{j=0}^{L-1}\log(1-\hat{y}_{ij})] \tag{3}\]

**Entropy Regularization:** The SoftMax operation of the final classification layer and the negative learning loss impose higher confidence on the unlabeled classification head \(u_{\theta}\) for \(x_{i}\in\mathcal{D}^{u}\). We employ the uniquely-labeled data condition and apply an entropy constraint loss (Equation 4) to increase the confidence of the unlabeled head, as the Dirac-delta distribution minimizes the entropy [15]. The entropy regularization further ensures gradient flow through the weight matrix of the \(u_{\theta}\) nodes.

\[\mathcal{L}_{H}=-\operatorname*{\mathbb{E}}_{x_{i}\in U}[\sum_{j=0}^{L+U-1}\hat{y}_{ij}\log\hat{y}_{ij}] \tag{4}\]

**Contrastive and Consistency Loss:** We adopt the SupCon [12] framework for NCD to enhance the representation quality by modifying the data sampling according to the underlying data conditions. We optimize probabilistic InfoNCE contrastive objectives to learn both categorical and instance discrimination at the representation layer output, denoted by \(z_{i}=h_{\theta}(f_{\theta}(x_{i}))\).

_Category Discrimination:_ We utilize the known categories of \(\mathcal{D}^{l}\) and the disjointness criterion to perform categorical contrastive learning. Here, we sample the query and positive instances from a class \(c_{l}\in C_{L}\) and take negative instances distributed over the remaining classes and the unlabeled data \(C_{L}/\{c_{l}\}\cup C_{U}\) to minimize the contrastive objective of Equation 5, where \(s_{i,j}=\mathrm{cosine}(z_{i},z_{j})\) denotes the cosine similarity between embeddings and \(\tau\) the temperature.

\[\mathcal{L}_{cl}=\operatorname*{\mathbb{E}}_{c_{l}\sim C_{L}}\operatorname*{\mathbb{E}}_{\begin{subarray}{c}x_{i},x_{j}\mid c_{l}\\ x_{k,n}\mid c_{k}\neq c_{l}\end{subarray}}-\log\frac{e^{(s_{i,j}/\tau)}}{e^{(s_{i,j}/\tau)}+\sum_{n=0}^{N}e^{(s_{i,kn}/\tau)}} \tag{5}\]

_Instance Discrimination:_ Although information about the intra-class similarity between unlabeled instances is lacking, we have their complementary labels. For a query from the unlabeled instances, we sample negatives from \(\mathcal{D}^{l}\) to avoid sampling false negative keys and minimize Equation 6. We also enforce consistency [16] between the representation of an instance \(x_{i}\) and its augmented version \(x_{i}^{\prime}\) via Equation 7. Here, we rely on variance regularization (Section 2.2) instead of contrastive learning to discriminate between the unlabeled latent classes.

\[\mathcal{L}_{cl}=\operatorname*{\mathbb{E}}_{\begin{subarray}{c}x_{i}\in\mathcal{D}^{u}\\ x_{k,n}\mid c\in C_{L}\end{subarray}}-\log\frac{e^{(s_{i,i^{\prime}}/\tau)}}{e^{(s_{i,i^{\prime}}/\tau)}+\sum_{n=0}^{N}e^{(s_{i,kn}/\tau)}} \tag{6}\]

\[\mathcal{L}_{mse}=\operatorname*{\mathbb{E}}_{x_{i}\in\mathcal{D}^{u}}||z-z^{\prime}||_{2}^{2} \tag{7}\]
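For concreteness, a minimal PyTorch sketch (ours, not the authors' released code) of the negative-learning, entropy, and consistency terms could look as follows; `logits` are the raw \(L+U\)-dimensional outputs of \(g_{\theta}\), and the small epsilon is our numerical-stability addition.

```python
# Minimal PyTorch sketch of Eqs. 3, 4 and 7 for a batch of unlabeled clips.
import torch
import torch.nn.functional as F

EPS = 1e-8  # numerical stabilizer (our addition)

def negative_learning_loss(logits: torch.Tensor, num_labeled: int) -> torch.Tensor:
    """Eq. 3: drive the L labeled (complementary) class probabilities to zero."""
    p = logits.softmax(dim=1)
    return -torch.log(1.0 - p[:, :num_labeled] + EPS).sum(dim=1).mean()

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Eq. 4: minimize the prediction entropy over all L+U classes."""
    p = logits.softmax(dim=1)
    return -(p * torch.log(p + EPS)).sum(dim=1).mean()

def consistency_loss(z: torch.Tensor, z_aug: torch.Tensor) -> torch.Tensor:
    """Eq. 7: MSE between the embeddings of a clip and of its augmentation."""
    return F.mse_loss(z, z_aug)
```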
**Variance Regularization:** Uniform random sampling from balanced data of \(k\) classes is equivalent to rolling a fair \(k\)-face die with probability \(P=\frac{1}{k}\) for each class. We can define a binary random variable \(X_{j}\in\{0,1\}\) representing success and failure of the \(j\)-th face outcome, \(j\in 1,2,...,k\), with probability \(P(X_{j}=1)=\frac{1}{k}\), mean \(\operatorname*{\mathbb{E}}(X_{j})=\frac{1}{k}\), and variance \(\frac{k-1}{k^{2}}\). We explicitly apply this constraint as a regularization term (Equation 8) on the empirical distribution of the sharpened (with sharpening hyper-parameter \(Sr\)) unlabeled prediction head \(\tilde{y}_{i}=\sigma_{k}(\hat{y}_{i}/Sr)\) for a relatively large batch size.

\[\mathcal{L}_{var}=\operatorname*{\mathbb{E}}_{x_{i}\in\mathcal{D}^{u}}|var(\tilde{y}_{i})-\frac{k-1}{k^{2}}|^{2} \tag{8}\]

**Joint Optimization:** We jointly optimize the learning objective of Equation 9, a weighted (\(\lambda\)) sum of the discussed losses and regularizers, utilizing both \(\mathcal{D}^{l}\) and \(\mathcal{D}^{u}\) in a single stage within the supervised contrastive architecture framework.

\[\mathcal{L}=\lambda_{ce}\mathcal{L}_{ce}+\lambda_{cl}\mathcal{L}_{cl}+\lambda_{nl}\mathcal{L}_{nl}+\lambda_{H}\mathcal{L}_{H}+\lambda_{var}\mathcal{L}_{var} \tag{9}\]

**View-invariance constraint:** Given the availability of viewpoint specifications, we impose optional additional constraints within the joint optimization to learn view-invariant features. We adopt adversarial training [17] to enforce the encoder to learn view-invariant discriminative action features by alternately updating a discriminator and the encoder towards an adversarial objective. In our setting, we place a multi-class discriminator \(f_{d}\) that tries to retrieve the source viewpoint information (minimizing the loss of Equation 10), whereas the encoder network \(f_{\theta}\) aims to learn action embeddings free of viewpoint information (minimizing the loss of Equation 11), making the task of \(f_{d}\) no better than a random guess, i.e., maximizing the discriminator loss of Equation 10.

\[\mathcal{L}_{d}=-\operatorname*{\mathbb{E}}_{\bigcup_{j=1}^{3}c_{i,v_{j}}}\sum_{j=0}^{2}\mathbf{e}_{j}\log f_{d}(h_{\theta}(f_{\theta}(x_{i}|c_{i,v_{j}}))) \tag{10}\]

where \(\mathbf{e}_{j}\) denotes the one-hot vector for the \(j\)-th position.

\[\mathcal{L}_{adv}=-\operatorname*{\mathbb{E}}_{\bigcup_{j=1}^{3}c_{i,v_{j}}}\sum_{j=0}^{2}\mathbf{e}_{0}\log f_{d}(h_{\theta}(f_{\theta}(x_{i}|c_{i,v_{j}}))) \tag{11}\]

Moreover, we enforce contrastive learning objectives to collapse the distributions from different viewpoints and thus learn a view-invariant representation that depends only on the underlying action class. In particular, we sample positives by taking class instances from different viewpoints and negatives by sampling instances from other classes, following the TCN work [18].
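Under our reading of Equation 8, in which \(var(\cdot)\) is taken per class across a large batch (one could also read it per sample; both targets coincide at one-hot predictions), the variance term and the joint objective of Equation 9 can be sketched as:

```python
# Sketch of Eq. 8 (batch-wise, per-class reading) and Eq. 9; Sr = 0.1 as in
# Section 3. `losses` is an assumed dict of the individual loss tensors.
import torch

def variance_loss(logits: torch.Tensor, num_labeled: int, sr: float = 0.1) -> torch.Tensor:
    y_tilde = (logits / sr).softmax(dim=1)[:, num_labeled:]  # sharpened novel-head probs
    u = y_tilde.shape[1]
    target = (u - 1) / u**2                  # variance of a fair Bernoulli(1/u) face indicator
    v = y_tilde.var(dim=0, unbiased=False)   # empirical per-class variance over the batch
    return ((v - target) ** 2).mean()

def joint_loss(losses: dict, lam: dict) -> torch.Tensor:
    # Eq. 9: weighted sum of the cross-entropy, contrastive, negative-learning,
    # entropy and variance terms
    return sum(lam[k] * losses[k] for k in ("ce", "cl", "nl", "H", "var"))
```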
## 3 Experiments

We demonstrate the effectiveness of NEV-NCD by experimenting with two video datasets and performing ablation studies.

### Dataset

**UCF101 [13]:** We preprocessed the UCF101 data with \(91\) randomly selected labeled actions and the remaining \(10\) as unlabeled actions. We balance the distribution of the instances from these unlabeled classes to match the NCD assumption.

**In-house dataset:** Our collected multi-view dataset contains three synchronous views (\(K=3\)) from three cameras (view 0, \(v_{0}\): a horizontally positioned wide-lens static action camera; view 1, \(v_{1}\): a horizontally positioned handheld smartphone; and view 2, \(v_{2}\): a low-altitude flying drone camera) that captured the actions from three realistic positions with varying camera settings, positions, and angles. These camera positions are designed to capture complementary views with respect to each other while the volunteers perform actions without any directional preferences. It contains ten class-balanced regular micro-actions with both static (i.e., body poses: sitting, standing, lying with face up and down) and dynamic patterns (i.e., temporal patterns: walking, push up, waving hand, leg exercise, object carrying, and pick/drop). In total, we collected \(\approx\,14\) hours of video from \(11\) volunteers under varying realistic lighting conditions, environments, and backgrounds. We perform controlled experiments to study the various NEV-NCD components, view invariance, and action selection using our in-house dataset due to its scalable label space and available viewpoint meta-data.

### Implementation Details

**Network Architecture:** We experimented with three different CNN architectures (SlowFast, R2+1D [19], and P3D [20]) as encoder design choices, all of which have demonstrated their capabilities in VAR. We experimented both with random initialization and with transferring the Kinetics-pretrained model weights available in the PyTorch Model Zoo, removing their classification heads. We placed a \(1000\)-dimensional embedding layer with ReLU activation and a classification head with SoftMax activation on top of the encoder features.

**Losses:** Inspired by [21], we applied adaptive weights to the contrastive learning, negative learning, variance loss, and cross-entropy loss components as functions of the epoch number \(n_{ep}\) throughout the learning phase, as shown in Equations 12 and 13. In our experiments, we fixed the entropy loss weight (\(\lambda_{H}=1.0\)) and the other hyper-parameters (\(\tau=0.05\), \(Sr=0.1\)). To calculate the empirical distribution variance, we traded off against the computational cost while respecting the law of large numbers and chose a batch size of \(10\times U\) instances.

\[\lambda_{cl}=\lambda_{nl}=\lambda_{var}=0.2+0.5\times n_{ep} \tag{12}\]

\[\lambda_{ce}=\max(0,1-0.01\times n_{ep})+0.5 \tag{13}\]
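The schedules of Equations 12 and 13 amount to the following small helper (a sketch; the dictionary keys are our own naming):

```python
# Adaptive loss weights as functions of the epoch number (Eqs. 12 and 13);
# lambda_H is kept fixed at 1.0 per the text.

def loss_weights(n_ep: int) -> dict:
    ramp = 0.2 + 0.5 * n_ep  # Eq. 12: lambda_cl = lambda_nl = lambda_var
    return {
        "cl": ramp,
        "nl": ramp,
        "var": ramp,
        "ce": max(0.0, 1.0 - 0.01 * n_ep) + 0.5,  # Eq. 13: decays from 1.5 towards 0.5
        "H": 1.0,
    }
```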
### View experiments with NCD

We perform extensive experiments (Tables 4 and 5) in multiple settings to demonstrate the importance of viewpoint and view-invariant feature learning by providing viewpoint information and incorporating view-invariant constraints in the NCD training phase for both labeled and unlabeled data. Firstly, we consider all three viewpoints for all the labeled classes while considering only single-viewpoint instances in the unlabeled data, and vice versa. Secondly, we incorporate partial viewpoints for the labeled and unlabeled classes by obfuscating one of the views in the training phase. Thirdly, we study optimal viewpoints by providing viewpoints only for selected classes. Further, we also develop models that ignore the view-invariant constraints in all experiments for baseline comparison.

### Baseline Comparison and Ablation Studies

We compare our methodology with three naive approaches: (i) supervised training using only the labeled part, (ii) instance-discrimination-based contrastive learning utilizing the complete dataset, and (iii) the SupCon approach combining both labeled and unlabeled data. Additionally, we compare the performance of NEV-NCD with the most relevant single-stage joint optimization-based state-of-the-art NCD method, UNO [8], with single-head Sinkhorn-Knopp clustering (replacing the swapped prediction with self-labeling and retraining [9]), adapting it to our VAR scenario.

We ran numerous experiments to study the impact of different design choices and to better understand the proposed NCD VAR system. Firstly, we investigate the impact and relative importance of each loss component of our proposed joint loss. We also vary the hyper-parameters of the loss components (\(\lambda_{nl}\), \(\lambda_{var}\), number of negative samples) to study their robustness. Secondly, we investigate the impact of choosing different 3D CNN architectures (i.e., P3D, R2+1D, SlowFast) as the parameterized feature encoder of our VAR model. Finally, we inspect the effect of the multi-view properties by varying the members of \(C_{L}\), \(C_{U}\), and the viewpoint variations of the actions using our in-house dataset.

## 4 Results

We evaluated NEV-NCD on left-out labeled and unlabeled test clips by both quantitative and qualitative metrics for both datasets. To scrutinize the quality (i.e., separability of classes) of the learned embeddings, we employed low-dimensional t-SNE visualizations. We quantified the performance over the labeled and unlabeled data with the standard top-1 accuracy (ACR) and the average clustering accuracy (ACC) (Equation 14), respectively.

\[ACC=\max_{p\in Pe}\frac{1}{N}\sum_{i=1}^{N}1\left\{y_{i}=p(\hat{y}_{i})\right\} \tag{14}\]

Here \(y_{i}\in C_{U}\) and \(\hat{y}_{i}\) represent the ground truth and the cluster prediction of a sample \(x_{i}\in D^{u}\), respectively. \(Pe\) denotes the set of all permutations; the maximum over \(Pe\) can be computed efficiently [22].

### Main Results and comparisons

We report quantitative results on the left-out test data of UCF101 and the in-house dataset in Table 1 and Table 2, respectively. In both cases, NEV-NCD outperforms the others in clustering \(\mathcal{D}^{u}\) (high ACC) while still classifying \(\mathcal{D}^{l}\) well (high ACR).
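The maximum over permutations in Equation 14 is typically computed with the Hungarian method; a sketch using SciPy (our illustration of this standard practice, cf. [22]):

```python
# Average clustering accuracy (Eq. 14) via the Hungarian assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """y_true, y_pred: integer class/cluster ids of shape (N,)."""
    u = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((u, u), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1                        # confusion counts
    rows, cols = linear_sum_assignment(-counts)  # maximize the matched total
    return counts[rows, cols].sum() / len(y_true)
```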
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Loss Components} & \multirow{2}{*}{Labeled Classes} & \multicolumn{2}{c|}{Metrics} \\ \cline{2-4} \cline{6-7} & \(\mathcal{L}_{ce},\mathcal{L}_{cl}\) & SK & \(\mathcal{L}_{nl},\mathcal{L}_{var},\mathcal{L}_{H}\) & & ACR (\%) & ACC (\%) \\ \hline \hline \multirow{3}{*}{Ours (R2+1d)} & \multirow{3}{*}{Y} & \multirow{3}{*}{N} & \multirow{3}{*}{Y} & Uniform & 82.9 & 67.3 \\ & & & & Dynamics & 81.8 & 64.2 \\ & & & & Static & 87.1 & 53.0 \\ \hline \multirow{3}{*}{Ours (SlowFast)} & \multirow{3}{*}{Y} & \multirow{3}{*}{N} & \multirow{3}{*}{Y} & Uniform & 84.1 & 70.5 \\ & & & & Dynamics & 82.7 & 68.2 \\ & & & & Static & 87.8 & 53.2 \\ \hline \multirow{3}{*}{Ours (P3D)} & \multirow{3}{*}{Y} & \multirow{3}{*}{N} & \multirow{3}{*}{Y} & Uniform & 76.5 & 62.3 \\ & & & & Dynamics & 72.9 & 59.3 \\ & & & & Static & 78.6 & 45.7 \\ \hline Sup & \(\mathcal{L}_{ce}\) & N & N & Uniform & 84.6 & 12.5 \\ \hline UNO [8] & \(\mathcal{L}_{ce}\) & Y & N & Uniform & 83.4 & 62.1 \\ \hline \multirow{2}{*}{SpCn [12]} & Y & N & N & Uniform & 85.2 & 12.8 \\ & \multicolumn{3}{c|}{Fine tuned} & Uniform & 80.6 & 13.4 \\ \hline & Y & N & \(\mathcal{L}_{nl}\) & Uniform & 83.9 & 25.3 \\ & Y & N & \(\mathcal{L}_{var}\) & Uniform & 85.1 & 15.2 \\ & Y & N & \(\mathcal{L}_{nl}\) & Uniform & 84.8 & 25.6 \\ & Y & N & \(\mathcal{L}_{nl}\), \(\mathcal{L}_{var}\) & Uniform & 83.9 & 30.2 \\ & Y & N & \(\mathcal{L}_{nl}\), \(\mathcal{L}_{var}\) & Uniform & 83.7 & 60.5 \\ \hline \end{tabular} \end{table} Table 2: Overall results on our in-house video dataset.

The naive baselines fail to cluster \(\mathcal{D}^{u}\) separately (low ACC) from the labeled data embedding without external constraints. NEV-NCD outperforms the comparable UNO (self-labeling via Sinkhorn-Knopp (SK) clustering and retraining on pseudo-labels) in the ACC metric without retraining. Simultaneously, we analyze the 2D t-SNE plots of the embedding layer output. As depicted in Figure 3, we observe that NEV-NCD learns distinguishable features for both labeled and unlabeled instances. The NCD performance drops significantly in the absence of \(\mathcal{L}_{var}\) and \(\mathcal{L}_{nl}\), as reported in Table 2. The \(\mathcal{L}_{nl}\) loss enforces a boundary between labeled and unlabeled classes by pushing the unlabeled embeddings away from the labeled embeddings. However, without additional constraints, \(\mathcal{L}_{nl}\) results in collapsing the unlabeled classes into a single cluster. The \(\mathcal{L}_{var}\) loss provides the equipartition cluster constraint and avoids trivial collapses of the embeddings of unlabeled instances. The \(\mathcal{L}_{H}\) loss increases inter-class separation by enforcing a unique decision on the unlabeled classification head. Additionally, we study and report the impact of the relative weights of the loss components in Table 3. We also observe the impact of the labeled class notions on unlabeled data clustering. In our VAR experiments, labeled class sets with only static categories (sitting, standing, etc.) reduce the clustering performance for dynamic actions (walking, push-ups, etc.). Besides, we find that NCD methods learn invariant representations of actions from different viewpoints of the unlabeled data when provided with multi-view labeled instances.
Finally, we notice that a stronger encoder function class improves the overall performance on both labeled and unlabeled data (Table 2).

### View-point Ablation Results

We report our results over the left-out test-person data from all three views \(v_{0,1,2}\) for all the actions in Table 4. We quantitatively analyze the embedding clustering properties by measuring the silhouette coefficient (SC). We also measure the classification performance for the labeled and unlabeled classes by reporting the accuracy and average clustering accuracy (ACC) metrics. In our experimentation, we consistently observe improved clustering performance when adding the view-invariance (VI) constraint.

We also empirically study the impact of individual views and report our findings in Table 5. We find that the most complementary viewpoints provide the most informative data instances for developing a robust model. In our in-house data, \(v_{0}\) and \(v_{1}\) share common viewpoints as they both capture horizontal views, whereas \(v_{2}\) provides alternative vertical observations with respect to the other two. For a matched data quantity, we observe a performance boost when including complementary viewpoints, i.e., combining \(v_{0}\) or \(v_{1}\) with \(v_{2}\).

## 5 Discussion and Future Works

Our current method leverages three typical NCD data assumptions: (i) prior knowledge of the number of latent classes in the unlabeled data (i.e., the number of unknown target clusters), (ii) the disjoint-set condition between \(C_{L}\) and \(C_{U}\) (negative learning), and (iii) the uniform distribution among unlabeled classes (variance regularization). Moreover, the empirical variance estimation requires a batch size that is large relative to \(U\), randomly sampled from the unlabeled data, as per the strong law of large numbers. In the future, we aim to challenge the NCD assumptions and investigate the minimal constraints required for the unlabeled data to solve the NCD clustering. We plan to extend our work towards generalized category discovery [24] and explore over-clustering, self-training, and distribution-aligning solutions. We also plan to integrate and investigate the impact of attention-based learners (vision transformers) in NCD VAR applications.

Due to the combinatorial possibilities of viewpoint experiments, we restrict our results to a plausible set of experiments that demonstrate the importance of multiview data and view-invariant constraints for robust model development. However, the optional view-invariant constraint increases the NCD model's robustness against viewpoint variation at the expense of additional viewpoint information associated with the instances and the computational cost of the adversarial components. Alternatively, synchronous video data can alleviate such requirements, as it enables the implementation of temporal contrastive learning methods such as TCN [18] and TCLR [25] to learn view-invariant representations.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Exp No & \multicolumn{2}{l|}{Train Data} & VI Loss & SC & Accu. & ACC \\ & L & U & & & & \\ \hline 1 & \(v_{0,1,2}\) & \(v_{0,1,2}\) & N & 0.51 & 82.9 & 67.3 \\ \hline 2 & \(v_{0,1,2}\) & \(v_{0,1,2}\) & Y & 0.56 & 83.3 & 67.5 \\ \hline 3 & \(v_{0,1,2}\) & \(v_{1}\) & N & 0.38 & 83.2 & 49.3 \\ \hline 4 & \(v_{0,1,2}\) & \(v_{1}\) & Y & 0.51 & 83.1 & 62.8 \\ \hline 5 & \(v_{1}\) & \(v_{0,1,2}\) & N & 0.31 & 53.5 & 62.1 \\ \hline 6 & \(v_{1}\) & \(v_{0,1,2}\) & Y & 0.45 & 76.3 & 66.7 \\ \hline \end{tabular} \end{table} Table 4: Results for view-invariant (VI) losses for the NEV-NCD setting with the R\(2+1\)d model over the in-house dataset.
\begin{table} \begin{tabular}{c c c c c} \hline \multicolumn{2}{c}{Weights} & \multicolumn{2}{c}{Metrics} & \multirow{2}{*}{Comment} \\ \(\lambda_{nl}\) & \(\lambda_{var}\) & ACR (\%) & ACC (\%) & \\ \hline \(0.1\sim 1\) & \(1\) & \(84\) & \(65\sim 70\) & Slower convergence \\ \(1\) & \(0.1\sim 1\) & \(84\) & \(64\sim 70\) & for smaller values \\ \hline \end{tabular} \end{table} Table 3: Ablation studies using the in-house dataset for different constant \(\lambda_{nl}\) and \(\lambda_{var}\) with the SlowFast model (other parameters fixed).

Figure 3: 2D t-SNE plot for the representation of the embedding layer (a) UNO (b) NEV-NCD (c) Supervised Baseline (d) Negative Learning

Table 5: Results for viewpoint and view-invariant loss importance for the NEV-NCD setting with the R\(2+1\)d model.

## 6 Conclusion

In this work, we introduced and studied negative learning, entropy, and variance regularization in NCD research to propose NEV-NCD, a novel single-stage NCD method that removes the additional pseudo-label generation and retraining steps. We validated NEV-NCD by performing extensive experiments in the VAR domain using 3D-CNN-based networks. The qualitative and quantitative results demonstrate the impact of the NEV-NCD components and their compatibility with state-of-the-art NCD works. We also demonstrate that the NCD losses alone fail to capture novel categories from novel viewpoints. Finally, we propose and validate that adding a view-invariant constraint improves model robustness and enhances the model's capacity to identify novel categories from novel viewpoints.
2307.01275
Tensionless Tales of Compactification
We study circle compactifications of tensionless bosonic string theory, both at the classical and the quantum level. The physical state condition for different representations of BMS$_3$, the worldsheet residual gauge symmetry for tensionless strings, admits three inequivalent quantum vacua. We obtain the compactified mass spectrum in each of these vacua using canonical quantization and explicate their properties.
Aritra Banerjee, Ritankar Chatterjee, Priyadarshini Pandit
2023-07-03T18:05:12Z
http://arxiv.org/abs/2307.01275v1
# Tensionless Tales of Compactification

###### Abstract

We study circle compactifications of tensionless bosonic string theory, both at the classical and the quantum level. The physical state condition for different representations of BMS\({}_{3}\), the worldsheet residual gauge symmetry for tensionless strings, admits three inequivalent quantum vacua. We obtain the compactified mass spectrum in each of these vacua using canonical quantization and explicate their properties.

###### Contents

* 1 Introduction
* 2 Review of tensionless strings: Classical and Quantum
  * 2.1 Classical tensionless strings
  * 2.2 Quantization of tensionless strings
  * 2.3 Oscillator vacuum
  * 2.4 Induced vacuum
  * 2.5 Flipped vacuum
* 3 Compactification of Target Space
  * 3.1 Compactification on Circle \(S^{1}\)
  * 3.2 Compactification on Torus \(T^{d}\)
* 4 Effect of compactification: Oscillator Vacuum
  * 4.1 Modified level matching condition and mass spectrum
  * 4.2 States from compactification
  * 4.3 Limiting theory
  * 4.4 A brief look at multiple dimensions compactification
  * 4.5 Summary
* 5 Effect of compactification: Induced Vacuum
  * 5.1 What happens to the vacuum?
  * 5.2 Limit from tensile mass formula
  * 5.3 What happens to the perturbative states?
  * 5.4 Nonperturbative states
  * 5.5 A brief look at multiple dimensions compactification
  * 5.6 Summary
* 6 Effect of compactification: Flipped Vacuum
  * 6.1 Modified constraint on level
  * 6.2 States at various levels
  * 6.3 Limit from tensile closed twisted string
  * 6.4 A brief look at multiple dimensions compactification
  * 6.5 Summary
* 7 Conclusions and future directions
  * 7.1 Summary of the work
  * 7.2 Future plan
* A Light-cone quantization
* B The fate of tensile perturbative states at tensionless limit
* C Physical states of flipped vacuum
* D Multiple dimensions compactification of twisted tensile string

## 1 Introduction

String theory has been a leading candidate for a quantum theory of gravity over the last few decades. This theory generalises the notion of a point particle to a fundamental one-dimensional string characterized by its tension \(T\), given by:

\[T=\frac{1}{2\pi\alpha^{\prime}},\]

where \(\alpha^{\prime}\) gives the square of the length of the string. This tension \(T\) is the only free parameter in non-interacting string theory. Any possible candidate for a quantum theory of gravity has to be consistent with general relativity. String theory in the point particle limit (\(T\rightarrow\infty\)) reduces to general relativity, and superstring theory under the same limit leads us to supergravity. The diametrically opposite limit (\(T\to 0\)) then corresponds to the extreme 'stringy' or the so-called ultra high energy sector of the theory, and the worldsheet becomes null in this limit. The null (or "tensionless") sector of string theory was first analyzed by Schild [1], and has subsequently found intriguing applications in diverse physical situations. In [2; 3; 4] Gross and Mende showed that in the limit \(\alpha^{\prime}\rightarrow\infty\), the string scattering amplitudes become considerably simpler. Massless higher spin symmetry [5; 6] is also expected to appear in this sector [7; 8]. The physics of tensionless strings has recently been emerging in other circumstances as well. For instance, in [9; 10] it has been shown that a closed string becomes tensionless (i.e. the worldsheet becomes null) when it falls on the event horizon of a Schwarzschild black hole.
Hence studying the tensionless limit of string theory might prove useful to understand how strings perceive spacetime singularities. Tensionless strings are also expected to emerge when a gas of strings is heated to very near the Hagedorn temperature. It is expected that a phase transition occurs here and new degrees of freedom appear [11; 12; 13]. As an indication of this, a novel closed-to-open transition was discovered in [14] when the string tension was dialed to zero. Finally, these tensionless strings have recently been used to build a quantum model of black holes, specifically BTZ black holes in AdS\({}_{3}\)[15], in a manner reminiscent of the black hole membrane paradigm. The entropy and logarithmic corrections were obtained by a counting of null string states.

#### Tensionless strings: A brief history

The formulation of tensionless string theory has been done using two different approaches. The first approach, taken in [16], involves the construction of the action and formulation of the theory from first principles 1. Here, the metric of the worldsheet proves to be degenerate, which is incorporated in the action. This action is invariant under a gauge symmetry, i.e. worldsheet diffeomorphism invariance, which can only be fixed partially, quite similar to the case of tensile string theory. After gauge fixing, the action still remains invariant under a residual gauge symmetry, the generators of which close to form the BMS\({}_{3}\) (\(3d\) Bondi-Metzner-Sachs) algebra [23]. The BMS algebras are the asymptotic symmetry algebras of Minkowski spacetime at null infinity, studied in [24; 25; 26; 27]. This algebra has also been used intensively in the study of flat space holography [28; 29; 30; 31; 32; 33; 34; 35].

The other approach, namely the limiting approach, considers taking the appropriate limit on the worldsheet coordinates from the tensile string theory [23; 36]. The limit turns out to be the ultra-relativistic limit or Carrollian limit [37] on the worldsheet, where the worldsheet velocity of light tends to zero. This can be realised in terms of worldsheet coordinates \((\tau,\sigma)\) scaling as \(\{\tau\rightarrow\epsilon\tau,\sigma\rightarrow\sigma\}\) with \(\epsilon\to 0\). Under this scaling, the two copies of the Virasoro algebra (the residual symmetry algebra of tensile string theory) scale to the BMS\({}_{3}\) algebra, making this approach consistent with the intrinsic approach. This consistency between the two approaches, related closely by the symmetry algebra, has been the driving force behind recent explorations into this arena. Such studies have even been extended to supersymmetric versions of tensionless string theory in [38; 39; 40].

The geometry of the worldsheet of the tensionless string naturally carries a \(2d\) version of Carroll geometry, i.e. the geometry of a generic null manifold [41; 42; 43]. This manifold emerges in physics on various occasions as one departs from the well understood pseudo-Riemannian paradigm. The event horizon of a generic black hole, for instance, happens to be a null manifold, and hence contains a Carrollian structure [44]. Carrollian physics from the perspective of holography has been explored in [28; 29; 32; 33; 45; 46; 47; 48; 49; 50; 51]. Field theories on null manifolds, having intrinsic Carroll symmetries, have been analysed in [52; 53]. Recently Carrollian physics has found surprising applications in different areas of physics, such as cosmology [54], condensed matter systems [55; 56] and fluid dynamics [57].
Other aspects of Carrollian physics have been studied in [58; 59; 60]. It is then clear, just from the Carroll symmetry perspective, that delving deep into the formalism of tensionless strings is going to be very important.

In recent years, substantial progress has been made in the quantization of tensionless strings as well [61; 62; 14; 63]. It has been found [61] that the classical theory based on the action constructed in [16] can be quantized in different ways, resulting in three consistent inequivalent quantum theories. These three quantum theories are based on three distinct vacua, which have been named the _Oscillator vacuum_, _Induced vacuum_ and _Flipped vacuum_ in [61]. To elucidate, taking the null limit on the usual tensile quantum theory based on highest weight representations would lead us to the tensionless quantum theory based on the Induced vacuum. This theory corresponds to the Induced representations of the BMS algebra. One of the intriguing observations in this theory has been the emergence of open strings from closed tensile string theory [14]. On the other hand, the quantum theory constructed on the Flipped vacuum belongs to the explicitly constructed highest weight representation of the BMS algebra. As it turns out, this is the tensionless limit of a twisted string theory, a close cousin of the usual tensile string theory. Classically these two theories are identical, having the same action, but quantum mechanically they have striking differences [61; 64; 65]. Unlike these two theories, the Oscillator vacuum is based on a construction akin to the tensile string theory, relying on (seemingly) decoupled left and right moving oscillators. However, the theory is still very interesting due to its connection to the tensile vacuum through a Bogoliubov transformation, as well as the emergence of massive states. As an intriguing example of the usefulness of the oscillator theory, a link between the tensionless limit and the infinite acceleration limit of a string has been explored from the worldsheet perspective in [10]. Moreover, the light cone quantization of all the three tensionless theories has been studied in [62]. Just like tensile string theory, the critical dimension of the oscillator and flipped vacuum has been found to be 26, whereas there seems to be no such restriction on the dimension of the target space for the theory defined on the Induced vacuum.2

Footnote 2: For path integral quantization of both tensionless bosonic as well as tensionless superstring theories, one may look at [63].

#### Tensile string compactification

String theory by nature requires multiple target space dimensions to be consistent with Lorentz invariance. As is well known, the way of making these theories compatible with the four-dimensional world is to compactify the extra dimensions [66, 67] on some compact manifold. Compactification of any dimension (say, on a circle) introduces two new quantum numbers in the theory, namely the winding number (\(W\)) and the quantized momentum (\(K\)). These compactified dimensions give rise to several new states in the spectrum: massless and massive vector states, massless and massive scalar states, etc. One of the most intriguing features of tensile string theory with compactification is that the mass spectrum is symmetric under the interchange of \(W\) and \(K\) along with the following transformation on the radius of compactification

\[R\to\frac{\alpha^{\prime}}{R}.\]

This transformation is called T-duality, which relates string theories compactified on circles of different radii.
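For orientation, recall the standard tensile closed-string spectrum on a circle (a textbook formula quoted here in our conventions; it is not derived in this paper), with momentum number \(K\), winding number \(W\) and levels \(N,\widetilde{N}\):

\[m^{2}=\frac{K^{2}}{R^{2}}+\frac{W^{2}R^{2}}{\alpha^{\prime 2}}+\frac{2}{\alpha^{\prime}}\left(N+\widetilde{N}-2\right),\qquad N-\widetilde{N}=KW,\]

which is manifestly invariant under the simultaneous exchange \(K\leftrightarrow W\) and \(R\to\alpha^{\prime}/R\): the first two terms trade places, while the level matching constraint is symmetric in \(K\) and \(W\).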
At specific points on the moduli space given by \(R=\sqrt{\alpha^{\prime}}\), i.e. at the self-dual point of the above transformation, we get new massless scalar and vector states. 3

Footnote 3: Note that tensile twisted bosonic string theory in a compactified background has been studied in [68, 69]. This will be important for the discussions in the current manuscript.

#### Our present work: Compactified tensionless strings

This naturally brings us to the question we address in the current paper: Do compactified string theories make sense in the tensionless regime as well? We will answer that question in the affirmative, while noting that regardless of the quantum vacuum chosen, it is important to work with multiple target space dimensions in the case of null strings. In this work, we will confine ourselves to the notion that these target spaces are necessarily (\(D\) dimensional) Minkowski spaces. We should then ask, how does compactification change the spectra of the tensionless theories based on the three vacua we have been discussing?

But even before we jump to the question of the spectrum, the motivation for studying compactified target spaces for tensionless strings is already linked to various applications. As mentioned earlier, tensionless strings in a compactified background have already been used as a building block in the construction of [15], specifically for the oscillator vacuum. It has been postulated that the event horizon of a BTZ black hole coincides with an ensemble of tensionless string states. In this setup, the angular direction on the horizon was recognized as the compactified coordinate in the target space that the null string wraps. The BTZ black hole microstates have thus been identified as the physical states of the tensionless string theory constructed on the oscillator vacuum. It was found that the combinatorics of those microstates results in the Bekenstein-Hawking entropy along with logarithmic corrections.

In other developments, string theory in the zero tension limit gives rise to infinitely many massless higher-spin fields with consistent mutual interactions [7; 8]. With the recent progress in discussing higher-spin fields in flat space (see [70] for example), compact sectors of flat space tensionless strings may be an interesting new realm to explore. Moreover, since novel phases coming out of Hagedorn transitions are closely connected to tensionless strings, it needs to be pointed out that the very high energy limit of the string density of states takes the universal Hagedorn form only when one considers a compact target space [71; 72]. As shown in [71], the topology of this compact space does not affect the nature of the transition. All of these intriguing ideas, which are still nascent and require dedicated discussions, make taking first steps towards deciphering compactified null strings an important problem.

### Plan of the paper

The rest of the paper is organised in the following way: In section (2), we revisit the analysis of classical and quantum tensionless closed string theory. We construct the Weyl invariant classical action following [16] and discuss its symmetries. Then we briefly review the quantum structure of the tensionless theory following [61] by analysing the imposition of constraints. In section (3), we introduce the machinery to study the effect of compactification on the three inequivalent descriptions of quantum theories based on three distinct vacua.
We discuss compactifications of one or multiple spatial dimensions, on a circle (\(S^{1}\)) or a \(d\)-dimensional torus (\(T^{d}\)) respectively. Sections (4), (5) and (6) are dedicated to detailed discussions of the effect of compactification on the level matching condition and the mass spectrum, separately for all the three inequivalent vacua, namely Oscillator, Induced and Flipped. We intricately discuss the distinct structure of these three theories and focus on potential implications. In section (7) we summarise and conclude our discussions with further comments and future perspectives. Appendices at the end contain details of computations and extra discussions.

## 2 Review of tensionless strings: Classical and Quantum

In this section we revisit the classical and quantum aspects of bosonic tensionless string theory. In the first part, we review the classical aspects of tensionless string theory, both from the intrinsic as well as the limiting approach. Then we move on to quantize the bosonic tensionless closed string theory and discuss different ways of imposing quantum constraints on the physical states, resulting in three distinct inequivalent quantum theories. We discuss in detail all the three consistent quantum tensionless string theories based on three distinct vacua, namely the Oscillator, Induced and Flipped vacuum.

### Classical tensionless strings

Following the method introduced in [16], we use the Hamiltonian formalism to construct the Weyl invariant action from the Nambu-Goto action of the tensile theory, where the tensionless limit can be imposed. Under this limit, the metric density \(T\sqrt{-g}g^{\alpha\beta}\) turns out to be degenerate and hence is replaced by a rank one matrix \(V^{\alpha}V^{\beta}\). This leads to the following form of the tensionless string action:

\[S=\int d^{2}\xi\ V^{\alpha}V^{\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\eta_{\mu\nu}, \tag{2.1}\]

where \(V^{\alpha}\) is the vector density, \(\xi^{\alpha}\) represents the worldsheet coordinates, \(X^{\mu}\) are the spacetime coordinates and \(\eta_{\mu\nu}\) is the flat background metric. The above action is invariant under worldsheet diffeomorphisms, resulting in the following gauge fixing:

\[V^{\alpha}=(v,0), \tag{2.2}\]

where \(v\) is a constant. However, even after fixing this gauge, there is still a residual symmetry left over, analogous to the tensile theory. This residual symmetry in the tensionless string theory turns out to be the BMS\({}_{3}\) (Bondi-Metzner-Sachs) algebra with generators \((L_{n},M_{n})\) satisfying:

\[[L_{m},L_{n}]=(m-n)L_{m+n},\ \ [L_{m},M_{n}]=(m-n)M_{m+n},\ \ [M_{m},M_{n}]=0. \tag{2.3}\]

This residual symmetry algebra is without any central extension and hence can be identified as the classical part of the \(3d\) Bondi-Metzner-Sachs (BMS\({}_{3}\)) algebra [16; 36]. Remember, the analogous residual symmetry in the tensile case for the closed string is two copies of the Virasoro algebra.

#### Mode expansions

The equations of motion obtained for the action (2.1) are:

\[\partial_{\alpha}(V^{\alpha}V^{\beta}\partial_{\beta}X^{\mu})=0,\quad V^{\beta}\gamma_{\alpha\beta}=0, \tag{2.4}\]

where \(\gamma_{\alpha\beta}=\partial_{\alpha}X^{\mu}\partial_{\beta}X^{\nu}\eta_{\mu\nu}\) is the induced metric on the worldsheet. The second equation in (2.4) indicates that the metric \(\gamma_{\alpha\beta}\) is degenerate [16].
The above equations of motion simplify in the gauge \(V^{\alpha}=(1,0)\) to:

\[\ddot{X}^{\mu}=0;\ \ \ \ \dot{X}\cdot X^{\prime}=0=T_{1},\quad\dot{X}^{2}=0=T_{2}, \tag{2.5}\]

where \(T_{1}\) and \(T_{2}\) are the components of the energy momentum tensor of the worldsheet theory. We now concentrate on finding the solutions of the equations of motion. The mode expansion which solves the above equations of motion can be written in general as:

\[X^{\mu}(\tau,\sigma)=x^{\mu}+\sqrt{\frac{c^{\prime}}{2}}A_{0}^{\mu}\sigma+\sqrt{\frac{c^{\prime}}{2}}B_{0}^{\mu}\tau+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(A_{n}^{\mu}-in\tau B_{n}^{\mu})e^{-in\sigma}. \tag{2.6}\]

Note that \(c^{\prime}\) is a parameter with dimension \(\left[L\right]^{2}\) to make everything consistent. For \(X^{\mu}\) to satisfy the closed string boundary condition given by \(X^{\mu}(\tau,\sigma)=X^{\mu}(\tau,\sigma+2\pi)\), \(A_{0}^{\mu}\) must be zero. We now define the generators of the residual symmetry algebra in terms of the oscillator modes \((A,B)\) as:

\[L_{n}=\frac{1}{2}\sum_{m}A_{-m}\cdot B_{m+n},\ \ \ \ M_{n}=\frac{1}{2}\sum_{m}B_{-m}\cdot B_{m+n}. \tag{2.7}\]

Using the above relation on the two constraints in (2.5), we obtain the expression for the energy momentum tensor in terms of the generators of the BMS\({}_{3}\) algebra as follows:

\[T_{1}(\tau,\sigma)=\frac{1}{2\pi}\sum_{n}(L_{n}-in\tau M_{n})e^{-in\sigma},\ \ \ T_{2}(\tau,\sigma)=\frac{1}{2\pi}\sum_{n}M_{n}e^{-in\sigma}. \tag{2.8}\]

We now proceed to compute the algebra satisfied by these modes. The Poisson brackets between \(X\) and \(P\) require

\[\{A_{m}^{\mu},A_{n}^{\nu}\}_{P.B}\ =\ \{B_{m}^{\mu},B_{n}^{\nu}\}_{P.B}=0,\ \ \ \ \{A_{m}^{\mu},B_{n}^{\nu}\}_{P.B}=-2im\delta_{m+n}\eta^{\mu\nu}. \tag{2.9}\]

We can clearly see here that this is not the harmonic oscillator algebra. In order to get the algebra for the same, we define new modes in the following way [61],

\[C_{n}^{\mu}=\frac{1}{2}(A_{n}^{\mu}+B_{n}^{\mu}),\ \ \ \tilde{C}_{n}^{\mu}=\frac{1}{2}(-A_{-n}^{\mu}+B_{-n}^{\mu}), \tag{2.10}\]

which satisfy an algebra similar to the oscillator modes of the tensile string case. So, the Poisson brackets now take the following form

\[\{C_{m}^{\mu},C_{n}^{\nu}\}=-im\delta_{m+n,0}\eta^{\mu\nu},\ \ \ \{\tilde{C}_{m}^{\mu},\tilde{C}_{n}^{\nu}\}=-im\delta_{m+n,0}\eta^{\mu\nu},\ \ \ \{C_{m}^{\mu},\tilde{C}_{n}^{\nu}\}=0. \tag{2.11}\]

We call this the oscillator basis of the tensionless string. We can now write the mode expansion (2.6) in terms of these new modes as

\[\begin{split}X^{\mu}(\tau,\sigma)=x^{\mu}+&\sqrt{\frac{c^{\prime}}{2}}(C_{0}^{\mu}-\tilde{C}_{0}^{\mu})\sigma+\sqrt{\frac{c^{\prime}}{2}}(C_{0}^{\mu}+\tilde{C}_{0}^{\mu})\tau\\ +&i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\left[(C_{n}^{\mu}-\tilde{C}_{-n}^{\mu})-in\tau(C_{n}^{\mu}+\tilde{C}_{-n}^{\mu})\right]e^{-in\sigma},\end{split} \tag{2.12}\]

where periodicity of \(X^{\mu}\) demands \(C_{0}^{\mu}\) to be equal to \(\tilde{C}_{0}^{\mu}\), resulting in the vanishing of the second term.
Analogous to the tensile string, we can split the above mode expansion of the tensionless string in terms of the \(C^{\mu}\) and \(\tilde{C}^{\mu}\) oscillators, representing "left" and "right" modes respectively [61], as:

\[X_{L}^{\mu}=\frac{x^{\mu}}{2}+\sqrt{\frac{c^{\prime}}{2}}C_{0}^{\mu}(\tau+\sigma)+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(C_{n}^{\mu}-in\tau C_{n}^{\mu})e^{-in\sigma}, \tag{2.13}\]
\[X_{R}^{\mu}=\frac{x^{\mu}}{2}+\sqrt{\frac{c^{\prime}}{2}}\tilde{C}_{0}^{\mu}(\tau-\sigma)+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(\tilde{C}_{n}^{\mu}-in\tau\tilde{C}_{n}^{\mu})e^{in\sigma}, \tag{2.14}\]

where \(C_{0}^{\mu}=\tilde{C}_{0}^{\mu}=\sqrt{\frac{c^{\prime}}{2}}k^{\mu}\) are related to the momentum of the tensionless string.

#### Limit from tensile closed strings

The algebra for the modes in the oscillator basis of the tensionless string has been derived from the equations of motion. This is called the "intrinsic" approach. However, it is crucial to check the result from the "limiting" approach. Following [36], we take a suitable limit on the mode expansion of the tensile string theory and verify that we arrive at an identical expression for the tensionless case. For the tensile closed string, the expression for the mode expansion is

\[X^{\mu}(\tau,\sigma)=x^{\mu}+2\sqrt{2\alpha^{\prime}}\alpha_{0}^{\mu}\tau+i\sqrt{2\alpha^{\prime}}\sum_{n\neq 0}\frac{1}{n}\left[\alpha_{n}^{\mu}e^{-in(\tau+\sigma)}+\tilde{\alpha}_{n}^{\mu}e^{-in(\tau-\sigma)}\right]. \tag{2.15}\]

Here the zeroth modes for the left and right moving oscillators are equal. Now, in order to get to the tensionless strings, we take the following limit on the worldsheet coordinates

\[\tau\rightarrow\epsilon\tau,\ \ \ \ \sigma\rightarrow\sigma\ \ \ \text{and}\ \ \ \alpha^{\prime}\to c^{\prime}/\epsilon,\ \ \epsilon\to 0. \tag{2.16}\]

Here \(c^{\prime}\) is a finite parameter that takes care of the mass dimensions. In this limit, the above mode expansion reduces to the following form:

\[X^{\mu}(\tau,\sigma)=x^{\mu}+2\sqrt{2\epsilon c^{\prime}}\alpha_{0}^{\mu}\tau+i\sqrt{2c^{\prime}}\sum_{n\neq 0}\frac{1}{n}\left[\frac{\alpha_{n}^{\mu}-\tilde{\alpha}_{-n}^{\mu}}{\sqrt{\epsilon}}-in\tau\sqrt{\epsilon}(\alpha_{n}^{\mu}+\tilde{\alpha}_{-n}^{\mu})\right]e^{-in\sigma}. \tag{2.17}\]

We now compare (2.17) with (2.6) to find the relation of \((\alpha,\tilde{\alpha})\) with the \(A\)'s and \(B\)'s:

\[A_{n}^{\mu}=\frac{1}{\sqrt{\epsilon}}(\alpha_{n}^{\mu}-\tilde{\alpha}_{-n}^{\mu}),\ \ \ B_{n}^{\mu}=\sqrt{\epsilon}(\alpha_{n}^{\mu}+\tilde{\alpha}_{-n}^{\mu}). \tag{2.18}\]

Using (2.10), we can also compute the relation between the tensile oscillators \((\alpha,\tilde{\alpha})\) and the tensionless oscillators \((C,\tilde{C})\). They are related through a Bogoliubov transformation given by

\[\begin{split}&C_{n}^{\mu}=\frac{1}{2}\Big{(}\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\Big{)}\alpha_{n}^{\mu}+\frac{1}{2}\Big{(}\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\Big{)}\tilde{\alpha}_{-n}^{\mu}\\ &\tilde{C}_{n}^{\mu}=\frac{1}{2}\Big{(}\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\Big{)}\alpha_{-n}^{\mu}+\frac{1}{2}\Big{(}\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\Big{)}\tilde{\alpha}_{n}^{\mu}.\end{split} \tag{2.19}\]

We can clearly see here that in the strict limit \(\epsilon=1\), the oscillators in the \(C\) basis go to the \(\alpha\) basis of the tensile string. However, in the \(\epsilon\to 0\) limit, one gets the tensionless oscillators defined in (2.10).
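As a quick consistency check (ours; the original text states the result without this intermediate step), the map (2.19) preserves the brackets (2.11) for every value of \(\epsilon\). Writing \(C_{n}^{\mu}=a\,\alpha_{n}^{\mu}+b\,\tilde{\alpha}_{-n}^{\mu}\) with \(a=\frac{1}{2}(\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}})\) and \(b=\frac{1}{2}(\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}})\), one finds

\[\{C_{m}^{\mu},C_{n}^{\nu}\}=\big{(}a^{2}-b^{2}\big{)}(-im)\,\delta_{m+n}\,\eta^{\mu\nu}=-im\,\delta_{m+n}\,\eta^{\mu\nu},\qquad a^{2}-b^{2}=(a+b)(a-b)=\sqrt{\epsilon}\cdot\frac{1}{\sqrt{\epsilon}}=1,\]

while the mixed bracket \(\{C_{m}^{\mu},\tilde{C}_{n}^{\nu}\}\) cancels between the two cross terms, so (2.19) is indeed a Bogoliubov transformation for all \(\epsilon\).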
So, as we shift the value of \(\epsilon\) from 1 towards 0, we systematically land on the tensionless string from the tensile string.

### Quantization of tensionless strings

Now we proceed to quantize the classical bosonic tensionless string in the usual canonical formalism. We begin with the tensionless action (2.1), choosing the gauge \(V^{\alpha}=(1,0)\), which results in the constraints \(\dot{X}\cdot X^{\prime}=0=\dot{X}^{2}\). We now promote \(X^{\mu}\) and its canonical momentum \(P^{\mu}\) to operators obeying the commutation relations

\[\big{[}X^{\mu}(\tau,\sigma),P_{\nu}(\tau,\sigma^{\prime})\big{]}=i\delta(\sigma-\sigma^{\prime})\delta^{\mu}_{\nu}. \tag{2.20}\]

Using these relations in the mode expansion (2.6), we get the following commutators

\[[A^{\mu}_{m},A^{\nu}_{n}]=0=[B^{\mu}_{m},B^{\nu}_{n}],\ \ \ \ [A^{\mu}_{m},B^{\nu}_{n}]=2m\delta_{m+n}\eta^{\mu\nu},\ \ \ [x^{\mu},p^{\nu}]=i\eta^{\mu\nu}. \tag{2.21}\]

As we already mentioned earlier, the algebra of \((A,B)\) is not the harmonic oscillator algebra; the commutation relations of the \(C\) oscillators defined in (2.10), which do satisfy the harmonic oscillator algebra, are given by

\[[C^{\mu}_{m},C^{\nu}_{n}]=[\tilde{C}^{\mu}_{m},\tilde{C}^{\nu}_{n}]=m\eta^{\mu\nu}\delta_{m+n},\ \ \ \ [C^{\mu}_{m},\tilde{C}^{\nu}_{n}]=0. \tag{2.22}\]

Now, we use these oscillators to define a vacuum and build a Hilbert space on it. However, in order to get the physical string spectrum, we have to apply constraints on the Hilbert space. In the tensionless case, there are different ways to impose these constraints, leading to distinct but consistent quantum theories, which we discuss in detail below.

#### Quantum constraints on physical states

From the classical analysis part of this section, we have seen that the components of the energy momentum tensor vanish (2.5). However, when we quantize the theory, these components \(T_{1}\) and \(T_{2}\) are promoted to operators, and setting the entire operator to zero will be too strong a constraint. The most general constraint we impose on these operators is that all the matrix elements of the operators in the physical Hilbert space vanish, namely,

\[\langle phys^{\prime}|\,T_{1}\,|phys\rangle=\langle phys^{\prime}|\,T_{2}\,|phys\rangle=0. \tag{2.23}\]

In terms of the generators \(L_{n}\) and \(M_{n}\), these constraints boil down to

\[\langle phys^{\prime}|\,L_{n}\,|phys\rangle=0,\ \ \ \langle phys^{\prime}|\,M_{n}\,|phys\rangle=0,\ \ \forall n\in\mathbb{Z}. \tag{2.24}\]

As discussed in [61], we can get 9 possible ways to constrain our physical states which are consistent with the above relations. They can be listed as follows,

\[L_{n}\left|phys\right\rangle=0,\ (n>0),\quad\left\{\begin{aligned} &M_{m}\left|phys\right\rangle=0,\ (m>0)\\ &M_{m}\left|phys\right\rangle=0,\ (m\neq 0)\\ &M_{m}\left|phys\right\rangle=0,\ (\forall\ m)\end{aligned}\right. \tag{2.25a}\]
\[L_{n}\left|phys\right\rangle=0,\ (n\neq 0),\quad\left\{\begin{aligned} &M_{m}\left|phys\right\rangle=0,\ (m>0)\\ &M_{m}\left|phys\right\rangle=0,\ (m\neq 0)\\ &M_{m}\left|phys\right\rangle=0,\ (\forall\ m)\end{aligned}\right. \tag{2.25b}\]
\[L_{n}\left|phys\right\rangle=0,\ (\forall\ n),\quad\left\{\begin{aligned} &M_{m}\left|phys\right\rangle=0,\ (m>0)\\ &M_{m}\left|phys\right\rangle=0,\ (m\neq 0)\\ &M_{m}\left|phys\right\rangle=0,\ (\forall\ m)\end{aligned}\right. \tag{2.25c}\]
A detailed calculation [61] however shows that out of the 9 possible consistent ways to impose constraints, only three possibilities are consistent with the BMS\({}_{3}\) algebra, resulting in three different quantum theories on three distinct vacua, namely the oscillator, Induced and flipped vacuum. The three consistent constraints are as follows:

\[L_{n}\left|phys\right\rangle=M_{n}\left|phys\right\rangle=0\quad(\forall\ n>0), \tag{2.26a}\]
\[L_{n}\left|phys\right\rangle\neq 0,\quad M_{n}\left|phys\right\rangle=0\quad(\forall\ n\neq 0), \tag{2.26b}\]
\[L_{n}\left|phys\right\rangle\neq 0,\quad M_{n}\left|phys\right\rangle\neq 0\quad(\forall\ n). \tag{2.26c}\]

Except for the case of the flipped vacuum, we assume the vacuum state to be a physical state, i.e. \(\left|phys\right\rangle=\left|0\right\rangle\)4. So, the above physical state conditions correspond to the Flipped, Induced and Oscillator vacuum respectively. In what follows, we will review the structure of the theories built upon these three vacua. The reader is directed to [61] for a very detailed account of the same.

Footnote 4: We shall see later that the physical state conditions in the flipped vacuum demand that for non-compact target spacetime, the vacuum itself won't be a physical state. The only physical state for non-compact target spacetime is of level 2.

### Oscillator vacuum

In this section we start with the canonical quantization of the tensionless string in the oscillator vacuum. The physical state condition on which this theory is built is the weakest of the three and is given below in what is known as the "sandwich" form:

\[\left\langle phys^{\prime}\right|L_{n}-a_{L}\delta_{n,0}\left|phys\right\rangle=0,\quad\left\langle phys^{\prime}\right|M_{n}-a_{M}\delta_{n,0}\left|phys\right\rangle=0, \tag{2.27}\]

where \(a_{L}\) and \(a_{M}\) are the normal ordering constants of \(L_{0}\) and \(M_{0}\) respectively. This theory is constructed on the oscillator vacuum, which is defined as

\[C_{n}^{\mu}\left|0,k\right\rangle_{c}=\tilde{C}_{n}^{\mu}\left|0,k\right\rangle_{c}=0\quad\forall n>0, \tag{2.28}\]

where the oscillators \(\{C,\tilde{C}\}\) satisfy the commutator relations (2.22). The expansion of the bosonic field \(X^{\mu}(\sigma,\tau)\) in terms of these oscillators is given in (2.12). Subsequently, the generators of the worldsheet BMS algebra \(\{L_{n},M_{n}\}\) can be expressed in terms of \(\{C,\tilde{C}\}\) as follows:

\[\begin{split}L_{n}&=\frac{1}{2}\sum_{m}\left[C_{-m}\cdot C_{m+n}-\tilde{C}_{-m}\cdot\tilde{C}_{m-n}\right],\\ M_{n}&=\frac{1}{2}\sum_{m}\left[C_{-m}\cdot C_{m+n}+\tilde{C}_{-m}\cdot\tilde{C}_{m-n}+2C_{-m}\cdot\tilde{C}_{-m-n}\right].\end{split} \tag{2.29}\]

The expressions for \(L_{0}\) and \(M_{0}\) become

\[L_{0}=\mathcal{N}-\widetilde{\mathcal{N}},\quad M_{0}=c^{\prime}k^{2}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}, \tag{2.30}\]

where \(k^{2}=-m^{2}\) and the operators are given by:

\[\mathcal{N}=\sum_{m>0}C_{-m}\cdot C_{m};\quad\ \widetilde{\mathcal{N}}=\sum_{m>0}\tilde{C}_{-m}\cdot\tilde{C}_{m};\quad\ X=\sum_{m>0}C_{m}\cdot\tilde{C}_{m}. \tag{2.31}\]

\(\mathcal{N}\) and \(\widetilde{\mathcal{N}}\) here are number operators, and the entire Hilbert space can be spanned by using their eigenstates as a basis.
A generic eigenstate of \(\mathcal{N}\) and \(\widetilde{\mathcal{N}}\) is given by

\[\ket{r,s}=\sum_{j}\rho_{j}\Bigg{(}\prod_{i=1}^{p}C_{-m_{i}}^{a_{i}}\prod_{j=1}^{q}\tilde{C}_{-n_{j}}^{b_{j}}\Bigg{)}_{j}\ket{0,k^{\mu}}_{c}, \tag{2.32}\]

where \(a_{i}\) and \(b_{j}\) are the powers of the \(C_{-m_{i}}\) and \(\tilde{C}_{-n_{j}}\) oscillators respectively. The level of the state is \((r+s)\), where \(r\) and \(s\) are given by

\[r=\sum_{i}^{p}a_{i}m_{i},\quad\ s=\sum_{i}^{q}b_{i}n_{i}. \tag{2.33}\]

Let us apply the \(L_{0}\) physical state condition as in (2.27) with \(\ket{phys}=\ket{phys^{\prime}}=\ket{0,k_{0}^{\mu}}\). This immediately leads us to:

\[\bra{0,k_{0}^{\mu}}L_{0}\ket{0,k_{0}^{\mu}}=a_{L}. \tag{2.34}\]

That means the only way to ensure that the vacuum is a physical state is to demand that \(a_{L}=0\). As a consequence, sandwiching \(L_{0}\) with the general state \(\ket{r,s}\) and applying the physical state condition (2.27) with \(a_{L}=0\), we can see that

\[\bra{r,s}L_{0}\ket{r,s}=\bra{r,s}\left(\mathcal{N}-\widetilde{\mathcal{N}}\right)\ket{r,s}=0, \tag{2.35}\]

which gives us the level matching condition for \(\ket{r,s}\) being a physical state. On the other hand, the \(M_{0}\) physical state condition from (2.27) on a level matched state \(\ket{n,n}\) will give us the mass of the level matched state. As argued in [61], the \(M_{0}\) condition leads us to the following mass spectrum,

\[m^{2}\ket{n,n}=\frac{1}{c^{\prime}}(2n-a_{M})\ket{n,n}. \tag{2.36}\]

Hence, all that is left for us to do is to determine the normal ordering constant \(a_{M}\). Just like in the case of tensile string theory, here too working in light-cone gauge comes in handy when we try to determine \(a_{M}\) as well as the critical dimension. One way of determining them is to calculate the normal ordering of \(M_{0}\) directly. In light cone gauge we will be able to find its expression in terms of the critical dimension \(D\). Then we impose spacetime Lorentz symmetry and find both \(a_{M}\) and \(D\). This approach differs from the one used in [61] and has not been attempted previously. We determine \(a_{M}\) and \(D\) in this method in Appendix A and find that \(a_{M}=2\) for \(D=26\). Another more rigorous method of calculating them can be found in [62], and it also gives the same result.

#### Analysis of spectrum

Based on (2.36) we can briefly discuss the nature of the particles at various levels. Let us start from the vacuum itself, which is given by \(n=0\), and further use \(a_{M}=2\). Like tensile string theory, here too we get a tachyonic vacuum with mass given by

\[m^{2}\ket{0,k^{\mu}}_{c}=-\frac{2}{c^{\prime}}\ket{0,k^{\mu}}_{c}. \tag{2.37}\]

A generic state with \(n=1\) is given by5

Footnote 5: In Appendix A we have used lightcone quantization just to determine \(a_{M}\). Here we continue to work in covariant quantization.

\[\ket{2}=\rho_{\mu\nu}C_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0,k^{\mu}}_{c}. \tag{2.38}\]

Just like the level 1 states of tensile string theory, we can decompose these states into traceless symmetric, antisymmetric and singlet (trace) parts. The traceless symmetric part corresponds to a massless symmetric tensor field \(G_{\mu\nu}(X)\) of spin 2, which can be identified with the metric of spacetime [73]. The antisymmetric massless tensor field gives us the Kalb-Ramond background field \(B_{\mu\nu}(X)\). The trace part gives us a scalar field \(\Phi(X)\), which can be identified as the dilaton in this case.
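Indeed, the masslessness of these states is a one-line consequence of (2.36) with \(a_{M}=2\) (a quick check of ours):

\[m^{2}\ket{1,1}=\frac{1}{c^{\prime}}(2\cdot 1-2)\ket{1,1}=0.\]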
Furthermore, the mass spectrum (2.36) with \(a_{M}=2\) clearly shows that for level \(n>1\), we will have higher spin massive states (also see [61]).

### Induced Vacuum

Our discussion of the Induced vacuum theory is based on [14; 61]. If we directly take the tensionless limit of the quantum tensile string theory constructed on a highest weight vacuum, it will lead us to the tensionless string theory constructed on the Induced vacuum. Similarly, taking the ultrarelativistic limit on the highest weight representation of the Virasoro algebra leads to the Induced representation of the BMS algebra [74]. The physical state condition of the tensile string theory under this limit boils down to the following conditions

\[\bra{phys^{\prime}}L_{n}\ket{phys}=0\ \ \ \ \forall n,\ \ \ \ M_{n}\ket{phys}=0,\ \ \ \ \forall n\neq 0. \tag{2.39}\]

The vacuum on which this theory is built is the explicit tensionless limit of the tensile vacuum. Let us recall the definition of the tensile vacuum

\[\alpha_{n}^{\mu}\ket{0,k^{\mu}}_{\alpha}=\tilde{\alpha}_{n}^{\mu}\ket{0,k^{\mu}}_{\alpha}=0\ \ \ \ \forall n>0. \tag{2.40}\]

In terms of the oscillators \(\{A,B\}\) (2.18), this definition can be rewritten as

\[\left(\sqrt{\epsilon}A_{n}^{\mu}+\frac{1}{\sqrt{\epsilon}}B_{n}^{\mu}\right)\ket{0,k^{\mu}}_{\alpha}=\Big{(}-\sqrt{\epsilon}A_{-n}^{\mu}+\frac{1}{\sqrt{\epsilon}}B_{-n}^{\mu}\Big{)}\ket{0,k^{\mu}}_{\alpha}=0\quad\forall n>0. \tag{2.41}\]

In the above equation, we have used the inverse relation to (2.18). The new vacuum arising in the explicit tensionless limit (\(\epsilon=0\)) is given by

\[B_{n}^{\mu}\ket{0,k^{\mu}}_{I}=0\quad\forall n\neq 0,\quad B_{0}^{\mu}\ket{0,k^{\mu}}_{I}=k^{\mu}\ket{0,k^{\mu}}_{I}. \tag{2.42}\]

This state does satisfy the physical state condition in (2.39). The action of \(M_{0}\) on this state will give us the mass of the state. Here it is worth highlighting that since the \(B\)'s commute with each other, the normal ordering constant \(a_{M}\) for this theory is \(0\). This results in the following:

\[M_{0}\ket{0,k^{\mu}}_{I}=\sum_{n}B_{-n}\cdot B_{n}\ket{0,k^{\mu}}_{I}=\left(\sum_{n\neq 0}B_{-n}\cdot B_{n}+B_{0}^{2}\right)\ket{0,k^{\mu}}_{I}=0. \tag{2.43}\]

Applying (2.42) to (2.43) leads us to

\[B_{0}^{2}\ket{0,k^{\mu}}_{I}=k^{2}\ket{0,k^{\mu}}_{I}=0. \tag{2.44}\]

Applying the \(L_{n}\) physical state condition on the Induced vacuum state \(\ket{0,k^{\mu}}_{I}\) we get

\[\bra{0,k^{\mu}}L_{n}\ket{0,k^{\mu}}=\bra{0,k^{\mu}}A_{n}\cdot B_{0}\ket{0,k^{\mu}}=c^{\prime}k\cdot\bra{0,k^{\mu}}A_{n}\ket{0,k^{\mu}}=0. \tag{2.45}\]

Recalling that \(A_{0}^{\mu}=0\) due to the periodicity condition, the \(L_{0}\) condition is trivially satisfied by the Induced vacuum. The fate of tensile perturbative states under the tensionless limit has been determined in [14]. We discuss this in Appendix B and see that all the perturbative states condense on the Induced vacuum. There we also briefly touch on the non-perturbative degrees of freedom emerging in the tensionless limit.

### Flipped vacuum

The tensionless string theory constructed on the Flipped vacuum corresponds to the highest weight representation of the BMS algebra. The physical state conditions for this theory mirror those of its tensile cousin:

\[\left(L_{n}-a_{L}\delta_{n,0}\right)\ket{phys}=0\quad\forall n\geq 0,\quad\left(M_{n}-a_{M}\delta_{n,0}\right)\ket{phys}=0\quad\forall n\geq 0. \tag{2.46}\]
The Flipped vacuum itself can be defined in terms of the oscillators \(\{C,\tilde{\mathcal{C}}\}\) as

\[C_{n}^{\mu}\ket{0,k}_{A}=\tilde{\mathcal{C}}_{n}^{\mu}\ket{0,k}_{A}=0\quad\forall n>0, \tag{2.47}\]

where we have defined the oscillator \(\tilde{\mathcal{C}}\) as

\[\tilde{\mathcal{C}}_{n}=\tilde{C}_{-n}, \tag{2.48}\]

i.e. the roles of creation and annihilation operators are "flipped" in this sector 6, so that

Footnote 6: This is much like the parent "twisted" string theory where the vacuum condition changes to \[\alpha_{n}^{\mu}\ket{0,k^{\mu}}_{\alpha}=\tilde{\alpha}_{-n}^{\mu}\ket{0,k^{\mu}}_{\alpha}=0\ \ \ \ \ \forall n>0. \tag{2.49}\]

\[[C_{m}^{\mu},C_{n}^{\nu}]=m\eta^{\mu\nu}\delta_{m+n},\ \ \ \ [\tilde{\mathcal{C}}_{m}^{\mu},\tilde{\mathcal{C}}_{n}^{\nu}]=-m\eta^{\mu\nu}\delta_{m+n},\ \ \ \ [C_{m}^{\mu},\tilde{\mathcal{C}}_{n}^{\nu}]=0. \tag{2.50}\]

The generators of the residual symmetry algebra in terms of these new oscillators have the following form

\[\begin{split}L_{n}&=\frac{1}{2}\sum_{m}\left[C_{-m}\cdot C_{m+n}-\tilde{\mathcal{C}}_{m}\cdot\tilde{\mathcal{C}}_{-m+n}\right],\\ M_{n}&=\frac{1}{2}\sum_{m}\left[C_{-m}\cdot C_{m+n}+\tilde{\mathcal{C}}_{m}\cdot\tilde{\mathcal{C}}_{-m+n}+2C_{-m}\cdot\tilde{\mathcal{C}}_{m+n}\right].\end{split} \tag{2.51}\]

The bosonic scalar field \(X^{\mu}(\tau,\sigma)\) can be expressed in terms of these new oscillators as

\[X^{\mu}(\tau,\sigma)=x^{\mu}+2\sqrt{\frac{c^{\prime}}{2}}C_{0}^{\mu}\tau+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\left[(C_{n}^{\mu}-\tilde{\mathcal{C}}_{-n}^{\mu})-in\tau(C_{n}^{\mu}+\tilde{\mathcal{C}}_{-n}^{\mu})\right]e^{-in\sigma}. \tag{2.52}\]

Now, \(L_{0}\) and \(M_{0}\) in terms of the new oscillators become

\[L_{0}=\mathcal{N}+\bar{\mathcal{N}},\ \ \ \ \ \ \ M_{0}=c^{\prime}k^{2}+\mathcal{N}-\bar{\mathcal{N}}+X+Y, \tag{2.53}\]

where \(\mathcal{N},\bar{\mathcal{N}},X\) and \(Y\) are defined as

\[\mathcal{N}=\sum_{m>0}C_{-m}\cdot C_{m},\ \ \bar{\mathcal{N}}=-\sum_{m>0}\tilde{\mathcal{C}}_{-m}\cdot\tilde{\mathcal{C}}_{m},\ \ X=\sum_{m>0}C_{-m}\cdot\tilde{\mathcal{C}}_{m},\ \ Y=\sum_{m>0}\tilde{\mathcal{C}}_{-m}\cdot C_{m}. \tag{2.54}\]

Note the negative sign in front of the \(\bar{\mathcal{N}}\) operator in this case. In [61] it has been shown that taking the ultrarelativistic limit of tensile twisted string theory gives us \(a_{L}=2\), \(a_{M}=0\). The same values of \(a_{L}\) and \(a_{M}\) have been reproduced in the light-cone quantization method [62], and also in the path-integral method [63]. The critical dimension \(D\) of this theory, too, has been found to be 26 in [62; 63], like the parent twisted theory. This value of \(a_{L}\), along with the \(L_{0}\) condition (2.46), demands that only level 2 states will be physical. The other physical state conditions impose more constraints on these level 2 states. These states can be obtained from physical states in the parent twisted theory by taking a tensionless limit. For some more detailed discussion on the physical states of this theory, the reader is referred to Appendix C.

## 3 Compactification of Target Space

This section deals with the effects of compactification which are common to all three quantum tensionless string theories discussed in the previous section. We consider both cases: one spatial coordinate compactified on a circle \(S^{1}\), and multiple spatial coordinates compactified on a \(d\)-dimensional torus \(T^{d}\).
### Compactification on Circle \(S^{1}\)

We begin by rewriting the solution of the intrinsic equations of motion of the tensionless closed string [36] given in (2.6):

\[X^{\mu}(\tau,\sigma)=x^{\mu}+\sqrt{\frac{c^{\prime}}{2}}A_{0}^{\mu}\sigma+\sqrt{\frac{c^{\prime}}{2}}B_{0}^{\mu}\tau+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(A_{n}^{\mu}-in\tau B_{n}^{\mu})e^{-in\sigma}.\]

Here \(\mu\in\{0,1,...,25\}\) and \(D=26\) is the dimension of the target spacetime in this case. The algebra satisfied by the modes \(A_{n}\) and \(B_{n}\) is given in (2.9). We now choose to compactify the coordinate \(X^{25}\) on a circle of radius \(R\). In that case, we are identifying the following two points

\[X^{25}\sim X^{25}+2\pi RW,\hskip 28.452756ptW\in\mathbb{Z}, \tag{3.1}\]

where \(X^{25}\) parametrizes a 1-dimensional circle \(S^{1}\) of radius \(R\) and the integer \(W\) parametrizes the winding number of the string. The function \(X^{25}(\tau,\sigma)\) maps the closed string \(0\leq\sigma\leq 2\pi\) to the 1-dimensional circle (\(0\leq X^{25}\leq 2\pi R\)). Therefore we need to modify the periodicity condition of a closed string in this direction,

\[X^{25}(\tau,\sigma+2\pi)=X^{25}(\tau,\sigma)+2\pi RW. \tag{3.2}\]

The extra term \(2\pi RW\) gives rise to strings that are closed only due to the compactification (i.e. they are closed only on the circle \(S^{1}\) and not on \(\mathbb{R}\)). When we quantize this theory, this gives rise to winding states, characterised by the winding number \(W\). We now write the mode expansion of \(X^{25}(\tau,\sigma)\) and see the consequence of compactification

\[X^{25}(\tau,\sigma)=x^{25}+\sqrt{\frac{c^{\prime}}{2}}A_{0}^{25}\sigma+\sqrt{\frac{c^{\prime}}{2}}B_{0}^{25}\tau+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(A_{n}^{25}-in\tau B_{n}^{25})e^{-in\sigma}. \tag{3.3}\]

As shown in [36], \(k^{\mu}=\sqrt{\frac{1}{2c^{\prime}}}B_{0}^{\mu}\). Keeping in mind the fact that the wave function \(e^{ik^{25}X^{25}}\) must be single-valued, we must restrict \(k^{25}\) to discrete values and finally end up getting the following allowed values of \(B_{0}^{25}\):

\[B_{0}^{25}=\sqrt{2c^{\prime}}\Big{(}\frac{K}{R}\Big{)}\hskip 28.452756ptK\in\mathbb{Z}. \tag{3.4}\]

The modified periodicity condition (3.2) demands

\[A_{0}^{25}=\sqrt{\frac{2}{c^{\prime}}}RW. \tag{3.5}\]

Therefore the expansion (3.3) takes the following form

\[X^{25}=x^{25}+RW\sigma+\Big{(}\frac{c^{\prime}K}{R}\Big{)}\tau+i\sqrt{\frac{c^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}(A_{n}^{25}-in\tau B_{n}^{25})e^{-in\sigma}. \tag{3.6}\]

For \(\mu\neq 25\), \(A_{0}^{\mu}=0\) in order to maintain periodicity in \(\sigma\). As highlighted in section (2.1), in order to get the harmonic oscillator algebra [36] we need to introduce the new modes \((C,\tilde{C})\) defined in (2.10). Using equations (3.4), (3.5) in (2.10), we find the modified zero modes in the compactified case,

\[C_{0}^{25}=\frac{1}{2}\left[\sqrt{2c^{\prime}}\left(\frac{K}{R}\right)+\sqrt{\frac{2}{c^{\prime}}}RW\right],\ \ \ \ \tilde{C}_{0}^{25}=\frac{1}{2}\left[\sqrt{2c^{\prime}}\left(\frac{K}{R}\right)-\sqrt{\frac{2}{c^{\prime}}}RW\right]. \tag{3.7}\]

We can see here that for the compactified dimension, \(C_{0}^{25}\) and \(\tilde{C}_{0}^{25}\) take different values. However, for the non-compactified dimensions indexed by \(\mu=\{0,1,\cdots,24\}\), we have the usual \(C_{0}^{\mu}=\tilde{C}_{0}^{\mu}=\sqrt{\frac{c^{\prime}}{2}}k^{\mu}\).
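To spell out the single-valuedness argument behind (3.4) and (3.5) (a short recap of ours): shifting \(X^{25}\to X^{25}+2\pi R\) must leave the wave function invariant,

\[e^{ik^{25}(X^{25}+2\pi R)}=e^{ik^{25}X^{25}}\ \Rightarrow\ k^{25}=\frac{K}{R},\ K\in\mathbb{Z}\ \Rightarrow\ B_{0}^{25}=\sqrt{2c^{\prime}}\,k^{25}=\sqrt{2c^{\prime}}\Big{(}\frac{K}{R}\Big{)},\]

while comparing the term linear in \(\sigma\) in (3.3) with the shifted periodicity (3.2) gives \(\sqrt{c^{\prime}/2}\,A_{0}^{25}\,(2\pi)=2\pi RW\), which is precisely (3.5).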
### Compactification on Torus \(T^{d}\)

In this subsection we are going to generalise our analysis to a background with \(d\) dimensions compactified, resulting in a \(26-d\) (\(D=26\)) dimensional effective theory. Here we have a \(d\)-dimensional torus \(T^{d}\) instead of a circle \(S^{1}\). On the torus we make the following identification for the compactified coordinates

\[X^{I}\sim X^{I}+2\pi RW^{I},\ \ \ I\in\{26-d,\cdots,25\}, \tag{3.8}\]

where the winding can be written as:

\[W^{I}=\sum_{i=1}^{d}\omega^{i}e_{i}^{I},\ \ \ \ \omega^{i}\in\mathbb{Z}. \tag{3.9}\]

The components of the metric in the compactified directions are assumed to be \(G_{IJ}=\delta_{IJ}\). Here \(e_{i}=\{e_{i}^{I}\}\) form the basis of a \(d\)-dimensional lattice \(\Lambda_{d}\). The momentum is denoted by \(K_{I}\). In order to make \(e^{iX^{I}K_{I}}\) single-valued, here again we have to demand \(W^{I}(RK_{I})\in\mathbb{Z}\), implying that \(RK_{I}\) has to reside in the lattice \(\Lambda_{d}^{*}\) which is dual to \(\Lambda_{d}\). That means \(K_{I}\) can be expressed as

\[RK_{I}=\sum_{i=1}^{d}k_{i}\ e_{I}^{*i}\implies K_{I}=\sum_{i=1}^{d}\frac{k_{i}}{R}\ e_{I}^{*i},\ \ \ \ k_{i}\in\mathbb{Z}, \tag{3.10}\]

where \(e^{*i}=\{e_{I}^{*i}\}\) form the basis of the dual lattice \(\Lambda_{d}^{*}\), and are dual to \(e_{i}\):

\[e_{i}\cdot e^{*j}=e_{i}^{I}e_{I}^{*j}=\delta_{i}^{j}. \tag{3.11}\]

The metrics on the lattices \(\Lambda_{d}\) and \(\Lambda_{d}^{*}\) are respectively defined as

\[g_{ij}=e_{i}\cdot e_{j}=e_{i}^{I}e_{j}^{J}\delta_{IJ},\ \ \ \ g_{ij}^{*}=e^{*i}\cdot e^{*j}=e_{I}^{*i}e_{J}^{*j}\delta^{IJ}=g^{ij}. \tag{3.12}\]

For our convenience we define the dimensionless field \(Y^{I}\) as

\[X^{I}=\sqrt{\frac{c^{\prime}}{2}}Y^{I}, \tag{3.13}\]

and we use (3.13) in what follows. The expansion of \(Y^{I}\) is again in terms of the oscillators \(A\) and \(B\):

\[Y^{I}=y^{I}+A^{I}_{0}\sigma+B^{I}_{0}\tau+i\sum_{n\neq 0}\frac{1}{n}(A^{I}_{n}-in\tau B^{I}_{n})e^{-in\sigma}. \tag{3.14}\]

Together, (3.8) and (3.10) imply that

\[B^{I}_{0}=\sqrt{2c^{\prime}}K^{I},\hskip 28.452756ptA^{I}_{0}=\sqrt{\frac{2}{c^{\prime}}}RW^{I}. \tag{3.15}\]

Expressing \(Y^{I}\) in terms of the oscillators \(\{C,\tilde{C}\}\) and splitting \(Y^{I}\) into left and right parts, we can write the following mode expansion

\[\begin{split}Y^{I}_{L}&=y^{I}_{L}+k^{I}_{L}(\tau+\sigma)+i\sum_{n\neq 0}\frac{1}{n}(C^{I}_{n}-in\tau C^{I}_{n})e^{-in\sigma},\\ Y^{I}_{R}&=y^{I}_{R}+k^{I}_{R}(\tau-\sigma)+i\sum_{n\neq 0}\frac{1}{n}(\tilde{C}^{I}_{n}-in\tau\tilde{C}^{I}_{n})e^{in\sigma}.\end{split} \tag{3.16}\]

Here, \(k^{I}_{L}\) and \(k^{I}_{R}\) are respectively the dimensionless left and right momenta. (3.8) and (3.10) together imply that

\[k^{I}_{L,R}=\frac{1}{\sqrt{2}}\Bigg{(}\sqrt{c^{\prime}}K^{I}\pm\frac{1}{\sqrt{c^{\prime}}}W^{I}R\Bigg{)}. \tag{3.17}\]

With the formalism in place, we are now ready to study the effect of compactification on the different quantum theories of tensionless strings. We will deal with the three quantum theories built on the three different vacua individually in the following sections. Our focus will be mainly on the effect of circle compactifications, and although we provide a quick look into the toroidal case, the details of it will be addressed in a subsequent work [75]. Before starting the upcoming sections, let us mention our notation: while summing over repeated indices without using a summation symbol, we mean a sum over all coordinates, including the compactified ones. We use the summation symbol when we sum over non-compact coordinates only.
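Before moving on, it is worth recording a one-line identity that follows from (3.17) (our own observation, anticipating the next section):

\[k_{L}^{2}-k_{R}^{2}=2R\,K^{I}W_{I}\ \xrightarrow{\ S^{1}\ }\ 2KW,\]

so for a circle compactification the difference of the zero-mode contributions to \(L_{0}\) is \(\frac{1}{2}(k_{L}^{2}-k_{R}^{2})=KW\), which is exactly the shift appearing in the modified level matching condition (4.3) below.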
## 4 Effect of compactification: Oscillator Vacuum

In this section we start by computing the level matching condition as well as the mass spectrum for the tensionless string theory constructed on the oscillator vacuum as defined in (2.28). We shall see that the level matching condition will be modified due to the difference between the zero modes of the oscillators \((C_{0}^{25},\tilde{C}_{0}^{25})\) derived in the earlier section.

### Modified level matching condition and mass spectrum

As already defined in subsection (2.3), the oscillator vacuum is annihilated by the oscillators \(C_{n}^{\mu}\) and \(\tilde{C}_{n}^{\mu}\). As mentioned earlier, we will assume the target space in this case is 26 dimensional Minkowski space for consistency, i.e. \(\mu=0,1,...,25\). The vacuum still remains an eigenstate of the momentum operator. For compactification on an \(S^{1}\) along the 25th direction, it should also have a winding number \((W)\) along it, resulting in:

\[\begin{split}\left|0\right\rangle_{c}&\equiv\left|0,k^{\mu},K,W\right\rangle_{c},\\ \hat{k}^{\mu}\left|0,k^{\mu},K,W\right\rangle_{c}&=k^{\mu}\left|0,k^{\mu},K,W\right\rangle_{c}\ \ \ \ \mu=0,1,\cdots,24,\\ \hat{k}^{25}\left|0,k^{\mu},K,W\right\rangle_{c}&=\frac{K}{R}\left|0,k^{\mu},K,W\right\rangle_{c}.\end{split} \tag{4.1}\]

Since in the compactified case we can only feel the dimensions which are non-compact, the measured mass squared must be the sum over only those components of the momentum which belong to the non-compact dimensions:

\[m^{2}=-\sum_{\mu=0}^{24}k_{\mu}k^{\mu}. \tag{4.2}\]

For the non-compactified case [61], as already discussed, the normal ordered zero modes \(L_{0}\) and \(M_{0}\) follow (2.30), where the number operators \(\mathcal{N}\), \(\widetilde{\mathcal{N}}\) and the sum of annihilation operators \(X\) are defined in (2.31). We now move on to the compactified case. Using (3.7) in (2.30) we get the expression for the modified normal ordered zero modes,

\[L_{0}=\mathcal{N}-\widetilde{\mathcal{N}}+KW,\ \ \ M_{0}=c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\mu=0}^{24}k_{\mu}k^{\mu}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}, \tag{4.3}\]

where \(\mathcal{N}\), \(\widetilde{\mathcal{N}}\), \(X\) and \(X^{\dagger}\) are the same as defined in (2.31). So, we observe that due to compactification, there is a modification in both the level matching condition as well as \(M_{0}\). We now compute the physical states using the sandwich conditions (2.27). Let us consider the case where \(\left|phys\right\rangle=\left|phys^{\prime}\right\rangle=\left|0,k^{\mu},K,W\right\rangle\). While for \(n\neq 0\) the physical state conditions are trivially satisfied, for the zero modes we have the following

\[\begin{split}\left\langle 0,k^{\mu},K,W\right|L_{0}\left|0,k^{\mu},K,W\right\rangle&=\left\langle 0,k^{\mu},K,W\right|(\mathcal{N}-\widetilde{\mathcal{N}}+KW)\left|0,k^{\mu},K,W\right\rangle=a_{L}=0,\\ \left\langle 0,k^{\mu},K,W\right|M_{0}\left|0,k^{\mu},K,W\right\rangle&=c^{\prime}\Bigg{(}\frac{K^{2}}{R^{2}}+\sum_{\mu=0}^{24}k_{\mu}k^{\mu}\Bigg{)}=a_{M}=2.\end{split} \tag{4.4}\]

In the above we have used the values of \(a_{L}\) and \(a_{M}\) obtained in subsection (2.3). For the lowest energy state, we immediately have \(\mathcal{N}=\widetilde{\mathcal{N}}=0\). The above considered state is physical only when \(KW=a_{L}=0\). Hence, the only way to make sure that the state \(\ket{0,k^{\mu},K,W}\) is physical is to demand either \(W=0\) or \(K=0\). The mass shell condition in this case becomes

\[m^{2}=\frac{K^{2}}{R^{2}}-\frac{2}{c^{\prime}}. \tag{4.5}\]
\tag{4.5}\] Let us consider a generic state of the form \(\ket{r,s,k^{\mu},K,W}\), where \[\mathcal{N}\ket{r,s,k^{\mu},K,W}=r\ket{r,s,k^{\mu},K,W},\quad \widetilde{\mathcal{N}}\ket{r,s,k^{\mu},K,W}=s\ket{r,s,k^{\mu},K,W}.\] As we have seen in subsection (2.3), non level-matched states cannot be physical in a non-compactified background when we choose the oscillator vacuum, i.e. we need to impose \(r=s\). However, in the present case, the level matching condition will be changed due to the winding modes. Applying the sandwich condition (2.27) with \(a_{L}=0\) and \(\ket{phys}=\ket{phys^{\prime}}=\ket{r,s,k^{\mu},K,W}\) we see that \[\bra{r,s,k^{\mu},K,W}L_{0}\ket{r,s,k^{\mu},K,W}=r-s+KW=0\implies s=r+KW. \tag{4.6}\] Now we want to check whether the level matched states satisfy the following sandwich condition \[\bra{phys}L_{n}\ket{phys}=0,\quad\ n\neq 0. \tag{4.7}\] As shown in [61], the action of \(L_{n}\) on a state \(\ket{r,s}\) is given by \[L_{n}\ket{r,s}=\ket{r-n,s}-\ket{r,s+n}. \tag{4.8}\] That means if we take a level matched state with \(s=r+KW\), then after operating with \(L_{n}\) on it, we end up with a sum of the states \(\ket{r-n,s}\) and \(\ket{r,s+n}\), both of which are non level-matched, and consequently orthogonal to any level-matched state. Hence the inner product of this sum with any level-matched state is bound to vanish. Consequently, we conclude that the level-matched states satisfy the sandwich condition on \(L_{n}\) for all \(n\). Now, we apply the sandwich condition for \(M_{0}\) on level-matched states in order to find their mass. Following the method outlined in [61], we see that the \(M_{0}\) physical state condition leads us to the following constraint on a state \(\ket{r,s}\) \[\left(c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\mu=0}^{24}k_{\mu}k^{\mu}+\mathcal{N}+\widetilde{\mathcal{N}}\right)\ket{r,s}=2\ket{r,s} \implies\left(c^{\prime}\frac{K^{2}}{R^{2}}+r+s-2\right)\ket{r,s}=c^{\prime}m^{2}\ket{r,s}\] \[\implies m^{2}\ket{r,s}=\left[\frac{K^{2}}{R^{2}}+\frac{1}{c^{\prime}}(r+s-2)\right]\ket{r,s}. \tag{4.9}\] In the above we have used equation (4.2). Hence, a generic physical state \(\ket{r,s,k^{\mu},K,W}\) must satisfy the level matching condition as given in (4.6) along with the mass shell condition (4.9). This matches the result already derived in [15], in the sense that there is no winding number contribution to the mass formula. It is a textbook fact that the winding contribution to the string mass is understood as the energy required to wrap the string around the compact circle; the absence of such a contribution may be attributed to the so-called "long string" state associated with the tensionless regime, where the rest energy associated with wrapping simply vanishes.

### States from compactification

In the following subsection, we are going to discuss the new states arising due to compactification (apart from the states we have already discussed in subsection (2.3)). In the case of tensile string theory compactified on \(S^{1}\), we had two massless vector states along with a massless scalar state for any value of the compactification radius \(R\). At the self-dual radius \(R=\sqrt{\alpha^{\prime}}\), four additional vector states and eight additional scalar states become massless. There are also an infinite number of vacuum states with either \(K=0\) or \(W=0\), which become massless at particular values of the radius.
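Before contrasting this tensile picture with the tensionless one, here is a minimal numerical sketch (our own illustration; the values \(c^{\prime}=R=1\) are arbitrary choices) of the intrinsic oscillator-vacuum conditions: it scans small values of \((r,K,W)\), imposes the level matching condition (4.6), and lists the resulting states with their masses from (4.9):

```python
import itertools

c_prime, R = 1.0, 1.0  # illustrative values, not fixed by the text

# Level matching (4.6): s = r + K*W ; mass formula (4.9): m^2 = K^2/R^2 + (r+s-2)/c'
for r, K, W in itertools.product(range(3), range(-2, 3), range(-2, 3)):
    s = r + K * W
    if s < 0:
        continue  # number-operator eigenvalues must be non-negative
    m2 = K**2 / R**2 + (r + s - 2) / c_prime
    print("r=%d s=%d K=%+d W=%+d  m^2=%+.2f" % (r, s, K, W, m2))
```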
However, we will see that, for tensionless string theory on the oscillator vacuum, there will be an infinite number of massless vector states and scalar states for any value of the compactification radius. Moreover, at \(R=\sqrt{c^{\prime}}\) we will have four additional massless vectors and four massless scalars. Like the tensile theory, there will be infinitely many vacuum states with internal momenta which become massless at particular values of \(R\).

#### Level zero states

Let us consider the following states (with \(r=s=0\)) \[|0,k^{\mu},K,0\rangle\,,\hskip 28.452756ptK\in\mathbb{Z}. \tag{4.10}\] The mass formula in (4.9) implies that states in (4.10) will have: \[m^{2}=\frac{K^{2}}{R^{2}}-\frac{2}{c^{\prime}}. \tag{4.11}\] Hence, the state in (4.10) will become massless for a given value of internal momentum \(K\), particularly when the radius of compactification \(R\) is \[R=K\sqrt{\frac{c^{\prime}}{2}}. \tag{4.12}\] In tensile string theory too, there are states of this kind, which become massless at compactification radius \(R=K\frac{\sqrt{\alpha^{\prime}}}{2}\). In general the above states can be tachyonic, massless or massive depending on the compactification radius, which again mirrors the nature of their tensile counterparts. Note that winding vacuum states like \(|0,k^{\mu},0,W\rangle\) will still be purely tachyonic with mass squared \(-\frac{2}{c^{\prime}}\) for all values of \(W\).

#### Level 1 vector states

Now we consider the following physical states at level 1, with either \(r=1\) or \(s=1\), \[|V^{\mu}_{\pm}\rangle=C^{\mu}_{-1}\,|0,k^{\mu},\pm 1,\mp 1\rangle_{c}\,,\hskip 14.226378pt|\tilde{V}^{\mu}_{\pm}\rangle=\tilde{C}^{\mu}_{-1}\,|0,k^{\mu},\pm 1,\pm 1\rangle_{c} \tag{4.13}\] where \(\mu=\{0,1,...,24\}\); clearly \(KW=-1\) for \(|V^{\mu}_{\pm}\rangle\) and \(KW=+1\) for \(|\tilde{V}^{\mu}_{\pm}\rangle\), consistent with (4.6). These states are vector states with the following mass squared \[m^{2}=\frac{1}{R^{2}}-\frac{1}{c^{\prime}}. \tag{4.14}\] These states become massless at radius \(R=\sqrt{c^{\prime}}\). They can be compared to the aforementioned vector states in tensile string theory which become massless at \(R=\sqrt{\alpha^{\prime}}\). However, in the tensile case the analogous states always have non-negative mass squared values, which is not guaranteed in this case. The implication of this observation is not completely clear to us, and we will come back to this in future work.

#### Level 1 scalar states

By acting with the oscillators \(C_{-1}^{25}\) and \(\tilde{C}_{-1}^{25}\) on \(\ket{0,k^{\mu},\pm 1,\mp 1}_{c}\) and \(\ket{0,k^{\mu},\pm 1,\pm 1}_{c}\) respectively we can construct 4 more scalar states of level 1: \[\ket{\phi_{\pm}}=C_{-1}^{25}\ket{0,k^{\mu},\pm 1,\mp 1}_{c},\hskip 14.226378pt\ket{\tilde{\phi}_{\pm}}=\tilde{C}_{-1}^{25}\ket{0,k^{\mu},\pm 1,\pm 1}_{c}. \tag{4.15}\] These are scalar states having the same mass as in (4.14), and hence they too become massless at \(R=\sqrt{c^{\prime}}\). They can be compared to the aforementioned scalar states in tensile string theory which become massless at \(R=\sqrt{\alpha^{\prime}}\).

#### Level 2 massless vector states

For \(K=0\), at level 2 (i.e. \(r=s=1\)) we have a tower of massless vector states, since according to (4.6) and (4.9), for any value of the winding number \(W\), we shall have \(m=0\). These states are denoted by \[\ket{V_{W}^{\mu}}=C_{-1}^{\mu}\tilde{C}_{-1}^{25}\ket{0,k^{\mu},0,W}_{c},\hskip 14.226378pt\ket{\widetilde{V}_{W}^{\mu}}=\tilde{C}_{-1}^{\mu}C_{-1}^{25}\ket{0,k^{\mu},0,W}_{c}, \tag{4.16}\] where \(\mu\in\{0,1,\cdots,24\}\).
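As a quick consistency check (our own one-line substitution into (4.6) and (4.9)): for \(|V^{\mu}_{\pm}\rangle\) we have \(r=1\), \(s=0\) and \(KW=-1\), so (4.6) is satisfied, and \[m^{2}=\frac{K^{2}}{R^{2}}+\frac{1}{c^{\prime}}(r+s-2)=\frac{1}{R^{2}}-\frac{1}{c^{\prime}},\] reproducing (4.14); the same substitution with \(K=0\) and \(r=s=1\) immediately gives \(m^{2}=0\) for the states in (4.16).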
Tensile string theory also has vector states like (4.16), but they are massless only if both \(K\) and \(W\) are zero. Hence tensile bosonic string theory can have only two vector states which can be massless for any \(R\).

#### Level 2 massless scalar states

For \(r=s=1\) with \(K=0\), we also have an infinite number of massless scalar states. They are denoted by \[\ket{\phi_{W}}=C_{-1}^{25}\tilde{C}_{-1}^{25}\ket{0,k^{\mu},0,W}_{c}. \tag{4.17}\] The states given in (4.16) and (4.17) are massless for any value of the compactification radius \(R\). Tensile string theory also has similar states, but they can be massless only for \(K=W=0\).

### Limiting theory

We have seen earlier that in a non-compactified background the tensile and tensionless oscillators are related by a set of Bogoliubov transformations. In this section we would like to see whether a similar situation occurs starting from a compactified target space for the tensile theory and consistently taking limits at every step. It is quite obvious that (2.19) will remain intact for all the non-compactified dimensions. We therefore rederive the Bogoliubov transformation focusing only on the compactified coordinate. Let us also consider the mode expansion of \(X^{25}(\tau,\sigma)\) in tensile string theory, which we take to be compactified on a circle as well, \[X^{25}(\tau,\sigma)= x^{25}+\sqrt{\frac{\alpha^{\prime}}{2}}\tilde{\alpha}_{0}^{25}(\tau-\sigma)+\sqrt{\frac{\alpha^{\prime}}{2}}\alpha_{0}^{25}(\tau+\sigma)\] \[+i\sqrt{\frac{\alpha^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\Big{[}\alpha_{n}^{25}e^{-in(\tau+\sigma)}+\tilde{\alpha}_{n}^{25}e^{-in(\tau-\sigma)}\Big{]}. \tag{4.18}\] Taking the ultra-relativistic limit (\(\tau\to\epsilon\tau,\sigma\to\sigma,\alpha^{\prime}\to\frac{c^{\prime}}{\epsilon}\)) of (4.18) and comparing this with (2.12), we get the following relation between \((C_{0}^{25},\tilde{C}_{0}^{25})\) and \((\alpha_{0}^{25},\tilde{\alpha}_{0}^{25})\) \[\begin{split} C_{0}^{25}&=\frac{1}{2}\Big{(}\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\Big{)}\alpha_{0}^{25}+\frac{1}{2}\Big{(}\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\Big{)}\tilde{\alpha}_{0}^{25},\\ \tilde{C}_{0}^{25}&=\frac{1}{2}\Big{(}\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\Big{)}\alpha_{0}^{25}+\frac{1}{2}\Big{(}\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\Big{)}\tilde{\alpha}_{0}^{25}.\end{split} \tag{4.19}\] One can note that the relation between the \((C_{n}^{25},\tilde{C}_{n}^{25})\) and \((\alpha_{n}^{25},\tilde{\alpha}_{n}^{25})\) modes with \(n\neq 0\) remains the same as (2.19). Now let us make the following identification on \(X^{25}\) in (4.18) \[X^{25}\sim X^{25}+2\pi R^{\prime}W^{\prime},\hskip 14.226378ptW^{\prime}\in\mathbb{Z} \tag{4.20}\] i.e. the target space of the tensile theory is compactified in the \(25^{th}\) dimension with radius \(R^{\prime}\)7 and a winding number \(W^{\prime}\). The quantized momentum in the compactified direction will be \(\frac{K^{\prime}}{R^{\prime}}\). It can be easily shown that Footnote 7: Here we consider the possibility that while taking the \(T\to\epsilon T\) limit from tensile string theory we might have to scale the compactification radius as well: \(R^{\prime}\to\epsilon^{p}R\).
\[\alpha_{0}^{25}=\frac{1}{2}\left[\sqrt{2\alpha^{\prime}}\left(\frac{K^{\prime}}{R^{\prime}}\right)+\sqrt{\frac{2}{\alpha^{\prime}}}R^{\prime}W^{\prime}\right],\hskip 28.452756pt\tilde{\alpha}_{0}^{25}=\frac{1}{2}\left[\sqrt{2\alpha^{\prime}}\left(\frac{K^{\prime}}{R^{\prime}}\right)-\sqrt{\frac{2}{\alpha^{\prime}}}R^{\prime}W^{\prime}\right] \tag{4.21}\] Now, using the expressions (4.21) for \(\alpha_{0}^{25}\) and \(\tilde{\alpha}_{0}^{25}\) on the r.h.s. of (4.19), with \(\alpha^{\prime}=\frac{c^{\prime}}{\epsilon}\), we end up with the following expressions for \(C_{0}^{25}\) and \(\tilde{C}_{0}^{25}\) \[C_{0}^{25}=\frac{1}{2}\left[\sqrt{2c^{\prime}}\left(\frac{K^{\prime}}{R^{\prime}}\right)+\sqrt{\frac{2}{c^{\prime}}}R^{\prime}W^{\prime}\right],\hskip 14.226378pt\tilde{C}_{0}^{25}=\frac{1}{2}\left[\sqrt{2c^{\prime}}\left(\frac{K^{\prime}}{R^{\prime}}\right)-\sqrt{\frac{2}{c^{\prime}}}R^{\prime}W^{\prime}\right]. \tag{4.22}\] We see that the expressions of \(C_{0}^{25}\) and \(\tilde{C}_{0}^{25}\) in (4.22) are exactly of the same form as the intrinsically calculated zero modes (3.7). Comparing the two expressions, we can conclude: \[\frac{K^{\prime}}{R^{\prime}}=\frac{K}{R},\hskip 14.226378ptW^{\prime}R^{\prime}=WR. \tag{4.23}\] Hence, if we want to make a scaling \(R^{\prime}=\epsilon^{p}R\), then we must also make the following scaling on \(W^{\prime}\) and \(K^{\prime}\) \[K^{\prime}=\epsilon^{p}K,\ \ \ \ W^{\prime}=\epsilon^{-p}W. \tag{4.24}\] Since we want both \(K\) and \(W\) to be finite integers, one way to ensure this is to demand that \(p=0\), which means we should have \[R^{\prime}=R. \tag{4.25}\] This automatically implies that \(K^{\prime}=K\) and \(W^{\prime}=W\), which is one of the possibilities, and we will assume this without loss of generality in what follows. Another implication of this observation is that the tensionless string theory built on the oscillator vacuum resides in a target spacetime identical to that of the tensile theory, with the vacua connected simply through Bogoliubov transformations.

### A brief look at multiple dimensions compactification

In this section we briefly discuss the oscillator vacuum theory on a background with \(d\) dimensions compactified on a torus \(T^{d}\). Recalling the discussion in section (3.2), we express \(L_{0}\) and \(M_{0}\) in terms of \(k_{L,R}^{I}\) to get, \[L_{0}\ =\ \mathcal{N}-\widetilde{\mathcal{N}}+RK^{I}W_{I}\ =\ \mathcal{N}-\widetilde{\mathcal{N}}+\sum_{i=1}^{d}k_{i}\omega^{i}, \tag{4.26}\] \[M_{0}=\frac{1}{2}\left(k_{L}^{I}k_{IL}+k_{R}^{I}k_{IR}+2k_{L}^{I}k_{IR}\right)+c^{\prime}k^{2}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}\] \[\qquad\qquad=c^{\prime}K^{I}K_{I}+c^{\prime}k^{2}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}\] \[\implies M_{0}=\frac{c^{\prime}}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}+c^{\prime}k^{2}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}.\] \(k_{L,R}^{I}\) in terms of \(W^{I}\) and \(K^{I}\) are given in (3.17). The remaining steps are very much the same as in section (4.1). We apply the level matching condition and finally obtain the mass spectrum as \[s-r=\sum_{i=1}^{d}k_{i}\omega^{i},\ \ \ \ \ \ \ \ m^{2}=\frac{1}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}+\frac{1}{c^{\prime}}\left(r+s-2\right), \tag{4.27}\] which generalizes our earlier discussion. This is however much more intricate as many parameters are involved, and a thorough investigation of the associated spectrum will be detailed elsewhere [75] as promised earlier.
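Although the detailed toroidal spectrum is deferred to [75], the content of (4.27) is easy to explore numerically; a minimal sketch (our own illustration, for a \(d=2\) square lattice with \(g^{ij}=\delta^{ij}\) and \(R=c^{\prime}=1\)) that scans for massless level-matched states:

```python
import itertools
import numpy as np

c_prime, R, d = 1.0, 1.0, 2  # illustrative values, not fixed by the text
g_inv = np.eye(d)            # dual lattice metric g^{ij} of a square lattice

# (4.27): s - r = sum_i k_i omega^i and m^2 = (1/R^2) k_i g^{ij} k_j + (r+s-2)/c'
for r, k1, k2, w1, w2 in itertools.product(range(2), *(range(-1, 2),) * 4):
    k, w = np.array([k1, k2]), np.array([w1, w2])
    s = r + int(k @ w)       # level matching on the torus
    if s < 0:
        continue
    m2 = float(k @ g_inv @ k) / R**2 + (r + s - 2) / c_prime
    if abs(m2) < 1e-12:
        print("massless: r=%d s=%d k=%s omega=%s" % (r, s, k, w))
```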
### Summary

Given below is the summary of our findings in this section:

* The new level matching condition (4.6) from the physical state conditions has been calculated. It turns out that the level matching condition is modified in a similar way as in the tensile compactified case.
* The mass spectrum in (4.9) has been computed. Unlike in tensile string theory, the mass spectrum is not straightforwardly invariant under a T-duality transformation.
* We discussed the new states arising due to compactification. We find an infinite tower of massless states. There are massless vector and scalar states in (4.16) and (4.17), which are massless for any value of the compactification radius. There are other states as well, which become massless at specific values of the radius, such as the states in (4.10), (4.13) and (4.15).
* The oscillator vacuum in a target space with \(d\) dimensions compactified on a torus \(T^{d}\) has been considered. The level matching condition and mass spectrum have been derived (4.27).

## Effect of compactification: Induced Vacuum

As already highlighted in an earlier section, the theory with the Induced vacuum emerges when we explicitly follow through with the tensionless limit of the tensile string theory. In this section, we study the effect of compactification on the theory built upon this vacuum. We will also perform a consistent limiting analysis on the tensile perturbative states to ascertain what happens in the explicit limit.

### What happens to the vacuum?

The tensile string vacuum with non-zero internal momentum \(K\) and winding number \(W\), in the explicit tensionless limit, will give rise to the Induced vacuum with the same internal momentum and winding number 8 Footnote 8: This can be seen again from a comparison of (4.22) and (3.7), where the same compactification radius means the same \(K\) and \(W\). This comparison remains valid for all vacua. \[\lim_{\epsilon\to 0}\left|0,k_{\alpha}^{\mu},K,W\right\rangle_{\alpha}=\left|0,k_{I}^{\mu},K,W\right\rangle_{I}. \tag{5.1}\] We denote the non-compact momentum of the tensile theory as \(k_{\alpha}^{\mu}\), and the same in the tensionless theory as \(k_{I}^{\mu}\), in order to distinguish them from each other. We will see later in this section that in the explicit tensionless limit the momentum changes, and hence this distinction is important. Intrinsically this new vacuum is defined in analogy to (2.42) as: \[B_{n}\left|0,k_{I}^{\mu},K,W\right\rangle_{I} =0,\qquad n\neq 0, \tag{5.2}\] \[B_{0}^{\mu}\left|0,k_{I}^{\mu},K,W\right\rangle_{I} =\sqrt{2c^{\prime}}k_{I}^{\mu}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}\quad\mu=0,1,\cdots,24,\] \[B_{0}^{25}\left|0,k_{I}^{\mu},K,W\right\rangle_{I} =\sqrt{2c^{\prime}}\frac{K}{R}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}.\] Now we know from [36] that the generators of the BMS algebra \(L_{n}\) and \(M_{n}\) can be written in terms of the \(A_{n}\)'s and \(B_{n}\)'s as in (2.7). Let us recall from the discussion in section (2) that the vacuum \(\left|0,k_{I}^{\mu},K,W\right\rangle_{I}\) belongs to the Induced representation of the BMS algebra. The physical state conditions satisfied by these states are given in (2.39). Hence, in order to be a physical state, the vacuum must satisfy the following condition \[M_{n}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=\frac{1}{2}\sum_{m}B_{-m}\cdot B_{n+m}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=0. \tag{5.3}\] As pointed out in [61], the \(B_{n}\)'s commute with each other, i.e. there is no normal ordering ambiguity in the expression of the operator \(M_{n}\), which implies \(a_{M}=0\).
So we can promptly write: \[M_{0}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=a_{M}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=0, \tag{5.4}\] which in turn gives \[\sum_{m}B_{-m}\cdot B_{m}\left|0,k_{I}^{\mu},K,W\right\rangle_{I} =0\implies\Big{(}\sum_{m\neq 0}B_{-m}\cdot B_{m}+B_{0}^{2}\Big{)}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=0 \tag{5.5}\] \[\implies B_{0}^{2}\left|0,k_{I}^{\mu},K,W\right\rangle_{I} =2c^{\prime}\Bigg{(}\sum_{\nu=0}^{24}k_{I\nu}k_{I}^{\nu}+\frac{K^{2}}{R^{2}}\Bigg{)}\left|0,k_{I}^{\mu},K,W\right\rangle_{I}=0.\] Hence we find here that the vacuum has a mass spectrum given by: \[m^{2}\left|0,k_{I}^{\mu},K,W\right>_{I}=-\sum_{\nu=0}^{24}k_{I\nu}k_{I}^{\nu}\left|0,k_{I}^{\mu},K,W\right>_{I}=\frac{K^{2}}{R^{2}}\left|0,k_{I}^{\mu},K,W\right>_{I}. \tag{5.6}\] So the string in the Induced vacuum state only has a rest energy contributed by the internal momentum, in a way similar to a relativistic massless particle. The \(L_{n}\) physical state condition on the Induced vacuum can be written as follows: \[\left<0,k_{I}^{\mu},K,W\right|L_{n}\left|0,k_{I}^{\mu},K,W\right>=0\ \ \ \ \forall n \tag{5.7}\] For \(n=0\), this would give us the following \[\begin{split}\left<0,k_{I}^{\mu},K,W\right|A_{0}\cdot B_{0}\left|0,k_{I}^{\mu},K,W\right>=KW\left<0,k_{I}^{\mu},K,W\right|0,k_{I}^{\mu},K,W\right>=0\\ \implies\,KW=0.\end{split} \tag{5.8}\] In the above we have used the fact that \(A_{0}^{\mu}=0\) for \(\mu=\{0,1,...24\}\) along with the expression of \(A_{0}^{25}\) as given in (3.4). This implies the state \(\left|0,k_{I}^{\mu},K,W\right>_{I}\) can be physical iff \(KW=0\). This mirrors the fact that for tensile string theory the physical vacuum must have \(KW=0\).

### Limit from tensile mass formula

Since the Induced vacuum comes directly from taking the tensionless limit of the tensile case, we shall check whether we arrive at the same conclusion by taking the appropriate limit (i.e. \(\alpha^{\prime}\rightarrow\infty\)) of the tensile string theory. We start from the following mode expansion of the compactified coordinate in the tensile case \[X^{25}(\sigma,\tau)= x^{25}+\alpha^{\prime}p^{25}\tau+WR\sigma \tag{5.9}\] \[+i\sqrt{\frac{\alpha^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\Big{(}\alpha_{n}^{25}e^{-in(\tau-\sigma)}+\tilde{\alpha}_{n}^{25}e^{-in(\tau+\sigma)}\Big{)},\] where the internal momentum takes the discrete form \(p^{25}=\frac{K}{R}\). The physical state conditions for the tensile string are \[\left(\mathcal{L}_{n}-a\delta_{n,0}\right)\left|phys\right>=0,\ \ \ \ \ \left(\bar{\mathcal{L}}_{n}-a\delta_{n,0}\right)\left|phys\right>=0\ \ \ \ \ \ \forall n>0. \tag{5.10}\] The mass formula and the level matching condition can now be derived from (5.10) as, \[m^{2}=\frac{K^{2}}{R^{2}}+\frac{1}{\alpha^{\prime 2}}W^{2}R^{2}+\frac{2}{\alpha^{\prime}}(N+\widetilde{N}-2),\ \ \ \ \widetilde{N}-N=KW. \tag{5.11}\] Here \(N\) and \(\widetilde{N}\) denote the left and right levels of the tensile string respectively, and \(W\) is the winding number. In the tensionless limit, \(\alpha^{\prime}\) gets scaled to \(\frac{c^{\prime}}{\epsilon}\), and hence it is straightforward to see that as \(\epsilon\to 0\) the second and third terms in (5.11) vanish and only the first term survives, which is the same as what we calculated intrinsically in (5.6) 9.

### What happens to the perturbative states?

In section (2.4), we have seen that in a non-compactified background, all the perturbative states of tensile string theory under the tensionless limit condense on the Induced vacuum of the tensionless string.
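As a quick numerical illustration of this limiting statement (a minimal sketch of our own; the values of \(c^{\prime}\), \(R\) and the quantum numbers are arbitrary choices), one can scale \(\alpha^{\prime}=c^{\prime}/\epsilon\) in the tensile mass formula (5.11) and watch the winding and level terms drop out, leaving the intrinsic result (5.6):

```python
c_prime, R = 1.0, 2.0  # illustrative values
K, W, N = 3, 1, 2      # a sample tensile state; level matching in (5.11) fixes Ntilde
Ntilde = N + K * W

for eps in [1.0, 1e-2, 1e-4, 1e-6]:
    alpha = c_prime / eps  # tensionless scaling alpha' -> c'/eps
    m2 = (K / R) ** 2 + (W * R) ** 2 / alpha**2 + 2 * (N + Ntilde - 2) / alpha
    print("eps = %.0e   m^2 = %.8f" % (eps, m2))

print("intrinsic value from (5.6):", (K / R) ** 2)
```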
In this subsection we shall study the fate of the tensile perturbative states in the tensionless limit when the target space has one dimension compactified. Note that the corresponding non-compactified computation of [61] has been reviewed in Appendix (B), and we will explicitly follow the same procedure in the current case.

#### Perturbative states with either of \(K,W\neq 0\) under tensionless limit

Now, let us have a look at the states having non-zero winding number \(W\). As we have noticed in (5.6), the winding number did not appear in the mass spectrum. To understand the reason for this, we need to have a look at (3.4) and (5.2). We see that unlike in the case of the oscillator modes \(C_{0}\) and \(\tilde{C}_{0}\) (see (3.7)), the winding number does not appear in the expression of \(B_{0}^{25}\). This means that the non-compact mass \(m^{2}=-\sum_{\nu=0}^{24}k_{I\nu}k_{I}^{\nu}\) becomes independent of the winding number. As we have seen, this is consistent with the tensionless limit of the tensile string as well, since the term containing the winding number \(W\) in (5.11) vanishes in the tensionless limit. The tensile vacuum (\(N=\widetilde{N}=0\)) with non-zero internal momentum will essentially have zero winding number, as dictated by the level matching condition. Hence in the tensionless limit, the tensile vacuum with \(W=0\), internal momentum \(\frac{K}{R}\) and momentum \(k_{\alpha}^{\mu}\) (\(m^{2}=-k^{2}\), whose value can be found from (5.11) with \(W=N=\widetilde{N}=0\)) will end up as a state with \(W=0\), internal momentum \(\frac{K}{R}\) and a new momentum \(k_{I}^{\mu}\), where \(k_{I}^{2}=-\frac{K^{2}}{R^{2}}\). Hence, this vacuum is given by \[\left|0,k_{I}^{\mu},K,0\right\rangle_{I},\] which will satisfy the following equation \[\widehat{W}\left|0,k_{I}^{\mu},K,0\right\rangle_{I}=0. \tag{5.12}\] It can be shown that all the tensile perturbative states with zero winding number and momentum (\(k_{\alpha}^{\mu},\frac{K}{R}\)) will condense on this state.

#### States with \(W=0\), \(K\neq 0\)

Since the level matching condition dictates that \(\widetilde{N}-N=KW\), for states with \(W=0\) we have \(N=\widetilde{N}\). Hence, we can consider the following perturbative tensile state \[\left|\Phi\right\rangle=\sigma_{\mu\nu}\alpha_{-n}^{\mu}\tilde{\alpha}_{-n}^{\nu}\left|0,k_{\alpha}^{\mu},K,0\right\rangle_{\alpha}.
\tag{5.13}\] Following the expansion methods detailed in Appendix (B), we consider the following evolution of the vacuum state with the parameter \(\epsilon\) \[\left|0,k_{\alpha}^{\mu},K,0\right\rangle_{\alpha}=\left|0,k_{I}^{\mu},K,0\right\rangle_{I}+\epsilon\left|I_{1}\right\rangle+\epsilon^{2}\left|I_{2}\right\rangle\cdots \tag{5.14}\] After this, using the conditions \[\alpha_{n}=\frac{1}{2}\Big{[}\sqrt{\epsilon}A_{n}+\frac{1}{\sqrt{\epsilon}}B_{n}\Big{]},\hskip 14.226378pt\tilde{\alpha}_{n}=\frac{1}{2}\Big{[}-\sqrt{\epsilon}A_{-n}+\frac{1}{\sqrt{\epsilon}}B_{-n}\Big{]}, \tag{5.15}\] and using the algebra of the \(A,B\) modes in (2.9), we can also find the order by order action of the modes \[B_{n}\ket{0}_{I} =0,\hskip 14.226378pt\forall n\neq 0 \tag{5.16}\] \[A_{n}\ket{0}_{I} =-B_{n}\ket{I_{1}}\hskip 14.226378ptA_{-n}\ket{0}_{I}=B_{-n}\ket{I_{1}}\hskip 14.226378pt\forall n>0\] \[A_{n}\ket{I_{1}} =-B_{n}\ket{I_{2}}\hskip 14.226378ptA_{-n}\ket{I_{1}}=B_{-n}\ket{I_{2}}\hskip 14.226378pt\forall n>0\] \[\vdots\hskip 28.452756pt\vdots\hskip 28.452756pt\vdots\hskip 28.452756pt\vdots\] \[A_{n}\ket{I_{r}} =-B_{n}\ket{I_{r+1}}\hskip 14.226378ptA_{-n}\ket{I_{r}}=B_{-n}\ket{I_{r+1}}\hskip 14.226378pt\forall n>0.\] With these in hand, we can easily see that the perturbative state condenses down onto the zero winding Induced vacuum, i.e. \[\ket{\Phi}\rightarrow\Sigma\ket{0,k_{I}^{\mu},K,0}_{I},\hskip 14.226378pt\Sigma=2n\eta^{\mu\nu}\sigma_{\mu\nu}. \tag{5.17}\] Now let us move to tensile string states with winding number \(W\) (\(W\neq 0\)), which can be separated into two distinct categories: states with \(K=0\), and states with \(K\neq 0\). In what follows, we will see that states of these two categories have different fates under the tensionless limit.

#### States with \(W\neq 0\), \(K=0\)

For such states we evidently have \(N=\widetilde{N}\). Hence we start from a state \(\ket{\chi}\) having a form very similar to (5.13), with the relevant vacuum state \(\ket{0,k_{\alpha}^{\mu},K,0}_{\alpha}\) replaced by the zero internal momentum one \(\ket{0,k_{\alpha}^{\mu},0,W}_{\alpha}\). The rest of the procedure is very much the same and we end up with the following \[\ket{\chi}\rightarrow\Theta\ket{0,k_{I}^{\mu},0,W}_{I},\hskip 14.226378pt\Theta=2n\eta^{\mu\nu}\theta_{\mu\nu}, \tag{5.18}\] where \(\theta_{\mu\nu}\) is the polarization tensor of \(\ket{\chi}\). Hence, winding states with \(K=0\) will condense to an Induced vacuum with \(K=0\), \(W\neq 0\), which is given by \[\widehat{W}\ket{0,k_{I}^{\mu},0,W}_{I}=W\ket{0,k_{I}^{\mu},0,W}_{I}. \tag{5.19}\]

#### States with \(W\neq 0\), \(K\neq 0\)

For such states the level matching condition will be \(\widetilde{N}=N+KW\). Since the level matching condition has changed here, instead of states of the form (5.13), we have to consider states of the following form 10 \[\ket{\zeta_{n}}=\rho_{\mu\nu}\alpha_{-n}^{\mu}\tilde{\alpha}_{-n-KW}^{\nu}\ket{0,k_{\alpha}^{\mu},K,W}_{\alpha}. \tag{5.20}\] Footnote 10: This includes tensile states arising due to compactification, such as \(\alpha_{-n}^{25}\tilde{\alpha}_{-n-KW}^{25}\ket{0,k_{\alpha}^{\mu},K,W}_{\alpha}\). To get this state we can always take \(\rho_{2525}=1\) and \(\rho_{\mu\nu}=0\) \(\forall\mu,\nu\neq 25\).
Expanding the tensile vacuum \(\left|0,k^{\mu},K,W\right\rangle_{\alpha}\) as in the previous cases, we end up with the following expression \[\left|\zeta_{n}\right\rangle=\frac{\rho_{\mu\nu}}{\epsilon}\Big{(}B_{-n}^{\mu}+\epsilon A_{-n}^{\mu}\Big{)}\Big{(}B_{n+KW}^{\nu}-\epsilon A_{n+KW}^{\nu}\Big{)}\Big{(}\left|0\right\rangle_{I}+\epsilon\left|I_{1}\right\rangle+\epsilon^{2}\left|I_{2}\right\rangle+\cdots\Big{)}. \tag{5.21}\] Now, we have to be careful about taking the exact limit. Here we have two different cases: 1) \(KW>0\) (i.e. either both \(K,W>0\) or \(K,W<0\)) and 2) \(KW<0\) (i.e. either \(K<0,W>0\) or \(K>0,W<0\)). Let us first look at the case \(KW>0\). Here, when we apply the algebra (2.9) and evaluate the expressions, we shall see that all the terms of orders \(\mathcal{O}(\epsilon^{-1})\) and \(\mathcal{O}(\epsilon^{0})\) will vanish. Hence the dominant terms will be of \(\mathcal{O}(\epsilon)\), and from (5.21) we can find four such terms. They are given below: \[\epsilon\rho_{\mu\nu}\Big{(}-A_{-n}^{\mu}A_{n+KW}^{\nu}\left|0\right\rangle_{I}+A_{-n}^{\mu}B_{n+KW}^{\nu}\left|I_{1}\right\rangle-B_{-n}^{\mu}A_{n+KW}^{\nu}\left|I_{1}\right\rangle+B_{-n}^{\mu}B_{n+KW}^{\nu}\left|I_{2}\right\rangle\Big{)}. \tag{5.22}\] Again applying our usual expansion methods on these states, it can be shown that \[B_{-n}A_{n+KW}\left|I_{1}\right\rangle=-A_{-n}B_{n+KW}\left|I_{1}\right\rangle=-B_{-n}B_{n+KW}\left|I_{2}\right\rangle=A_{-n}A_{n+KW}\left|0\right\rangle_{I}. \tag{5.23}\] As a result, the \(\mathcal{O}(\epsilon)\) term of (5.21) as given in (5.22) simply becomes \[-4\epsilon\rho_{\mu\nu}A_{-n}^{\mu}A_{n+KW}^{\nu}\left|0\right\rangle_{I}. \tag{5.24}\] As discussed in [61], such states with multiple \(A\)'s acting on the Induced vacuum are unphysical states 11. Hence, the winding states having non-zero internal momentum will end up being unphysical states in the tensionless limit. Footnote 11: States constructed with only actions of \(A\)'s have an ill-defined norm, as \([A,A]=0\). Now, let us look at the \(KW<0\) states. Let \(l=\left|KW\right|\). Then (5.20) will become \[\left|\zeta_{n}\right\rangle=\rho_{\mu\nu}\alpha_{-n}^{\mu}\tilde{\alpha}_{-n+l}^{\nu}\left|0,k_{\alpha}^{\mu},K,W\right\rangle_{\alpha}, \tag{5.25}\] and the expansion will become \[\left|\zeta_{n}\right\rangle=\frac{\rho_{\mu\nu}}{\epsilon}\Big{(}B_{-n}^{\mu}+\epsilon A_{-n}^{\mu}\Big{)}\Big{(}B_{n-l}^{\nu}-\epsilon A_{n-l}^{\nu}\Big{)}\Big{(}\left|0\right\rangle_{I}+\epsilon\left|I_{1}\right\rangle+\epsilon^{2}\left|I_{2}\right\rangle+\cdots\Big{)}. \tag{5.26}\] For the states with \(n>l\), the computation will follow the case with \(KW>0\). Using steps similar to (5.22), (5.23), the final form of the limiting state \(\left|\zeta_{n}\right\rangle\) will be \[-4\epsilon\rho_{\mu\nu}A_{-n}^{\mu}A_{n-l}^{\nu}\left|0\right\rangle_{I}, \tag{5.27}\] which again is an unphysical state. For states with \(n=l\), the expansion in (5.26) will be replaced by \[\left|\zeta_{l}\right\rangle=\frac{\rho_{\mu\nu}}{\epsilon}\Big{(}B_{-l}^{\mu}+\epsilon A_{-l}^{\mu}\Big{)}\Big{(}B_{0}^{\nu}-\epsilon A_{0}^{\nu}\Big{)}\Big{(}\left|0\right\rangle_{I}+\epsilon\left|I_{1}\right\rangle+\epsilon^{2}\left|I_{2}\right\rangle+\cdots\Big{)}. \tag{5.28}\] For these states the term with order \(\mathcal{O}(\epsilon^{-1})\) will again vanish, since the \(B\)'s commute with each other. The leading order term that survives the \(\epsilon\to 0\) limit comes from the \(\mathcal{O}(\epsilon)\) part of the expansion and is given by \[\rho_{\mu\nu}\Big{(}A^{\mu}_{-l}B^{\nu}_{0}\ket{0}_{I}+B^{\mu}_{-l}B^{\nu}_{0}\ket{I_{1}}\Big{)}. \tag{5.29}\] Using a bit of algebra on (5.29) we get \[2\rho_{\mu\nu}A^{\mu}_{-l}B^{\nu}_{0}\ket{0}_{I}=2\Bigg{(}\sum_{\nu=0}^{24}\rho_{\mu\nu}k^{\nu}_{I}+\rho_{\mu 25}\frac{K}{R}\Bigg{)}A^{\mu}_{-l}\ket{0}_{I}.
\tag{5.30}\] So, we again have an unphysical state, although it is different from (5.24) and (5.27). For states with \(n<l\) something new happens. For convenience let us take \(l-n=m\), where \(m>0\). Then we will have a state of the following form \[\ket{\zeta_{n}}=\frac{\rho_{\mu\nu}}{\epsilon}\Big{(}B^{\mu}_{-n}+\epsilon A^{\mu}_{-n}\Big{)}\Big{(}B^{\nu}_{-m}-\epsilon A^{\nu}_{-m}\Big{)}\Big{(}\ket{0}_{I}+\epsilon\ket{I_{1}}+\epsilon^{2}\ket{I_{2}}+\cdots\Big{)}. \tag{5.31}\] Applying the algebra (2.9) along with our order by order evolution, we can easily see that the \(\mathcal{O}(\epsilon^{-1})\) and \(\mathcal{O}(\epsilon^{0})\) terms will vanish. The \(\mathcal{O}(\epsilon)\) terms in (5.31) are written below \[\epsilon\rho_{\mu\nu}\Big{(}-A^{\mu}_{-n}A^{\nu}_{-m}\ket{0}_{I}+A^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{1}}-B^{\mu}_{-n}A^{\nu}_{-m}\ket{I_{1}}+B^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{2}}\Big{)}. \tag{5.32}\] Using (5.16) it can be shown that \[A^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{1}}=B^{\mu}_{-n}A^{\nu}_{-m}\ket{I_{1}}=B^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{2}}=A^{\mu}_{-n}A^{\nu}_{-m}\ket{0}_{I}. \tag{5.33}\] Hence, we see the \(\mathcal{O}(\epsilon)\) term vanishes too. The order \(\mathcal{O}(\epsilon^{r})\) part of (5.31) turns out to be \[\epsilon^{r}\rho_{\mu\nu}\Big{(}-A^{\mu}_{-n}A^{\nu}_{-m}\ket{I_{r-1}}+A^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{r}}-B^{\mu}_{-n}A^{\nu}_{-m}\ket{I_{r}}+B^{\mu}_{-n}B^{\nu}_{-m}\ket{I_{r+1}}\Big{)}. \tag{5.34}\] For \(r=1\) this expression gives (5.32). Here again, using (5.16), we can show that (5.34) vanishes, exactly like (5.32). Hence, for \(n<l\), \(\ket{\zeta_{n}}\) vanishes at all orders of \(\epsilon\). This effectively means the tensionless progeny of states like (5.25) with \(n<l\) do not really matter in the spectrum. We can quickly summarise our results as below:

1. The tensile states with \(K=W=0\) will condense at the Induced vacuum with \(K=W=0\). In [14] this was pointed out as a Bose-Einstein condensation on the worldsheet theory as \(\alpha^{\prime}\to\infty\).
2. The tensile states with \(K\neq 0\) but \(W=0\) will condense at the Induced vacuum with internal momentum \(\frac{K}{R}\). The emergent vacua in this case are a family of massive ones, labelled by \(K\) values.
3. The tensile states with \(K=0\) but \(W\neq 0\) will condense at the Induced vacuum with \(K=0\), and winding number \(W\). These are a family of massless vacua, labelled by \(W\) values.
4. The tensile states with both \(K,W>0\), and \(K,W<0\), will tend to different unphysical states under the tensionless limit.
5. The tensile states with \(K<0,W>0\), and \(K>0,W<0\), will end up being different unphysical states provided the level of the state \(n\geq|KW|\).
6. The tensile states with \(K<0,W>0\), and \(K>0,W<0\), will altogether vanish under the tensionless limit if the level of the state \(n<|KW|\).

### Nonperturbative states

As we have mentioned in section (2.4), there are non-perturbatively defined states in the Induced representation of the BMS algebra, which introduce a new kind of physical states in this tensionless theory (see Appendix B). In this subsection we look at the effect of compactification on such states. Identifying the Induced vacuum winding states \(\ket{0,k^{\mu},K,W}_{I}\), we can construct the following non-perturbative states \[\ket{\phi}=\exp\left(i\sum_{n}\omega_{n}L_{n}\right)\ket{0,k^{\mu}_{I},K,W}_{I}. \tag{5.35}\] This state satisfies the physical state condition (2.39).
Writing \(L_{n}\) in terms of the \(A_{n}\)'s and \(B_{n}\)'s we get \[\ket{\phi}=\exp\left(i\sum_{n,m}\omega_{n}A_{n-m}\cdot B_{m}\right)\ket{0,k^{\mu}_{I},K,W}_{I}. \tag{5.36}\] Now, let us recall the algebra in (2.9), which says that both \(A_{0}\) and \(B_{0}\) commute with all \(A_{n}\)'s and \(B_{n}\)'s. Hence the mass operator \(m^{2}=-\frac{1}{2c^{\prime}}\sum_{\mu=0}^{24}B_{0}^{\mu}B_{0\mu}\) will have the same eigenvalue for the state in (5.36) as for the vacuum on which it is built \[m^{2}\ket{\phi}=\frac{K^{2}}{R^{2}}\ket{\phi}, \tag{5.37}\] and the winding number operator, which is proportional to \(A_{0}^{25}\), has the same eigenvalue for \(\ket{\phi}\) as for \(\ket{0,k^{\mu}_{I},K,W}_{I}\): \[\widehat{W}\ket{\phi}=W\ket{\phi}. \tag{5.38}\]

### A brief look at multiple dimensions compactification

In this subsection we briefly look at the tensionless string theory constructed on the Induced vacuum in a background with \(d\) dimensions compactified. Applying the \(M_{0}\) physical state condition on \(\ket{0,k^{\mu},k^{i},\omega^{i}}_{I}\) we get \[\begin{split}\sum_{m}B_{-m}\cdot B_{m}\ket{0,k^{\mu},k^{i},\omega^{i}}_{I}=0\implies\Big{(}\sum_{m\neq 0}B_{-m}\cdot B_{m}+B_{0}^{2}\Big{)}\ket{0,k^{\mu},k^{i},\omega^{i}}_{I}=0\\ \implies B_{0}^{2}\ket{0,k^{\mu},k^{i},\omega^{i}}_{I}=2c^{\prime}\Big{(}k^{2}+K^{I}K_{I}\Big{)}\ket{0,k^{\mu},k^{i},\omega^{i}}_{I}=0.\end{split} \tag{5.39}\] The mass of the state \({}^{12}|0,k^{\mu},k^{i},\omega^{i}\rangle\) is given by \[m^{2}=K^{I}K_{I}=\frac{1}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}. \tag{5.40}\] The \(L_{0}\) condition for this vacuum will give us the following \[\langle 0,k^{\mu},k^{i},\omega^{i}|\,A_{0}\cdot B_{0}\,|0,k^{\mu},k^{i},\omega^{i}\rangle=0\implies\sum_{i=1}^{d}k_{i}\omega^{i}=0. \tag{5.41}\] Recall, the mass spectrum for the tensile string in a background with \(d\) dimensions compactified on a torus was given by \[m^{2}=\frac{2}{\alpha^{\prime}}(N_{L}+N_{R}-2)+\frac{1}{R^{2}}\sum_{i,j}^{d}k_{i}g^{ij}k_{j}+\frac{1}{\alpha^{\prime 2}}\sum_{i,j}\omega^{i}g_{ij}\omega^{j} \tag{5.42}\] where \(N_{R}-N_{L}=\sum_{i=1}^{d}k_{i}\omega^{i}\). We can now explicitly take the tensionless limit of this (\(\alpha^{\prime}\to\frac{c^{\prime}}{\epsilon}\), \(\epsilon\to 0\)), and obtain the same spectrum as in (5.40).

### Summary

Let us summarize the results in this section below:

* We get a series of vacua labelled by internal momentum \(K\) and winding number \(W\) (equation (5.2)). However, their mass (equation (5.6)) does not contain any contribution from \(W\). Quite naturally, T-duality is absent.
* Since this theory is an explicit tensionless limit of usual string theory, it is important to ensure that the intrinsically derived results are consistent with the tensionless limit of the results in the parent theory. In section (5.2) we have shown that our results from the intrinsic analysis match those of the limiting one.
* In section (5.3) we have shown that the perturbative states in the tensile string theory have different fates depending on the values of \(K\) and \(W\). There are three possible fates for them: (i) some of them condense on the Induced vacua, and this family of vacua, depending on the \(K\) and \(W\) values, are the only states in the Induced theory; (ii) some of them become unphysical states; and (iii) the rest of them just vanish.
* The non-perturbative states constructed on a vacuum with internal momentum \(K\) and winding number \(W\) have the same values of internal momentum and winding number.
As a result, the mass of the state remains the same as that of the vacuum it is constructed on (equations (5.37) and (5.38)).

* We also considered the \(d\)-dimensional compactification of the theory on a torus \(T^{d}\) and computed the mass spectrum (equation (5.40)). We have also reproduced the same spectrum by directly taking the tensionless limit of the mass spectrum of tensile string theory.

## Effect of compactification: Flipped Vacuum

In this section we discuss the effect of target space compactification on the flipped vacuum of tensionless string theory. We will analyze the physical states and closely study the constraints imposed on them by the associated non-trivial physical state conditions. We should reiterate that this vacuum is important since it is directly connected to the highest weight representations of the BMS algebra.

### Modified constraint on level

The flipped vacuum for the tensionless string in terms of the oscillators \(C\) and \(\tilde{\mathcal{C}}\) has been expressed in (2.47), with the flipped oscillator \(\tilde{\mathcal{C}}\) defined as in (2.48). The normal ordered zero modes \(L_{0}\) and \(M_{0}\) in terms of \(C\) and \(\tilde{\mathcal{C}}\) can be written as \[\begin{split} L_{0}=\mathcal{N}+\bar{\mathcal{N}}+KW,\\ M_{0}=\frac{1}{2}\left[C_{0}^{2}+\tilde{\mathcal{C}}_{0}^{2}+2C_{0}\cdot\tilde{\mathcal{C}}_{0}\right]+\sum_{m>0}\Big{[}C_{-m}\cdot C_{m}+\tilde{\mathcal{C}}_{-m}\cdot\tilde{\mathcal{C}}_{m}+C_{-m}\cdot\tilde{\mathcal{C}}_{m}+\tilde{\mathcal{C}}_{-m}\cdot C_{m}\Big{]}.\end{split} \tag{6.1}\] Now using (3.7) and (2.48) we conclude the following \[M_{0}=c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}k^{2}+\mathcal{N}-\bar{\mathcal{N}}+X+Y, \tag{6.2}\] where \(\mathcal{N}\) and \(\bar{\mathcal{N}}\) are the operators defined in (2.54). Let us recall from the earlier section (2.5) that for the flipped case, \(a_{L}=2\) and \(a_{M}=0\). Hence, in the non-compactified case only states of level 2 are physical. However, for the flipped string in a compactified background, the presence of the \(KW\) term in the \(L_{0}\) condition allows states of any level to be physical, for appropriate values of \(K\) and \(W\). This intriguing result is going to be at the heart of the discussion that follows.

### States at various levels

In the following subsection we shall impose the physical state conditions (2.46), together with (6.1), on a generic state \(|r,s,k^{\mu},K,W\rangle\) at level zero, level one and level two, and examine the mass spectrum. We will also discuss the structure of higher level states.

#### Level 0 States

When we apply the conditions (2.46) on \(|0,0,k^{\mu},K,W\rangle\) we note that for \(n\geq 1\), (2.46) are trivially satisfied, leaving us with only two non-trivial conditions on \(|0,0,k^{\mu},K,W\rangle\), which read 13: Footnote 13: Tensionless string theory on this vacuum can be shown to be equivalent to what is known as the Ambitwistor string [64], hence the subscript \(A\) on the states. \[\begin{split}(L_{0}-a_{L})\left|0,0,k^{\mu},K,W\right\rangle_{A} &=\left(\mathcal{N}+\bar{\mathcal{N}}+KW-a_{L}\right)\left|0,0,k^{\mu},K,W\right\rangle_{A}=0,\\ (M_{0}-a_{M})\left|0,0,k^{\mu},K,W\right\rangle_{A}&=\left(c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\mu=0}^{24}k_{\mu}k^{\mu}-a_{M}\right)\left|0,0,k^{\mu},K,W\right\rangle_{A}=0.\end{split} \tag{6.3}\] Now, we are looking at states with \(\mathcal{N}=\bar{\mathcal{N}}=0\). To make them physical we must then have \(KW=a_{L}\).
Since from earlier work we have already deduced \(a_{L}=2\), for level zero states to be physical one must have \(KW=2\). That means there will be four possible states depending on the values of \(K\) and \(W\): \(\{K=1,W=2\}\), \(\{K=2,W=1\}\), \(\{K=-1,W=-2\}\) and \(\{K=-2,W=-1\}\). Since \(a_{M}=0\), the mass of the level zero states is given by: \[m^{2}=\frac{K^{2}}{R^{2}}, \tag{6.4}\] where \(K=\pm 1,\pm 2\).

#### Level 1 States

For level 1 states, (2.46) is trivially satisfied for \(n\geq 2\). Hence, we have only the \(L_{0}\), \(M_{0}\), \(L_{1}\) and \(M_{1}\) conditions to satisfy. There are two possibilities: either \(\mathcal{N}=1,\bar{\mathcal{N}}=0\), or \(\mathcal{N}=0,\bar{\mathcal{N}}=1\). Applying the \(L_{0}\) condition on level 1 states we see that the condition for a level 1 state to be physical is \(KW=1\), i.e. either \(K=W=1\), or \(K=W=-1\). A generic state of level one can be expressed as a linear combination of \(\left|1,0\right\rangle\) and \(\left|0,1\right\rangle\) states as given below [61] \[\left|1,K,W\right\rangle_{A}=a_{\mu}C_{-1}^{\mu}\left|0,0,k^{\mu},K,W\right\rangle_{A}+b_{\mu}\tilde{\mathcal{C}}_{-1}^{\mu}\left|0,0,k^{\mu},K,W\right\rangle_{A}. \tag{6.5}\] Now, we apply the remaining three nontrivial conditions on this generic state in order to put restrictions on it, \[L_{1}\left|1,K,W\right\rangle_{A}=M_{1}\left|1,K,W\right\rangle_{A}=M_{0}\left|1,K,W\right\rangle_{A}=0. \tag{6.6}\] After a bit of algebra, the above conditions for physical states can be written as: \[\left[\frac{K}{R}(a_{25}+b_{25})+\frac{RW}{c^{\prime}}(a_{25}-b_{25})+\sum_{\mu=0}^{24}k^{\mu}(a_{\mu}+b_{\mu})\right]\left|0,0,k^{\mu},K,W\right\rangle_{A}=0,\] \[\left[\frac{K}{R}(a_{25}-b_{25})+\sum_{\mu=0}^{24}k^{\mu}(a_{\mu}-b_{\mu})\right]\left|0,0,k^{\mu},K,W\right\rangle_{A}=0, \tag{6.7}\] \[\left[\left(c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}+1\right)a_{\mu}-b_{\mu}\right]C_{-1}^{\mu}\left|0,0,k^{\mu},K,W\right\rangle_{A}\] \[+\!\left[\left(c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}-1\right)b_{\mu}+a_{\mu}\right]\!\tilde{\mathcal{C}}_{-1}^{\mu}\left|0,0,k^{\mu},K,W\right\rangle_{A}=0.\] As discussed in [61], for \(a_{\mu},b_{\mu}\neq 0\) the last condition reads \[c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\mu=0}^{24}k_{\mu}k^{\mu}=0,\hskip 14.226378pta_{\mu}=b_{\mu}. \tag{6.8}\] As a result we again have exactly the same expression for the mass as in (6.4). However, for this case the only permissible values are \(K=\pm 1\). Note that, since \(a_{\mu}=b_{\mu}\), the norm of the state (6.5) is \(\langle 1|1\rangle=a^{2}-b^{2}=0\). Hence level 1 states are null states. The other physical state conditions in (6.7) will give us the following: \[\frac{K}{R}a_{25}+\sum_{\mu=0}^{24}k^{\mu}a_{\mu}=0, \tag{6.9}\] which gives the extra condition on the coefficients \(a_{\mu}\).

#### Level 2 States

In line with what we have seen so far, for level 2 states we will have six non-trivial physical state conditions, namely the \(L_{0}\), \(M_{0}\), \(L_{1}\), \(M_{1}\), \(L_{2}\) and \(M_{2}\) conditions.
A generic state of level 2 can be expressed as given below \[\left|2,K,W\right\rangle_{A} =a_{\mu}C^{\mu}_{-2}\left|0\right\rangle_{A}+e_{\mu\nu}C^{\mu}_{-1}C^{\nu}_{-1}\left|0\right\rangle_{A}+h_{\mu\nu}C^{\mu}_{-1}\tilde{\mathcal{C}}^{\nu}_{-1}\left|0\right\rangle_{A}\] \[+b_{\mu}\tilde{\mathcal{C}}^{\mu}_{-2}\left|0\right\rangle_{A}+f_{\mu\nu}\tilde{\mathcal{C}}^{\mu}_{-1}\tilde{\mathcal{C}}^{\nu}_{-1}\left|0\right\rangle_{A}+j_{\mu\nu}C^{\mu}_{-1}\tilde{\mathcal{C}}^{\nu}_{-1}\left|0\right\rangle_{A}. \tag{6.10}\] Here \(\left|0,0,k^{\mu},K,W\right\rangle_{A}\) is shortened to \(\left|0\right\rangle_{A}\), and hereafter we will use this shortened notation. \(e_{\mu\nu}\) and \(f_{\mu\nu}\) are manifestly symmetric, while \(h_{\mu\nu}\) and \(j_{\mu\nu}\) are assumed to be symmetric and anti-symmetric respectively. For such states the \(L_{0}\) condition would imply that \(KW=0\), i.e. either \(K=0\) or \(W=0\). Hence, for level 2, unlike the case of the other levels, we will get an infinite number of states, since for \(K=0\), \(W\) can take any value and vice-versa. The other physical state conditions will be applied on this state in exactly the same way as they were in the non-compactified case. The \(M_{0}\) condition would yield the following \[M_{0}\left|2\right\rangle =\Biggl{[}\Biggl{(}c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}+2\Biggr{)}a_{\mu}-2b_{\mu}\Biggr{]}C^{\mu}_{-2}\left|0\right\rangle_{A}\] \[+\Biggl{[}\Biggl{(}c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}-2\Biggr{)}b_{\mu}+2a_{\mu}\Biggr{]}\tilde{\mathcal{C}}^{\mu}_{-2}\left|0\right\rangle_{A}\] \[+\Biggl{[}\Biggl{(}c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}+2\Biggr{)}e_{\mu\nu}-h_{\mu\nu}\Biggr{]}C^{\mu}_{-1}C^{\nu}_{-1}\left|0\right\rangle_{A} \tag{6.11}\] \[+\Biggl{[}\Biggl{(}c^{\prime}\frac{K^{2}}{R^{2}}+c^{\prime}\sum_{\nu=0}^{24}k_{\nu}k^{\nu}\Biggr{)}(h_{\mu\nu}+j_{\mu\nu})+2(e_{\mu\nu}-f_{\mu\nu})\Bigr{]}C^{\mu}_{-1}\tilde{\mathcal{C}}^{\nu}_{-1}\left|0\right\rangle_{A}=0.\] The other non-trivial conditions would yield \[L_{1}\left|2\right\rangle =\Biggl{[}2a_{\nu}+2e_{25\nu}\left(c^{\prime}\frac{K}{R}+RW\right)+2c^{\prime}\sum_{\mu=0}^{24}e_{\mu\nu}k^{\mu}+(h_{25\nu}-j_{25\nu})\left(c^{\prime}\frac{K}{R}-RW\right)\] \[+c^{\prime}\sum_{\mu=0}^{24}(h_{\mu\nu}-j_{\mu\nu})k^{\mu}\Biggr{]}C^{\nu}_{-1}\left|0\right\rangle_{A}+\Biggl{[}2b_{\nu}+2f_{25\nu}\left(c^{\prime}\frac{K}{R}-RW\right) \tag{6.12}\] \[+2c^{\prime}\sum_{\mu=0}^{24}f_{\mu\nu}k^{\mu}+(h_{25\nu}+j_{25\nu})\left(c^{\prime}\frac{K}{R}+RW\right)+c^{\prime}\sum_{\mu=0}^{24}(h_{\mu\nu}+j_{\mu\nu})k^{\mu}\Biggr{]}\tilde{\mathcal{C}}^{\nu}_{-1}\left|0\right\rangle_{A}=0,\] \[M_{1}\left|2\right\rangle =2\Bigg{[}(a_{\nu}-b_{\nu})+2c^{\prime}\frac{K}{R}e_{25\nu}+2c^{\prime}\sum_{\mu=0}^{24}e_{\mu\nu}k^{\mu}-c^{\prime}\frac{K}{R}(h_{25\nu}-j_{25\nu})\] \[\quad-c^{\prime}\sum_{\mu=0}^{24}(h_{\mu\nu}-j_{\mu\nu})k^{\mu}\Bigg{]}C_{-1}^{\nu}\left|0\right\rangle_{A}+2\Bigg{[}(a_{\nu}-b_{\nu})-2c^{\prime}\frac{K}{R}f_{25\nu} \tag{6.13}\] \[\quad-2c^{\prime}\sum_{\mu=0}^{24}f_{\mu\nu}k^{\mu}+c^{\prime}\frac{K}{R}(h_{25\nu}+j_{25\nu})+c^{\prime}\sum_{\mu=0}^{24}(h_{\mu\nu}+j_{\mu\nu})k^{\mu}\Bigg{]}\tilde{\mathcal{C}}_{-1}^{\nu}\left|0\right\rangle_{A}=0,\] \[L_{2}\left|2\right\rangle =\Bigg{[}\frac{K}{R}(a_{25}+b_{25})+\frac{RW}{c^{\prime}}(a_{25}-b_{25})+\sum_{\mu=0}^{24}k^{\mu}(a_{\mu}+b_{\mu})+\frac{1}{2c^{\prime}}(e_{\,\mu}^{\mu}-f_{\,\mu}^{\mu})\Bigg{]}\left|0\right\rangle_{A}=0,
\tag{6.14}\] \[M_{2}\left|2\right\rangle =\Bigg{[}\frac{K}{R}(a_{25}-b_{25})+\sum_{\mu=0}^{24}k^{\mu}(a_{\mu}-b_{\mu})+\frac{1}{4c^{\prime}}\big{(}e_{\,\mu}^{\mu}+f_{\,\mu}^{\mu}-h_{\,\mu}^{\mu}\big{)}\Bigg{]}\left|0\right\rangle_{A}=0. \tag{6.15}\] The \(M_{0}\) condition (6.11) along with the \(L_{1}\) condition (6.12) give us \(a_{\mu}=b_{\mu}=0\). Together, (6.11), (6.12) and (6.13) give us \(h_{\mu\nu}=2e_{\mu\nu}=2f_{\mu\nu}\). Additionally, these three equations also lead us to the following constraints on \(e_{\mu\nu}\) and \(j_{\mu\nu}\) \[e_{25\nu}\frac{K}{R}+\sum_{\nu=0}^{24}e_{\mu\nu}k^{\mu}=j_{25\nu}\frac{K}{R}+\sum_{\nu=0}^{24}j_{\mu\nu}k^{\mu}=0,\hskip 28.452756ptj_{25\nu}W=0. \tag{6.16}\] This means that for physical states the coefficients \(j_{25\nu}\) can be non-zero only for states with \(W=0\). Gathering all terms, the physical state (6.10) can be written in the form: \[\left|2,K,W\right\rangle =e_{\mu\nu}\Big{[}C_{-1}^{\mu}C_{-1}^{\nu}\left|0\right\rangle_{A}+2C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\left|0\right\rangle_{A}+\tilde{\mathcal{C}}_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\left|0\right\rangle_{A}\Big{]}+j_{\mu\nu}C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\left|0\right\rangle_{A}. \tag{6.17}\] In the above equation, \(KW=0\). Here too, the norm of the states vanishes, like for the level 1 states. However, the mass spectrum is modified due to the presence of the internal momentum \(K\). For states having \(W=0\), the mass squared will be the same as (6.4). Here, unlike for level 0 or level 1 states, \(K\) can assume any integral value, hence there are infinitely many such states in the spectrum. For winding states (\(W\neq 0\)), however, the mass squared will be zero, since for those states \(K=0\), leading to yet another infinite tower of massless states. Note that this is a particular property of level 2 states, since \(KW=0\) in this case.

#### Higher level states

As already stated, the presence of the \(KW\) term in the \(L_{0}\) physical state condition implies that states of any level can be physical. A generic state of level \(l\) (\(l=r+s\)) can be written as \[\left|r,s,k^{\mu},K,W\right\rangle =\sum_{j}\rho_{j}\Bigg{(}\prod_{i=1}^{p}C_{-m_{i}}^{a_{i}}\prod_{j=1}^{q}\tilde{\mathcal{C}}_{-n_{j}}^{b_{j}}\Bigg{)}_{j}\left|0,k^{\mu},K,W\right\rangle. \tag{6.18}\] In the above, \(a_{i}\) and \(b_{j}\) are powers of the oscillators \(C\) and \(\tilde{\mathcal{C}}\) respectively. The level of the state \(l\) and \(KW\) are given by \[l=r+s=\sum_{i}^{p}a_{i}m_{i}+\sum_{i}^{q}b_{i}n_{i},\ \ \ \ KW=2-l. \tag{6.19}\] Hence, we can have several possible states at level \(l\), depending on the number of possible values of \(K\) and \(W\) satisfying \(KW=2-l\). For a state of level \(l\), there will be \(2(l+1)\) nontrivial physical state conditions 14. Leaving aside the \(L_{0}\) condition, which we have already used to determine \(KW\), we are left with \(2l+1\) physical state conditions. The \(M_{0}\) condition for higher levels, too, will give us the same mass shell condition as (6.4), and the remaining physical state conditions put constraints on the coefficients \(\rho_{j}\). Lastly, note that for \(l\neq 2\), both \(K\) and \(W\) have to be non-zero, and as a result we will not get massless states at higher levels. Footnote 14: For level \(l\), the nontrivial physical state conditions will come from \(L_{n}\) and \(M_{m}\) with \(n,m\in\{0,1,2,....,l\}\).
That means we will have \((l+1)\) \(L_{n}\) physical state conditions and \((l+1)\) \(M_{n}\) physical state conditions, leaving us with \(2(l+1)\) nontrivial physical state conditions in total.

### Limit from tensile closed twisted string

The twisted parent theory of the flipped tensionless string is already a peculiar theory. The effect of compactification on such closed bosonic twisted strings has been studied in [68] and [69]. The physical state conditions for the twisted string theory are given below \[(\mathcal{L}_{n}-a\delta_{n,0})\ket{phys}=0,\ \ (\bar{\mathcal{L}}_{-n}-a\delta_{n,0})\ket{phys}=0\ \ \ \ \forall n>0, \tag{6.20}\] which is seemingly a mixture of lowest weight and highest weight representations. These physical state conditions will lead us to the following level matching condition [68] when we compactify the target space on a single circle: \[N+\widetilde{N}+KW=2. \tag{6.21}\] The level matching condition turns out to be identical to the one we have found in our intrinsic analysis above, with tensile oscillators replaced by tensionless ones. From the discussion in [61] one can see that when we take the tensionless limit, level 2 tensile twisted string states with finite norm give us level 2 null physical states of the tensionless twisted string. In a similar vein, for the case of a compactified background we can consider the following level 2 tensile state \[\ket{\psi}=\xi_{\mu\nu}\alpha_{-1}^{\mu}\bar{\alpha}_{-1}^{\nu}\ket{0,0,k^{\mu},K,W}_{A}, \tag{6.22}\] where either \(K=0\), or \(W=0\). Under the tensionless limit this state will reduce to the physical states given in (6.17). This is comparable to what happens in the non-compactified case, and the reader can look at Appendix (C) for details. However, as we have seen, the new level matching condition dictates that for suitable values of \(K\) and \(W\) we can get physical states at any level. Since the tensile theory too has the same level matching condition, this statement is true for the tensile twisted theory as well. Moving on, let us consider starting from the following tensile physical state of level 1 instead: \[\ket{\Phi}=l_{\mu}\alpha^{\mu}_{-1}\ket{0,0,k^{\mu},K,W}_{A},\ \ \ \ KW=1. \tag{6.23}\] Using the Bogoliubov relation between \(\{\alpha,\bar{\alpha}\}\) and \(\{C,\tilde{\mathcal{C}}\}\), we can rewrite this state as \[\ket{\Phi}=\frac{1}{2}l_{\mu}\left[\left(\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\right)C^{\mu}_{-1}-\left(\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\right)\tilde{\mathcal{C}}^{\mu}_{-1}\right]\ket{0}_{A}=a_{\mu}\ket{\Phi_{1}}+\epsilon a_{\mu}\ket{\Phi_{2}}. \tag{6.24}\] where \[a_{\mu}=\frac{l_{\mu}}{2\sqrt{\epsilon}},\ \ \ \ket{\Phi_{1}}=C^{\mu}_{-1}\ket{0}_{A}+\tilde{\mathcal{C}}^{\mu}_{-1}\ket{0}_{A},\ \ \ \ \ket{\Phi_{2}}=C^{\mu}_{-1}\ket{0}_{A}-\tilde{\mathcal{C}}^{\mu}_{-1}\ket{0}_{A}. \tag{6.25}\] Clearly, in the strict tensionless limit \(\epsilon\to 0\), this state will reduce just to \(a_{\mu}\ket{\Phi_{1}}\). Recalling from section (6.2), we identify this state as a physical state 15. The combination in the state \(\ket{\Phi_{2}}\) is not a physical state combination, as we know from the intrinsic analysis. And from the limiting analysis we see that this state, appearing at subleading order, vanishes in the tensionless limit.
Both \(\ket{\Phi_{1}}\) and \(\ket{\Phi_{2}}\) are null states here, but when we take the tensionless limit of the norm of the total level 1 state \(\ket{\Phi}\), it is preserved: Footnote 15: In section (6.2) we have seen that a generic level 1 state as given in (6.5) is a physical state provided \(a_{\mu}=b_{\mu}\) (equation (6.8)) \[\lim_{\epsilon\to 0}\langle\Phi|\Phi\rangle=l_{\mu}l_{\nu}\big{[}\cosh^{2}\theta\bra{0}C^{\mu}_{1}C^{\nu}_{-1}\ket{0}+\sinh^{2}\theta\bra{0}\tilde{\mathcal{C}}^{\mu}_{1}\tilde{\mathcal{C}}^{\nu}_{-1}\ket{0}\big{]}=l_{\mu}l_{\nu}\eta^{\mu\nu}=l_{\mu}l^{\mu}. \tag{6.26}\] The reason for this is that the non-zero part of \(\langle\Phi|\Phi\rangle\) comes from the cross term \(\langle\Phi_{1}|\Phi_{2}\rangle\). Advancing in the same way, one can easily check that the state \[\ket{\chi}=l_{\mu}\bar{\alpha}^{\mu}_{-1}\ket{0,k^{\mu},K,W}_{A},\ \ \ \ KW=1\] would also give the level 1 null physical state of section (6.2) under the limit. Although we do not provide here the explicit calculation of the tensionless limit of higher level (i.e. level greater than 2) tensile flipped states, we can make some generalised comments. Firstly, it is clear that a tensile physical state of any level will reduce to a tensionless null physical state of the same level. However, the norm of the original state will be preserved even in the tensionless limit. Secondly, the internal momentum \(K\) and winding number \(W\) will also remain intact. Now, let us recall the mass spectrum of the tensile twisted string, which is given by [68] \[m^{2}=\frac{K^{2}}{R^{2}}+\frac{1}{\alpha^{\prime 2}}W^{2}R^{2}+\frac{2}{\alpha^{\prime}}(N-\widetilde{N}). \tag{6.27}\] If we take the limit \(\alpha^{\prime}\rightarrow\frac{c^{\prime}}{\epsilon}\), \(\epsilon\to 0\), then the second and third terms in the mass squared expression vanish, much like they did in the case of the Induced vacuum, and we are left with a mass spectrum identical to (6.4). As we have seen in our intrinsic analysis, (6.4) happens to be the mass spectrum for all levels, not just for level zero, and hence the tensionless limit from the tensile twisted string theory is consistent with the intrinsically developed tensionless flipped string theory.

### A brief look at multiple dimensions compactification

Now let us consider this theory on a background with \(d\) compactified dimensions. We address this very briefly here. Using (3.8), (3.9) and (3.10), and the redefinition in (3.13), we obtain an expression for \(k_{L,R}^{I}\) which is the same as (3.17), namely \[k_{L,R}^{I}=\frac{1}{\sqrt{2}}\Big{(}\sqrt{c^{\prime}}K^{I}\pm\frac{1}{\sqrt{c^{\prime}}}W^{I}R\Big{)}.\] \(L_{0}\) and \(M_{0}\) in their normal ordered form in terms of \(k_{L,R}^{I}\) will be \[L_{0}=N+\bar{N}+RK^{I}W_{I}=N+\bar{N}+\sum_{i=1}^{d}k_{i}\omega^{i}, \tag{6.28}\] \[\begin{split} M_{0}=\frac{1}{2}&\left(k_{L}^{I}k_{IL}+k_{R}^{I}k_{IR}+2k_{L}^{I}k_{IR}\right)+c^{\prime}k^{2}+N-\bar{N}+X+Y\\ &=c^{\prime}K^{I}K_{I}+c^{\prime}k^{2}+N-\bar{N}+X+Y\\ \implies M_{0}=\frac{c^{\prime}}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}+c^{\prime}k^{2}+N-\bar{N}+X+Y.\end{split} \tag{6.29}\] The constraint on the level of physical states in the flipped vacuum will be \[r+s+\sum_{i=1}^{d}k_{i}\omega^{i}=2, \tag{6.30}\] where as usual \(r\) and \(s\) respectively denote the eigenvalues of the number operators \(N\) and \(\bar{N}\).
The mass of a generic physical state will be given by \[m^{2}\left|r,s,k^{i},\omega^{i}\right\rangle=K^{I}K_{I}\left|r,s,k^{i},\omega^{i}\right\rangle=\Big{(}\frac{1}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}\Big{)}\left|r,s,k^{i},\omega^{i}\right\rangle. \tag{6.31}\] One can similarly show that the above formula appears as a consistent limit of the mass operator in toroidal compactifications of the twisted parent theory. Details can be found in Appendix (C). ### Summary We summarize this section as follows: * The \(L_{0}\) physical state condition puts restrictions on the level, internal momentum \(K\) and winding number \(W\). Unlike the non-compactified case, here we see that the level of physical states is not truncated at two. Instead, for appropriate values of \(K\) and \(W\), a state of any level can be physical (section 6.1). At level 2, we get an infinite number of physical states. * We analyzed the physical states of level 0, level 1 and level 2 and studied the constraints put on them by the non-trivial physical state conditions. We also calculated the mass spectrum for each level (see equations (6.4), (6.8), (6.16), (6.17)). * We took the ultra-relativistic limit of the physical states of the parent tensile theory and found that physical states of level 1 and level 2 in the parent theory reduce respectively to physical states of level 1 and level 2 in the tensionless theory (section 6.3). It is reasonable to speculate that a tensile physical state of any level will reduce to a tensionless physical state of the same level. We also saw that the mass spectrum obtained directly by taking the tensionless limit of the parent theory matches the intrinsically derived mass spectrum. * We then considered the \(d\) dimensional compactification of the theory on a torus \(T^{d}\) and computed the level constraint (6.30) and mass spectrum (6.31) for the same. ## Conclusions and future directions ### Summary of the work In this paper, we first reviewed classical tensionless closed string theory by studying the action and its symmetries [16]. Then we revisited the discussion in [61] about the canonical quantization of bosonic tensionless closed string theory. We have discussed all three consistent ways of quantizing tensionless string theory, based on three distinct vacua, namely the oscillator, Induced, and flipped vacuum. The physical state conditions for all three theories have been analysed. For the oscillator and flipped vacua, the physical state conditions put constraints on the level of the states, a feature absent in the Induced case. In all three theories the physical state conditions give us the mass spectrum. We then moved on to investigate all three tensionless quantum string theories in a target spacetime compactified on a circle \(S^{1}\). The analysis has been done in a two-pronged approach, both from an intrinsic tensionless theory compactified on a circle, and from taking consistent limits of the respective tensile theories compactified on a circle. The consistency of our theories is established via an explicit matching of results from both approaches. We see that for the oscillator vacuum, the level matching condition gets modified exactly like in the case of tensile string theory. The Induced vacuum, quite expectedly, has no discernible constraints put on the level of states. On the other hand, the constraint on the level of the states in the flipped vacuum case is identical to that in its twisted tensile parent theory.
As for the mass spectrum, we notice that, unlike the tensile theory, none of the compactified tensionless theories seems to respect T-duality. However, compactification does replace the role of \(\alpha^{\prime}\) with \(c^{\prime}\) in the sense of the construction of zero modes, and there is even a semblance of a self-dual radius at \(R\sim\sqrt{c^{\prime}}\), where extra massless states occur in the oscillator spectrum. The meaning of this symmetry is not clear from the present discussion and needs further investigation. Compactification also introduces interesting new states in the spectra of the other vacua. Finally, we have provided a glimpse into the generalisation of our analysis of all three theories to a target space with \(d\) dimensions compactified on a torus \(T^{d}\). ### Future plans There are several directions that could be pursued in the near future. In tensile string theory, when we add a constant Kalb-Ramond \(B\) field to the Polyakov action, any observable effect of it on the spectrum can be realised only in a compactified target spacetime. As a result, the effect of a constant \(B\) field is often discussed along with compactification of string theory [66, 67, 76]. In a companion work [75], we have considered tensionless strings with a constant \(B\) field. There again, we have seen that compactification is necessary. In this work, we have observed that none of the mass spectra of the three quantum theories respects T-duality as a symmetry. This probably owes to the fact that the definition of T-duality itself involves the tension in such a way that it is not immediately obvious how to define such a transformation in the tensionless limit. Since T-duality maps string theories on two different target spaces, it would be significant to investigate what happens to the T-dual theory under the tensionless limit. The partition function of the tensile string in a compactified background is [67] \[Z(\tau,\bar{\tau})=\frac{1}{|\eta(\tau)|^{2}}\sum_{K,W\in\mathbb{Z}}q^{\frac{1}{4}\left(\sqrt{\alpha^{\prime}}\frac{K}{R}-\frac{RW}{\sqrt{\alpha^{\prime}}}\right)^{2}}\bar{q}^{\frac{1}{4}\left(\sqrt{\alpha^{\prime}}\frac{K}{R}+\frac{RW}{\sqrt{\alpha^{\prime}}}\right)^{2}} \tag{7.1}\] This partition function turns out to be invariant under the transformation \(R\to\frac{\alpha^{\prime}}{R}\). That means T-duality manifests itself not merely in the mass spectrum of tensile string theory but also in the partition function. Hence an obvious future direction for us would be to calculate the partition functions of all three tensionless theories in order to see whether they also violate T-duality, or instead become manifestly invariant under some alternative symmetry. Even without an explicit realization of T-duality in the mass formula, we do see a number of extra massless states occurring at the special point \(R\sim\sqrt{c^{\prime}}\), at least for the oscillator vacuum spectrum. Remember that for compact tensile strings, from a 25-dimensional perspective, one would have two massless Kaluza-Klein gauge fields transforming under the \(U(1)_{L}\times U(1)_{R}\) group, and one extra massless scalar. At the self-dual radius \(R=\sqrt{\alpha^{\prime}}\), this gauge symmetry was enhanced to \(SU(2)_{L}\times SU(2)_{R}\) and a plethora of new massless states emerged. It remains an open question whether such a symmetry enhancement happens in the tensionless case as well. To answer this, one must better understand the vertex operator structure associated to the current algebra of tensionless string theory.
Since the worldsheet here is Carrollian, one can hope that the Non-Lorentzian Kac-Moody algebras discussed, for example, in [77] may be of help in this endeavour. For the Induced vacuum, there are still more interesting regimes one needs to explore. For example, in [14] it was shown that the Induced vacuum state, upon which all the perturbative tensile states condense, can be written as a Neumann boundary state along all directions. Since these states are equivalent to space-filling D-branes, one could show open string degrees of freedom appearing from closed string ones in a Bose-Einstein-like condensation setting. We have not really touched on the analogous phenomena in the current manuscript, and it will be interesting to see how the physics changes with one (or more) directions compactified. Since such a phase transition can be directly linked with the Hagedorn transition, it remains to be seen how the extra compactified direction sets the corresponding temperature scale. For the flipped vacuum, since the underlying representation is highest weight BMS\({}_{3}\), one hopes to use well-known BMSFT techniques to understand more about the symmetries of the theory. Intriguingly, in the compactified case we no longer have a truncated spectrum, which makes the theory much nicer to work with. A point to note is that we do not see any symmetry enhancement points in the mass spectrum for this theory, whereas the parent twisted tensile theory was claimed to have an infinite number of such points [69]. This surely requires more scrutiny in the future. We also wish to generalise our analysis to the tensionless superstring theories. The underlying supersymmetric algebra on the worldsheet of the closed tensionless superstring could possibly be the Homogeneous [38] or the Inhomogeneous [39] Super BMS algebra (SBMS\({}_{H/I}\)). These two theories come about from two different Inonu-Wigner contractions of two copies of the super-Virasoro algebra, and are significantly different from each other at both the classical and the quantum level. A proper classification of vacuum structures for supersymmetric tensionless strings is still missing from the literature. It would be interesting to compile a full classification of those for both non-compactified and compactified theories. We hope to come back to this problem soon. ## Acknowledgements The authors are indebted to Arjun Bagchi and Shailesh Lal for numerous illuminating discussions and constructive comments on the draft. It is also a pleasure to thank Sudipta Dutta, Kedar Kolekar, Mangesh Mandlik, Punit Sharma and Mirian Tsulaia for interesting discussions and useful comments. ArB is supported by the Quantum Gravity Unit of the Okinawa Institute of Science and Technology (OIST). RC is supported by the CSIR grant File No: 09/092(0991)/2018-EMR-I. PP would like to acknowledge the support provided by SPO/SERB/PHY/2019525.
## Appendix A Light-cone quantization Let us apply the light-cone gauge on \(X^{+}\), where \(X^{\pm}\) are defined as \[X^{\pm}=\frac{1}{\sqrt{2}}\big{(}X^{0}\pm X^{D-1}\big{)}.\] (A.1) The light-cone gauge on \(X^{+}\) is \[X^{+}=x^{+}+c^{\prime}k^{+}\tau.\] (A.2) This gauge implies \[C^{+}_{n}=\tilde{C}^{+}_{n}=0\hskip 14.226378pt\forall n\neq 0.\] (A.3) Using the equations of motion (2.5), we can express \(C^{-}\) in terms of the transverse coordinates \(i\) (\(i\in\{1,2,\ldots,D-2\}\)) as follows \[\begin{split}& C^{-}_{m}=\frac{1}{8C^{+}_{0}}\sum_{n}:\Big{(}C^{i}_{m-n}+\tilde{C}^{i}_{-(m-n)}\Big{)}\left(3C^{i}_{n}-\tilde{C}^{i}_{-n}\right):\\ &\tilde{C}^{-}_{m}=\frac{1}{8C^{+}_{0}}\sum_{n}:\Big{(}C^{i}_{-(m-n)}+\tilde{C}^{i}_{m-n}\Big{)}\left(3C^{i}_{n}-\tilde{C}^{i}_{-n}\right):.\end{split}\] (A.4) Applying this light-cone gauge to \(L_{0}\) and \(M_{0}\), we can rewrite them in terms of transverse coordinates only. Their final expressions after applying the gauge choice are the same as (2.30); however, now \(\mathcal{N}\), \(\widetilde{\mathcal{N}}\) and \(X\) are expressed purely in terms of transverse oscillators \[\mathcal{N}=\sum_{i=1}^{D-2}\sum_{m>0}C^{i}_{-m}C^{i}_{m},\hskip 14.226378pt\widetilde{\mathcal{N}}=\sum_{i=1}^{D-2}\sum_{m>0}\tilde{C}^{i}_{-m}\tilde{C}^{i}_{m},\hskip 14.226378ptX=\sum_{i=1}^{D-2}\sum_{m>0}C^{i}_{m}\tilde{C}^{i}_{m}.\] (A.5) Now let us consider the following state of level 2 (n=1): \[C^{i}_{-1}\tilde{C}^{j}_{-1}\ket{0,k^{\mu}}. \tag{104}\] Using an argument similar to [67, 73, 76], it can be proved that these states have to be massless in order to make the theory Lorentz symmetric16. Let us consider massive particles in \(\mathbb{R}^{1,D-1}\). In the rest frame of a massive particle, the momentum becomes \(k^{\mu}=(m,\overrightarrow{0})\), and it becomes symmetric under Wigner's little group of \(SO(D-1)\) spatial rotations. Hence, any massive particle in \(\mathbb{R}^{1,D-1}\) essentially forms an \(SO(D-1)\) representation. However, from (104), we see that there could be only \((D-2)^{2}\) states at level 2, and therefore these states cannot fit into any representation of \(SO(D-1)\). The only way to resolve this inconsistency is to demand that the level two states in (104) are massless, and as a consequence, there is no rest frame. For a massless particle, we can choose a frame where the momentum of the particle is \(k^{\mu}=(k,0\cdots 0,k)\), which is symmetric under Wigner's little group \(SO(D-2)\). There would be no problem in fitting the states in (104) into an \(SO(D-2)\) representation. Footnote 16: In [67, 73, 76], it has been argued using Wigner's classification that tensile bosonic string theory can respect Lorentz invariance iff the first excited states \(\alpha^{i}_{-1}\tilde{\alpha}^{j}_{-1}\ket{0,k^{\mu}}\) are massless. The conclusion was that the normal ordering constant of \(L_{0}\) is \(a=1\). Equation (36) dictates that setting the mass of the states \(\ket{1,1}\) to zero essentially means \(a_{M}=2\). So, the mass spectrum in tensionless string theory from the oscillator vacuum is \[m^{2}\ket{n,n}=\frac{1}{c^{\prime}}(2n-2)\ket{n,n}. \tag{105}\] Both \(a_{L}\) and \(a_{M}\) can also be calculated directly from the expressions of \(L_{0}\) and \(M_{0}\) using the commutators (22).
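Before turning to that direct computation, here is a quick symbolic sanity check of the little-group counting invoked above: the \((D-2)^{2}\) states of (104) fill out exactly the symmetric-traceless, antisymmetric and singlet decomposition of \(SO(D-2)\), independently of the specific value of \(D\). A minimal sympy sketch:

```python
import sympy as sp

D = sp.symbols('D', integer=True, positive=True)
m = D - 2  # dimension of the massless little group SO(D-2)

sym_traceless = m*(m + 1)/2 - 1  # symmetric traceless 2-tensor of SO(D-2)
antisym = m*(m - 1)/2            # antisymmetric 2-tensor
singlet = 1                      # the trace part

# The (D-2)^2 level-2 states fit exactly into these SO(D-2) irreps
assert sp.simplify(sym_traceless + antisym + singlet - m**2) == 0
```

No analogous exact fit into \(SO(D-1)\) representations is available, which is the content of the masslessness argument in the text.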
It is not hard to see that \[\frac{1}{2}\sum_{i=1}^{D-2}\sum_{m=-\infty}^{\infty}C^{i}_{-m}C^{i}_{m}=\frac{1}{2}\sum_{i=1}^{D-2}\sum_{m=-\infty}^{\infty}:C^{i}_{-m}C^{i}_{m}:+\frac{D-2}{2}\sum_{m=1}^{\infty}m, \tag{106}\] and the same for the right-moving oscillators \(\tilde{C}\). Famously, using \(\zeta\)-function regularisation one can write \[\sum_{m=1}^{\infty}m=-\frac{1}{12}. \tag{107}\] Using this in the expression (103), one can see that \[\mathcal{N}=:\mathcal{N}:-\frac{D-2}{24}\quad\text{ and}\quad\widetilde{\mathcal{N}}=:\widetilde{\mathcal{N}}:-\frac{D-2}{24}. \tag{108}\] Armed with these, we arrive at the following \[\begin{split} L_{0}=\mathcal{N}-\widetilde{\mathcal{N}}=:\mathcal{N}:-:\widetilde{\mathcal{N}}:=:L_{0}:\\ M_{0}=c^{\prime}k^{2}+\mathcal{N}+\widetilde{\mathcal{N}}+X+X^{\dagger}=:M_{0}:-\frac{D-2}{12}.\end{split} \tag{109}\] Here we have used \(X=:X:\) since \(C\) and \(\tilde{C}\) commute with each other. From (A.11) we find the following values of \(a_{L}\) and \(a_{M}\) \[a_{L}=0,\qquad a_{M}=\frac{D-2}{12}.\] (A.12) Since we already know that \(a_{M}=2\), we can see that the only dimension where this quantum theory can make sense is \(D=26\). Hence this is the critical dimension for the oscillator vacuum tensionless string theory. Of course, this approach to calculating the critical dimension is rather heuristic; the rigorous way is to demand the closure of the background Lorentz algebra, which has been done in [62]. ## Appendix B The fate of tensile perturbative states at the tensionless limit Following [14], we will show here that under the tensionless limit the perturbative states for a non-compactified background condense onto the Induced vacuum. Let us consider a perturbative state in the tensile string theory \[\ket{\Psi}=\xi_{\mu\nu}\alpha_{-n}^{\mu}\tilde{\alpha}_{-n}^{\nu}\ket{0}_{\alpha},\] (B.1) where \(\xi_{\mu\nu}\) is an arbitrary polarisation tensor. The state \(\ket{0}_{\alpha}\) evolves under the tensionless limit (\(T\to\epsilon T\)) as follows \[\ket{0}_{\alpha}=\ket{0}_{I}+\epsilon\ket{I_{1}}+\epsilon^{2}\ket{I_{2}}+\cdots\] (B.2) The conditions defining the tensile vacuum are \[\alpha_{n}\ket{0}_{\alpha}=\tilde{\alpha}_{n}\ket{0}_{\alpha}=0\,\quad\forall n>0.\] (B.3) Now, the modes \(A_{n}\) and \(B_{n}\) are related to \(\alpha_{n}\) and \(\tilde{\alpha}_{n}\) as \[\alpha_{n}=\frac{1}{2}\Big{[}\sqrt{\epsilon}A_{n}+\frac{1}{\sqrt{\epsilon}}B_{n}\Big{]},\quad\tilde{\alpha}_{n}=\frac{1}{2}\Big{[}-\sqrt{\epsilon}A_{-n}+\frac{1}{\sqrt{\epsilon}}B_{-n}\Big{]}.\] (B.4) Hence, under the tensionless limit, the conditions in (B.3) evolve into the order-by-order actions mentioned in (5.16). The state \(\ket{\Psi}\) given in (B.1) will then emerge in the tensionless limit as the following state \[\ket{\Psi}=\frac{1}{\epsilon}\Big{(}B_{-n}+\epsilon A_{-n}\Big{)}\Big{(}B_{n}-\epsilon A_{n}\Big{)}\Big{(}\ket{0}_{I}+\epsilon\ket{I_{1}}+\epsilon^{2}\ket{I_{2}}+\cdots\Big{)}.\] (B.5) Recalling the algebra satisfied by the \(A\)'s and \(B\)'s as given in (2.9), we see that \(\ket{\Psi}\) as given in (B.5) reduces to the following \[\ket{\Psi}\to\Xi\ket{0}_{I},\quad\Xi=2n\eta^{\mu\nu}\xi_{\mu\nu}.\] (B.6) Hence, for a non-compactified background spacetime, any perturbative state will condense onto the Induced vacuum. #### Non-perturbative States The Induced vacuum belongs to the Induced representation of the BMS algebra.
As discussed in [61] and [74], in the Induced representation of the BMS algebra we can non-perturbatively define states built on any state \(\ket{M,s}\), and such states have a well defined norm. One such state in the BMS Induced representation is given by \[\ket{\phi}=\exp\left(i\sum_{n}\omega_{n}L_{n}\right)\ket{M,s}, \tag{114}\] where \(\omega_{n}\) are complex numbers satisfying \(\omega_{n}^{*}=\omega_{-n}\). It can be seen that this state does satisfy the physical state conditions (39). The nature of such non-perturbative states, however, is yet to be determined. ## Appendix C Physical states of flipped vacuum With the value of \(a_{L}=2\), we can rewrite the \(L_{0}\) physical state condition in (46) as \[(N+\widetilde{N}-2)\ket{phys}=0. \tag{115}\] The implication is that any physical state must be of level 2. A generic state of level 2 in the tensionless theory is given by \[\ket{2} =a_{\mu}C_{-2}^{\mu}\ket{0}_{A}+e_{\mu\nu}C_{-1}^{\mu}C_{-1}^{\nu}\ket{0}_{A}+h_{\mu\nu}C_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0}_{A} \tag{116}\] \[+b_{\mu}\tilde{C}_{-2}^{\mu}\ket{0}_{A}+f_{\mu\nu}\tilde{C}_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0}_{A}+j_{\mu\nu}C_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0}_{A},\] where we have assumed \(h_{\mu\nu}\) to be symmetric and \(j_{\mu\nu}\) to be antisymmetric. Applying the physical state conditions (46) to (116), one can put constraints on the coefficients in (116). For states of level 2, the conditions in (46) with \(n\geq 3\) are trivially satisfied and hence, excluding the \(L_{0}\) condition (which has already been used), we are left with only five non-trivial physical state conditions, namely the \(M_{0}\), \(L_{1}\), \(M_{1}\), \(L_{2}\) and \(M_{2}\) conditions. The \(M_{0}\) condition gives \[\begin{split} M_{0}\ket{2}&=\Big{[}(c^{\prime}k^{2}+2)a_{\mu}-2b_{\mu}\Big{]}C_{-2}^{\mu}\ket{0}_{A}+\Big{[}(c^{\prime}k^{2}-2)b_{\mu}+2a_{\mu}\Big{]}\tilde{C}_{-2}^{\mu}\ket{0}_{A}\\ &+\Big{[}(c^{\prime}k^{2}+2)e_{\mu\nu}-h_{\mu\nu}\Big{]}C_{-1}^{\mu}C_{-1}^{\nu}\ket{0}_{A}+\Big{[}(c^{\prime}k^{2}-2)f_{\mu\nu}+h_{\mu\nu}\Big{]}\tilde{C}_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0}_{A}\\ &+\Big{[}c^{\prime}k^{2}(h_{\mu\nu}+j_{\mu\nu})+2(e_{\mu\nu}-f_{\mu\nu})\Big{]}C_{-1}^{\mu}\tilde{C}_{-1}^{\nu}\ket{0}_{A}=0.\end{split} \tag{117}\] The \(L_{1}\), \(M_{1}\), \(L_{2}\) and \(M_{2}\) conditions respectively give \[\begin{split} L_{1}\ket{2}&=\Big{[}2a_{\nu}+c^{\prime}(2e_{\mu\nu}+h_{\mu\nu}-j_{\mu\nu})k^{\mu}\Big{]}C_{-1}^{\nu}\ket{0}_{A}\\ &+\Big{[}2b_{\nu}+c^{\prime}(2f_{\mu\nu}+h_{\mu\nu}+j_{\mu\nu})k^{\mu}\Big{]}\tilde{C}_{-1}^{\nu}\ket{0}_{A}=0,\\ M_{1}\ket{2}&=2\Big{[}(a_{\nu}-b_{\nu})+c^{\prime}(2e_{\mu\nu}-h_{\nu\mu}+j_{\mu\nu})k^{\mu}\Big{]}C_{-1}^{\nu}\ket{0}_{A}\\ &+2\Big{[}(a_{\nu}-b_{\nu})-c^{\prime}(2f_{\mu\nu}-h_{\mu\nu}-j_{\mu\nu})k^{\mu}\Big{]}\tilde{C}_{-1}^{\nu}\ket{0}_{A}=0,\\ L_{2}\ket{2}&=\Big{[}2c^{\prime}k\cdot(a+b)+(e_{\mu}^{\mu}-f_{\mu}^{\mu})\Big{]}\ket{0}_{A}=0,\\ M_{2}\ket{2}&=\Big{[}4c^{\prime}k\cdot(a-b)+(e_{\mu}^{\mu}+f_{\mu}^{\mu})-h_{\mu}^{\mu}\Big{]}\ket{0}_{A}=0.\end{split} \tag{118}\] Solving the \(M_{0}\) condition in (C.3) gives us \(a_{\mu}=b_{\mu}\) and also that \(k^{2}=0\), implying that the state has to be massless.
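To see concretely how the \(M_{0}\) condition forces both conclusions, one can solve the \(C_{-2}^{\mu}\)/\(\tilde{C}_{-2}^{\mu}\) sector of (C.3) symbolically. A minimal sympy sketch, restricted to this sector for brevity, with `x` standing in for \(c^{\prime}k^{2}\):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')  # x stands for c' k^2

# Coefficient matrix of the C_{-2} and tilde C_{-2} terms in the M_0 condition (C.3)
M = sp.Matrix([[x + 2, -2],
               [2, x - 2]])
print(sp.factor(M.det()))  # -> x**2: a nontrivial (a, b) forces c' k^2 = 0

# With x = 0, both equations collapse to the single relation a = b
eqs = [(x + 2)*a - 2*b, (x - 2)*b + 2*a]
print(sp.solve([e.subs(x, 0) for e in eqs], [a, b], dict=True))  # [{a: b}]
```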
The equations in (C.4) lead us to the following constraints on the coefficients \[e_{\mu\nu}=f_{\mu\nu}=\frac{1}{2}h_{\mu\nu},\ \ \ \ e_{\mu\nu}k^{\mu}=j_{\mu\nu}k^{\mu}=0,\ \ \ \ a_{\mu}=0.\] (C.5) Hence the resulting level 2 physical state becomes \[\ket{2}=e_{\mu\nu}\Big{[}C_{-1}^{\mu}C_{-1}^{\nu}\ket{0}_{A}+2C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\ket{0}_{A}+\tilde{\mathcal{C}}_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\ket{0}_{A}\Big{]}+j_{\mu\nu}C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\ket{0}_{A}.\] (C.6) The norm of this state vanishes. As discussed in [61], these states are GCA null states having weights \(\Delta=2\), \(\xi=0\). Given the fact that this theory comes from a direct tensionless limit of the twisted string theory, the emergence of null physical states at the tensionless limit might seem bizarre. However, it has been shown in [61] that positive norm physical states of the tensile twisted string become null in the tensionless limit. In order to understand this, let us consider a state in the tensile twisted theory \[\ket{\Psi}=\xi_{\mu\nu}\alpha_{-1}^{\mu}\bar{\alpha}_{-1}^{\nu}\ket{0}_{A}.\] (C.7) Using the Bogoliubov relation between the \(\alpha\) and the \(C\) oscillators, we can determine the tensionless limit of this state \[\lim_{\epsilon\to 0}\ket{\Psi}=\xi_{\mu\nu}\Big{[}\cosh\theta\;C_{-1}^{\mu}-\sinh\theta\;\tilde{\mathcal{C}}_{-1}^{\mu}\Big{]}\Big{[}\sinh\theta\;C_{-1}^{\nu}-\cosh\theta\;\tilde{\mathcal{C}}_{-1}^{\nu}\Big{]}\ket{0}_{A}=\ket{\phi_{1}}+\ket{\phi_{2}}+\ket{\phi_{3}},\] (C.8) where \[\cosh\theta=\frac{1}{2}\left(\sqrt{\epsilon}+\frac{1}{\sqrt{\epsilon}}\right),\ \ \sinh\theta=\frac{1}{2}\left(\sqrt{\epsilon}-\frac{1}{\sqrt{\epsilon}}\right),\] and \[\ket{\phi_{1}} =\frac{\epsilon}{2\sqrt{2}}\xi_{\mu\nu}\left[C_{-1}^{\mu}C_{-1}^{\nu}-2C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}+\tilde{\mathcal{C}}_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\right]\ket{0}_{A}\] (C.9) \[\ket{\phi_{2}} =\frac{1}{2\epsilon\sqrt{2}}\xi_{\mu\nu}\left[C_{-1}^{\mu}C_{-1}^{\nu}+2C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}+\tilde{\mathcal{C}}_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}\right]\ket{0}_{A}\] \[\ket{\phi_{3}} =\xi_{\mu\nu}\left[C_{-1}^{\mu}\tilde{\mathcal{C}}_{-1}^{\nu}-C_{-1}^{\nu}\tilde{\mathcal{C}}_{-1}^{\mu}\right]\ket{0}_{A}.\] As pointed out in [61], the norms of all three states in (C.9) vanish. However, the norm of \(\ket{\Psi}\) still remains intact, since the non-zero part of the norm actually comes from \(\bra{\phi_{1}}\ket{\phi_{2}}\). The norm of \(\ket{\Psi}\) is found to be \[\bra{\Psi}\ket{\Psi}=\xi^{\mu\nu}\xi_{\mu\nu}.\] (C.10) Looking at (C.9) and (C.6), one can notice that the combination given in \(\ket{\phi_{1}}\) is not a physical state combination when we look at the theory intrinsically. From the limiting perspective, it is the \(\mathcal{O}(\epsilon)\) term and hence vanishes in the \(\epsilon\to 0\) limit. ## Appendix D Multiple dimensions compactification of twisted tensile string In this section we consider the tensile parent theory of ambitwistor string theory in a target space with \(d\) dimensions compactified on a torus \(T^{d}\). Classically this theory has the same Polyakov action \[S=\frac{T}{2}\int d\tau d\sigma\sqrt{-g}g^{\alpha\beta}\partial_{\alpha}X^{\mu}\partial_{\beta}X_{\mu}.\] The solution of the equation of motion of this theory in the conformal gauge is expanded as in (2.15).
We rewrite that in terms of the left and right moving modes \(X^{\mu}=X^{\mu}_{L}+X^{\mu}_{R}\) as \[\begin{split} X^{\mu}_{L}&=x^{\mu}+\sqrt{\frac{\alpha^{\prime}}{2}}\alpha^{\mu}_{0}(\tau+\sigma)+i\sqrt{\frac{\alpha^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\alpha^{\mu}_{n}e^{-in(\tau+\sigma)},\\ X^{\mu}_{R}&=x^{\mu}+\sqrt{\frac{\alpha^{\prime}}{2}}\tilde{\alpha}^{\mu}_{0}(\tau-\sigma)+i\sqrt{\frac{\alpha^{\prime}}{2}}\sum_{n\neq 0}\frac{1}{n}\tilde{\alpha}^{\mu}_{n}e^{-in(\tau-\sigma)},\end{split}\] (D.1) where \(\{\alpha,\tilde{\alpha}\}\) satisfy the harmonic oscillator algebra. The physical state condition for this theory is \[\left(\mathcal{L}_{n}-a\delta_{n,0}\right)\left|phys\right>=\left(\bar{\mathcal{L}}_{-n}-\tilde{a}\delta_{n,0}\right)\left|phys\right>=0\hskip 14.226378pt\forall n\geq 0,\] (D.2) where \(\{a,\tilde{a}\}\) are normal ordering constants. The vacuum in this theory is defined as below \[\alpha_{n}\left|0\right>=\tilde{\alpha}_{-n}\left|0\right>=0,\hskip 14.226378pt\forall n>0.\] (D.3) The number operators in this theory are defined as \[N=\sum_{n=1}^{\infty}:\alpha_{-n}\cdot\alpha_{n}:\hskip 14.226378pt\text{ and }\hskip 14.226378pt\widetilde{N}=\sum_{n=1}^{\infty}:\tilde{\alpha}_{-n}\cdot\tilde{\alpha}_{n}:.\] (D.4) Now, let us recall from section (4.4) that for compactification on a \(d\) dimensional torus we need to make the following identification for the compactified coordinates \[X^{I}\sim X^{I}+2\pi RW^{I},\hskip 14.226378ptI\in\{26-d,\cdots,25\},\] (D.5) with \(W^{I}\) defined in (3.9). In order to ensure that \(e^{iX^{I}K_{I}}\) is single-valued, we need to impose equation (3.10) on the momentum components in the compactified directions. Following section (4.4), here too we define the dimensionless field \(Y^{I}\) as \[X^{I}=\sqrt{\frac{\alpha^{\prime}}{2}}Y^{I},\] (D.6) where the mode expansion of \(Y^{I}\) (split into left and right parts) is given by \[\begin{split} Y^{I}_{L}&=y^{I}_{L}+k^{I}_{L}(\tau+\sigma)+i\sum_{n\neq 0}\frac{1}{n}\alpha^{I}_{n}e^{-in(\tau+\sigma)},\\ Y^{I}_{R}&=y^{I}_{R}+k^{I}_{R}(\tau-\sigma)+i\sum_{n\neq 0}\frac{1}{n}\tilde{\alpha}^{I}_{n}e^{-in(\tau-\sigma)}.\end{split}\] (D.7) Here \(k^{I}_{L,R}\) are given by \[k^{I}_{L,R}=\frac{1}{\sqrt{2}}\Bigg{(}\sqrt{\alpha^{\prime}}K^{I}\pm\frac{1}{\sqrt{\alpha^{\prime}}}W^{I}R\Bigg{)}. \tag{114}\] Now, \(\mathcal{L}_{0}\) and \(\bar{\mathcal{L}}_{0}\) can be expressed in terms of the number operators as \[\begin{split}\mathcal{L}_{0}&=\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}+\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}+\frac{\alpha^{\prime}}{4}\sum_{\mu=0}^{25-d}k_{\mu}k^{\mu}+N,\\ \bar{\mathcal{L}}_{0}&=\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}-\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}+\frac{\alpha^{\prime}}{4}\sum_{\mu=0}^{25-d}k_{\mu}k^{\mu}-\widetilde{N}.\end{split} \tag{115}\] From the \(\mathcal{L}_{0}\) and \(\bar{\mathcal{L}}_{0}\) conditions in (109) with \(a=-\tilde{a}=1\), we have the following constraints on physical states of level \((r,s)\) (\(r\), \(s\) being the eigenvalues of the number operators \(N\) and \(\widetilde{N}\) respectively).
\[\begin{split} r+s+\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}+\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}-\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}-\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}-2=0,\\ r-s+\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}+\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}+\frac{\alpha^{\prime}}{4}\sum_{I=1}^{d}\left(K^{I}-\frac{1}{\alpha^{\prime}}W^{I}R\right)^{2}-\frac{\alpha^{\prime}}{2}m^{2}=0.\end{split} \tag{116}\] In the above, we have used \(m^{2}=-\sum_{\mu=0}^{25-d}k_{\mu}k^{\mu}\). Using (116) together with (101), we get \[r+s+\sum_{i=1}^{d}k_{i}\omega^{i}=2,\ \ \ \ \ m^{2}=\frac{1}{R^{2}}\sum_{i,j=1}^{d}k_{i}g^{ij}k_{j}+\frac{R^{2}}{\alpha^{\prime 2}}\sum_{i,j=1}^{d}\omega^{i}g_{ij}\omega^{j}+\frac{2}{\alpha^{\prime}}(r-s). \tag{117}\] Just like the mass spectrum in tensile string theory, this mass spectrum, too, is invariant under the T-duality transformation \[k^{i}\longleftrightarrow\omega_{i},\ \ \ \ R\rightarrow\frac{\alpha^{\prime}}{R}.\] Under the tensionless limit (\(\alpha^{\prime}=\frac{c^{\prime}}{\epsilon}\), \(\epsilon\to 0\)), this mass spectrum reduces to the mass spectrum for the twisted string theory derived in (109).
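As a final cross-check, the claimed T-duality invariance of (117) is elementary to verify symbolically. A minimal sympy sketch for a single compact direction (taking \(g_{ij}=\delta_{ij}\), with `ap` standing in for \(\alpha^{\prime}\)):

```python
import sympy as sp

ap, R, k, w, r, s = sp.symbols('ap R k w r s', positive=True)  # ap stands for alpha'

def m2(k, w, R):
    # Mass spectrum (117) specialised to d = 1, g_ij = delta_ij
    return k**2/R**2 + R**2*w**2/ap**2 + 2*(r - s)/ap

# T-duality: k <-> w together with R -> alpha'/R leaves the spectrum invariant
assert sp.simplify(m2(k, w, R) - m2(w, k, ap/R)) == 0
```

By contrast, the tensionless mass formula (6.31) retains only the momentum term, which is consistent with no such invariance surviving the limit.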
2305.17508
Pairs of associated Yamabe almost solitons with vertical potential on almost contact complex Riemannian manifolds
Almost contact complex Riemannian manifolds, known also as almost contact B-metric manifolds, are in principle equipped with a pair of mutually associated pseudo-Riemannian metrics. Each of these metrics is specialized here as a Yamabe almost soliton with a potential collinear to the Reeb vector field. The resulting manifolds are then investigated in two important cases with geometric significance. The first is when the manifold is of Sasaki-like type, i.e. its complex cone is a holomorphic complex Riemannian manifold (also called a K\"ahler--Norden manifold). The second case is when the soliton potential is torse-forming, i.e. it satisfies a certain recurrence condition for its covariant derivative with respect to the Levi-Civita connection of the corresponding metric. The studied solitons are characterized. In the three-dimensional case, an explicit example is constructed and the properties obtained in the theoretical part are confirmed.
Mancho Manev
2023-05-27T16:04:34Z
http://arxiv.org/abs/2305.17508v1
Pairs of associated Yamabe almost solitons with vertical potential on almost contact complex Riemannian manifolds ###### Abstract. Almost contact complex Riemannian manifolds, known also as almost contact B-metric manifolds, are in principle equipped with a pair of mutually associated pseudo-Riemannian metrics. Each of these metrics is specialized here as a Yamabe almost soliton with a potential collinear to the Reeb vector field. The resulting manifolds are then investigated in two important cases with geometric significance. The first is when the manifold is of Sasaki-like type, i.e. its complex cone is a holomorphic complex Riemannian manifold (also called a Kahler-Norden manifold). The second case is when the soliton potential is torse-forming, i.e. it satisfies a certain recurrence condition for its covariant derivative with respect to the Levi-Civita connection of the corresponding metric. The studied solitons are characterized. In the three-dimensional case, an explicit example is constructed and the properties obtained in the theoretical part are confirmed. Key words and phrases: Yamabe soliton, almost contact B-metric manifold, almost contact complex Riemannian manifold, Sasaki-like manifold, torse-forming vector field 2020 Mathematics Subject Classification: Primary 53C25, 53D15, 53C50; Secondary 53C44, 53D35, 70G45 ## 1. Introduction The notion of the Yamabe flow has been known since 1988. It was introduced in [1, 2] by R. S. Hamilton to construct metrics with constant scalar curvature. A time-dependent family of (pseudo-)Riemannian metrics \(g(t)\) considered on a smooth manifold \(\mathcal{M}\) is said to evolve by _Yamabe flow_ if \(g(t)\) satisfies the following evolution equation \[\frac{\partial}{\partial t}g(t)=-\tau(t)g(t),\qquad g(0)=g_{0},\] where \(\tau(t)\) denotes the scalar curvature corresponding to \(g(t)\). A self-similar solution of the Yamabe flow on \((\mathcal{M},g)\) is called a _Yamabe soliton_ and is determined by the following equation \[\frac{1}{2}\mathcal{L}_{\vartheta}g=(\tau-\lambda)g, \tag{1}\] where \(\mathcal{L}_{\vartheta}g\) denotes the Lie derivative of \(g\) along the vector field \(\vartheta\), called the soliton potential, and \(\lambda\) is the soliton constant (e.g. [3]). Briefly, we denote this soliton by \((g;\vartheta,\lambda)\). In the case that \(\lambda\) is a differentiable function on \(\mathcal{M}\), the solution is called a _Yamabe almost soliton_. Many authors have studied Yamabe (almost) solitons on different types of manifolds in recent years (see e.g. [4, 5, 6, 7, 8, 9, 10]). The study of this kind of flow and the corresponding (almost) solitons attracts interest in mathematical physics because the Yamabe flow corresponds to fast diffusion of the porous medium equation [11]. In [9] the author begins the study of Yamabe solitons on almost contact complex Riemannian manifolds (abbreviated accR manifolds), there called almost contact B-metric manifolds. These manifolds are classified in [12] by G. Ganchev, V. Mihova and K. Gribachev. The pair of B-metrics, which are related to each other by the almost contact structure, determines the geometry of the investigated manifolds. In [9] and [10], the present author studies Yamabe solitons obtained by contact conformal transformations for some interesting classes of the studied manifolds. In the former paper, the manifold studied is cosymplectic or Sasaki-like, and in the latter, the soliton potential is torse-forming.
Contact conformal transformations of an almost contact B-metric structure transform the two B-metrics, the Reeb vector field and its dual contact 1-form, using this pair of metrics and a triplet of differentiable functions on the manifold (see e.g. [13, 14]). These transformations generalize the \(\mathcal{D}\)-homothetic deformations of the considered manifolds introduced in [15]. In the present work, instead of these naturally occurring transformed Yamabe solitons involving the two B-metrics, we use a condition for a pair of Yamabe almost solitons, one for each of the two metrics. Again, one of the simplest types of non-cosymplectic manifolds among those investigated, which is of interest to us, is precisely that of the Sasaki-like manifolds introduced in [16]. This means that a warped product of a Sasaki-like accR manifold with the positive real axis gives rise to a complex cone which is a Kahler manifold with a pair of Norden metrics. Note that the intersection of the classes of Sasaki-like manifolds and cosymplectic manifolds is empty. Different types of solitons on Sasaki-like manifolds have been studied in [9, 17, 18, 19]. Another interesting type of the studied manifolds with Yamabe solitons is again (as in [9] and [10]) the object of consideration in the present article. This is the case when the soliton potential is a torse-forming vertical vector field. Vertical means that it has the same direction as the Reeb vector field. Torse-forming vector fields are defined by a certain recurrence condition for their covariant derivative with respect to the Levi-Civita connection of the basic metric. These vector fields were first defined and studied by K. Yano [20]. They have since been investigated by various authors for manifolds with different tensor structures (e.g. [21, 22, 23]) and for the manifolds studied here in [10, 17, 19, 24]. The present paper is organized as follows. After the present introduction to the topic, in Section 2 we recall some known facts about the investigated manifolds. In Section 3, we set ourselves the task of equipping the considered manifolds with a pair of associated Yamabe almost solitons. In Section 4, we prove that there does not exist a Sasaki-like manifold equipped with a pair of Yamabe almost solitons with vertical potentials generated by each of the two fundamental metrics. A successful solution to the problem posed in Section 3 is given in Section 5 in the case where the vertical potentials of the pair of Yamabe almost solitons are torse-forming. Section 6 provides an explicit example, of the lowest dimension, of the type of manifold constructed in the previous section. ## 2. accR manifolds A differentiable manifold \(\mathcal{M}\) of dimension \((2n+1)\), equipped with an almost contact structure \((\varphi,\xi,\eta)\) and a B-metric \(g\), is called an _almost contact B-metric manifold_ or _almost contact complex Riemannian_ (abbr. _accR_) _manifold_ and it is denoted by \((\mathcal{M},\varphi,\xi,\eta,g)\). More concretely, \(\varphi\) is an endomorphism of the tangent bundle \(T\mathcal{M}\), \(\xi\) is the Reeb vector field, \(\eta\) is its dual contact 1-form and \(g\) is a pseudo-Riemannian metric of signature \((n+1,n)\) satisfying the following conditions \[\begin{array}{ll}\varphi\xi=0,&\varphi^{2}=-\iota+\eta\otimes\xi,\qquad\eta\circ\varphi=0,\qquad\eta(\xi)=1,\\ &g(\varphi x,\varphi y)=-g(x,y)+\eta(x)\eta(y),\end{array} \tag{2}\] where \(\iota\) stands for the identity transformation on \(\Gamma(T\mathcal{M})\) [12].
In the latter equality and further, \(x\), \(y\), \(z\) will stand for arbitrary elements of \(\Gamma(T\mathcal{M})\) or vectors in the tangent space \(T_{p}\mathcal{M}\) of \(\mathcal{M}\) at an arbitrary point \(p\) in \(\mathcal{M}\). The following equations are immediate consequences of (2) \[g(\varphi x,y)=g(x,\varphi y),\qquad g(x,\xi)=\eta(x),\qquad g(\xi,\xi)=1,\qquad\eta(\nabla_{x}\xi)=0,\] where \(\nabla\) denotes the Levi-Civita connection of \(g\). The associated metric \(\tilde{g}\) of \(g\) on \(\mathcal{M}\) is also a B-metric and it is defined by \[\tilde{g}(x,y)=g(x,\varphi y)+\eta(x)\eta(y). \tag{3}\] In [12], accR manifolds are classified with respect to the (0,3)-tensor \(F\) defined by \[F(x,y,z)=g\big{(}(\nabla_{x}\varphi)\,y,z\big{)}. \tag{4}\] It has the following basic properties: \[F(x,y,z)=F(x,z,y)=F(x,\varphi y,\varphi z)+\eta(y)F(x,\xi,z)+\eta(z)F(x,y,\xi), \tag{5}\] \[F(x,\varphi y,\xi)=(\nabla_{x}\eta)y=g(\nabla_{x}\xi,y). \tag{6}\] The Ganchev-Mihova-Gribachev classification of the studied manifolds cited in the introduction consists of eleven basic classes \(\mathcal{F}_{i}\), \(i\in\{1,2,\ldots,11\}\), determined by conditions for \(F\). ## 3. Pair of associated Yamabe almost solitons Let us consider an accR manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) with a pair of associated Yamabe almost solitons generated by the pair of B-metrics \(g\) and \(\tilde{g}\), i.e. \((g;\vartheta,\lambda)\) and \((\tilde{g};\tilde{\vartheta},\tilde{\lambda})\), which are mutually associated by the \((\varphi,\xi,\eta)\)-structure. Then, along with (1), the following identity also holds \[\frac{1}{2}\mathcal{L}_{\tilde{\vartheta}}\tilde{g}=(\tilde{\tau}-\tilde{\lambda})\tilde{g}, \tag{7}\] where \(\tilde{\vartheta}\) and \(\tilde{\lambda}\) are the soliton potential and the soliton function, respectively, and \(\tilde{\tau}\) is the scalar curvature of the manifold with respect to \(\tilde{g}\). We suppose that the potentials \(\vartheta\) and \(\tilde{\vartheta}\) are vertical, i.e. there exist differentiable functions \(k\) and \(\tilde{k}\) on \(\mathcal{M}\) such that we have \[\vartheta=k\xi,\qquad\tilde{\vartheta}=\tilde{k}\xi, \tag{8}\] where \(k(p)\neq 0\) and \(\tilde{k}(p)\neq 0\) at every point \(p\) of \(\mathcal{M}\). Briefly, we denote these potentials by \((\vartheta,k)\) and \((\tilde{\vartheta},\tilde{k})\). In this case, for the Lie derivatives of \(g\) and \(\tilde{g}\) along \(\vartheta\) and \(\tilde{\vartheta}\), respectively, we obtain the following expressions: \[\left(\mathcal{L}_{\vartheta}g\right)(x,y)=g(\nabla_{x}\vartheta,y)+g(x,\nabla_{y}\vartheta)=\mathrm{d}k(x)\eta(y)+\mathrm{d}k(y)\eta(x)+k\left\{g(\nabla_{x}\xi,y)+g(x,\nabla_{y}\xi)\right\}, \tag{9}\] \[\left(\mathcal{L}_{\tilde{\vartheta}}\tilde{g}\right)(x,y)=\tilde{g}(\tilde{\nabla}_{x}\tilde{\vartheta},y)+\tilde{g}(x,\tilde{\nabla}_{y}\tilde{\vartheta})=\mathrm{d}\tilde{k}(x)\eta(y)+\mathrm{d}\tilde{k}(y)\eta(x)+\tilde{k}\left\{\tilde{g}(\tilde{\nabla}_{x}\xi,y)+\tilde{g}(x,\tilde{\nabla}_{y}\xi)\right\}. \tag{10}\] ## 4. The case when the underlying accR manifold is Sasaki-like In [16], the type of a _Sasaki-like_ manifold among accR manifolds is introduced. The defining condition is that its complex cone be a Kahler-Norden manifold, i.e. one with a parallel complex structure.
A Sasaki-like accR manifold is determined by the condition \[\left(\nabla_{x}\varphi\right)y=\eta(y)\varphi^{2}x+g(\varphi x,\varphi y)\xi.\] Therefore, the fundamental tensor \(F\) of such a manifold has the following form \[F(x,y,z)=g(\varphi x,\varphi y)\eta(z)+g(\varphi x,\varphi z)\eta(y). \tag{11}\] Obviously, Sasaki-like accR manifolds form a subclass of the class \(\mathcal{F}_{4}\) of the Ganchev-Mihova-Gribachev classification. Moreover, the following identities are valid for it \[\nabla_{x}\xi=-\varphi x,\qquad\left(\nabla_{x}\eta\right)(y)=-g(x,\varphi y),\] \[R(x,y)\xi=\eta(y)x-\eta(x)y,\qquad\rho(x,\xi)=2n\,\eta(x),\] \[R(\xi,y)z=g(y,z)\xi-\eta(z)y,\qquad\rho(\xi,\xi)=2n, \tag{12}\] where \(R\) and \(\rho\) stand for the curvature tensor and the Ricci tensor of \(\nabla\), defined as usual by \(R=[\nabla,\nabla]-\nabla_{[\,,\,]}\), and \(\rho\) is the result of the contraction of \(R\) by its first index [16]. If the considered accR manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) is Sasaki-like, due to the first equality of (12), we obtain that (9) takes the following form \[\left(\mathcal{L}_{\vartheta}g\right)(x,y)=\mathrm{d}k(x)\eta(y)+\mathrm{d}k(y)\eta(x)-2kg(x,\varphi y). \tag{13}\] We then put the result of (13) into (1) and get the following \[\frac{1}{2}\left\{\mathrm{d}k(x)\eta(y)+\mathrm{d}k(y)\eta(x)\right\}-kg(x,\varphi y)=(\tau-\lambda)g(x,y). \tag{14}\] Replacing \(x\) and \(y\) with \(\xi\) in (14) gives \[\mathrm{d}k(\xi)=\tau-\lambda. \tag{15}\] The trace of (14) in an arbitrary basis \(\{e_{i}\}\)\((i=1,2,\dots,2n+1)\) implies \[\mathrm{d}k(\xi)=(2n+1)(\tau-\lambda). \tag{16}\] Combining (15) and (16) gives \(\tau=\lambda\) and hence \(\mathrm{d}k(\xi)=0\); then (14) with \(y=\xi\) yields \(\mathrm{d}k=0\), so (14) reduces to \(k\,g(x,\varphi y)=0\) for all \(x\) and \(y\), which forces \(k=0\). This contradicts the conditions and therefore we find the following to be true **Theorem 4.1**. _There does not exist a Sasaki-like manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) equipped with a \(g\)-generated Yamabe almost soliton having a vertical potential._ Now, let us consider a similar situation but with respect to the associated B-metric \(\tilde{g}\) and the corresponding Levi-Civita connection \(\tilde{\nabla}\). First, similarly to (4), we define the fundamental tensor \(\tilde{F}\) for \(\tilde{g}\) as follows \[\tilde{F}(x,y,z)=\tilde{g}\left(\big{(}\tilde{\nabla}_{x}\varphi\big{)}y,z\right).\] Since \(\tilde{g}\) is also a B-metric like \(g\), it is obvious that properties (5) and (6) also hold for \(\tilde{F}\), i.e. \[\tilde{F}(x,y,z)=\tilde{F}(x,z,y)=\tilde{F}(x,\varphi y,\varphi z)+\eta(y)\tilde{F}(x,\xi,z)+\eta(z)\tilde{F}(x,y,\xi), \tag{17}\] \[\tilde{F}(x,\varphi y,\xi)=(\tilde{\nabla}_{x}\eta)y=\tilde{g}(\tilde{\nabla}_{x}\xi,y).\] Then, using the well-known Koszul formula in this case for \(\tilde{g}\), i.e. \[2\tilde{g}\Big{(}\tilde{\nabla}_{x}y,z\Big{)}=x\big{(}\tilde{g}(y,z)\big{)}+y\big{(}\tilde{g}(x,z)\big{)}-z\big{(}\tilde{g}(x,y)\big{)}+\tilde{g}\big{(}[x,y],z\big{)}+\tilde{g}\big{(}[z,y],x\big{)}+\tilde{g}\big{(}[z,x],y\big{)},\] we obtain, after lengthy but standard calculations, the following relationship between \(\tilde{F}\) and \(F\) [25]: \[2\tilde{F}(x,y,z)=F(\varphi y,z,x)-F(y,\varphi z,x)+F(\varphi z,y,x)-F(z,\varphi y,x)\] \[\quad+\left\{F(x,y,\xi)+F(\varphi y,\varphi x,\xi)+F(x,\varphi y,\xi)\right\}\eta(z)\] \[\quad+\left\{F(x,z,\xi)+F(\varphi z,\varphi x,\xi)+F(x,\varphi z,\xi)\right\}\eta(y)\] \[\quad+\left\{F(y,z,\xi)+F(\varphi z,\varphi y,\xi)+F(z,y,\xi)+F(\varphi y,\varphi z,\xi)\right\}\eta(x).
\tag{18}\] **Lemma 4.2**. _For a Sasaki-like manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) with associated B-metric \(\tilde{g}\) the following holds_ \[\tilde{\nabla}_{x}\xi=-\varphi x. \tag{19}\] Proof. Due to (17) and (18), we obtain the following consequence \[2\tilde{g}(\tilde{\nabla}_{x}\xi,y)=F(\varphi^{2}y,\xi,x)-F(\xi,\varphi^{2}y,x)+\left\{F(\varphi y,\xi,\xi)+F(\xi,\varphi y,\xi)\right\}\eta(x)+F(x,\varphi y,\xi)+F(\varphi^{2}y,\varphi x,\xi)+F(x,\varphi^{2}y,\xi). \tag{20}\] In deriving the last equality we have used the properties in (2). We then apply the expression of \(\varphi^{2}\) from (2) and some properties of \(F\) in this case. The first is \(F(\xi,\xi,x)=0\), which is a consequence of (11), and the second is the general identity \(F(x,\xi,\xi)=0\), which comes from (5). Thus, the relation in (20) simplifies to the form \[2\tilde{g}(\tilde{\nabla}_{x}\xi,y)=F(\xi,x,y)+F(x,\varphi y,\xi)-F(y,\varphi x,\xi)-F(x,y,\xi)-F(y,x,\xi). \tag{21}\] Thereafter, we compute the various components in the above formula by exploiting the fact that the given manifold is Sasaki-like, i.e. (11) is valid, and we get: \[F(\xi,x,y)=0,\qquad F(x,y,\xi)=g(\varphi x,\varphi y),\qquad F(x,\varphi y,\xi)=-g(x,\varphi y).\] As a result, given the symmetry of \(g(x,\varphi y)\) with respect to \(x\) and \(y\) as well as (3), the equality in (21) simplifies to the following form \[\tilde{g}(\tilde{\nabla}_{x}\xi,y)=-\tilde{g}(\varphi x,y),\] which is an equivalent expression of (19). Now we apply (19) to (10) and use (3) to get: \[\left(\mathcal{L}_{\tilde{\vartheta}}\tilde{g}\right)(x,y)=\mathrm{d}\tilde{k}(x)\eta(y)+\mathrm{d}\tilde{k}(y)\eta(x)-2\tilde{k}\,g(\varphi x,\varphi y). \tag{22}\] Then we substitute the expression from (22) into (7) and obtain the following \[\frac{1}{2}\left\{\mathrm{d}\tilde{k}(x)\eta(y)+\mathrm{d}\tilde{k}(y)\eta(x)\right\}-\tilde{k}\,g(\varphi x,\varphi y)=(\tilde{\tau}-\tilde{\lambda})\tilde{g}(x,y). \tag{23}\] Contracting (23), we infer \[\mathrm{d}\tilde{k}(\xi)+2n\tilde{k}=\tilde{\tau}-\tilde{\lambda}. \tag{24}\] On the other hand, we replace \(x\) and \(y\) in (23) by \(\xi\) and get \[\mathrm{d}\tilde{k}(\xi)=\tilde{\tau}-\tilde{\lambda}. \tag{25}\] Then (24) and (25) imply \(\tilde{k}=0\), which is not admissible for the potential, and therefore the following holds **Theorem 4.3**. _There does not exist a Sasaki-like manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) equipped with a \(\tilde{g}\)-generated Yamabe almost soliton having a vertical potential._ ## 5. The case of a torse-forming vertical potential Let us recall that a vector field \(\vartheta\) on a (pseudo-)Riemannian manifold \((\mathcal{M},g)\) is called a _torse-forming vector field_ if the following identity is true: \[\nabla_{x}\vartheta=f\,x+\gamma(x)\vartheta, \tag{26}\] where \(f\) is a differentiable function and \(\gamma\) is a \(1\)-form [20, 26]. The \(1\)-form \(\gamma\) is called the _generating form_ and the function \(f\) is called the _conformal scalar_ of \(\vartheta\) [22]. _Remark 5.1_. Some special types of torse-forming vector fields have been considered in various studies. A vector field \(\vartheta\) determined by (26) is called respectively: * _torqued_, if \(\gamma(\vartheta)=0\) [23]; * _concircular_, if \(\gamma=0\) [27]; * _concurrent_, if \(f-1=\gamma=0\) [28]; * _recurrent_, if \(f=0\) [29]; * _parallel_, if \(f=\gamma=0\) (e.g. [30]). If, in addition, the potential \(\vartheta\) is vertical, i.e.
\(\vartheta=k\,\xi\), then we get from (26) the following \[\mathrm{d}k(x)\xi+k\nabla_{x}\xi=f\,x+k\,\gamma(x)\xi. \tag{27}\] Since \(\eta(\nabla_{x}\xi)\) vanishes identically, (27) implies the following \[\mathrm{d}k(x)=f\,\eta(x)+k\,\gamma(x),\] which, due to the nowhere-vanishing \(k\), gives us the following expression for the generating form of \(\vartheta\) \[\gamma(x)=\frac{1}{k}\left\{\mathrm{d}k(x)-f\,\eta(x)\right\}. \tag{28}\] Then the torse-forming vertical potential is determined by \(f\) and \(k\), hence we denote it by \(\vartheta(f,k)\). Plugging (28) into (26), we get \[\nabla_{x}\vartheta=-f\,\varphi^{2}x+\mathrm{d}k(x)\xi, \tag{29}\] which together with \(\nabla_{x}\vartheta=\nabla_{x}(k\,\xi)=\mathrm{d}k(x)\xi+k\nabla_{x}\xi\) gives in the considered case the following \[\nabla_{x}\xi=-\frac{f}{k}\varphi^{2}x. \tag{30}\] By virtue of (30), for the curvature tensor of \(g\) we get \[R(x,y)\xi=-\left\{\mathrm{d}h(x)+h^{2}\eta(x)\right\}\varphi^{2}y+\left\{\mathrm{d}h(y)+h^{2}\eta(y)\right\}\varphi^{2}x, \tag{31}\] where we use the following shorter notation for the function which is the coefficient in (30) \[h=\frac{f}{k}. \tag{32}\] As immediate consequences of (31), we obtain the following expressions \[R(\xi,y)z=g(\varphi y,\varphi z)\operatorname{grad}h-\mathrm{d}h(z)\varphi^{2}y+h^{2}\left\{\eta(z)y-g(y,z)\xi\right\},\] \[\rho(y,\xi)=-(2n-1)\mathrm{d}h(y)-\left\{\mathrm{d}h(\xi)+2nh^{2}\right\}\eta(y),\] \[\rho(\xi,\xi)=-2n\left\{\mathrm{d}h(\xi)+h^{2}\right\}.\] Equality (30), due to (6) and (32), can be rewritten in the form \[F(x,\varphi y,\xi)=-h\,g(\varphi x,\varphi y). \tag{33}\] Bearing in mind \(F(x,\xi,\xi)=0\), following from (5), the expression (33) is equivalent to the following equality \[F(x,y,\xi)=-h\,g(x,\varphi y). \tag{34}\] Then (9) and (1) imply \[\frac{1}{2}\left\{\mathrm{d}k(x)\eta(y)+\mathrm{d}k(y)\eta(x)\right\}-fg(\varphi x,\varphi y)=(\tau-\lambda)g(x,y). \tag{35}\] Contracting (35) gives \[\mathrm{d}k(\xi)+2n\,f=(2n+1)(\tau-\lambda), \tag{36}\] and substituting \(x=y=\xi\) into (35) yields \[\mathrm{d}k(\xi)=\tau-\lambda. \tag{37}\] Then combining (36) and (37) leads to an expression for the conformal scalar of \(\vartheta\) as follows \[f=\tau-\lambda. \tag{38}\] This means that the following statement is valid **Theorem 5.2**. _Let an accR manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) be equipped with a Yamabe almost soliton \((g;\vartheta(f,k),\lambda)\), where \(\vartheta\) is a vertical torse-forming potential. Then the scalar curvature \(\tau\) of this manifold is the sum of the conformal scalar \(f\) of \(\vartheta\) and the soliton function \(\lambda\), i.e. \(\tau=f+\lambda\)._ By (37) and (38) we have \[f=\mathrm{d}k(\xi). \tag{39}\] Substituting (39) into (32), we obtain the following expression of the function \(h\) \[h=\mathrm{d}(\ln k)(\xi).\] **Corollary 5.3**. _The potential \(\vartheta(f,k)\) of any Yamabe almost soliton \((g;\vartheta,\lambda)\) on \((\mathcal{M},\varphi,\xi,\eta,g)\) is a torqued vector field._ Proof. Due to (39) and (28), \(\gamma(\xi)\) vanishes. Hence \(\gamma(\vartheta)=0\) holds, i.e. the potential \(\vartheta\) is torqued, given Remark 5.1. It is shown in [17] that the class \(\mathcal{F}_{5}\) is the only basic class in the considered classification of accR manifolds in which \(\xi\) or a vector field collinear to it can be torse-forming.
Furthermore, the general class of accR manifolds with a torse-forming \(\xi\) is \(\mathcal{F}_{1}\oplus\mathcal{F}_{2}\oplus\mathcal{F}_{3}\oplus\mathcal{F}_{5}\oplus\mathcal{F}_{10}\). Note that \(\mathcal{F}_{5}\)-manifolds are counterparts of \(\beta\)-Kenmotsu manifolds in the case of almost contact metric manifolds. The definition of the class \(\mathcal{F}_{5}\) is given in the following way in [12] \[F(x,y,z)=-\frac{\theta^{*}(\xi)}{2n}\left\{g(x,\varphi y)\eta(z)+g(x,\varphi z)\eta(y)\right\}, \tag{40}\] where \(\theta^{*}(\cdot)=g^{ij}F(e_{i},\varphi e_{j},\cdot)\) with respect to the basis \(\{e_{1},\ldots,e_{2n},\xi\}\) of \(T_{p}\mathcal{M}\). Moreover, on an \(\mathcal{F}_{5}\)-manifold the Lee form \(\theta^{*}\) satisfies the property \(\theta^{*}=\theta^{*}(\xi)\eta\). Then, in addition to the component in (34), we have \[F(\xi,y,z)=0,\qquad\omega=0. \tag{41}\] Let the potential \(\tilde{\vartheta}\) of the Yamabe almost soliton \((\tilde{g};\tilde{\vartheta},\tilde{\lambda})\) also be torse-forming and vertical, i.e. \[\tilde{\nabla}_{x}\tilde{\vartheta}=\tilde{f}\,x+\tilde{\gamma}(x)\tilde{\vartheta},\qquad\quad\tilde{\vartheta}=\tilde{k}\,\xi.\] Similarly, we obtain equalities analogous to (29) and (30) for \(\tilde{g}\) and its Levi-Civita connection \(\tilde{\nabla}\) in the following form \[\tilde{\nabla}_{x}\tilde{\vartheta}=-\tilde{f}\,\varphi^{2}x+\mathrm{d}\tilde{k}(x)\xi, \tag{42}\] \[\tilde{\nabla}_{x}\xi=-\tilde{h}\,\varphi^{2}x, \tag{43}\] where \[\tilde{h}=\frac{\tilde{f}}{\tilde{k}}.\] Moreover, we also have \(\tilde{f}=\mathrm{d}\tilde{k}(\xi)\) and \(\tilde{h}=\mathrm{d}(\ln\tilde{k})(\xi)\). Thus, the following analogous assertions are valid. **Theorem 5.4**. _Let an accR manifold \((\mathcal{M},\varphi,\xi,\eta,g)\) be equipped with a Yamabe almost soliton \((\tilde{g};\tilde{\vartheta}(\tilde{f},\tilde{k}),\tilde{\lambda})\), where \(\tilde{\vartheta}\) is a vertical torse-forming potential. Then the scalar curvature \(\tilde{\tau}\) of this manifold is the sum of the conformal scalar \(\tilde{f}\) of \(\tilde{\vartheta}\) and the soliton function \(\tilde{\lambda}\), i.e. \(\tilde{\tau}=\tilde{f}+\tilde{\lambda}\)._ **Corollary 5.5**. _The potential \(\tilde{\vartheta}(\tilde{f},\tilde{k})\) of any Yamabe almost soliton \((\tilde{g};\tilde{\vartheta},\tilde{\lambda})\) on \((\mathcal{M},\varphi,\xi,\eta,g)\) is a torqued vector field._ The following equality is given in [12] and it expresses the relation between \(\nabla\) and \(\tilde{\nabla}\) for the pair of B-metrics of an arbitrary accR manifold: \[2g(\tilde{\nabla}_{x}y,z)=2g(\nabla_{x}y,z)-F(x,y,\varphi z)-F(y,x,\varphi z)+F(\varphi z,x,y)\] \[\quad+\left\{F(y,z,\xi)+F(\varphi z,\varphi y,\xi)-\omega(\varphi y)\eta(z)\right\}\eta(x)\] \[\quad+\left\{F(x,z,\xi)+F(\varphi z,\varphi x,\xi)-\omega(\varphi x)\eta(z)\right\}\eta(y)\] \[\quad-\left\{F(\xi,x,y)-F(y,x,\xi)-F(x,\varphi y,\xi)-F(x,y,\xi)-F(y,\varphi x,\xi)\right\}\eta(z). \tag{44}\] By setting \(y=\xi\), the last equality implies the following \[2g(\tilde{\nabla}_{x}\xi,z)=2g(\nabla_{x}\xi,z)-F(x,\varphi z,\xi)-F(\xi,x,\varphi z)+F(\varphi z,x,\xi)+\omega(z)\eta(x)+F(x,z,\xi)+F(\varphi z,\varphi x,\xi). \tag{45}\] Taking into account (30), (33), (34) and (43), the relation (45) takes the form \[2(\tilde{h}-h)g(\varphi x,\varphi z)=F(\xi,x,\varphi z)-\eta(x)\omega(z),\] which for an \(\mathcal{F}_{5}\)-manifold, due to (41), implies \(\tilde{h}=h\), i.e.
\[\frac{f}{k}=\frac{\tilde{f}}{\tilde{k}}. \tag{46}\] To express some curvature properties of accR manifolds, an associated quantity \(\tau^{*}\) of the scalar curvature \(\tau\) of \(g\) is used in [31]. It is defined by the following trace of the Ricci tensor \(\rho\): \(\tau^{*}=g^{ij}\rho_{is}\varphi_{j}^{s}\) with respect to the basis \(\{e_{1},\ldots,e_{2n},\xi\}\). The relation between \(\tilde{\tau}\) and \(\tau^{*}\) for a manifold belonging to \(\mathcal{F}_{5}^{0}\subset\mathcal{F}_{5}\) is given in [31, Corollary 2] as follows \[\tilde{\tau}=-\tau^{*}-\frac{2n+1}{2n}\left(\theta^{*}(\xi)\right)^{2}-2\xi(\theta^{*}(\xi))\,. \tag{47}\] The subclass \(\mathcal{F}_{5}^{0}\) of \(\mathcal{F}_{5}\) is introduced in [13] by the condition that the Lee form \(\theta^{*}\) of the manifold be closed, i.e. \(\mathrm{d}\,\theta^{*}=0\). The last equality is equivalent to the following condition \[\mathrm{d}(\theta^{*}(\xi))=\xi(\theta^{*}(\xi))\,\eta. \tag{48}\] Using (33), we compute that \[\theta^{*}(\xi)=2nh,\qquad\xi(\theta^{*}(\xi))=2n\,\mathrm{d}h(\xi)\] and therefore (47) takes the form \[\tilde{\tau}=-\tau^{*}-2n(2n+1)h^{2}-4n\,\mathrm{d}h(\xi). \tag{49}\] ## 6. Example: The cone over a 2-dimensional complex space form with Norden metric In this section we consider an accR manifold construction given in [32]. First, let \((\mathcal{N},J,g^{\prime})\) be a 2-dimensional almost complex manifold with Norden metric, i.e. \(J\) is an almost complex structure and \(g^{\prime}\) is a pseudo-Riemannian metric with neutral signature such that \(g^{\prime}(Jx^{\prime},Jy^{\prime})=-g^{\prime}(x^{\prime},y^{\prime})\) for arbitrary \(x^{\prime}\), \(y^{\prime}\in\Gamma(T\mathcal{N})\). It is then known that \((\mathcal{N},J,g^{\prime})\) is a complex space form with constant sectional curvature, denoted e.g. \(k^{\prime}\). Second, let \(\mathcal{C}(\mathcal{N})\) be the cone over \((\mathcal{N},J,g^{\prime})\), i.e. \(\mathcal{C}(\mathcal{N})\) is the warped product \(\mathbb{R}^{+}\times_{t}\mathcal{N}\) with the metric \(g\) generated as follows \[g\left(\left(x^{\prime},a\tfrac{\mathrm{d}}{\mathrm{d}t}\right),\left(y^{\prime},b\tfrac{\mathrm{d}}{\mathrm{d}t}\right)\right)=t^{2}\,g^{\prime}(x^{\prime},y^{\prime})+ab,\] where \(t\) is the coordinate on the set of positive reals \(\mathbb{R}^{+}\) and \(a\), \(b\) are differentiable functions on \(\mathcal{C}(\mathcal{N})\). Moreover, \(\mathcal{C}(\mathcal{N})\) is equipped with an almost contact structure \((\varphi,\xi,\eta)\) by \[\varphi|_{\ker\eta}=J,\quad\xi=\tfrac{\mathrm{d}}{\mathrm{d}t},\quad\eta=\mathrm{d}t,\quad\varphi\xi=0,\quad\eta\circ\varphi=0. \tag{50}\] Then \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) is a \(3\)-dimensional accR manifold belonging to the class \(\mathcal{F}_{1}\oplus\mathcal{F}_{5}\). In particular, this manifold belongs to \(\mathcal{F}_{5}\) if and only if \(J\) is parallel with respect to the Levi-Civita connection of \(g^{\prime}\), but the constructed manifold cannot belong to \(\mathcal{F}_{1}\) or to \(\mathcal{F}_{0}\) [32]. Let the considered manifold \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) belong to \(\mathcal{F}_{5}\). Using the result \(\theta^{*}(\xi)=\frac{2}{t}\) from [32], we verify that the condition in (48) holds and therefore \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) belongs to \(\mathcal{F}_{5}^{0}\).
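Before examining the curvature of this example, a quick symbolic check of the reduction from (47) to (49) used above may be helpful. A minimal sympy sketch, where `dh` stands for \(\mathrm{d}h(\xi)\) and `tau_star` for \(\tau^{*}\):

```python
import sympy as sp

n, h, dh, tau_star = sp.symbols('n h dh tau_star')

theta_xi = 2*n*h     # theta*(xi) = 2n h, computed from (33)
xi_theta = 2*n*dh    # xi(theta*(xi)) = 2n dh(xi)

rhs_47 = -tau_star - (2*n + 1)/(2*n)*theta_xi**2 - 2*xi_theta  # right side of (47)
rhs_49 = -tau_star - 2*n*(2*n + 1)*h**2 - 4*n*dh               # right side of (49)
assert sp.simplify(rhs_47 - rhs_49) == 0
```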
Let \(\{e_{1},e_{2},e_{3}\}\) be a basis in any tangent space at an arbitrary point of \(\mathcal{C}(\mathcal{N})\) such that \[\varphi e_{1}=e_{2},\quad\varphi e_{2}=-e_{1},\quad e_{3}=\xi,\] \[g(e_{1},e_{1})=-g(e_{2},e_{2})=g(e_{3},e_{3})=1,\quad g(e_{i},e_{j})=0,\;i\neq j. \tag{51}\] In [32] it is shown that the nonzero components of \(R\) of the constructed \(3\)-dimensional manifold with respect to the basis \(\{e_{1},e_{2},e_{3}\}\) are determined by the equality \(R_{1212}=\frac{1}{t^{2}}(k^{\prime}-1)\) and the well-known properties of \(R\). Obviously, \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) is flat if and only if \(k^{\prime}=1\) for \((\mathcal{N},J,g^{\prime})\). The nonzero components of the Ricci tensor of \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) in the general case are then calculated as \(\rho_{11}=-\rho_{22}=\frac{1}{t^{2}}(k^{\prime}-1)\). Furthermore, the scalar curvature \(\tau\) and the associated quantity \(\tau^{*}\) of \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) are given by \[\tau=\frac{2}{t^{2}}(k^{\prime}-1),\qquad\tau^{*}=0. \tag{52}\] Then, taking into account the vanishing of \(\tau^{*}\), the expression \[\theta^{*}(\xi)=\frac{2}{t} \tag{53}\] and \(n=1\), we calculate \(\tilde{\tau}\) by (47) as \[\tilde{\tau}=-\frac{2}{t^{2}}. \tag{54}\] Using the results \(\nabla_{e_{1}}e_{3}=\frac{1}{t}e_{1}\), \(\nabla_{e_{2}}e_{3}=\frac{1}{t}e_{2}\), \(\nabla_{e_{3}}e_{3}=0\) from [32] and \(e_{3}=\xi\) from (51), we derive for any \(x\) on \(\mathcal{C}(\mathcal{N})\) the following formula \[\nabla_{x}\xi=-\frac{1}{t}\varphi^{2}x. \tag{55}\] Comparing the last equality with (30), we conclude that \[\frac{f}{k}=\frac{1}{t}, \tag{56}\] i.e. \(h=\frac{1}{t}\) holds due to (32), and then (49) is also valid. From (39), (56) and the expression of \(\xi\) in (50), we obtain the differential equation \(t\mathrm{d}k=k\mathrm{d}t\), whose solution for the function \(k(t)\) is \[k=ct, \tag{57}\] where \(c\) is an arbitrary nonzero constant. Hence (56) and (57) imply \[f=c. \tag{58}\] Taking into account (9), (55) and (57), we obtain \[\mathcal{L}_{\vartheta}g=2cg. \tag{59}\] Let us define the following differentiable function on \(\mathcal{C}(\mathcal{N})\) \[\lambda=\frac{2}{t^{2}}(k^{\prime}-1)-c. \tag{60}\] Then, bearing in mind (52), (59) and (60), we check that the condition in (1) is satisfied and \((g;\vartheta,\lambda)\) is a Yamabe almost soliton with vertical potential \(\vartheta\). Due to (8) and (57), the soliton potential \(\vartheta\) is determined by \(\vartheta=ct\xi\). Then, because of \(\mathrm{d}t=\eta\) from (50) and (55), we obtain \(\nabla_{x}\vartheta=cx\). This means that \(\vartheta\) is torse-forming with conformal scalar \(f=c\) and zero generating form \(\gamma\). According to Remark 5.1, the torse-forming vector field \(\vartheta\) is concircular in the general case of our example and, in particular, when \(c=1\) it is concurrent. Obviously, every concircular vector field is torqued, which supports Corollary 5.3. Taking into account (52), (58) and (60), we verify Theorem 5.2.
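The bookkeeping of this example can be confirmed symbolically. The following minimal sympy sketch checks that (49) with \(n=1\), \(h=1/t\), \(\tau^{*}=0\) reproduces (54), and that \(\tau=f+\lambda\) (Theorem 5.2) holds with the choices (52), (58) and (60); here `kp` stands in for the constant \(k^{\prime}\):

```python
import sympy as sp

t, c, kp = sp.symbols('t c kp', positive=True)  # kp stands for k'
n = 1
h = 1/t                 # from (56)

# (49) with tau* = 0; since xi = d/dt, dh(xi) is the t-derivative of h
tilde_tau = -2*n*(2*n + 1)*h**2 - 4*n*sp.diff(h, t)
assert sp.simplify(tilde_tau + 2/t**2) == 0   # agrees with (54)

# Theorem 5.2: tau = f + lambda, with (52), (58) and (60)
tau = 2*(kp - 1)/t**2
f = c
lam = 2*(kp - 1)/t**2 - c
assert sp.simplify(tau - (f + lam)) == 0
```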
In [33], a relation between the Levi-Civita connections \(\nabla\) and \(\tilde{\nabla}\) of \(g\) and \(\tilde{g}\), respectively, is given for \(\mathcal{F}_{5}\) as follows \[\tilde{\nabla}_{x}y=\nabla_{x}y-\frac{\theta^{*}(\xi)}{2n}\left\{g(x,\varphi y )+g(\varphi x,\varphi y)\right\}\xi.\] This relation for \((\mathcal{C}(\mathcal{N}),\varphi,\xi,\eta,g)\) and \(y=\xi\) implies \(\tilde{\nabla}_{x}\xi=\nabla_{x}\xi\), which due to (55) gives \[\tilde{\nabla}_{x}\xi=-\frac{1}{t}\varphi^{2}x. \tag{61}\] The expression in (61) follows also from (40), (45) and (53). Then, using (43) and (61), we have \[\frac{\tilde{f}}{\tilde{k}}=\frac{1}{t}, \tag{62}\] which supports (46) and (56). In a manner similar to obtaining (57) and (58), starting with (62), we find \[\tilde{k}=\tilde{c}t,\qquad\tilde{c}=\mathrm{const}, \tag{63}\] \[\tilde{f}=\tilde{c}. \tag{64}\] By virtue of (10), (61) and (63), we have \[\mathcal{L}_{\tilde{\vartheta}}\tilde{g}=2\tilde{c}\tilde{g}. \tag{65}\] We define the following differentiable function on \(\mathcal{C}(\mathcal{N})\) \[\tilde{\lambda}=-\frac{2}{t^{2}}-\tilde{c}, \tag{66}\] which together with (54) and (65) shows that the condition in (7) holds. Then, \((\tilde{g};\tilde{\vartheta},\tilde{\lambda})\) is a Yamabe almost soliton with vertical potential \(\tilde{\vartheta}\). Using (42), (63), (64) and \(\mathrm{d}t=\eta\) from (50), we obtain \(\nabla_{x}\tilde{\vartheta}=\tilde{c}x\), which shows that \(\tilde{\vartheta}\) is torse-forming with conformal scalar \(\tilde{f}=\tilde{c}\) and zero generating form \(\tilde{\gamma}\). Therefore \(\tilde{\vartheta}\) is concircular for arbitrary \(\tilde{c}\) and concurrent for \(\tilde{c}=1\). Obviously, every concircular vector field is torqued, which supports Corollary 5.5. Furthermore, the results in (54), (64), and (66) support Theorem 5.4.
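The same scalar check applies to the associated metric, assuming (7) is the analogous Yamabe almost soliton equation \(\frac{1}{2}\mathcal{L}_{\tilde{\vartheta}}\tilde{g}=(\tilde{\tau}-\tilde{\lambda})\tilde{g}\), consistent with (54), (65) and (66).

```python
# Tilde version of the soliton-constant check: tau~ - lambda~ should equal
# ctilde, matching (1/2) L g~ = ctilde g~ from (65).
import sympy as sp

t, ct = sp.symbols('t ctilde', positive=True)
tau_tilde = -2/t**2          # (54)
lam_tilde = -2/t**2 - ct     # (66)
assert sp.simplify((tau_tilde - lam_tilde) - ct) == 0
print(sp.simplify(tau_tilde - lam_tilde))  # ctilde
```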
2308.11185
MEGA: Multimodal Alignment Aggregation and Distillation For Cinematic Video Segmentation
Previous research has studied the task of segmenting cinematic videos into scenes and into narrative acts. However, these studies have overlooked the essential task of multimodal alignment and fusion for effectively and efficiently processing long-form videos (>60 min). In this paper, we introduce Multimodal alignmEnt aGgregation and distillAtion (MEGA) for cinematic long-video segmentation. MEGA tackles the challenge by leveraging multiple media modalities. The method coarsely aligns inputs of variable lengths and different modalities with alignment positional encoding. To maintain temporal synchronization while reducing computation, we further introduce an enhanced bottleneck fusion layer which uses temporal alignment. Additionally, MEGA employs a novel contrastive loss to synchronize and transfer labels across modalities, enabling act segmentation from labeled synopsis sentences on video shots. Our experimental results show that MEGA outperforms state-of-the-art methods on the MovieNet dataset for scene segmentation (with an Average Precision improvement of +1.19%) and on the TRIPOD dataset for act segmentation (with a Total Agreement improvement of +5.51%).
Najmeh Sadoughi, Xinyu Li, Avijit Vajpayee, David Fan, Bing Shuai, Hector Santos-Villalobos, Vimal Bhat, Rohith MV
2023-08-22T04:23:59Z
http://arxiv.org/abs/2308.11185v1
# MEGA: Multimodal Alignment Aggregation and Distillation ###### Abstract Previous research has studied the task of segmenting cinematic videos into scenes and into narrative acts. However, these studies have overlooked the essential task of multimodal alignment and fusion for effectively and efficiently processing long-form videos (\(>60\) min). In this paper, we introduce **M**ultimodal alignm**E**nt a**G**gregation and distill**A**tion (MEGA) for cinematic long-video segmentation. MEGA tackles the challenge by leveraging multiple media modalities. The method coarsely aligns inputs of variable lengths and different modalities with alignment positional encoding. To maintain temporal synchronization while reducing computation, we further introduce an enhanced bottleneck fusion layer which uses temporal alignment. Additionally, MEGA employs a novel contrastive loss to synchronize and transfer labels across modalities, enabling act segmentation from labeled synopsis sentences on video shots. Our experimental results show that MEGA outperforms state-of-the-art methods on the MovieNet dataset for scene segmentation (with an Average Precision improvement of +1.19%) and on the TRIPOD dataset for act segmentation (with a Total Agreement improvement of +5.51%). ## 1 Introduction In the world of video production, movies are composed of smaller units called shots, scenes, and acts. Shots are a continuous set of frames, a scene is a sequence of shots that tells a story, and an act is a thematic section of a narrative [14]. While computer vision has made significant strides in shot detection [40], scene and act segmentation remain a challenge, despite their potential for smart video navigation, advertisement insertion, and movie summarization. Cinematic content comprises different data sources, including audio, visual, and text data, as well as derivative data sources from the narrative, including location, appearance, tone, or acoustic events. In this work, we will refer to all of these input components as "modalities" of cinematic content. Previous work has not fully explored how to align and aggregate these modalities, which have different granularities. We propose to address scene and act segmentation tasks with a unified multimodal Transformer. However, this approach presents two main challenges. Firstly, there is the issue of cross-modality information synchronization and fusion at the shot level. Previous studies which use multimodal fusion for scene and act segmentation perform early [8, 32] or late fusion of features [35], and have not explored fusion strategies which utilize multimodal temporal alignment. Additionally, the fusion strategies that utilize temporal alignment such as merged attention or cross-modality attention [9, 21] are computationally expensive and not generalizable to a large number of modalities. Secondly, due to the challenges associated with labeling a long video on act segmentation, the labels for act segmentation are provided on synopsis sentences [30] which do not provide timestamps. To avoid the more challenging task of cross-modal synchronization, previous studies on act segmentation [30, 32] rely Figure 1: MEGA works well on both scene segmentation and act segmentation tasks, outperforming previous work by a significant margin. V,T*,T,A denote video, screenplay, subtitle and audio respectively.
on textual screenplay to transfer the labels from synopsis to movie, ignoring the rich multimodal information from the video, and introducing an additional dependency on screenplay data, which is not always available. To address these challenges, we introduce **M**ultimodal alignm**E**nt a**G**gregation and distill**A**tion (MEGA). MEGA includes a novel module called _alignment positional encoding_ which aligns inputs of variable lengths and different modalities at a coarse temporal level. To fuse the aligned embeddings of different modalities in an efficient and effective manner, we adopt the bottleneck fusion tokens [28] and append a set of fusion tokens to each modality. These tokens share the same sequence length as the normalized positional encoding for different modalities, allowing us to inject them with the coarse temporal information, enabling information fusion in a better aligned embedding space. To address the issue of cross-domain knowledge transfer, we introduce a cross-modal synchronization approach. This method allows us to transfer the manually labeled act boundaries from synopsis level to movie level using rich multimodal information, enabling us to train MEGA directly on videos without relying on screenplay, which was a hard requirement for previous works [30, 32]. We test our proposed alignment and aggregation modules on the Movienet-318 [18] and the TRIPOD datasets [30], and we test our cross-modality synchronization module on TRIPOD alone, as the labels are provided on a different modality during training. Our proposed MEGA outperforms previous SoTA on scene segmentation on the Movienet-318 dataset (by +\(1.19\%\) in AP) and on act segmentation on the TRIPOD dataset (by +\(5.51\%\) in TA). Our contributions are: 1. An alignment positional encoding module and a fusion bottleneck layer that perform multimodal fusion with aligned multi-modality inputs. 2. A cross-domain knowledge transfer module that synchronizes features across domains and enables knowledge distillation without requiring extra information. 3. SoTA performance on scene and act segmentation tasks, with detailed ablations, which can be used as a reference for future work. ## 2 Related Work **Scene Segmentation in Cinematic Content**: Recent works on scene segmentation have explored self-supervised learning (SSL) [27, 46, 8]. Self-supervised pretext tasks have included maximizing the similarity between nearby shots compared to randomly selected ones [8], maximizing the similarity between pairs of images selected according to scene consistency [46], and maximizing the similarity between pairs of images selected according to pseudo scene boundaries [27]. While several previous works have used multimodal inputs for this task [35, 46, 8], they have either utilized late fusion of features with predefined weights for each modality [35] or have utilized early integration of features derived via SSL [46, 8]. In this paper, we explore how to better align and integrate features from different modalities for scene segmentation. **Act Segmentation in Cinematic Content**: Movie screenplays follow a general narrative structure on how a plot unfolds across the story. Several theories have been proposed in this domain dating as far back as Aristotle, who defined a 3 act structure with a beginning (protasis), middle (epitasis), and end (catastrophe) [33].
Modern screenplays are usually written with a 6 act structure [14], named "the setup", "the new situation", "progress", "complications and higher stakes", "the final push", and "the aftermath", separated by five turning points (TPs). Prior approaches in narrative segmentation on movies have adopted the aforementioned 6 act structure and posed the problem as identifying the 5 TPs that separate these 6 acts. [32] is, to our knowledge, the only prior work that utilizes the visual domain in act segmentation, by using a pre-trained teacher model trained on textual input to train a student Graph Convolutional Network with audio-visual-text representations as input. In contrast, our work uses a new multimodal fusion and distillation applied on the modalities which are available with the movie. **Multimodal Aggregation**: Previous SoTAs on multimodal fusion with transformers perform early fusion of the features as inputs to the transformer [19], use merged attention between them, requiring more memory [21, 9], use cross attention between two modalities [45, 9], or add cross-modal interactions more efficiently via bottleneck tokens or exchanging some of their latents [28, 15]. [28] provides information flow between modalities efficiently by utilizing bottleneck tokens to tame the quadratic complexity of pairwise attention. This global exchange between modalities may not be enough for long videos, which require an adaptive fusion in different temporal locations. Our model considers [28] as a baseline and extends it to incorporate local information during information exchange between modalities. **Positional Encoding**: Previous studies on improving the positional encoding in long sequence modeling have mostly focused on adding relative distance positional encoding [24, 39, 41]. However, they do not offer solutions for better maintaining the relative position of latent tokens with respect to their starting point in time in long sequences with variable lengths, which is important for long movie narrative understanding [14, 5]. We propose Alignment Positional Encoding to bridge this gap. **Cross-Modality Synchronization & Distillation**: To transfer the labels from synopsis to movie shots, we use cross-modality distillation. Previous cross-modality distillation studies for label transfer across modalities have focused on parallel data with the same granularity [2, 3, 12], or where the alignment is known [31]. Alignment of features at different granularities within the same modality, such as screenplay scenes to synopsis sentences [26, 50], and across modalities, such as synopsis sentences to visual shots [43, 48] and book chapters to visual scenes [44], has been previously explored. While the majority of these works rely on unsupervised optimization techniques [26, 43, 44, 50], there are studies that use supervised labels to improve the representations used for optimization [48]. We present an alignment approach with a self-supervised loss for synchronizing data in different modalities of cinematic content to enable distillation. ## 3 Methodology MEGA processes long videos and performs video segmentation in three major steps (Fig. 2). First, a video \(V\) is chunked into shots, and multiple features such as scene related features and sound event features are extracted at the shot-level (Sec. 3.1).
The system is built on shot-level representation for two reasons: (1) scene and act boundaries always align with shot boundaries, and (2) the content within a shot remains similar, which allows efficient yet meaningful sampling without losing important information. Second, the embeddings from different input samples and each modality are coarsely aligned with a proposed alignment embedding (Sec. 3.2), and the alignment positional tokens are used to refine the commonly used bottleneck fusion tokens for cross-modal feature fusion (Sec. 3.2). Third, a linear layer is applied on top of the fused representations to generate scene and act boundaries (Sec. 3.3). Finally, to address the challenge of cross-domain knowledge transfer where labels from one domain may not directly align with another domain (e.g. act labels on synopsis sentences do not have movie-level timing information), we propose a cross-modal synchronization module that is simple yet effective (Sec. 3.4). ### Preprocessing We chose to utilize Transnet-v2 [40] for shot segmentation due to its superior performance and efficiency. We list our selection of pre-trained models and associated parameters in Tab. 1. As each pretrained feature extraction model has different requirements for input resolution and sampling rate, we first sample the input at various sample rates (as shown in Table 1). It is worth noting that the CLIP\({}_{\text{movie}}\) model is the CLIP [34] model with ViT-B/32 backbone fine-tuned on a paired IMDB-image/text dataset. The IMDB-image dataset comprises 1.6M images from 31.3K unique movies/TV series with 761 unique textual labels. The features attributed to each shot are the ones whose inputs overlap with the shot timestamp (more details are in the Appendix). After feature extraction, we aggregate the features for each shot and normalize the feature dimension with linear projection as: \[E_{i}^{m}=g^{m}\left(\frac{1}{T_{i}^{m}}\sum_{j=1}^{T_{i}^{m}}S_{ij}^{m}\right) \tag{1}\] where \(E_{i}^{m}\in\mathbb{R}^{C}\) denotes the embedding from the \(i\)-th shot of the \(m\)-th modality, \(S_{ij}^{m}\in\mathbb{R}^{D^{m}}\) is the \(j\)-th sampled feature of the \(m\)-th modality from the \(i\)-th shot, and \(g^{m}\) is a linear projection layer that projects the feature dimension of the \(m\)-th modality to a common dimension \(C\) across all modalities. While it is possible to create an end-to-end system starting from raw shot inputs and training the model from scratch, pre-extracting the features from pretrained models is generally more scalable and efficient in actual industrial scenarios, hence we rely on the latter. ### Cross-modality Alignment and Fusion For long video segmentation into scenes and acts, it is important to model short and long term context and perform effective multimodal fusion. However, the commonly used learnable positional embedding only provides fine-grained granularity and is not suitable for high-level semantic alignment across modalities. Furthermore, for tasks such as act segmentation we expect consistent patterns at normalized positions of the temporal inputs, as the theory suggests approximate locations for each turning point (i.e., 10%, 25%, 50%, 75%, 95% [5, 14, 30]).
Hence, in addition to using the traditional positional encoding, we introduce an Alignment Positional Encoding layer \(\in\mathbb{R}^{L_{n}\times C}\), which is a learnable embedding layer, for which the index at the \(i\)-th temporal unit (e.g., shot) is derived by: \[i_{align}=\mathrm{floor}\left(\frac{L_{n}}{L}i\right) \tag{2}\] where \(L\) is the temporal dimension (e.g., number of shots) and \(L_{n}\) is the length of the alignment positional encoding, which is a hyper-parameter (\(L_{n}<L\)). We add the alignment positional encoding to the features in conjunction with the conventional positional encoding (see Fig. 2). This module is shared across different modalities. This module provides extra information to the network that can be helpful in learning from long training samples with varying lengths, and in coarsely aligning inputs from different modalities before information fusion. Inspired by previous works [9, 28, 45], we choose cross-modal feature fusion, which has been shown to be more effective than early or late fusion. To make our approach scale to multiple modalities, we propose an efficient temporal bottleneck fusion based on [28], and follow their mid-fusion strategy, which comprises an unimodal transformer encoder followed by a multimodal transformer encoder (see Fig. 2). \begin{table} \begin{tabular}{l l l l} \hline \hline **Feature extractor** & **Input Freq.** & **Input Res.** & **Feature dim.** \\ \hline **Visual input** & & & \\ BASSL [27] (basal) & Varying & \(224^{2}\) & \(2048\) \\ ResNet\({}_{\text{movie}}\)[51] (place) & \(1Hz\) & \(224^{2}\) & \(2048\) \\ ResNeXt101 [47] (appr) & \(1Hz\) & \(224^{2}\) & \(2048\) \\ I3D [6] (action) & \(16Hz\) & \(16\times 224^{2}\) & \(2048\) \\ CLIP\({}_{\text{movie}}\) (clip) & \(1Hz\) & \(224^{2}\) & \(768\) \\ \hline **Acoustic input** & & & \\ PANNs [23] (audio) & \(1Hz\) & \(10\times 32K\) & \(2048\) \\ \hline **Linguistic input** & & & \\ all-MiniLM-L6-v2 [1] (text) & Varying & - & \(384\) \\ \hline \hline \end{tabular} \end{table} Table 1: Sampling strategy and feature extraction backbones used for different modalities. While [28] proposes to use bottleneck tokens for fusing information across different modalities, these tokens learn to integrate the information across modalities in a global manner. We propose to use \(L_{n}\) fusion tokens, and then integrate them with the same Normalized Positional Encoding to align them with features on the coarse temporal scale (Fig. 3). The transformer layer per modality then takes in an extra set of aligned fusion tokens concatenated with its input (Fig. 2), making it much more efficient compared to other methods such as merged attention or pairwise cross-attention with respect to the number of modalities [9, 21]. Finally, the latent tokens per modality from the last fusion layer (i.e., \(Z^{m}\) for the \(m\)-th modality) are concatenated as the fused representation: \[Z^{\text{fused}}=\text{concat}_{C}\left(Z\right)=\text{concat}_{C}\left(\left[Z^{1},...,Z^{M}\right]\right) \tag{3}\] where \(\text{concat}_{C}(*)\) stands for channel-wise concatenation and \(M\) is the number of modalities.
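To make the data flow concrete, here is a minimal PyTorch sketch of Eqs. (1)-(3): per-shot mean pooling with a per-modality projection \(g^{m}\), the shared alignment index of Eq. (2) applied to both shot tokens and fusion tokens, and the channel-wise concatenation of Eq. (3). This is an illustrative sketch, not the authors' implementation; all module and variable names are assumptions, and the regular (fine-grained) positional encoding and the transformer encoders are omitted.

```python
import torch
import torch.nn as nn


class AlignmentPE(nn.Module):
    """Learnable alignment positional encoding of length L_n (Eq. 2)."""

    def __init__(self, L_n: int, C: int):
        super().__init__()
        self.L_n = L_n
        self.table = nn.Embedding(L_n, C)

    def forward(self, L: int) -> torch.Tensor:
        i = torch.arange(L)
        i_align = torch.div(self.L_n * i, L, rounding_mode="floor")  # Eq. (2)
        return self.table(i_align)  # (L, C): one row per temporal unit


def pool_shots(shot_features, proj):
    """Eq. (1): average the sampled features inside each shot, then project."""
    pooled = torch.stack([f.mean(dim=0) for f in shot_features])  # (L, D_m)
    return proj(pooled)  # (L, C)


# Toy setup: M = 2 modalities, L = 15 shots, common dim C = 8, L_n = 3.
L, C, L_n = 15, 8, 3
dims = [2048, 384]  # e.g. a visual and a textual feature dimension
projs = [nn.Linear(d, C) for d in dims]
align_pe = AlignmentPE(L_n, C)                     # shared across modalities
fusion_tokens = nn.Parameter(torch.randn(L_n, C))  # bottleneck fusion tokens

per_modality_latents = []
for d, g in zip(dims, projs):
    # Variable number of sampled features per shot, as in Tab. 1.
    shots = [torch.randn(int(torch.randint(1, 5, ()).item()), d) for _ in range(L)]
    E = pool_shots(shots, g) + align_pe(L)         # shot tokens + alignment PE
    # Fusion tokens receive the alignment PE at their own indices (Fig. 3)
    # and are concatenated to the modality's input sequence.
    Z = torch.cat([E, fusion_tokens + align_pe.table.weight], dim=0)
    per_modality_latents.append(Z)                 # would pass through encoders

# Eq. (3): channel-wise concatenation of the shot latents across modalities.
Z_fused = torch.cat([z[:L] for z in per_modality_latents], dim=-1)  # (L, C*M)
print(Z_fused.shape)  # torch.Size([15, 16])
```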
### Scene/Act Segmentation MEGA adopts a similar approach to previous works [8, 27, 46], where scene segmentation is framed as a binary classification task using a key-shot representation from a window of \(2\times k+1\) shots (\(k\) shots before and after): \[y_{i}=\varphi\left(Z_{i}^{\text{fused}};\theta_{s}\right) \tag{4}\] where \(y_{i}\) is the logit prediction for the \(i\)-th key-shot. \(\varphi\) denotes the linear layer with learnable parameters \(\theta_{s}\), and a cross entropy loss is utilized [8, 27, 46]. The act segmentation task is formulated as \(N_{tp}\) linear prediction heads (\(N_{tp}\) is the number of turning points, and \(\theta_{a_{n}}\) denotes the head parameters for the \(n\)-th turning point [32]), applied to each individual shot from a temporal model that takes all the shots of the movie as input. To make a prediction at the \(i\)-th shot for the \(n\)-th act boundary (\(n\in\{1,\dots,N_{tp}\}\)), we use: \[y_{in}=\varphi\left(Z_{i}^{\text{fused}};\theta_{a_{n}}\right) \tag{5}\] where \(y_{in}\) is the logit prediction for the \(i\)-th shot and \(n\)-th turning point. Figure 3: Illustration of Normalized Positional Encoding integration into the temporal tokens in one modality. This figure shows the integration of normalized Positional Encoding with 1) temporal shots for one modality and 2) bottleneck fusion tokens, where \(L=15\) and \(L_{n}=3\). For (1), \(i_{align}\) is obtained per shot index \(i\), and then each shot is integrated with normalized PE. For (2), each randomly initialized bottleneck token is integrated with normalized PE for its corresponding index. Figure 2: The pipeline of the proposed method includes 1) Preprocessing: splitting the video into shots, extraction of features from each shot, pooling and normalization; 2) Cross-modality fusion and alignment: with the help of alignment positional encoding and bottleneck fusion tokens; 3) Scene/Act Segmentation comprising the segmentation heads. For Scene Segmentation, CE loss is used and for Act Segmentation knowledge transfer loss is used (refer to Sec. 3.4 for details). ### Cross-domain Knowledge Transfer It is quite common in machine learning that certain modalities may lack annotations that are directly available in other modalities with a different information granularity. To address this, we propose a cross-modality synchronization scheme that enables cross-modal distillation. We utilize this module for act segmentation, where we aim to transfer act labels from synopsis sentences to movie-level timestamps. Importantly, our approach does not require additional information, such as screenplay [32], to bridge the gap. Our knowledge distillation utilizes (1) an individual network to learn the synopsis-based act segmentation in a supervised manner with cross-entropy loss similar to [31]; and (2) a novel synchronization approach between synopsis and movie. For (1) we use the same architecture mentioned in Secs. 3.1, 3.2, setting \(C_{\text{fused}}\) equal to the multimodal shot model setting. Additionally, similar to the shot-level linear prediction head (see Eq. 5), we use a sentence-level linear prediction head, resulting in \(q_{in}\) logits for the \(i\)-th sentence of the \(n\)-th TP. A supervised cross entropy loss is used to learn the synopsis labels from predictions for each turning point (\(\mathscr{L}_{ce}\)). For (2) we seek a synchronization matrix \(W\in\mathbb{R}^{L_{sh}\times L_{syn}}\) between \(L_{sh}\) shots and \(L_{syn}\) synopsis sentences for a sample, where \(w_{ij}=1\) if the \(i\)-th shot matches with the \(j\)-th synopsis sentence and \(w_{ij}=0\) otherwise. Assuming \(F(.;\theta)\) represents a parametric reward function (with parameters \(\theta\)), to find \(W\), we define an objective as: \[\max_{W,\theta}\sum_{i,j} w_{ij}F(.;\theta)-\lambda\sum_{i,j}|w_{ij}| \tag{6}\] \[s.t.\ 0\leq w_{ij}\leq 1\] The Expectation-Maximization algorithm is used to solve the objective in Eq. 6.
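Before the E- and M-steps are spelled out below, a schematic NumPy sketch of this alternating optimization may help. It assumes \(F(\cdot;\theta)\) scores shot/sentence pairs by cosine similarity and uses the thresholded, diagonally banded E-step whose closed form is derived in the Appendix; the function names and the update loop are illustrative assumptions, not the authors' code.

```python
import numpy as np

def e_step(sim, lam, xi=0.3):
    """Closed-form E-step: keep pairs above threshold lam inside a diagonal band."""
    L_sh, L_syn = sim.shape
    i = np.arange(L_sh)[:, None]
    j = np.arange(L_syn)[None, :]
    band = (j < i * L_syn / L_sh + xi * L_syn) & (i < j * L_sh / L_syn + xi * L_sh)
    return ((sim >= lam) & band).astype(float)  # W with entries in {0, 1}

def em_synchronize(shot_feats, syn_feats, lam, steps=10):
    """Alternate: fix theta and solve W (E-step), then fix W and update theta (M-step)."""
    W = None
    for _ in range(steps):
        sim = shot_feats @ syn_feats.T  # cosine similarity for unit-norm rows
        W = e_step(sim, lam)            # E-step
        # M-step: with W fixed, the encoders producing shot_feats / syn_feats
        # would be updated by minimizing the contrastive loss (Eq. 7); omitted.
    return W

# Tiny example with random unit-norm features.
rng = np.random.default_rng(0)
u = rng.normal(size=(30, 8)); u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.normal(size=(5, 8));  v /= np.linalg.norm(v, axis=1, keepdims=True)
print(em_synchronize(u, v, lam=0.5).sum())  # number of matched pairs
```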
We estimate the target variable \(W\) via fixed parameters (i.e., \(\theta\)) in the E-step, and update the parameters while the target variable is known in the M-step. **E-step**: Assuming \(F(.;\theta)\) returns the similarity of an input shot and synopsis sentence pair, the E-step has a closed-form solution. In the E-step, following [26], we reduce the search space during optimization to only the pairs which are inside a diagonal boundary (see the proof for the E-step and visualizations of the expected synchronization matrix for different examples in the Appendix). **M-step**: Using all samples in a batch with \(L_{\text{SH}}\), \(L_{\text{SYN}}\) total number of shots and synopsis sentences, we form their cosine similarity matrix of dimension \(L_{\text{SH}}\times L_{\text{SYN}}\). For each query (synopsis sentence/movie shot, respectively), the positive keys (movie shot/synopsis sentence, respectively) are derived from the expectation step. Negative keys are the keys lying outside the diagonal boundary of the similarity matrix of shot-synopsis pairs for one movie [26], and all the keys from other movies within the batch. Here each query (synopsis sentence/movie shot, respectively) can have more than one positive key (movie shot/synopsis sentence, respectively) attached to it. Following [20], we adopt a modified version of the InfoNCE loss and combine it with the symmetric contrastive loss [34] as: \[\mathscr{L}_{c}= -\sum_{i=1}^{L_{\text{SYN}}}\frac{1}{|\hat{y}_{i}|}\sum_{k=1}^{L_{\text{SH}}}\hat{y}_{ik}\text{log}\frac{exp(v_{i}u_{k}/\tau)}{\sum_{j=1}^{L_{\text{SH}}}exp(v_{i}u_{j}/\tau)}- \tag{7}\] \[\sum_{i=1}^{L_{\text{SH}}}\frac{1}{|\hat{y}_{i}^{T}|}\sum_{k=1}^{L_{\text{SYN}}}\hat{y}_{ik}^{T}\text{log}\frac{exp(u_{i}v_{k}/\tau)}{\sum_{j=1}^{L_{\text{SYN}}}exp(u_{i}v_{j}/\tau)}\] where \(\tau\) is a learnable temperature parameter, \(\hat{y}_{ij}\) is a binary indicator of positive vs. negative pairs, \(u_{i}\) is the normalized feature for the \(i\)-th shot and \(v_{i}\) is the normalized feature for the \(i\)-th synopsis sentence. **Knowledge distillation**: Knowledge distillation is used to transfer the knowledge available for the training samples on synopsis. The predictions from the synopsis model are mapped to shots using a matrix of their similarities as calculated in the maximization step, for each sample. The similarity scores for each shot are normalized along the synopsis sentences with softmax. The logit predictions from the synopsis model are transferred to shots by multiplication with the normalized similarity matrix. A softmax along the shots is applied to the transferred logits to derive the probability scores for each shot. Following [31], we use a Kullback-Leibler divergence loss between predicted outputs for each shot and the transferred probabilities (\(\mathscr{L}_{kd}\)) (more details are provided in the Appendix). The cross-domain knowledge transfer module can be trained by simply adding the losses together as: \[\mathscr{L}=\alpha_{c}\mathscr{L}_{c}+\alpha_{ce}\mathscr{L}_{ce}+\alpha_{kd}\mathscr{L}_{kd} \tag{8}\] where \(\alpha_{c}\), \(\alpha_{ce}\), and \(\alpha_{kd}\) are hyperparameters that control the weights of the three losses. ## 4 Experiments ### Dataset We test our model on two commonly used datasets: **Movienet-318 [18]:** consists of 1100 movies, out of which 318 movies are annotated for the task of scene segmentation. The annotated dataset is split into 190, 64, and 64 movies for train, validation and test splits, respectively.
We report the Average Precision (AP) and F1-score (F1) on the test split following previous work [35, 46]. **TRIPOD [32]** includes 122 movies, split into 84 and 38 movies for train and test, respectively. This dataset includes the annotations of 6 act boundaries ("the setup", "the new situation", "progress", "complications and higher stakes", "the final push", "the aftermath") on the movie synopsis sentences for the training set, and on the movie screenplay scenes for the test set. The authors have also released soft probability scores (silver annotations) for the training set, using [30]1. To find the timestamps for the screenplay scenes in the movie, following [32] we used Dynamic Time Warping (DTW) to align the timed subtitles from the movie to the monologue lines in the screenplay. Following [32], we use total agreement (TA), partial agreement (PA) and distance (D) as evaluation metrics. Footnote 1: [https://github.com/ppapalampidi/GraphTP](https://github.com/ppapalampidi/GraphTP) ### Implementation Details **For scene segmentation:** We train our model with 8 V100 GPUs with a total batch size of 1024. The Adam [22] optimizer is used with a learning rate of 1e-4. We train the model for 20 epochs. GeLU [16] is used as the activation function by default, and we use weighted cross entropy to balance the positive and negative samples at the batch level. We choose a shot sequence length of 17 (\(k=8\)) following [27]. We set \(L_{n}=2\) for this model. **For act segmentation:** We train our model with 4 V100 GPUs with a total batch size of 4. The SGD optimizer is used with a learning rate of 1e-3. We train the model for 10 epochs. \(\lambda\) in Eq. 6 is empirically set differently for each synopsis sentence of each sample, as the 99th percentile of the similarity scores between the synopsis sentence and all the shots corresponding to that sample. \(\alpha_{c}\), \(\alpha_{ce}\), \(\alpha_{kd}\) are set to 1, 1, and 10. We set \(L_{n}=100\) for the shot model and \(L_{n}=20\) for the synopsis model. We use max pooling to aggregate the shot-level predictions to scene level. We use all shots from a video for act segmentation. ### Main Results **SoTA on Scene Segmentation.** We first show MEGA outperforms previous SoTA on MovieNet-318 [27] for scene segmentation (+\(1.19\%\) on AP and +\(8.28\%\) on F1). With the same input visual features, MEGA outperforms previous SoTA [27] (Tab. 2; +\(0.52\%\) on AP and +\(3.69\%\) on F1), which indicates that the proposed approach is effective. Thanks to the proposed cross-modality fusion module, MEGA generalizes and benefits from additional information extracted from visual signals. MEGA with 3 visual modalities (clip, place, and basal) outperforms the single-modality model by +\(0.67\%\) on AP and +\(4.59\%\) on F1, which shows the proposed fusion works as expected. It is worth mentioning that the proposed method is scalable and generalizes to various numbers of modalities at different scales, which makes it flexible for real-world applications. **SoTA on Act Segmentation.** We then show that MEGA establishes the new SoTA performance on act segmentation on the TRIPOD dataset (Tab. 8). We first show that MEGA outperforms previous SoTA on the TRIPOD [32] dataset with only visual signals as input. Compared to other works that take textual inputs [29, 32], MEGA is able to achieve better performance. Furthermore, in real-world applications, the visual input (the video) is often easier to obtain than textual inputs such as screenplay [32]. We further show that MEGA* (Tab.
8), which swaps our synchronization module to use a synchronization similar to the SoTA [32], outperforms GRAPHTP, which demonstrates the effectiveness of the proposed approach including the alignment and fusion modules. It is worth mentioning that, with multiple features extracted from the **visual** media modality alone, MEGA outperforms previous SoTA, which makes the proposed model applicable for real-world scenarios, as it is usually harder to get additional media modalities (e.g., screenplay) which are used in other works. By further aggregating the results from text input, MEGA establishes the new state-of-the-art on the TRIPOD dataset (Tab. 8). MEGA almost doubles the performance of previous work [29, 32] with +\(5.51\%\) on TA, +\(9.15\%\) on PA, and \(-0.81\) on D. This shows the proposed multimodal fusion scales and generalizes well to multiple modalities. The cross-modality distillation also works robustly for various settings. We noticed that adding the acoustic features is not always helpful to the performance (Tab. 8), probably because acoustic information provides redundant or unhelpful information for the task of act boundary segmentation. ### Ablations We perform ablations to examine the effectiveness of major building blocks of MEGA. We use features extracted from the visual media modality (for scene segmentation: clip, place, basal; and for act segmentation: clip, appr, action, place), with the proposed normalized positional encoding, multi-modality bottleneck fusion, and cross-modality synchronization by default unless specified. The training and evaluation follow the same protocols as mentioned in Sec. 4.3. \begin{table} \begin{tabular}{l l l c c} \hline \hline _Approach_ & _Modality_ & _Pretrained on_ & \begin{tabular}{c} _AP_\(\uparrow\) \\ _[\%]_ \\ \end{tabular} & \begin{tabular}{c} _F1_\(\uparrow\) \\ _[\%]_ \\ \end{tabular} \\ \hline Random [35] & - & - & 8.2 & - \\ \hline \multicolumn{4}{l}{**Visual only input**} \\ GraphCut [36] & V & Places [51] & 14.1 & - \\ SCSA [7] & V & Places [51] & 14.7 & - \\ DP [13] & V & Places [51] & 15.5 & - \\ Grouping [37] & V & Places [51] & 17.6 & - \\ StoryGraph [42] & V & Places [51] & 25.1 & - \\ Siamese [4] & V & Places [51] & 28.1 & - \\ LGSS [35] & V & Places [35] & 39.0 & - \\ LGSS [35] & V & Cast [17, 49] & 15.9 & - \\ LGSS [35] & V & Action [11] & 32.1 & - \\ ShotCoL [8]\(\dagger\) & V & Movienet [18] & 52.89 & 49.17 \\ SCRL [46] & V & Movienet [18] & 54.82 & 51.43 \\ BaSSL [27] & V & Movienet [18] & 57.4 & 47.02 \\ \hline MEGA & V & Movienet [18] & 57.92 & 50.71 \\ **MEGA** & **V** & **M+P+I** & 58.59 & 55.30 \\ \hline \hline \end{tabular} \end{table} Table 2: Scene boundary detection: comparison with SoTA. \(\dagger\) means the numbers are copied from [46]. M+P+I denotes pre-trained on Movienet [18], Places [51] and IMDB. **Alignment Positional Encoding.** We report the ablations on Alignment PE in Tab. 7a. We notice that the proposed
This is because of two reasons: 1) inputs to the act segmentation model have variable lengths (all the shots from a video) as opposed to scene segmentation which has fixed length inputs, hence the alignment PE adds more information to the act segmentation model; and 2) Alignment PE is shared across all modalities and fusion layer and its absence causes more harm to sequences with longer lengths, as act segmentation takes longer shot sequences (entire video) compared to scene segmentation (17 shots). Overall, the drop in performance indicates that the proposed Alignment P.E. is an essential component for video segmentation tasks. We further remove the proposed Alignment PE from bottleneck fusion tokens and show results in Tab. 6(b). We observe a performance drop when removing the Alignment PE. It is worth mentioning that the performance drop is more noticeable when multiple modalities are involved, (e.g. V + T model), which suggests that the information from subtitles requires precise temporal alignment in order to be effective during multimodal fusion. **Multimodal Fusion Strategies.** Tab. 6(c) compares our proposed fusion tokens with the commonly used late fusion strategy. The results show that the temporal bottleneck fusion clearly outperforms late fusion, demonstrating the effectiveness of aligned bottleneck tokens in improving the performance across both tasks. **Different Input Modalities.** We study the impact of different modalities by removing them from the input and present results on scene segmentation and act segmentation. In **scene segmentation (Tab. 6(d))**, we find that the basal feature followed by place are the most important. This is because the BaSSL [27] model is pretrained for scene segmenta \begin{table} \end{table} Table 4: Ablation studies on MEGA components. \begin{table} \end{table} Table 3: TP identification: comparison with SoTA. MEGA* denotes the MEGA using the same synchronization as [30] for fair comparison. T*,V,T,A denote Textual-screenplay, Visual, Textual-subtitle and Acoustic features, respectively. tion and place consistency is critical for scene segmentation. In **act segmentation (Tab. 6(e))**, we find that the CLIP feature pre-trained on IMDB, followed by subtitle are the most important features. This is because the clip model is pre-trained on abstract concepts (e.g. genre, micro genre, character type and coarse key places), thus the CLIP feature contains richer semantics, and subtitle provides complementary rich semantic information. These high level semantics are considered useful for act segmentation. Overall, when all the features are included, the model is able to leverage the unique information provided by each feature and yields the best performance. **Cross-modality Synchronization.** Tab. 6(f) studies the effectiveness of the proposed cross modality synchronization on act segmentation. To establish a baseline, we use the probability scores provided by [32] and derived by aligning synopsis sentences to scenes using screenplay [30] (T* in Tab. 6(f) denotes screenplay). For a fair comparison, we repeat the scores provided for each scene on all of its shots and then re-normalize2. MEGA with visual only input outperforms [30] across two metrics (PA and TA) and when we add the subtitle features (T in Tab. 6(f) denotes subtitle), MEGA outperforms [30] across all the metrics. The results demonstrate that the proposed cross-domain synchronization works effectively and generalize well to various modalities. 
It is worth mentioning that the proposed method generates act segmentation without requiring the screenplay, which makes it practical for various industrial applications. Footnote 2: This strategy maintains the rank of probability scores for different turning points across different segments of the movie, and is consistent with the max-pooling of prediction scores on shot level to derive the scene level predictions during evaluation (see Sec. 4.2). **Impact of Feature Set.** In Tab. 6(g), we examine whether feature set plays an outsized role in MEGA's improved performance over other methods. On the same feature set of M + P + I, we outperform LGSS [35] while only introducing a small number of additional parameters. BaSSL [27] achieves slightly lower performance using only Movienet features, indicating that the use of additional features is not the primary reason for our improved performance. **Impact of Model Size.** We finally look into the impact of model size on the performance. To establish the fair comparison, we first expand GRAPHTP [32] which shares the same input as MEGA to roughly the same number of parameters as MEGA. Our results on act segmentation (Tab. 6(h)) show that MEGA outperforms the previous SoTA, GRAPHTP [32], which indicates the MEGA has efficient and effective design. ### Fusion with Audio Modality We also experimented with the effect of adding audio. Movienet [18] has released _Short Time Fourier Transform_ (STFT) features extracted from audio files. We use the same audio backbone as [35]. However, by adding audio features in MEGA, we see a drop in the performance (see Tab. 5). Although [35, 8] have shown improvements across multiple models by adding the audio modality across Movienet-150 dataset [35] (where the split is not publicly available) or a private dataset: AdCuepoints [8], Wu _et al._[46] have observed a similar trend as our experiments, where adding the released audio features in Movienet [18] to SCRL [46] and ShotCoL [8] drops the performance (see Tab. 5). Possible reasons can be 1) the audio features published by Movienet via STFT3 are an incomplete view of the shot from audio modality either in representation or in terms of the audio chunk from each shot they used, and the raw audio files are not available. 2) our multimodal fusion strategy cannot exploit the possible complementary information or filter the harmful or confusing signals from the audio modality. Footnote 3: [https://github.com/movienet/movienet-tools](https://github.com/movienet/movienet-tools) ## 5 Discussion and Conclusion **Limitations.** The explorations in this work are limited to appearance, location, activity, acoustic and textual features. For long movie segmentation, however, providing the name of actors (tabular data) and having a specific component for actor identification in the movie can help both the synchronization and the act/scene segmentation models. We will explore the use of this data. The results demonstrated that richer semantic representations from the clip features enhanced the performance for long video segmentation. To obtain better performing representations for long video understanding one can use large amount of unlabeled data with carefully selected pretext tasks for understanding long context. We will investigate the use of SSL to train a rich multimodal representation from videos and will examine the learned representations across multiple long video understanding tasks. 
**Conclusion.** This paper introduces MEGA, a unified solution for long video segmentation. Our design of normalized positional encoding and its integration into fusion tokens allows MEGA to learn consistent patterns from inputs with variable lengths and efficiently and effectively align and fuse them across different modalities. Our synchronization schema further allows rich multimodal tokens to be used in transferring the labels from synopsis sentences to movie shots, facilitating the knowledge distillation from synopses to movies. MEGA achieves state-of-the-art performance compared with previous works. \begin{table} \begin{tabular}{l l c} \hline \hline _Approach_ & _Modality_ & _AP_\(\uparrow\) [\%] \\ \hline LGSS [35] & V(place) & 39.00 \\ ShotCoL [8]\(\ddagger\) & V & 46.77 \\ SCRL [46] & V & 54.55 \\ \hline **MEGA** & V(place,clip,basal) & 58.59 \\ \hline LGSS [35] & V(place)+A & 43.4 \\ ShotCoL [8]\(\ddagger\) & V+A & 44.32 \\ SCRL [46] & V+A & 50.80 \\ \hline **MEGA** & V(place,clip,basal)+A & 55.36 \\ \hline \hline \end{tabular} \end{table} Table 5: Scene Seg. with audio. \(\ddagger\) denotes copying from [46]. ## Appendix A Architecture The details of our model architecture are shown in Tab. 6(a). The model is composed of unimodal encoders, fusion encoders and an output layer. The unimodal encoders are repeated for all \(M\) modalities and consist of a Shot Encoder, Bottleneck layer, and Transformer Encoder repeated \(N_{u}\) times. The Fusion starts with concatenating the fusion tokens and further includes the Transformer Encoder repeated \(N_{f}\) times for each modality, and a final concatenation of the latent tokens for different modalities (\(C_{\mathrm{fused}}=C\times M\)) per shot. The hyperparameter values used to derive the architecture per task (Scene/Act Segmentation) are provided in Tab. 6(b). ## Appendix B Feature Extraction The CLIP\({}_{\text{movie}}\) model is the original CLIP [34] model with ViT-B/32 backbone fine-tuned on the IMDB-image dataset. The IMDB-image dataset includes 1.6M images from 31.3K unique movies/TV series paired with 762 unique textual labels. This model is trained with a contrastive loss similar to CLIP [34]. The differences with [34] are: a) the textual labels are from a limited set, b) the positive and negative keys for a query sample are identified by their labels, c) the number of positive keys per image can be more than one in a batch, and d) not all other samples in a batch are considered negative keys for a query sample; only the ones with different sets of labels are considered negative keys. ## Appendix C Details of Feature Extraction Per Shot Bassl features are used for scene segmentation and are extracted from 3 key frames per shot, as Movienet only releases 3 key frames per shot. Appr, place, clip, action features are extracted every 1 second. The input for extracting appr, place, and clip features is 1 frame, and for extracting the action features is a sequence of 16 frames. Audio features are extracted every 1 second, and the audio model's input is a window of 10 seconds. Text features are extracted for each subtitle timed text and each sentence of the synopsis. For each shot, we assign the features whose input overlaps with the shot time interval. E.g., audio is split into 10 s windows with an overlap of 9 s (stride = 1 s). Then, we get the features for windows which overlap with each shot, or for subtitles, we get the features for the subtitle timed segments whose time interval overlaps with each shot time interval.
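As an illustrative sketch of this overlap rule (assumed helper names and toy data, not the authors' code), a feature is attributed to every shot whose time interval intersects the feature's input window:

```python
def assign_features_to_shots(shots, windows):
    """shots, windows: lists of (start, end) in seconds; returns window indices per shot."""
    out = []
    for (s0, s1) in shots:
        # Half-open interval intersection: [w0, w1) overlaps [s0, s1).
        out.append([k for k, (w0, w1) in enumerate(windows) if w0 < s1 and w1 > s0])
    return out

# Audio example: 10 s windows with stride 1 s over a 30 s clip.
windows = [(t, t + 10) for t in range(0, 21)]
shots = [(0.0, 4.2), (4.2, 11.5), (11.5, 30.0)]
print(assign_features_to_shots(shots, windows))
```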
## Appendix D Expectation Step For synchronization between synopsis sentences and movie shots, we use Eq. 9 as the objective. This objective is solved in an alternating manner, where we estimate the target variable \(W\) via fixed parameters in the first step (E-step), and update the parameters while the target variable is known (M-step). \[\begin{split}\max_{W,\theta}&\sum_{i,j}w_{ij}F(.;\theta)-\lambda\sum_{i,j}|w_{ij}|\\ & s.t.\ 0\leq w_{ij}\leq 1\end{split} \tag{9}\] \begin{table} \end{table} Table 6: Architecture details.
The ablation experiments show a similar trend across \(AP\) and \(F1\) score. Tab. 6 shows an extra experiment with ablation of bassl features (i.e., -bassl(50)). In this experiment, we further inspect this ablation by continuing the training for this model for 50 epochs, which still shows a significant difference with the model with all the modalities included, demonstrating the effectiveness of multimodal fusion in MEGA. Additionally, given that all experiments for Act Segmentation with MEGA are repeated 8 times, and the performance Figure 4: This figure demonstrates the two applications used to test our model for video segmentation: 1) act boundary segmentation (tops) and 2) scene boundary segmentation (bottom). The model utilizes several features extracted from pretrained models to predict a decision for each shot. metrics are averaged, the standard deviations (STD) of all such experiments for comparison with previous SoTA and in ablation experiments are provided in parentheses in Tabs. 8 and 7, respectively. Considering the STD values, results still demonstrate that MEGA outperforms previous SoTA on act segmentation, and the ablations of various components in MEGA worsen the performance, showing the importance of each of those components. Additionally, Tab. 6(a) includes an extra experiment (i.e., w/o align. PE(50)), where we further investigate the ablation of this module for act segmentation by training the model without the normalized positional encoding for longer time (50 epochs). By further training, the performance gap reduces but still remains to be significant, which indicates that the proposed normalized positional encoding not only helps the model converge faster but also is a necessary component. Furthermore, Tab. 6(e) shows an extra experiment (-clip(20)), where we further trained the model for 20 epochs. The results are improved but there is still a significant gap with the model which uses clip, showing that the importance of multimodal fusion in MEGA. ### Hyperparameter Search in Align. PE. for \(L_{n}\) We demonstrate the performance of act segmentation model with different values for \(L_{n}\) (i.e., length of Align. PE.) in Fig. 5. The experiment results reveal that while MEGA performs best at \(L_{n}=100\) for act segmentation, it performs robustly across a range of \(L_{n}\) values (specifically, within the range of \(50\) to \(150\)). Align. PE. is designed to provide video-level coarse progress information as a complementary signal next to regular PE. If \(L_{n}\) is very small, the granularity of the align. PE. becomes too coarse, whereas if \(L_{n}\) is too large, it becomes overly detailed as regular PE; which explain the performance decline we observe in Fig. 5 at small and big \(L_{n}\) values. Aside from better performance, using \(L_{n}\ll L\), where \(L\) is the input length, results in better efficiency. During multimodal fusion, the number of fusion tokens is equal to \(L_{n}\), hence the memory consumption during multimodal fusion attention calculation is \((L_{n}\times m+L)^{2}\), where \(m\) is the number of modalities. Our approach enables aligned multimodal alignment during fusion that scales efficiently with respect to the number of modalities in terms of memory consumption. ### Visualization #### f.3.1 Feature Importance To further look into how MEGA is integrating different modalities to make a prediction, we calculated the GradCAM [38] for the output features from the fusion module. 
The ablation experiments show a similar trend across \(AP\) and \(F1\) score. Tab. 6 shows an extra experiment with ablation of bassl features (i.e., -bassl(50)). In this experiment, we further inspect this ablation by continuing the training for this model for 50 epochs, which still shows a significant difference with the model with all the modalities included, demonstrating the effectiveness of multimodal fusion in MEGA. Additionally, given that all experiments for Act Segmentation with MEGA are repeated 8 times and the performance metrics are averaged, the standard deviations (STD) of all such experiments for comparison with previous SoTA and in ablation experiments are provided in parentheses in Tabs. 8 and 7, respectively. Figure 4: This figure demonstrates the two applications used to test our model for video segmentation: 1) act boundary segmentation (top) and 2) scene boundary segmentation (bottom). The model utilizes several features extracted from pretrained models to predict a decision for each shot.
type_ & _AP_ & _F1_ & _TA_ & _PA_ & \(D\) \\ \hline LateFusion & 58.24 & 48.27 & 12.57 (1.71) & 19.21 (2.34) & 10.00 (1.22) \\ Bottleneck & 58.59 & 55.29 & 13.93 (2.21) & 20.72 (2.28) & 9.19 (0.58) \\ \hline \hline \end{tabular} (c) Multi-modal fusion strategies. \begin{tabular}{l c c c c c} \hline \hline _change_ & _AP_ & _F1_ & _change_ & _TA_ & _PA_ & \(D\) \\ \hline -clip & 58.09 & 53.95 & -clip & 6.09 (0.86) & 10.66 (1.48) & 21.81 (4.30) \\ -place & 57.51 & 54.83 & -clip(20) & 11.99 (1.73) & 17.89 (3.04) & 10.32 (1.68) \\ -basal & 51.88 & 43.04 & -place & 13.57 (2.78) & 19.87 (4.00) & 9.22 (1.38) \\ -bassl(50) & 53.80 & 43.67 & -action & 13.31 (1.98) & 20.20 (2.90) & 10.38 (2.03) \\ -clip-place & 57.92 & 50.71 & -appr & 13.42 (2.34) & 20.59 (3.40) & 8.85 (0.90) \\ - & 58.59 & 55.29 & - & 13.93 (2.21) & 20.72 (2.28) & 9.19 (0.58) \\ & & & +subtitle & 13.93 (2.21) & 20.72 (2.28) & 9.19 (0.58) \\ & & & +subtitle+audio & 14.19 (1.13) & 22.10 (1.46) & 9.68 (1.06) \\ \hline \hline \end{tabular} (d) Impact from input modalities on scene segmentation. \begin{tabular}{l c c c c c} \hline \hline _Approach_ & _Feature Set Pretrained on_ & _AP_ & _F1_ & _Params_ & _SPS_ \\ \hline BaSSL [27] & Movienet & 57.4 & 47.02 & 15.77M & 6244.99 \\ LGSS [35] & M+P+I & 52.93 & 48.75 & 66.16M & 206.36 \\ MEGA & M+P+I & 58.59 & 55.30 & 67.57M & 1736.13 \\ \hline \hline \end{tabular} (g) Impact from feature set and model size on scene seg. SPS denotes \(\#\) of samples per second. \begin{tabular}{l c c c c c} \hline \hline _Approach_ & _Feature Set_ & _TA_ & _PA_ & \(D\) & _Params_ & _SPS_ \\ \hline GRAPHTP [32] & Set1 [32] & 9.12 & 12.63 & 9.77 & 0.745M & 25.40 \\ GRAPHTP [32] & Set2 & 4.72 & 7.37 & 9.69 & 6.78M & 14.36 \\ MEGA & Set2 & 14.19 (1.13) & 22.10 (1.46) & 9.68 (1.06) & 6.78M & 18.24 \\ \hline \hline \end{tabular} (h) Impact from feature set and model size on act seg. Set1 includes Visual (appr), Audio (YAMNet), Textual (script-USE). Set2 has Visual (appr,clip,action,place), Audio (audio), Textual (text from subtitle). \end{table} Table 7: Ablation studies on MEGA components. Values within parentheses are standard deviations for multiple runs. \begin{table} \begin{tabular}{l l l l l l} \hline \hline _Approach_ & _Modality_ & \begin{tabular}{c} _Modality_ \\ _for synch._ \\ \end{tabular} & \begin{tabular}{c} _TA_ \\ _[\%]_ \\ \end{tabular} & \begin{tabular}{c} _PA_ \\ _[\%]_ \\ \end{tabular} & \begin{tabular}{c} \(D\) \\ _[\%]_ \\ \end{tabular} \\ \hline Random (Even. 
distribution) [32] & - & T* & 4.82 & 6.95 & 12.35 \\ Theory [14, 30] & - & T* & 4.41 & 6.32 & 11.03 \\ Distribution position [32] & - & T* & 5.59 & 7.37 & 10.74 \\ \hline \multicolumn{5}{l}{**Single modality input**} \\ TEXTRANK [25] & T & T* & 6.18 & 10.00 & 17.77 \\ SCENESUM [10] & T & T* & 4.41 & 7.89 & 16.86 \\ TAM [29] & T & T* & 7.94 & 9.47 & 9.42 \\ GRAPHTP [32] & T & T* & 6.76 & 10.00 & 9.62 \\ MEGA* & V & T* & 10.51 (0.72) & 14.54 (1.09) & 8.98 (0.25) \\ MEGA & V & V & 13.93 (2.21) & 20.72 (2.28) & 9.19 (0.58) \\ \hline \multicolumn{5}{l}{**Multi-modality input**} \\ TEXTRANK [25] & T+A+V & T* & 6.18 & 10.00 & 18.90 \\ SCENESUM [10] & T+A+V & T* & 6.76 & 11.05 & 18.93 \\ TAM [29] & T+A+V & T* & 7.36 & 10.00 & 10.01 \\ GRAPHTP [32] & T+A+V & T* & 9.12 & 12.63 & 9.77 \\ MEGA* & T+V & T* & 11.14 (1.77) & 15.20 (2.33) & **8.96 (0.35)** \\ MEGA & T+V & T+V & **14.63 (1.10)** & 21.78 (1.22) & **8.96 (0.65)** \\ MEGA* & T+A+V & T* & 10.00 (0.98) & 14.08 (1.56) & 8.96 (0.39) \\ MEGA & T+A+V & T+A+V & 14.19 (1.13) & **22.10 (1.46)** & 9.68 (1.06) \\ \hline \hline \end{tabular} \end{table} Table 8: TP identification: comparison with SoTA. MEGA* denotes the MEGA using the same synchronization as in [30] for fair comparison. T*,V,T,A denote Textual-screenplay, Visual, Textual-subtitle and Acoustic features respectively. Values within parentheses are standard deviations for multiple runs. Figure 6: GradCAM values shown in pie charts for 9 different predictions on scene segmentation on the test set, along with the probability prediction score for the middle shot being the end of a scene and its groundtruth label. Figure 8: The expected value of synchronization matrix for 5 different samples on the test set. The actual matrices had to be resized for visualization (ratio of height/width is set to 3). Figure 7: GradCAM for 5 different predicted turning points on the test set, along with their synopsis annotated sentence for that turning point. Figure 9: Attention scores derived from the fusion transformer encoder on Scene Segmentation model. Figure 10: Attention scores derived from the fusion transformer encoder on Act Segmentation model.
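The Grad-CAM-style modality attribution described above is straightforward to sketch. The following is a minimal, self-contained illustration rather than the MEGA implementation: the module names, tensor shapes, and the stand-in FC head are hypothetical, and a generic scalar plays the role of the maximum prediction logit.

```python
import torch

# Hypothetical activations from each modality fusion module, shape
# (channels, time). In the real model these come out of MEGA's fusion
# modules; here they are random stand-ins.
modalities = ["bassl", "place", "clip"]
acts = {m: torch.randn(64, 10, requires_grad=True) for m in modalities}

# A stand-in final FC layer producing a scalar logit from pooled activations.
fc = torch.nn.Linear(64 * len(modalities), 1)
pooled = torch.cat([acts[m].mean(dim=1) for m in modalities])
logit = fc(pooled.unsqueeze(0)).squeeze()  # stands in for the max logit

# Grad-CAM-style scores: gradient of the logit w.r.t. each modality's
# activations, multiplied by the activations, summed over the channel
# dimension, then passed through a ReLU.
grads = torch.autograd.grad(logit, [acts[m] for m in modalities])
scores = {}
for m, g in zip(modalities, grads):
    cam = torch.relu((g * acts[m]).sum(dim=0))  # sum over channels
    scores[m] = cam.sum().item()                # aggregate over time

# Normalize across modalities so the scores sum to 1 (pie-chart weights).
total = sum(scores.values()) + 1e-12
weights = {m: s / total for m, s in scores.items()}
print(weights)
```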
2308.01396
Cross-phase modulation in the two dimensional spectroscopy
Developing from transient absorption (TA) spectroscopy, two dimensional (2D) spectroscopy with pump-probe geometry has emerged as a versatile approach that alleviates the difficulty of implementing 2D spectroscopy with other geometries. However, the presence of cross-phase modulation (XPM) in TA spectroscopy introduces significant spectral distortions, particularly when the pump and probe pulses overlap. We demonstrate that this phenomenon extends to 2D spectroscopy with pump-probe geometry and that the XPM is induced by the interference of the two pump pulses. We present the oscillatory behavior of XPM in the 2D spectrum and its displacement with respect to the waiting time delay through both experimental measurements and numerical simulations. Additionally, we explore the influence of probe pulse chirp on XPM and discover that by compressing the chirp, the impact of XPM on the desired signal can be reduced.
Mao-Rui Cai, Xue Zhang, Zi-Qian Cheng, Teng-Fei Yan, Hui Dong
2023-08-02T19:28:39Z
http://arxiv.org/abs/2308.01396v1
# Cross-phase modulation in the two dimensional spectroscopy

###### Abstract

Developing from transient absorption (TA) spectroscopy, two dimensional (2D) spectroscopy with pump-probe geometry has emerged as a versatile approach that alleviates the difficulty of implementing 2D spectroscopy with other geometries. However, the presence of cross-phase modulation (XPM) in TA spectroscopy introduces significant spectral distortions, particularly when the pump and probe pulses overlap. We demonstrate that this phenomenon extends to 2D spectroscopy with pump-probe geometry and that the XPM is induced by the interference of the two pump pulses. We present the oscillatory behavior of XPM in the 2D spectrum and its displacement with respect to the waiting time delay through both experimental measurements and numerical simulations. Additionally, we explore the influence of probe pulse chirp on XPM and discover that by compressing the chirp, the impact of XPM on the desired signal can be reduced.

## I Introduction

Two dimensional (2D) spectroscopy [1; 2; 3; 4] is a powerful tool with growing attention for studying the couplings and dynamics of systems in both condensed [5; 6; 7; 8] and gaseous phases [9; 10; 11; 12]. Among the various 2D spectroscopy beam geometries [7; 8; 13], the pump-probe geometry [13; 14; 15], developed from transient absorption (TA) spectroscopy [16; 17; 18; 19; 20], has emerged as a widely adopted approach due to its ease of implementation. In this technique, the pump beam is typically modulated by a pulse shaper [14; 15; 21], which is utilized to generate two identical pump pulses with interval \(\tau\) and to compensate for the chirp of the pump pulses [22]. As for the probe arm, the chirp of the probe pulse is often compressed with an extra compressor [23], or even left uncompressed, especially when a super-continuum is applied for the probe [24; 15]. Similar to TA spectroscopy, the chirp of the probe pulse introduces distortions [25; 26; 19] into the spectrum in the 2D case [5; 27; 28]. This occurs because the various frequency components of the probe pulse reach the investigated sample at different times [29]. Taking inspiration from the chirp correction scheme used in TA spectroscopy [25; 19; 26], it has been proposed that these distortions in 2D spectroscopy can be post-corrected [27; 28] by characterizing the chirp of the probe pulse.

Besides the distortion caused by the chirp of the probe pulse, there is another significant artifact induced by the intense pump pulse in TA spectroscopy, known as cross-phase modulation (XPM) [29; 30; 31; 32; 33]. XPM universally exists in TA spectroscopy when the pump and probe pulses overlap temporally at the sample, particularly in condensed-phase systems. Moreover, it gets stronger as the chirp of the probe pulse increases [29]. As an extension of TA spectroscopy, 2D spectroscopy with pump-probe geometry is also affected by XPM [34], especially when dynamics within several tens of femtoseconds are investigated. However, a comprehensive theoretical understanding of XPM in 2D spectroscopy with pump-probe geometry, and of its response to variations in the chirp of the probe pulse, remains elusive.

In this paper, we present a theoretical derivation of the XPM in 2D spectroscopy with pump-probe geometry. Our results demonstrate that XPM manifests itself at the central frequency of the pump pulse on the excitation axis and at the central frequency of the probe pulse on the detection axis.
Additionally, we observe oscillations and shifts of the XPM along the detection axis, which are supported by experimental data using a sapphire window as the sample and by our numerical simulations. Furthermore, we investigate the effect of the probe pulse chirp on the behavior of XPM through experiments and simulations. We demonstrate that by reducing the chirp of the probe pulse, the influence of XPM on the desired signal is mitigated.

## II Theory

### II.1 General two dimensional spectroscopy

In typical two dimensional spectroscopy experiments, the third-order optical response [1; 2] of a sample is represented by the induced third-order polarization

\[P^{\left(3\right)}\left(t\right)=\int_{0}^{\infty}dt_{3}\int_{0}^{\infty}dt_{2}\int_{0}^{\infty}dt_{1}R^{\left(3\right)}\left(t_{1},t_{2},t_{3}\right)E\left(t-t_{3}\right)E\left(t-t_{3}-t_{2}\right)E\left(t-t_{3}-t_{2}-t_{1}\right), \tag{1}\]

where \(R^{\left(3\right)}\left(t_{1},t_{2},t_{3}\right)\) is the third-order response function of the sample and \(E\left(t\right)\) is the external electric field. As Fig. 1 shows, the external field typically consists of three pulses arriving sequentially at times \(t=0\), \(t=\tau\), and \(t=\tau+T\) [5; 21; 35; 36] in the 2D setup. As a result, both the external electric field and the induced third-order polarization are functions of \(\tau\), \(T\), and \(t\). Specifically, we denote the induced third-order polarization as \(P^{\left(3\right)}\left(\tau,T,t\right)\), and the electric field as

\[E\left(\tau,T,t\right)=E_{1}\left(t\right)+E_{2}\left(\tau,t\right)+E_{3}\left(\tau,T,t\right)+\text{c.c.}, \tag{2}\]

\[E_{1}\left(t\right)=A_{1}\left(t\right)\exp\left\{i\left(\vec{k}_{1}\cdot\vec{r}-\Omega_{1}t+\phi_{1}\right)\right\}, \tag{3}\]

\[E_{2}\left(\tau,t\right)=A_{2}\left(t-\tau\right)\exp\left\{i\left[\vec{k}_{2}\cdot\vec{r}-\Omega_{2}\left(t-\tau\right)+\phi_{2}\right]\right\}, \tag{4}\]

\[E_{3}\left(\tau,T,t\right)=A_{3}\left(t-\tau-T\right)\exp\left\{i\left[\vec{k}_{3}\cdot\vec{r}-\Omega_{3}\left(t-\tau-T\right)+\phi_{3}\right]\right\}, \tag{5}\]

where \(A_{m}\), \(\vec{k}_{m}\), \(\Omega_{m}\), and \(\phi_{m}\) (\(m=1,2,3\)) are respectively the envelopes, wave vectors, central frequencies, and initial phases of the pulses, and \(\vec{r}\) is the spatial location of the sample molecule. The possible dispersion of the pulses is encoded into the envelopes. For a transparent sample of length \(L\), the induced polarization yields the signal field [1]

\[E_{s}\left(\tau,T,t\right)=\frac{2\pi i}{n}\frac{\Omega_{s}L}{c}P^{\left(3\right)}\left(\tau,T,t\right)=i\eta\Omega_{s}P^{\left(3\right)}\left(\tau,T,t\right) \tag{6}\]

along several phase matching directions \(\vec{k}_{s}=\pm\vec{k}_{\alpha}\pm\vec{k}_{\beta}\pm\vec{k}_{\gamma}\) (\(\alpha,\beta,\gamma=1,2,3\)), where \(\eta=2\pi L/cn\), \(n\) is the linear refractive index of the sample, \(c\) is the speed of light, and \(\Omega_{s}\) is the central frequency of the signal field. In general, the two dimensional (2D) spectrum \(E_{s}\left(\omega_{\tau},T,\omega_{t}\right)\) is obtained by Fourier transforming \(E_{s}\left(\tau,T,t\right)\) with respect to \(\tau\) and \(t\). In some cases, the time domain signal \(E_{s}\left(\tau,T,t\right)\) is collected by scanning both \(\tau\) and \(t\) [7; 8; 9; 10]. Alternatively, it is popular to heterodynely detect [1; 5; 35] the frequency domain signal \(E_{s}\left(\tau,T,\omega_{t}\right)\) via a spectrometer and with a local oscillator \(E_{\text{LO}}\left(t\right)\).
In this technique, the differential optical signal \(S\left(\tau,T,\omega_{t}\right)\) is obtained as

\[S\left(\tau,T,\omega_{t}\right)=\ln\frac{I_{s}\left(\tau,T,\omega_{t}\right)}{I_{\text{LO}}\left(\omega_{t}\right)}\simeq\frac{I_{s}\left(\tau,T,\omega_{t}\right)}{I_{\text{LO}}\left(\omega_{t}\right)}-1\simeq\frac{2\text{Re}\left[E_{s}\left(\tau,T,\omega_{t}\right)E_{\text{LO}}^{*}\left(\omega_{t}\right)\right]}{\left|E_{\text{LO}}\left(\omega_{t}\right)\right|^{2}}=2\text{Re}\left[\frac{E_{s}\left(\tau,T,\omega_{t}\right)}{E_{\text{LO}}\left(\omega_{t}\right)}\right], \tag{7}\]

where \(I_{\text{LO}}\left(\omega_{t}\right)=\left|E_{\text{LO}}\left(\omega_{t}\right)\right|^{2}\) and \(I_{s}\left(\tau,T,\omega_{t}\right)=\left|E_{s}\left(\tau,T,\omega_{t}\right)+E_{\text{LO}}\left(\omega_{t}\right)\right|^{2}\simeq 2\text{Re}\left[E_{s}\left(\tau,T,\omega_{t}\right)E_{\text{LO}}^{*}\left(\omega_{t}\right)\right]+\left|E_{\text{LO}}\left(\omega_{t}\right)\right|^{2}\), because the signal field is usually much weaker than the local oscillator, i.e., \(\left|E_{s}\left(\tau,T,\omega_{t}\right)\right|\ll\left|E_{\text{LO}}\left(\omega_{t}\right)\right|\). The 2D spectrum \(S\left(\omega_{\tau},T,\omega_{t}\right)\) is then obtained by scanning and Fourier transforming with respect to only \(\tau\),

\[S\left(\omega_{\tau},T,\omega_{t}\right)=\mathcal{F}\left[S\left(\tau,T,\omega_{t}\right)\right]. \tag{8}\]

### II.2 Two dimensional spectroscopy with pump-probe geometry

There are several experimental beam geometries in 2D spectroscopy based on the direction arrangement of the three pulses, such as the collinear geometry [7; 8; 10] and the non-collinear BOXCARS geometry [5; 35]. Besides, there is a partially collinear geometry known as the pump-probe geometry [13; 14; 15], where the first two pulses are collinear (\(\vec{k}_{1}=\vec{k}_{2}\)) and the third pulse serves also as the local oscillator. Such a beam geometry is identical to the TA (pump-probe) spectroscopy geometry [16; 17; 18; 19; 20]. The first two pulses are often denoted as the pump pulses and the third pulse as the probe pulse. The pump-probe geometry eases the difficulty of experimentally implementing 2D spectroscopy compared with the other geometries, because the heterodyne detection is naturally satisfied by measuring the spectrum of the probe pulse, i.e., \(\vec{k}_{s}=\vec{k}_{3}\). However, signal fields that are spatially separated in the BOXCARS geometry may get mixed [14; 23] in the pump-probe geometry. Specifically, the signal fields \(E_{\alpha^{\prime},-\alpha^{\prime},3}\left(\tau,T,\omega_{t}\right)\), \(E_{1,-2,3}\left(\tau,T,\omega_{t}\right)\), and \(E_{-1,2,3}\left(\tau,T,\omega_{t}\right)\) are all emitted along \(\vec{k}_{3}\) in the pump-probe geometry. Here, \(\alpha^{\prime}=\pm 1,\pm 2,\pm 3\), and the subscripts of the signal field \(E_{\alpha,\beta,\gamma}\) denote that the signal is induced by the incident fields \(E_{\alpha}\), \(E_{\beta}\), and \(E_{\gamma}\) (or their conjugates if the corresponding subscripts are negative). Among all these mixed signals, \(E_{-1,2,3}\left(\tau,T,\omega_{t}\right)\) and \(E_{1,-2,3}\left(\tau,T,\omega_{t}\right)\) are typically required for the study of single-quantum coherence in the excitation time \(\tau\). These two signals are often denoted as the rephasing and non-rephasing signals [1; 5], respectively.
To sort out only \(E_{-1,2,3}\left(\tau,T,\omega_{t}\right)\) and \(E_{1,-2,3}\left(\tau,T,\omega_{t}\right)\) from the total signal field, the technique of phase cycling [11; 14; 23] is applied. According to the discussion in Sec. II.1, the signal field \(E_{s}\left(\tau,T,\omega_{t}\right)\) from the sample depends on the initial phases \(\phi_{\alpha}\) of the incident fields. However, in the 2D spectroscopy with pump-probe geometry, the phase of the third pulse \(\phi_{3}\) is eliminated because the third pulse serves as the local oscillator in the heterodyne detection [14]. Thus, we denote the phase-dependent signal field as \(E_{s}\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)\) and its components as

\[E_{\alpha,\beta,3}\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)=E_{\alpha,\beta,3}\left(\tau,T,\omega_{t}\right)\exp\left\{i\left[\mathrm{sgn}\left(\alpha\right)\times\phi_{|\alpha|}+\mathrm{sgn}\left(\beta\right)\times\phi_{|\beta|}\right]\right\}, \tag{9}\]

where \(\alpha,\beta=\pm 1,\pm 2,\pm 3\), \(\mathrm{sgn}\left(x\right)=x/\left|x\right|\) is the sign function, and the phase-free term \(E_{\alpha,\beta,3}\left(\tau,T,\omega_{t}\right)\) denotes the signal field with the initial phases of the first two pulses set to \(\phi_{1},\phi_{2}=0\). By conducting two experiments with the phase arrangements a) \(\phi_{1}=\pi\) and \(\phi_{2}=0\) and b) \(\phi_{1}=\pi\) and \(\phi_{2}=\pi\), the phase cycled signal is given as

\[S_{\mathrm{cyc}}\left(\tau,T,\omega_{t}\right)=\ln\frac{I\left(\tau,T,\omega_{t};\pi,0\right)}{I\left(\tau,T,\omega_{t};\pi,\pi\right)}\simeq\frac{I\left(\tau,T,\omega_{t};\pi,0\right)}{I\left(\tau,T,\omega_{t};\pi,\pi\right)}-1\simeq\frac{I\left(\tau,T,\omega_{t};\pi,0\right)-I\left(\tau,T,\omega_{t};\pi,\pi\right)}{I\left(\tau,T,\omega_{t};\pi,\pi\right)}, \tag{10}\]

with

\[I\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)=\left|E_{s}\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)+E_{3}(\omega_{t})\right|^{2}\simeq\left|E_{3}(\omega_{t})\right|^{2}+2\mathrm{Re}\left[E_{s}\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)E_{3}^{*}(\omega_{t})\right], \tag{11}\]

\(I_{3}\left(\omega_{t}\right)=\left|E_{3}(\omega_{t})\right|^{2}\), and \(\left|E_{s}\left(\tau,T,\omega_{t};\phi_{1},\phi_{2}\right)\right|\ll\left|E_{3}(\omega_{t})\right|\). Then, the phase cycled signal is expressed as

\[S_{\mathrm{cyc}}\left(\tau,T,\omega_{t}\right)=2\mathrm{Re}\left[\frac{E_{s}\left(\tau,T,\omega_{t};\pi,0\right)-E_{s}\left(\tau,T,\omega_{t};\pi,\pi\right)}{E_{3}\left(\omega_{t}\right)}\right], \tag{12}\]

which includes only the signals \(E_{-1,2,3}\) and \(E_{1,-2,3}\), while the parts \(E_{\alpha^{\prime},-\alpha^{\prime},3}\) are eliminated because they are not affected by the initial phases of the pump pulses. The 2D spectrum \(S_{\mathrm{cyc}}\left(\omega_{\tau},T,\omega_{t}\right)\) is obtained by first scanning \(\tau\) to collect a series of \(S_{\mathrm{cyc}}\left(\tau,T,\omega_{t}\right)\) and then Fourier transforming \(S_{\mathrm{cyc}}\left(\tau,T,\omega_{t}\right)\) with respect to \(\tau\). The rephasing and non-rephasing parts can further be separated with additional phase cycling [23].
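The arithmetic behind Eqs. (9)-(12) can be checked numerically. Below is a minimal sketch (not the experimental code) that builds synthetic signal components obeying the phase dependence of Eq. (9) and verifies that the two-frame cycle cancels the phase-independent terms \(E_{\alpha^{\prime},-\alpha^{\prime},3}\) while retaining \(E_{-1,2,3}+E_{1,-2,3}\); the component amplitudes are arbitrary complex stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary complex stand-ins for the phase-free components
# E_{alpha,beta,3}(tau, T, omega_t) at one (tau, T, omega_t) point.
E_r = rng.standard_normal() + 1j * rng.standard_normal()   # E_{-1,2,3}, rephasing
E_nr = rng.standard_normal() + 1j * rng.standard_normal()  # E_{1,-2,3}, non-rephasing
E_pm = rng.standard_normal() + 1j * rng.standard_normal()  # sum of E_{a',-a',3} terms

def total_signal(phi1, phi2):
    """Phase-dependent field along k_3: per Eq. (9), each component picks up
    exp(i [sgn(alpha) phi_|alpha| + sgn(beta) phi_|beta|])."""
    return (E_r * np.exp(1j * (-phi1 + phi2))    # subscripts (-1, 2, 3)
            + E_nr * np.exp(1j * (phi1 - phi2))  # subscripts (1, -2, 3)
            + E_pm)                              # phase independent

# Two-frame phase cycle: (phi1, phi2) = (pi, 0) and (pi, pi), as in Eq. (12).
diff = total_signal(np.pi, 0.0) - total_signal(np.pi, np.pi)

# Each retained term contributes exp(-i pi) - exp(0) = -2, so the difference
# equals -2 (E_r + E_nr), and the phase-independent part E_pm has cancelled.
assert np.isclose(diff, -2.0 * (E_r + E_nr))
print("phase cycling isolates E_r + E_nr:", diff / -2.0)
```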
### II.3 Cross-phase modulation in the two dimensional spectroscopy with pump-probe geometry

In most spectroscopy experiments, the frequencies of the incident electric fields are typically chosen to resonate with (or nearly resonate with) one of the transition frequencies of the investigated sample, thereby changing the absorption rate [1; 2] (the imaginary part of the electric susceptibility) of the sample accordingly. However, in liquid samples, where the solute (i.e., the investigated sample) is dissolved in a specific solvent, the electric fields also induce the non-resonant electronic response (ER) of the solvent, altering the refractive index [34; 37] (the real part of the electric susceptibility) of the solvent through the Kerr effect. The ER is approximately instantaneous [37; 38] (with a time scale of \(0.1\sim 1\,\mathrm{fs}\)) compared with the resonant response of the solute and the duration of the incident pulses (with a time scale of \(10\sim 100\,\mathrm{fs}\)). Therefore, the response function [29; 34; 37] is approximately written as

\[R_{\mathrm{ER}}^{\left(3\right)}\left(t_{1},t_{2},t_{3}\right)=\sigma_{e}\delta\left(t_{1}\right)\delta\left(t_{2}\right)\delta\left(t_{3}\right). \tag{13}\]

Substituting Eq. (13) into Eq. (1), the induced polarization of the electronic response is given as \(P^{\mathrm{ER}}\left(\tau,T,t\right)=\sigma_{e}\left[E\left(\tau,T,t\right)\right]^{3}\), and the corresponding signal field is

\[E_{s}^{\mathrm{ER}}\left(\tau,T,t\right)=i\sigma_{e}\eta\Omega_{s}\left[E\left(\tau,T,t\right)\right]^{3}. \tag{14}\]

Among all the heterodynely detected signal components in the 2D spectroscopy with pump-probe geometry, \(E_{3,-3,3}^{\mathrm{ER}}\left(\tau,T,t\right)\) and \(E_{-3,3,3}^{\mathrm{ER}}\left(\tau,T,t\right)\) are treated as the self-phase modulation (SPM) [39], which describes the spectral and temporal modulation of the probe pulse induced by itself. The other components are cross-phase modulation (XPM) [29; 30; 31; 32; 33], describing the spectral and temporal modulation of the probe pulse induced by the first pump pulse only [\(E_{1,-1,3}^{\mathrm{ER}}\left(\tau,T,t\right)\) and \(E_{-1,1,3}^{\mathrm{ER}}\left(\tau,T,t\right)\)], the second pump pulse only [\(E_{2,-2,3}^{\mathrm{ER}}\left(\tau,T,t\right)\) and \(E_{-2,2,3}^{\mathrm{ER}}\left(\tau,T,t\right)\)], or the interference of the two pump pulses [\(E_{-1,2,3}^{\mathrm{ER}}\left(\tau,T,t\right)\) and \(E_{1,-2,3}^{\mathrm{ER}}\left(\tau,T,t\right)\)], where

\[E_{-1,2,3}^{\mathrm{ER}}\left(\tau,T,t\right)=i\sigma_{e}\eta\Omega_{s}E_{1}^{*}\left(t\right)E_{2}\left(t-\tau\right)E_{3}\left(t-\tau-T\right), \tag{15}\]

\[E_{1,-2,3}^{\mathrm{ER}}\left(\tau,T,t\right)=i\sigma_{e}\eta\Omega_{s}E_{1}\left(t\right)E_{2}^{*}\left(t-\tau\right)E_{3}\left(t-\tau-T\right). \tag{16}\]

Via the phase cycling technique [11; 14; 23] described in Sec. II.2, all the modulations of the probe pulse induced by a single pulse, \(E_{\alpha^{\prime},-\alpha^{\prime},3}^{\mathrm{ER}}\), are eliminated.

Figure 1: Typical pulse sequence of the two dimensional spectroscopy, where three pulses arrive at the sample at times \(t=0\), \(t=\tau\), and \(t=\tau+T\) sequentially.
The modulations from the interference of the two pump pulses, however, remain:

\[S_{\mathrm{XPM}}\left(\tau,T,\omega_{t}\right)=-4\mathrm{Re}\left[\frac{E_{-1,2,3}^{\mathrm{ER}}\left(\tau,T,\omega_{t}\right)+E_{1,-2,3}^{\mathrm{ER}}\left(\tau,T,\omega_{t}\right)}{E_{3}\left(\omega_{t}\right)}\right], \tag{17}\]

where \(E_{-1,2,3}^{\mathrm{ER}}\left(\tau,T,\omega_{t}\right)\) and \(E_{1,-2,3}^{\mathrm{ER}}\left(\tau,T,\omega_{t}\right)\) are the XPM signal fields when the initial phases of the two pump pulses are \(\phi_{1},\phi_{2}=0\). To give more explicit expressions for the XPM signal \(S_{\mathrm{XPM}}\left(\tau,T,\omega_{t}\right)\), we assume that the two pump pulses are Fourier-transform-limited Gaussian pulses without chirp, i.e.,

\[E_{1}\left(t\right)=a_{1}\exp\left\{-\frac{t^{2}}{2\tau_{1}^{2}}\right\}\exp\left\{-i\Omega_{1}t\right\}, \tag{18}\]

\[E_{2}\left(\tau,t\right)=a_{2}\exp\left\{-\frac{\left(t-\tau\right)^{2}}{2\tau_{2}^{2}}\right\}\exp\left\{-i\Omega_{2}\left(t-\tau\right)\right\}, \tag{19}\]

and the probe pulse is a Gaussian pulse with a linear chirp [28; 29],

\[E_{3}\left(\tau,T,t\right)=a_{3}\exp\left\{-\frac{\left(t-\tau-T\right)^{2}}{2\tau_{3}^{2}}\right\}\exp\left\{-i\left[\Omega_{3}\left(t-\tau-T\right)+\beta_{3}\left(t-\tau-T\right)^{2}\right]\right\}, \tag{20}\]

where \(a_{\alpha}\) and \(\tau_{\alpha}\) are the amplitude and duration of pulse \(E_{\alpha}\), and \(\beta_{3}\) is the chirp rate of the probe pulse. The spatial phase factors and initial phases are neglected here. We further simplify our model by assuming \(a_{1}=a_{2}\), \(\tau_{1}=\tau_{2}\), and \(\Omega_{1}=\Omega_{2}\), which is accessible with the pulse shaper [14; 15; 21] on the pump arm. With the incident pulses in Eqs. (18)-(20), we obtain the phase cycled XPM signal as

\[S_{\mathrm{XPM}}\left(\tau,T,\omega_{t}\right)=\frac{8}{A^{1/4}}\sigma_{e}\eta\Omega_{3}\tau_{1}\left|a_{1}\right|^{2}\cos\left(\Omega_{1}\tau\right)\exp\left\{-\frac{\tau^{2}}{4\tau_{1}^{2}}\right\} \tag{21}\]
\[\times\exp\left\{-\frac{\left(\tau_{1}^{2}+2\tau_{3,0}^{2}\right)B-4\beta_{gdd}\tau_{3,0}^{2}\left(\Omega_{3}-\omega_{t}\right)t_{d}\left(\omega_{t}\right)}{A}\right\}\]
\[\times\sin\left\{\frac{2\beta_{gdd}B+2\tau_{3,0}^{2}\left(\Omega_{3}-\omega_{t}\right)\left(\tau_{1}^{2}+2\tau_{3,0}^{2}\right)t_{d}\left(\omega_{t}\right)}{A}+\phi\right\},\]

where \(A=\left(\tau_{1}^{2}+2\tau_{3,0}^{2}\right)^{2}+4\beta_{gdd}^{2}\), \(B=\left[t_{d}\left(\omega_{t}\right)\right]^{2}+\tau_{3,0}^{4}\left(\Omega_{3}-\omega_{t}\right)^{2}\), \(t_{d}\left(\omega_{t}\right)=\tau/2+T-\beta_{gdd}\left(\Omega_{3}-\omega_{t}\right)\), and \(\phi=\arctan\left[2\beta_{gdd}/\left(\tau_{1}^{2}+2\tau_{3,0}^{2}\right)\right]\). The parameters \(\beta_{gdd}\) and \(\tau_{3,0}\) are the group delay dispersion (GDD) and the Fourier-limited duration of the probe pulse, respectively. These two parameters are connected to \(\tau_{3}\) and \(\beta_{3}\) through the relations \(\tau_{3}^{2}=\left(\tau_{3,0}^{4}+\beta_{gdd}^{2}\right)/\tau_{3,0}^{2}\) and \(\beta_{3}=0.5\beta_{gdd}/\left(\tau_{3,0}^{4}+\beta_{gdd}^{2}\right)\) [28]. The cosine term \(\cos\left(\Omega_{1}\tau\right)\) and the Gaussian term in the variable \(\left(\Omega_{3}-\omega_{t}\right)\) in Eq. (21) indicate that the XPM signal on the 2D spectrum is centered around \(\left(\omega_{\tau}=\Omega_{1},\omega_{t}=\Omega_{3}\right)\), i.e., typically the resonant frequencies of the sample solution. We remark that the XPM signal oscillates along \(\omega_{t}\), governed by the sine term in Eq. (21), and that the oscillation moves along \(\omega_{t}\) as \(T\) changes. We will show such features with experiments and simulations in the following sections.
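The structure of Eq. (21), namely a cosine in \(\tau\) centered at \(\Omega_{1}\), a Gaussian envelope in \(\omega_{t}\) around \(\Omega_{3}\), and a sine term whose phase shifts with \(T\), can be explored with a short numerical sketch. The following is a minimal illustration, not the simulation code used in Sec. III.4; the pulse parameters are placeholders of the same order as those reported in Sec. III.2.

```python
import numpy as np

# Placeholder pulse parameters, of the same order as those in Sec. III.2.
Omega1 = 2 * np.pi * 0.4496   # pump central frequency, rad/fs (449.6 THz)
Omega3 = 2 * np.pi * 0.4381   # probe central frequency, rad/fs (438.1 THz)
tau1, tau30, bgdd = 48.9, 13.6, 613.8  # fs, fs, fs^2

def s_xpm(tau, T, wt):
    """Phase-cycled XPM signal of Eq. (21), up to the constant prefactor."""
    a = tau1**2 + 2 * tau30**2
    A = a**2 + 4 * bgdd**2
    td = tau / 2 + T - bgdd * (Omega3 - wt)
    B = td**2 + tau30**4 * (Omega3 - wt)**2
    phi = np.arctan(2 * bgdd / a)
    env = np.exp(-(a * B - 4 * bgdd * tau30**2 * (Omega3 - wt) * td) / A)
    osc = np.sin((2 * bgdd * B + 2 * tau30**2 * (Omega3 - wt) * a * td) / A + phi)
    return A**-0.25 * np.cos(Omega1 * tau) * np.exp(-(tau**2) / (4 * tau1**2)) * env * osc

# Build S(tau, T, omega_t) on a grid and Fourier transform over tau to get
# the 2D spectrum S(omega_tau, T, omega_t), following Eq. (8).
tau = np.arange(0.0, 120.0, 0.5)                 # fs
wt = 2 * np.pi * np.linspace(0.40, 0.48, 300)    # rad/fs
for T in (-20.0, 0.0, 20.0):                     # fs
    S = s_xpm(tau[:, None], T, wt[None, :])
    spec = np.abs(np.fft.rfft(S, axis=0))        # magnitude along omega_tau
    w_tau = 2 * np.pi * np.fft.rfftfreq(tau.size, d=0.5)
    i, j = np.unravel_index(spec[1:].argmax(), spec[1:].shape)  # skip DC bin
    print(f"T={T:+.0f} fs: |S| peaks near omega_tau/2pi = "
          f"{w_tau[i + 1] / (2 * np.pi) * 1e3:.0f} THz, "
          f"omega_t/2pi = {wt[j] / (2 * np.pi) * 1e3:.0f} THz")
```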
## III Experiment and Simulation

### III.1 Experimental setup

Our experimental setup is presented in Fig. 2. Laser pulses (800 nm, 7.15 W, 30.8 fs, at a 1 kHz repetition rate, and horizontally polarized with respect to the table surface) delivered by the Ti:Sapphire amplifier laser system are led into an optical parametric amplifier (OPA) to produce pulses with the desired central wavelength of 680 nm. The laser beam is then led into a commercial integrated 2D spectroscopy system. Specifically, the laser beam after the OPA is divided by a wedged window into two beams, i.e., the pump and probe beams, and the power of the probe beam is controlled by a half-wave plate before the wedged window. To introduce time delays between the pump and probe pulses, the pump beam is directed into a pulse shaper and a delay stage before hitting the sample. The pulse shaper [15; 21; 14] consists of an acousto-optic modulator (AOM), two parabolic mirrors, and two gratings in a 4-f geometry. The AOM, synchronized with our laser system, modulates the first-order diffraction of the pump pulses with a customized mask through the photoelastic effect. Typically, the mask is designed such that one pump pulse is divided into two identical transform-limited pump pulses with interval \(\tau\). The time delay \(T\) between the probe and the second pump pulse is controlled by the delay stage. The pump and probe beams are aligned in parallel and focused onto the sample using a parabolic mirror. After the sample, the pump beam is blocked by a beam trap, and the probe beam, passing through a Glan-Taylor prism and a lens (\(f=300\:\mathrm{mm}\)), is led into a spectrometer with the spectrum detected by a charge coupled device (CCD) camera. The Glan-Taylor prism is used to guarantee that only the horizontally polarized component of the probe pulse is measured. Furthermore, a half-wave plate and a polarizer in the pump arm before the sample are utilized to ensure that only the horizontally polarized component of the pump pulses interacts with the sample.

In our 2D spectroscopy experiment for detecting the XPM, a sapphire window (3 mm) serves as the sample. Additionally, to study the influence of the GDD of the probe pulse on the XPM in the 2D spectrum, two experiments are conducted. In the first experiment, the probe beam passes freely before the sample, while in the second experiment, an additional sapphire window (SW, 3 mm thick) is placed in the probe arm to alter the \(\beta_{gdd}\) of the probe pulse. We remark that the phases of the pump pulses are also controlled by the AOM, making phase cycling available in this system. Moreover, instead of the two-frame cycling (\(\phi_{1}=\pi,\phi_{2}=0\)) and (\(\phi_{1}=\pi,\phi_{2}=\pi\)) introduced in Sec. II.2, we actually utilize a four-frame cycling [14], (\(\phi_{1}=\pi,\phi_{2}=0\)), (\(\phi_{1}=\pi,\phi_{2}=\pi\)), (\(\phi_{1}=0,\phi_{2}=\pi\)), and (\(\phi_{1}=0,\phi_{2}=0\)), which further eliminates the scatter from the pump pulses.

### III.2 Pulse parameters

We present in this section the parameters of the pulses applied in our experiments. The spectrum of the probe pulse is shown in Fig. 3(a), with the central frequency \(\Omega_{3}/2\pi=438.09\:\mathrm{THz}\) and the bandwidth \(\sigma_{3,0}/2\pi=11.74\:\mathrm{THz}\) evaluated by the Gaussian fitting of the spectrum.
The bandwidth \(\sigma_{3,0}\) corresponds to a Fourier-limited duration of the probe pulse, \(\tau_{3,0}=1/\sigma_{3,0}=13.56\:\mathrm{fs}\). The auto-correlation functions (ACFs) of the probe pulses \(E_{3}^{e_{1}}\) and \(E_{3}^{e_{2}}\) are given in Fig. 3(b) and (c). Here, the superscripts \(e_{1}\) and \(e_{2}\) denote the experiments without and with the additional SW on the probe arm, respectively. The durations of the ACFs are \(\tau_{\mathrm{ACF}}^{e_{1}}=47.25\:\mathrm{fs}\) and \(\tau_{\mathrm{ACF}}^{e_{2}}=66.50\:\mathrm{fs}\), and the durations of the electric fields are directly given by the ACFs as \(\tau_{3}^{e_{1}}=\tau_{\mathrm{ACF}}^{e_{1}}=47.25\:\mathrm{fs}\) and \(\tau_{3}^{e_{2}}=\tau_{\mathrm{ACF}}^{e_{2}}=66.50\:\mathrm{fs}\). Such durations of the electric fields correspond to the typical positive GDDs \(\beta_{gdd}^{e_{1}}=613.76\:\mathrm{fs}^{2}\) and \(\beta_{gdd}^{e_{2}}=882.79\:\mathrm{fs}^{2}\) according to the relation \(\beta_{gdd}^{2}=\tau_{3,0}^{2}\left(\tau_{3}^{2}-\tau_{3,0}^{2}\right)\). The spectrum is collected by the spectrometer and CCD camera, and the ACFs are measured by an auto-correlator.

The parameters of the pump pulses are not measured with the spectrometer; instead, we evaluate them by simulating the 2D correlation spectra of the XPM (referred to as 2DCS-XPM) and Gaussian fitting their projection traces on the axis of \(\omega_{\tau}\). Firstly, the central frequencies of the pump pulses are determined by projecting the experimental results of the 2DCS-XPM onto the axis of \(\omega_{\tau}\). Then the simulations are conducted by scanning the duration of the pump pulses to obtain the best duration, with which the simulated projection trace fits best with the experiment (see Supplementary for details). The central frequencies and best durations of the pump pulses are \(\Omega_{1}^{e_{1}}/2\pi=449.59\,\mathrm{THz}\) and \(\tau_{1}^{e_{1}}=48.85\,\mathrm{fs}\) for the experiment without the SW, and \(\Omega_{1}^{e_{2}}/2\pi=449.19\,\mathrm{THz}\) and \(\tau_{1}^{e_{2}}=50.48\,\mathrm{fs}\) for the experiment with the SW. These parameters are utilized in Sec. III.4 for numerical simulation.

### III.3 XPM in 2D spectroscopy experiments

We have measured the 2DCS-XPM using a sapphire window as the sample. The time delay \(T\) is scanned from -100 fs to 30 fs with a step size of 10 fs. The results from \(T=-20\) fs to \(T=20\) fs with 65% contours are shown in Fig. 4, where the spectra in the first row are measured without the SW and those in the second row are measured with the SW on the probe arm.

Figure 3: (a) The spectrum and (b, c) the auto-correlation functions (ACFs) of the probe pulses. The black solid lines are experimental results and the red dashed lines are Gaussian fittings. The Gaussian fitting in (a) suggests that the probe spectrum is centered at \(\Omega_{3}/2\pi=438.09\,\mathrm{THz}\) with the bandwidth \(\sigma_{3,0}/2\pi=11.74\,\mathrm{THz}\). Such a bandwidth corresponds to a Fourier-limited duration \(\tau_{3,0}=1/\sigma_{3,0}=13.56\,\mathrm{fs}\) of the probe pulse. The ACFs in (b) and (c) relate to the probe pulses without and with the additional SW, respectively. Via the Gaussian fitting, the durations of the ACFs are \(\tau_{\mathrm{ACF}}^{e_{1}}=47.25\,\mathrm{fs}\) for (b) and \(\tau_{\mathrm{ACF}}^{e_{2}}=66.50\,\mathrm{fs}\) for (c), corresponding to the durations \(\tau_{3}^{e_{1}}=\tau_{\mathrm{ACF}}^{e_{1}}=47.25\,\mathrm{fs}\) and \(\tau_{3}^{e_{2}}=\tau_{\mathrm{ACF}}^{e_{2}}=66.50\,\mathrm{fs}\) for the probe pulses.
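As a quick consistency check of these probe-pulse numbers, the Gaussian-pulse relation \(\beta_{gdd}^{2}=\tau_{3,0}^{2}\left(\tau_{3}^{2}-\tau_{3,0}^{2}\right)\) stated above can be evaluated directly. The short sketch below is an illustration, not part of the analysis pipeline; it reproduces the reported GDD values from the measured durations.

```python
import numpy as np

tau_30 = 13.56                       # Fourier-limited probe duration, fs
tau_3 = {"e1": 47.25, "e2": 66.50}   # chirped durations from the ACFs, fs

for exp, t3 in tau_3.items():
    # beta_gdd^2 = tau_30^2 * (tau_3^2 - tau_30^2)
    beta_gdd = tau_30 * np.sqrt(t3**2 - tau_30**2)
    print(f"{exp}: beta_gdd = {beta_gdd:.2f} fs^2")
# Prints approximately 613.76 fs^2 (e1) and 882.79 fs^2 (e2),
# matching the values reported above.
```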
Figure 2: Schematic experimental setup of the 2D spectroscopy with pump-probe geometry in our experiment. Amplifier: Ti:Sapphire amplifier laser system; OPA: optical parametric amplifier; HWP: half-wave plate; WW: wedged window; PM: parabolic mirror; GR: optical grating; AOM: acousto-optic modulator; DS: delay stage; PL: polarizer; SW: sapphire window (3 mm) used to alter the GDD of the probe pulse; BT: beam trap; GTP: Glan-Taylor prism; Spectrometer: spectrometer; CCD: charge coupled device camera. The system enclosed by the dashed box is an integrated 2D spectroscopy system. In our 2D spectroscopy experiment for detecting the XPM, a sapphire window (3 mm) serves as the sample.

In Fig. 4, up to three peaks along \(\omega_{t}\) are observable. Those peaks correspond to the oscillation of \(S_{\mathrm{XPM}}\left(\tau,T,\omega_{t}\right)\) introduced by the sine term in Eq. (21). The additional peaks, which are also predicted by the sine term, are not visible here owing to the limited signal-to-noise ratio. We remark that the intensities of the 2DCS-XPM presented in Fig. 4 are stronger when \(T\) is negative than when it is positive. This discrepancy arises due to the greater likelihood of the probe pulse overlapping with the interference of the two pump pulses under negative \(T\) conditions.

Another feature of the 2DCS-XPM is its dynamic shift along \(\omega_{t}\) from the higher frequency (shorter wavelength) to the smaller frequency (longer wavelength) region as the delay time \(T\) increases. We emphasize such a tendency by projecting the 2DCS-XPM onto the axis of \(\omega_{t}\); the instances from \(T=-20\) fs to \(T=20\) fs are presented in Fig. 5(a) for the experiment without the SW and in Fig. 5(b) for the experiment with the SW. After the projection, one particular peak near \(\Omega_{3}\) of the projection traces, e.g., the one enclosed by the red square in Fig. 5(a) or the blue square in Fig. 5(b), is fitted with a quadratic polynomial to obtain the central frequency of the peak. Fig. 5(c) and (d) illustrate the shifts of the central frequencies as \(T\) changes from -100 fs to 30 fs with the red dots (for the experiment without the SW) and the blue triangles (for the experiment with the SW), respectively. The shifts are nearly linear in \(T\) in both experiments, and the slopes of the linear fitting lines are \(-0.205\,\mathrm{THz/fs}\) for the experiment without the SW and \(-0.153\,\mathrm{THz/fs}\) for the experiment with the SW. The slope characterizes the speed of the peak displacement, and thus the result shows that the XPM moves faster when the probe pulse is less chirped.

### III.4 Simulation

To verify the mechanism of the XPM in the 2D spectroscopy, we simulate the 2DCS-XPM using Eq. (21) and the pulse parameters in Sec. III.2. The simulation results from \(T=-20\) fs to \(T=20\) fs with \(65\%\) contours are taken as the instances in Fig. 6, where three distinct peaks are observed as in the experimental results in Fig. 4. Also, as in Fig. 4, the simulations in the first row of Fig. 6 correspond to the experiment without the SW, and those in the second row correspond to the experiment with the SW. We also illustrate the dynamic shifts of the XPM on the simulated 2D spectra by projecting them onto the axis of \(\omega_{t}\) and then fitting the peak near \(\Omega_{3}\) to obtain a central frequency. The shifts of the central frequencies in our simulations are depicted in Fig. 5(c) and (d), alongside the corresponding experimental results.
In Fig. 5(c) and (d), the displacements from the simulations (represented by the hollow marks) agree well with the experimental results (represented by the solid marks) for both experiments, without and with the SW. The slopes of the linear fittings of the simulations are \(-0.231\,\mathrm{THz/fs}\) and \(-0.165\,\mathrm{THz/fs}\), close to the experimental results of \(-0.205\,\mathrm{THz/fs}\) and \(-0.153\,\mathrm{THz/fs}\). Both experiments and simulations show that as the chirp of the probe pulse gets larger, the XPM moves slower with respect to the delay time \(T\) and covers the desired signal over a wider range of \(T\). The speed of the XPM displacement as a function of the GDD of the probe pulse \(\beta_{gdd}\) is presented in Fig. 7, where the slopes for both positive and negative \(\beta_{gdd}\) are simulated (red solid marks) and then fitted (red dashed lines) with a reciprocal function. Unlike in the positive \(\beta_{gdd}\) region, where the slope is negative and the XPM moves from the higher frequency (shorter wavelength) to the smaller frequency (longer wavelength) region, in the negative \(\beta_{gdd}\) region the slope is positive and the XPM moves from the smaller frequency (longer wavelength) to the larger frequency (shorter wavelength) region. Nevertheless, the reciprocal relation between the slope and \(\beta_{gdd}\) is retained. Such a reciprocal relation moves the XPM away from the frequency region of interest much faster with respect to \(T\) as \(\beta_{gdd}\) decreases.

Figure 4: 2D correlation spectra of the XPM obtained using a sapphire window as the sample, with the corresponding \(T\) denoted on each panel. The spectra in the first row are measured when the SW is absent from the probe arm. The spectra in the second row are measured with the SW placed. These 2D spectra are obtained by Fourier transforming the time domain signal \(S_{\mathrm{XPM}}\left(\tau,T,\omega_{t}\right)\) with respect to \(\tau\) and then taking the absolute value of the transform results. To highlight the signal against the background and the noise, \(65\%\) contours are shown here.

Figure 5: (a, b) Projection traces of the 2DCS-XPM onto the axis of \(\omega_{t}\) and (c, d) the dynamic shifts of the XPM along the axis of \(\omega_{t}\) as \(T\) changes. The traces in (a) and the shifts in (c) correspond to the 2DCS-XPM without the SW, and those in (b) and (d) correspond to the 2DCS-XPM with the SW on the probe arm. The displacement of the traces in (a) and (b) is indicated by the gray dashed lines, representing the central frequencies of one specific peak, e.g., the one enclosed by the red square in (a) or the blue square in (b) when \(T=-20\) fs. The central frequencies are obtained by fitting the peaks with a quadratic polynomial (green dashed lines). The shifts of the central frequencies as functions of \(T\) are quantified in (c) by the red dots and in (d) by the blue solid triangles. Their corresponding linear fitting lines have slopes of \(-0.205\,\mathrm{THz/fs}\) and \(-0.153\,\mathrm{THz/fs}\), respectively. Additionally, the simulation results are depicted in (c) by the pink circles and in (d) by the light-blue hollow triangles, with the slopes of their linear fitting lines being \(-0.231\,\mathrm{THz/fs}\) and \(-0.165\,\mathrm{THz/fs}\), respectively.
Figure 6: Simulated 2DCS-XPM with the corresponding \(T\) denoted on each panel. The spectra in the first row are simulated for the experiment without the SW, with the parameters \(\Omega_{3}/2\pi=438.09\,\mathrm{THz}\) and \(\beta_{gdd}^{e_{1}}=613.76\,\mathrm{fs}^{2}\) for the probe pulse, and \(\Omega_{1}^{e_{1}}/2\pi=449.59\,\mathrm{THz}\) and \(\tau_{1}^{e_{1}}=48.85\,\mathrm{fs}\) for the pump pulses. The spectra in the second row are simulated for the experiment with the SW, with the parameters \(\Omega_{3}/2\pi=438.09\,\mathrm{THz}\) and \(\beta_{gdd}^{e_{2}}=882.79\,\mathrm{fs}^{2}\) for the probe pulse, and \(\Omega_{1}^{e_{2}}/2\pi=449.19\,\mathrm{THz}\) and \(\tau_{1}^{e_{2}}=50.48\,\mathrm{fs}\) for the pump pulses. \(65\%\) contours are shown here.

Hence, suppressing the chirp of the probe pulse also suppresses the range of \(T\) within which the desired signal is affected by the XPM. Moreover, the intensity of the XPM at \(T=0\) fs is significantly reduced by suppressing \(\beta_{gdd}\), as depicted by the black solid line in Fig. 7. With these two features, one is able to minimize the influence of the XPM on the 2D spectroscopy by suppressing the chirp of the probe pulse.

## IV Conclusion

We have demonstrated the presence of XPM induced by the interference of the two pump pulses in 2D spectroscopy with pump-probe geometry. This XPM phenomenon predominantly occurs at \((\omega_{\tau}=\Omega_{1},\omega_{t}=\Omega_{3})\), which typically corresponds to the resonant frequencies of the target sample. Consequently, the XPM effect can significantly overlap with the desired signal on the 2D spectra, especially when studying liquid samples, due to the substantial XPM introduced by the solvent. We have presented theoretically and experimentally the pattern and the displacements of the XPM on the 2D spectrum. We observe that the XPM oscillates along \(\omega_{t}\), while the number of observable oscillations (or peaks) is limited by the signal-to-noise ratio; in our experiments, three peaks are observed. As for the displacements of the XPM, we show that for a positively chirped probe pulse, the XPM on the 2D spectra moves from the higher frequency (shorter wavelength) to the smaller frequency (longer wavelength) region as \(T\) increases. The direction of this displacement is reversed for a negatively chirped probe pulse. Additionally, we determine that the speed of the displacement with respect to \(T\) is inversely related to the GDD of the probe pulse, \(\beta_{gdd}\). When \(\beta_{gdd}\) is suppressed, the XPM moves faster and thus disturbs the measurement of the desired signal only within a narrower range of \(T\). Moreover, suppressing the \(\beta_{gdd}\) of the probe pulse reduces the intensity of the XPM.
2304.08701
Bayesian D-Optimal Design of Experiments with Quantitative and Qualitative Responses
Systems with both quantitative and qualitative responses are widely encountered in many applications. Design of experiment methods are needed when experiments are conducted to study such systems. Classic experimental design methods are unsuitable here because they often focus on one type of response. In this paper, we develop a Bayesian D-optimal design method for experiments with one continuous and one binary response. Both noninformative and conjugate informative prior distributions on the unknown parameters are considered. The proposed design criterion has meaningful interpretations regarding the D-optimality for the models for both types of responses. An efficient point-exchange search algorithm is developed to construct the local D-optimal designs for given parameter values. Global D-optimal designs are obtained by accumulating the frequencies of the design points in local D-optimal designs, where the parameters are sampled from the prior distributions. The performances of the proposed methods are evaluated through two examples.
Lulu Kang, Xinwei Deng, Ran Jin
2023-04-18T02:23:04Z
http://arxiv.org/abs/2304.08701v2
# Bayesian \(D\)-Optimal Design of Experiments with Quantitative and Qualitative Responses

###### Abstract

Systems with both quantitative and qualitative responses are widely encountered in many applications. Design of experiment methods are needed when experiments are conducted to study such systems. Classic experimental design methods are unsuitable here because they often focus on one type of response. In this paper, we develop a Bayesian \(D\)-optimal design method for experiments with one continuous and one binary response. Both noninformative and conjugate informative prior distributions on the unknown parameters are considered. The proposed design criterion has meaningful interpretations regarding the \(D\)-optimality for the models for both types of responses. An efficient point-exchange search algorithm is developed to construct the local \(D\)-optimal designs for given parameter values. Global \(D\)-optimal designs are obtained by accumulating the frequencies of the design points in local \(D\)-optimal designs, where the parameters are sampled from the prior distributions. The performances of the proposed methods are evaluated through two examples.

KEYWORDS: Bayesian \(D\)-optimal design, Conjugate prior, Generalized linear model, Multivariate responses, Noninformative prior, Point-exchange.

## 1 Introduction

In many applications, both quantitative and qualitative responses are often collected for evaluating the quality of a system, and the two types of responses are often mutually dependent. We call such a system with both types of quality responses a quantitative-qualitative system. Such systems are widely encountered in practice (Kang et al., 2022, 2021, 2018). In Kang et al. (2018), the authors studied an experiment on the lapping stage of the wafer manufacturing process. The qualitative response is the conformity of the site total indicator reading (STIR) of the wafer, which has two possible outcomes: whether or not the STIR of a wafer is within the tolerance. The quantitative response is the total thickness variation (TTV) of the wafer. Kang et al. (2021) focused on birth records and examined the mutual dependency of birth weight and preterm birth. The birth weight of an infant is a quantitative outcome, and preterm birth is a binary indicator of whether an infant is born before 36 gestational weeks. The two types of outcomes are correlated, as an infant is usually underweight if born preterm. In Kang et al. (2022), two case studies of quantitative-qualitative systems from material science and gene expression are illustrated. In the gene expression study, the qualitative response has three possible outcomes: healthy individuals, patients with Crohn's disease, and patients with ulcerative colitis.

This work is motivated by a study of an etching process in wafer manufacturing. In the production of silicon wafers, the silicon ingot is sliced into wafers with fine geometry parameters. Inevitably, this step leaves scratches on the wafers' surface. An etching process is used to improve the surface finish, during which the wafers are submerged in a container of etchant for chemical reaction. The quality of the wafers after etching is measured by two response variables: the total thickness variation (TTV) of the wafer and a binary judgment of whether the wafer has cloudy stains in its appearance. The two responses measure the quality from different but connected aspects.
There is a hidden dependency between the continuous TTV and the binary judgment of stains. To improve the etching quality, expensive experiments are to be carried out to reveal this hidden dependency and to model the quality-process relationship. Therefore, an ideal experimental design for continuous and binary responses should be able to extract such useful information with an economical run size.

The classic design of experiments methods mainly focus on experiments with a single continuous response. There have been various methods developed for a single discrete response too, including Sitter and Torsney (1995); Woods et al. (2006); Russell et al. (2009); Woods (2010); Woods and Van de Ven (2012). For multiple responses, Draper and Hunter (1966) proposed the seminal work for continuous responses, and Denman et al. (2011) developed a design method for bivariate binary responses modeled by copula functions. In the case of mixed types of responses, the literature is very scarce. A naive design method is to combine two designs that are separately constructed for each type of response. However, such a naive strategy could be reasonable for one type of response but problematic for the other, since it ignores the dependency between the types of responses, as shown in Example 1.

**Example 1**.: _Denote by \(Y\) and \(Z\) a continuous response and a binary response, respectively. Assume that the true model of the binary response \(Z\) is \(\mathbb{E}(Z|x)=\pi(x)=\exp(1+x)/(1+\exp(1+x))\). The true model of \(Y\) is related to \(Z\) in the form \(Y|Z=z\sim N(1-(1-z)x^{2},0.3^{2})\). Thus, \(\mathbb{E}(Y|Z=1)=1\) and \(\mathbb{E}(Y|Z=0)=1-x^{2}\). Using the naive design method, a 14-point design is constructed, which consists of an 8-point local \(D\)-optimal design for the model of \(Z\) with \(\log(\pi(x)/(1-\pi(x)))=\eta_{0}+\eta_{1}x\), and a 6-point \(D\)-optimal design for linear regression with a quadratic model of \(x\). Given the design, we generate the responses from the true models of \(Y\) and \(Z\). Figure 1 (a)-(c) show the data \((x_{i},y_{i},z_{i})\) from the 8-point, 6-point, and combined 14-point designs, respectively._

Figure 1: (a) Observations from the design for \(Z\); (b) observations from the design for \(Y\); (c) observations from the combined design. The dashed line “- - -” denotes \(\mathbb{E}(Y|Z=1)=1\); the solid line denotes \(\mathbb{E}(Y|Z=0)=1-x^{2}\); the point “\(+\)” denotes \((x_{i},y_{i})\) with \(z_{i}=1\); the point “o” denotes \((x_{i},y_{i})\) with \(z_{i}=0\).

In this example, there is a strong dependency between the two responses, since the true underlying models of \(\mathbb{E}(Y|Z)\) are different for \(Z=1\) and \(Z=0\). In both designs for a single response, shown in Figure 1 (a) and (b), the design points are balanced and reasonably distributed for the targeted response. However, since there are no \(Y\) observations for \(Z=0\) at \(x=1.0\), as shown in Figure 1 (c), the quadratic model for \(Y|Z=0\) is not estimable. Clearly, the combined design is not suitable here. Note that this problem is not caused by outliers, since all the points for \(Z=1\) (with “+”) vary around \(Y=1\) and the points for \(Z=0\) (with “o”) vary around \(Y=1-x^{2}\). In fact, \(P(Z=0|x=1)=0.12\), which is relatively small. Thus it is unlikely to observe \(Y\) with \(Z=0\) at \(x=1.0\). A simple solution is to add more replications at \(x=1.0\), but it is not clear how many replications would be sufficient. A sketch reproducing this failure is given below.
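The estimability failure in Example 1 is easy to reproduce numerically. The following is a minimal sketch, not the authors' code; the 8-point and 6-point designs below are plausible stand-ins rather than the exact local \(D\)-optimal designs of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in designs on [-1, 1]: an 8-point design for the logistic model of Z
# and a 6-point design for the quadratic model of Y.
design_z = np.array([-1.0] * 4 + [1.0] * 4)
design_y = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0])
x = np.concatenate([design_z, design_y])

# True models from Example 1.
pi = np.exp(1 + x) / (1 + np.exp(1 + x))   # P(Z = 1 | x)
z = rng.binomial(1, pi)
y = rng.normal(1 - (1 - z) * x**2, 0.3)    # Y | Z = z

# The quadratic model for Y | Z = 0 is estimable only if the model matrix
# (1, x, x^2) restricted to runs with z == 0 has rank 3.
x0 = x[z == 0]
F0 = np.column_stack([np.ones_like(x0), x0, x0**2])
print("design points with z = 0:", np.sort(x0))
print("rank of model matrix for Y|Z=0:", np.linalg.matrix_rank(F0), "(need 3)")
# Since P(Z = 0 | x = 1) = 0.12, runs at x = 1.0 rarely yield z = 0,
# so the rank is frequently deficient and the quadratic fit fails.
```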
It becomes more difficult to spot a direct solution when the experiments get more complicated. Such experiments call for new experimental design methods that account for both continuous and binary responses. Note that under the experimental design framework, the linear model is often considered for modeling the continuous response, and the generalized linear model (GLM) is often considered for modeling the qualitative response. A joint model must be developed to incorporate both types of responses. Compared to the classic design methods for linear models or GLMs, design for the joint model is more challenging due to the following aspects. First, the design criterion for the joint model is more complicated, as the joint model is more complicated than the separate ones. Second, experimental design for the GLM itself is more difficult than that for the linear model, a difficulty naturally inherited by the design for the joint model. Third, efficient design construction algorithms are needed to handle the complexity of the design criterion based on the joint model.

Kang and Huang (2019) proposed an \(A\)-optimal design for experiments with both quantitative and qualitative responses. The \(A\)-optimality was derived under the Bayesian framework proposed in Kang et al. (2018). Although Kang and Huang (2019) addressed the three challenges to a degree, the \(A\)-optimality is not a commonly used criterion. More importantly, only informative priors were considered, which circumvented some difficulties brought by noninformative priors on the parameters. In this paper, we choose the most commonly used \(D\)-optimal design criterion and propose a novel Bayesian design method for continuous and binary responses. The proposed method considers both noninformative and informative priors. With noninformative priors, the Bayesian framework is equivalent to the frequentist approach. In this case, we also establish some regularity conditions on the experimental run sizes. With informative priors, we develop the \(D\)-optimal design using conjugate priors. The derived design criterion has meaningful interpretations in terms of the \(D\)-optimality criteria for the models of both continuous and binary responses. Moreover, we develop an efficient point-exchange algorithm to construct the proposed designs. The construction algorithm can be applied to settings more general than factorial designs.

The rest of the paper is organized as follows. Section 2 reviews the general Bayesian quantitative-qualitative (QQ) model and the optimal design criterion. The Bayesian \(D\)-optimal design criterion is derived using noninformative prior distributions in Section 3. In Section 4, the design criterion is derived with conjugate informative priors. Efficient algorithms for constructing optimal designs are elaborated in Section 5. One artificial example and the etching experimental design are shown in Section 6. Section 7 concludes this paper with some discussions.

## 2 General Bayesian QQ Model and Design

We first review the general Bayesian QQ model introduced in Kang et al. (2018) and focus on the scenario in which \(Y\) is a continuous response and \(Z\) is a binary response. The input variable \(\mathbf{x}=(x_{1},\ldots,x_{p})^{\prime}\in\mathbb{R}^{p}\) contains \(p\) dimensions. Denote the data as \((\mathbf{x}_{i},y_{i},z_{i}),i=1,\ldots,n\), where \(y_{i}\in\mathbb{R}\) and \(z_{i}\in\{0,1\}\).
The vectors \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\prime}\) and \(\mathbf{z}=(z_{1},\ldots,z_{n})^{\prime}\) are the vectors of response observations. To jointly model the continuous response \(Y\) and the binary response \(Z\) given \(\mathbf{x}\), consider the joint probability of \(Y|Z\) and \(Z\). The conditional model of \(Y|Z\) is assumed to be a linear regression model, while the model of \(Z\) is a logistic regression model. Specifically, we consider the joint modeling of \(Y\) and \(Z\) as follows,

\[Z=\left\{\begin{array}{ll}1,&\mbox{with probability $\pi(\mathbf{x})$}\\ 0,&\mbox{with probability $1-\pi(\mathbf{x})$}\end{array}\right.\mbox{ with }\pi(\mathbf{x},\mathbf{\eta})=\frac{\exp(\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\eta})}{1+\exp(\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\eta})}, \tag{1}\]

where \(\mathbf{f}(\mathbf{x})=(f_{1}(\mathbf{x}),\ldots,f_{q}(\mathbf{x}))^{\prime}\) contains \(q\) modeling effects including the intercept, the main, interaction, and quadratic effects, etc., and \(\mathbf{\eta}=(\eta_{1},\ldots,\eta_{q})^{\prime}\) is a vector of parameter coefficients. Conditioning on \(Z=z\), the quantitative variable \(Y\) has the distribution

\[Y|Z=z\sim N(\mu_{0}+z\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\beta}^{(1)}+(1-z)\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\beta}^{(2)},\sigma^{2}), \tag{2}\]

where \(\mathbf{\beta}^{(i)}=(\beta_{1}^{(i)},\ldots,\beta_{q}^{(i)})^{\prime},i=1,2\) are the corresponding coefficients of the model effects. The parameter \(\mu_{0}\) is the mean and \(\sigma^{2}\) is the noise variance. The above conditional model (2) indicates that \(Y|Z=1\sim N(\mu_{0}+\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\beta}^{(1)},\sigma^{2})\) and \(Y|Z=0\sim N(\mu_{0}+\mathbf{f}(\mathbf{x})^{\prime}\mathbf{\beta}^{(2)},\sigma^{2})\). We assume the same variance \(\sigma^{2}\) for the two conditional distributions of \(Y|Z=1\) and \(Y|Z=0\); the design method developed in this paper can be easily adapted to the case with different variances. The association between the two responses \(Y\) and \(Z\) is represented through the conditional model \(Y|Z\). When the two linear models for \(Y|Z=0\) and \(Y|Z=1\) are different, i.e., \(\mathbf{\beta}^{(1)}\neq\mathbf{\beta}^{(2)}\), it is important to take into account the influence of the qualitative response \(Z\) when modeling the quantitative response \(Y\).

Let \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{\prime}\) be the \(n\times p\) design matrix with \(\mathbf{x}_{i}\) as the \(i\)th design point. Based on the above QQ model, we can express the sampling distributions as

\[\mathbf{y}|\mathbf{z},\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mu_{0},\sigma^{2},\mathbf{X}\sim N(\mu_{0}\mathbf{1}+\mathbf{V}_{1}\mathbf{F}\mathbf{\beta}^{(1)}+\mathbf{V}_{2}\mathbf{F}\mathbf{\beta}^{(2)},\sigma^{2}\mathbf{I}_{n}), \tag{3}\]

\[z_{i}|\mathbf{\eta},\mathbf{X}\sim\mbox{Bernoulli}(\pi(\mathbf{x}_{i},\mathbf{\eta}))\mbox{ for }i=1,\ldots,n,\mbox{ and} \tag{4}\]

\[p(\mathbf{z}|\mathbf{\eta},\mathbf{X})\propto\exp\left\{\sum_{i=1}^{n}\left(z_{i}\mathbf{f}(\mathbf{x}_{i})^{\prime}\mathbf{\eta}-\log(1+e^{\mathbf{f}(\mathbf{x}_{i})^{\prime}\mathbf{\eta}})\right)\right\},\]

where \(p(\cdot)\) denotes a general density function. Here \(\mathbf{V}_{1}=\mbox{diag}\{z_{1},\ldots,z_{n}\}\) is a diagonal matrix, \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix, \(\mathbf{V}_{2}=\mathbf{I}_{n}-\mathbf{V}_{1}\), \(\mathbf{F}\) is the model matrix with the \(i\)th row \(\mathbf{f}(\mathbf{x}_{i})^{\prime}\), and \(\mathbf{1}\) is a vector of ones. A short simulation sketch of this joint model is given below.
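To make the data-generating mechanism of (1)-(4) concrete, here is a minimal simulation sketch (illustrative only): the effect vector \(\mathbf{f}(\mathbf{x})\), the parameter values, and the design are arbitrary choices, not ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Illustrative modeling effects f(x) = (1, x1, x2)'.
    return np.array([1.0, x[0], x[1]])

# Arbitrary parameter values for the sketch.
eta = np.array([0.5, 1.0, -1.0])    # logistic coefficients for Z
beta1 = np.array([0.0, 2.0, 0.0])   # coefficients of Y | Z = 1
beta2 = np.array([0.0, 0.0, 2.0])   # coefficients of Y | Z = 0
mu0, sigma = 1.0, 0.3

# A small arbitrary design X (n x p) on [-1, 1]^2 and its model matrix F.
X = rng.uniform(-1, 1, size=(10, 2))
F = np.array([f(x) for x in X])     # rows are f(x_i)'

# Model (1): Z_i ~ Bernoulli(pi(x_i, eta)) with a logistic link.
pi = 1.0 / (1.0 + np.exp(-F @ eta))
z = rng.binomial(1, pi)

# Model (2): Y_i | Z_i = z_i ~ N(mu0 + z_i f'beta1 + (1 - z_i) f'beta2, sigma^2).
mean_y = mu0 + z * (F @ beta1) + (1 - z) * (F @ beta2)
y = rng.normal(mean_y, sigma)
print(np.column_stack([X, z, y]).round(2))
```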
Denote by \(p(\mathbf{\beta}^{(1)})\), \(p(\mathbf{\beta}^{(2)})\), and \(p(\mathbf{\eta})\) the prior distributions of the parameters \(\mathbf{\beta}^{(1)}\), \(\mathbf{\beta}^{(2)}\), and \(\mathbf{\eta}\). Note that we focus on the estimation accuracy of these three groups of parameters. The mean \(\mu_{0}\) and variance \(\sigma^{2}\) are considered nuisance parameters and are thus excluded from the optimal design criterion. In this work, we assume that the priors of \(\mathbf{\beta}^{(1)}\), \(\mathbf{\beta}^{(2)}\), and \(\mathbf{\eta}\) are independent. Under this assumption, the conditional posterior distributions of \(\mathbf{\eta}\), \(\mathbf{\beta}^{(1)}\), and \(\mathbf{\beta}^{(2)}\) are also independent, as explained in Sections 3 and 4. Under the Bayesian framework, the conditional posterior distribution of the parameters \((\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta})\) can be derived as

\[p(\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X})\propto p(\mathbf{y}|\mathbf{z},\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mu_{0},\sigma^{2},\mathbf{X})p(\mathbf{\beta}^{(1)})p(\mathbf{\beta}^{(2)})p(\mathbf{z}|\mathbf{\eta},\mathbf{X})p(\mathbf{\eta}). \tag{5}\]

Using (5), we develop the general Bayesian optimal design criterion. Let \(\psi\) be a criterion function of the conditional posterior distribution of the parameters. For example, it can be the Shannon information (or, equivalently, the Kullback-Leibler distance), the \(A/I\)-optimality (Fedorov, 1972), or another design criterion. However, \(\psi(\cdot)\) cannot be directly used as the final optimal design criterion, because its value depends on the random parameters \((\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta})\) and on the experimental outputs \((\mathbf{y},\mathbf{z})\), which are not yet observed. The randomness of \((\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta})\) can be removed by taking the mean of \(\psi\) with respect to these parameters. The uncertainty of \((\mathbf{y},\mathbf{z})\) can be removed by calculating the mean \(\mathbb{E}(\mathbb{E}(\psi|\mathbf{y},\mathbf{z}))\). Therefore, the general Bayesian optimal design criterion for the design matrix \(\mathbf{X}\) is

\[\Psi(\mathbf{X}|\mu_{0},\sigma^{2})=\int p(\mathbf{y},\mathbf{z}|\mu_{0},\sigma^{2},\mathbf{X})\times\left(\int\psi(p(\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X}))\times p(\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X})d\mathbf{\beta}^{(1)}d\mathbf{\beta}^{(2)}d\mathbf{\eta}\right)d\mathbf{y}d\mathbf{z}. \tag{6}\]

It is well known that the Bayesian \(D\)-optimal design is equivalent to using the Shannon information criterion (Chaloner and Verdinelli, 1995), omitting the terms that are constant in \(\mathbf{X}\). The criterion function \(\psi(\cdot)\) for the Shannon information is \(\log(\cdot)\). Next, we develop the Bayesian \(D\)-optimal design criterion (6) under different prior distributions.
## 3 Optimal Design under Noninformative Priors When lacking domain knowledge or proper historical data, experimenters often favor the frequentist approach, as no priors need to be specified. The frequentist approach can be seen as the Bayesian approach with noninformative priors. In this section, we derive the optimal design criterion and the regularity conditions for noninformative priors. ### Design Criterion Assume the noninformative priors \(p(\mathbf{\beta}^{(i)})\propto 1\) for \(i=1,2\) and \(p(\mathbf{\eta})\propto 1\). The conditional posterior distribution in (5) is then proportional to the joint distribution of the data. It can be further factorized into \[p(\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X})\propto p(\mathbf{\eta}|\mathbf{z},\mathbf{X})\prod_{i=1}^{2}p(\mathbf{\beta}^{(i)}| \mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X}),\] with the posterior distributions \[\mathbf{\beta}^{(i)}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X}\sim N \left((\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F})^{-1}\mathbf{F}^{\prime}\mathbf{V}_{i}(\mathbf{y}- \mu_{0}\mathbf{1}),\sigma^{2}(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F})^{-1}\right)\text{ for }i=1,2, \tag{7}\] \[p(\mathbf{\eta}|\mathbf{z},\mathbf{X})\propto\exp\left\{\sum_{i=1}^{n}\left( z_{i}\mathbf{f}(\mathbf{x}_{i})^{\prime}\mathbf{\eta}-\log(1+e^{\mathbf{f}(\mathbf{x}_{i})^{ \prime}\mathbf{\eta}})\right)\right\}. \tag{8}\] Conditioning on \(\mathbf{z}\), the posterior distributions of \((\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)})\) and \(\mathbf{\eta}\) are independent. Note that although the noninformative prior \(p(\mathbf{\eta})\propto 1\) is improper, it leads to a proper posterior \(p(\mathbf{\eta}|\mathbf{z},\mathbf{X})\). Under the noninformative priors, the Bayesian estimation is identical to the frequentist estimation. Using the posterior distributions (7)-(8) and the criterion function \(\psi(\cdot)=\log(\cdot)\) in the general Bayesian optimal design criterion (6), we obtain the Bayesian \(D\)-optimal design criterion (9), \[\Psi(\mathbf{X}|\mu_{0},\sigma^{2}) =\mathbb{E}_{\mathbf{z},\mathbf{\eta}}\left\{\log(p(\mathbf{\eta}|\mathbf{z},\mathbf{X}))\right\} \tag{9}\] \[+\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\mathbf{\eta}}\mathbb{E}_{\mathbf{z} |\mathbf{\eta}}\left\{\log\det(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F})\right\}+\text{ constants}.\] The derivation is in Appendix A1. The first additive term in (9) is exactly the Bayesian \(D\)-optimal design criterion for GLMs. Unfortunately, its exact integration is not tractable. The common approach in experimental design for GLMs is to use a normal approximation for the posterior distribution \(p(\mathbf{\eta}|\mathbf{z},\mathbf{X})\) (Chaloner and Verdinelli, 1995; Khuri et al., 2006). Such an approximation leads to \[\mathbb{E}_{\mathbf{z},\mathbf{\eta}}\left\{\log(p(\mathbf{\eta}|\mathbf{z},\mathbf{X}))\right\} \approx\mathbb{E}_{\mathbf{\eta}}\{\log\det\mathbf{I}(\mathbf{\eta}|\mathbf{X})\}+\text{ constant}, \tag{10}\] where \(\mathbf{I}(\mathbf{\eta}|\mathbf{X})\) is the Fisher information matrix.
We can easily show that \[\mathbf{I}(\mathbf{\eta}|\mathbf{X})=-\mathbb{E}_{\mathbf{z}}\left(\frac{\partial^{2}l(\mathbf{z},\mathbf{\eta}|\mathbf{X})}{\partial\mathbf{\eta}\partial\mathbf{\eta}^{T}}\right)=\sum_{i=1}^ {n}\mathbf{f}(\mathbf{x}_{i})\mathbf{f}(\mathbf{x}_{i})^{\prime}\pi(\mathbf{x}_{i},\mathbf{\eta})(1- \pi(\mathbf{x}_{i},\mathbf{\eta}))=\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F},\] where \(\mathbf{W}_{0}\) is a diagonal weight matrix \[\mathbf{W}_{0}=\text{diag}\{\pi(\mathbf{x}_{1},\mathbf{\eta})(1-\pi(\mathbf{x}_{1},\mathbf{\eta}) ),\ldots,\pi(\mathbf{x}_{n},\mathbf{\eta})(1-\pi(\mathbf{x}_{n},\mathbf{\eta}))\}.\] Omitting the irrelevant constant, we approximate the exact criterion \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) in (9) as follows. \[\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\approx\mathbb{E}_{\mathbf{\eta}}\{\log\det(\mathbf{F} ^{\prime}\mathbf{W}_{0}\mathbf{F})\}+\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\mathbf{\eta}} \mathbb{E}_{\mathbf{z}|\mathbf{\eta}}\left\{\log\det(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F} )\right\}. \tag{11}\] To construct the optimal design, we consider maximizing the approximated \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) in (11). But this is not trivial, because it involves the expectation on \(Z_{i}\)'s in the second additive term. To overcome this challenge, Hainy et al. (2013) constructed optimal designs by simulating samples from the joint distribution of responses and the unknown parameters. But this method can be computationally expensive for even slightly larger dimensions of experimental factors. Instead of simulating \(Z_{i}\)'s, we derive the following Theorem 1 that gives a tractable upper bound \(Q(\mathbf{X})\). Thus we propose using the upper bound \(Q(\mathbf{X})\) as the optimal criterion. **Theorem 1**.: _Assume that the matrices \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\), \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\), and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) are all nonsingular. Omitting the irrelevant constant, an upper bound of the approximated \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) is_ \[Q(\mathbf{X})=\mathbb{E}_{\mathbf{\eta}}\left\{\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{ F})+\frac{1}{2}\sum_{i=1}^{2}\log\det(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}) \right\}, \tag{12}\] _where \(\mathbf{W}_{1}=\text{diag}\{\pi(\mathbf{x}_{1},\mathbf{\eta}),\ldots,\pi(\mathbf{x}_{n},\mathbf{ \eta})\}\) and \(\mathbf{W}_{2}=\mathbf{I}_{n}-\mathbf{W}_{1}\)._ The proof of Theorem 1 is in Appendix A1. Note that Theorem 1 requires that \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}\) for \(i=0,1,2\) are all nonsingular. Obviously \(\mathbf{W}_{0}=\mathbf{W}_{1}\mathbf{W}_{2}\). It is easy to see that \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) is nonsingular if and only if both \(\mathbf{F}^{\prime}\mathbf{W}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{W}_{2}\mathbf{F}\) are nonsingular. The matrices \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) involve the responses \(Z_{i}\)'s that are not yet observed at the experimental design stage. We can only choose the experimental run size and the design points to avoid the singularity problem with a larger probability for given values of \(\mathbf{\eta}\). Once the run size is chosen, the design points can be optimally arranged by maximizing \(Q(\mathbf{X})\). 
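For a sample of \(\mathbf{\eta}\) values, the criterion \(Q(\mathbf{X})\) in (12) (and its conjugate-prior analogue (25) below) is straightforward to evaluate. The following is a minimal sketch in Python with NumPy; it assumes a common prior correlation \(\mathbf{R}_{1}=\mathbf{R}_{2}=\mathbf{R}\), as used later in the examples, and setting rho=0 recovers (12):

```python
import numpy as np

def Q_criterion(F, etas, rho=0.0, R_inv=None):
    """Monte Carlo estimate of E_eta[logdet(F'W0F)
    + 0.5*logdet(F'W1F + rho*R^{-1}) + 0.5*logdet(F'W2F + rho*R^{-1})]."""
    q = F.shape[1]
    P = rho * (np.eye(q) if R_inv is None else R_inv)
    vals = []
    for eta in np.atleast_2d(etas):
        pi = 1.0 / (1.0 + np.exp(-F @ eta))
        w0 = pi * (1.0 - pi)
        t0 = np.linalg.slogdet(F.T @ (w0[:, None] * F))[1]
        t1 = np.linalg.slogdet(F.T @ (pi[:, None] * F) + P)[1]
        t2 = np.linalg.slogdet(F.T @ ((1.0 - pi)[:, None] * F) + P)[1]
        vals.append(t0 + 0.5 * (t1 + t2))
    return float(np.mean(vals))
```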
The weight matrix \(\mathbf{W}_{1}\) (or \(\mathbf{W}_{2}\)) gives more weight to the feasible design points that are more likely to lead to \(Z=1\) (or \(Z=0\)) observations so that the parameters \(\mathbf{\beta}^{(1)}\) (or \(\mathbf{\beta}^{(2)}\)) of the linear model \(Y|Z=1\) (or \(Y|Z=0\)) are more likely to be estimable. Next, we introduce some regularity conditions on the run size and number of replications to alleviate the singularity problem. ### Regularity Conditions Let \(m\) be the number of distinct design points in the design matrix \(\mathbf{X}\), \(n_{i}\) be the number of repeated point \(\mathbf{x}_{i}\) in \(\mathbf{X}\) for \(i=1,\ldots,m\). Thus \(n=\sum_{i=1}^{m}n_{i}\) and \(n_{i}\geq 1\) for \(i=1,\ldots,m\). First, it is necessary that \(m\geq q\) for the linear regression model to be estimable under the noninformative priors. The if-and-only-if condition for \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) to be nonsingular is that \(\text{rank}(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F})\geq q\). If \(m\geq q\) and \(\pi(\mathbf{x}_{i},\mathbf{\eta})\in(0,1)\) for \(i=1,\ldots,m\), then \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}\) for \(i=0,1,2\) are all nonsingular and thus positive definite. To make sure \(\pi(\mathbf{x}_{i},\mathbf{\eta})\in(0,1)\) for \(i=1,\ldots,m\), it is sufficient to assume that \(\mathbf{\eta}\) is finitely bounded. This condition is typically used for the frequentist \(D\)-optimal design for GLMs. For instance, Woods et al. (2006) suggested using the centroids of the finite bounded space of \(\mathbf{\eta}\) to develop the local \(D\)-optimal design for GLMs. Dror and Steinberg (2006) clustered different local \(D\)-optimal designs with \(\mathbf{\eta}\) randomly sampled from its bounded space. The if-and-only-if condition for \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) being nonsingular is \[\sum_{i=1}^{m}I(\sum_{j=1}^{n_{i}}Z_{ij}>0)\geq q\quad\text{and} \quad\sum_{i=1}^{m}I(\sum_{j=1}^{n_{i}}Z_{ij}<n_{i})\geq q, \tag{13}\] where \(Z_{ij}\) is the \(j\)th random binary response at the unique design point \(\mathbf{x}_{i}\) and \(I(\cdot)\) is the indicator function. In the following, we discuss how to choose sample sizes \(n_{i}\) and \(n\) under two scenarios (i) \(m=q\) and (ii) \(m>q\). **Proposition 1**.: _Assume \(m=q\). Both \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) are nonsingular if and only if \(I(0<\sum_{j=1}^{n_{i}}Z_{ij}<n_{i})=1\) for \(i=1,2,\ldots,m\). For any given \(\kappa\in(0,1)\), a sufficient condition on \(n_{i}\) for \(\Pr(0<\sum_{j=1}^{n_{i}}Z_{ij}<n_{i})\geq\kappa\) is_ \[n_{i}\geq 1+\left\lceil\frac{\log(1-\kappa)}{\log\left(\max\left\{\pi(\mathbf{x}_{i}, \mathbf{\eta}),1-\pi(\mathbf{x}_{i},\mathbf{\eta})\right\}\right)}\right\rceil\quad\text{ for }i=1,2,\ldots,m, \tag{14}\] _and a necessary condition is_ \[n_{i}\geq\left\lceil\frac{2\log\left(\frac{1-\kappa}{2}\right)}{\log\pi(\mathbf{x }_{i},\mathbf{\eta})+\log(1-\pi(\mathbf{x}_{i},\mathbf{\eta}))}\right\rceil\quad\text{ for }i=1,2,\ldots,m. \tag{15}\] **Proposition 2**.: _Assume \(m>q\). To make both \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) nonsingular with large probability, or equivalently,_ \[\mathbb{E}\left\{\sum_{i=1}^{m}I(\sum_{j=1}^{n_{i}}Z_{ij}>0)\right\}\geq q \quad\text{and}\quad\mathbb{E}\left\{\sum_{i=1}^{m}I(\sum_{j=1}^{n_{i}}Z_{ij} <n_{i})\right\}\geq q,\] 1. 
_a sufficient condition is_ \[n_{0}\geq\left\lceil\max\left\{1,\frac{\log(1-q/m)}{\log(1-\pi_{\min})},\frac{ \log(1-q/m)}{\log\pi_{\max}}\right\}\right\rceil,\] (16) _which is the same as_ \[n\geq\left\lceil m\cdot\max\left\{1,\frac{\log(1-q/m)}{\log(1-\pi_{\min})},\frac{ \log(1-q/m)}{\log\pi_{\max}}\right\}\right\rceil,\] (17) _where_ \(n_{0}=\min\{n_{1},\ldots,n_{m}\}\)_,_ \(\pi_{\min}=\min_{i=1}^{m}\pi(\mathbf{x}_{i},\mathbf{\eta})\)_,_ \(\pi_{\max}=\max_{i=1}^{m}\pi(\mathbf{x}_{i},\mathbf{\eta})\) _and_ \(\mathbf{x}_{i}\)_'s are the unique design points;_ 2. _a necessary condition is_ \[n_{0}\geq\left\lceil\max\left\{1,\frac{\log(1-q/m)}{\log(1-\pi_{\max})},\frac{ \log(1-q/m)}{\log\pi_{\min}}\right\}\right\rceil\] (18) _which is the same as_ \[n\geq\left\lceil m\cdot\max\left\{1,\frac{\log(1-q/m)}{\log(1-\pi_{\max})}, \frac{\log(1-q/m)}{\log\pi_{\min}}\right\}\right\rceil.\] (19) Proposition 1 gives a sufficient condition on the lower bound of \(n_{i}\) when a saturated design (\(m=q\)) is used. Under the sufficient condition, the nonsingularity of \(\mathbf{F}^{\prime}\mathbf{V}_{1}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{V}_{2}\mathbf{F}\) holds with probability larger than \(\kappa^{m}\). For Example 1 in Section 1, suppose the possible values of \(x\) are only \(-1,0,1\); then \(m=q=3\). Let \(\mathbf{\eta}=(1,1)^{\prime}\). If \(\kappa=0.5\), then the numbers of replications for \(x=-1,0,1\) need to satisfy \(n_{1}\geq 2\), \(n_{2}\geq 4\), and \(n_{3}\geq 7\), respectively. If \(\kappa=0.9\), then \(n_{1}\geq 5\), \(n_{2}\geq 9\), and \(n_{3}\geq 20\). Proposition 1 is useful for constructing the initial design in the algorithms of Section 5. Proposition 2 provides one sufficient condition and one necessary condition on the smallest number of replications and the overall run size when \(m>q\). But these conditions only ensure the nonsingularity of the two matrices with large probability, which is weaker than Proposition 1. For a given \(\mathbf{\eta}\) value, Algorithm 1 in Section 5.1 can return the local \(D\)-optimal design, and Proposition 2 can be useful to check the local \(D\)-optimal design, as \(\pi_{\min}\) and \(\pi_{\max}\) depend on \(\mathbf{\eta}\). Take the artificial example in Section 6.1 for instance. The local \(D\)-optimal design for \(\rho=0\) (Table A1 in Appendix A2) has \(m=50\) unique design points and there are \(q=22\) effects. According to Proposition 2, the sufficient condition requires \(n_{0}\geq 7\) and the necessary condition requires \(n_{0}\geq 1\). The local \(D\)-optimal design in Table A1 only satisfies the necessary condition; to meet the sufficient condition, \(n\) would have to be much larger. For the global optimal design considering all possible \(\mathbf{\eta}\) values, Proposition 2 can provide some guidelines for the design construction when \(\mathbf{\eta}\) varies in a relatively small range. ## 4 Optimal Design under Conjugate Priors When prior information for the parameters \(\mathbf{\beta}^{(1)}\), \(\mathbf{\beta}^{(2)}\), and \(\mathbf{\eta}\) is available, it is desirable to consider the optimal design under informative priors. In this section, we detail the proposed Bayesian \(D\)-optimal design using conjugate priors. ### Design Criterion For the parameters \(\mathbf{\beta}^{(1)}\) and \(\mathbf{\beta}^{(2)}\), the conjugate priors are normal distributions, since \(Y|Z\) follows a normal distribution.
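As an aside, the replication lower bounds of Proposition 1 quoted above for Example 1 (with \(\mathbf{f}(x)=(1,x)^{\prime}\) and \(\mathbf{\eta}=(1,1)^{\prime}\)) are easy to reproduce numerically; a minimal sketch, assuming Python with NumPy:

```python
import numpy as np

def sufficient_n(pi, kappa):
    """Sufficient bound (14): n_i >= 1 + ceil(log(1-kappa)/log(max(pi, 1-pi)))."""
    return 1 + int(np.ceil(np.log(1.0 - kappa) / np.log(max(pi, 1.0 - pi))))

for kappa in (0.5, 0.9):
    pis = [1.0 / (1.0 + np.exp(-(1.0 + x))) for x in (-1.0, 0.0, 1.0)]
    print(kappa, [sufficient_n(pi, kappa) for pi in pis])
# prints 0.5 -> [2, 4, 7] and 0.9 -> [5, 9, 20], matching the text
```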
Thus we consider their priors as \[\mathbf{\beta}^{(1)}\sim N(\mathbf{0},\tau^{2}\mathbf{R}_{1}),\quad\mathbf{\beta}^{(2)}\sim N (\mathbf{0},\tau^{2}\mathbf{R}_{2}).\] where \(\tau^{2}\) is the prior variance and \(\mathbf{R}_{i}\) is the prior correlation matrix of \(\mathbf{\beta}^{(i)}\) for \(i=1,2\). Here we use the same prior variance \(\tau^{2}\) only for simplicity. The matrix \(\mathbf{R}_{i}\) can be specified flexibly such as using \((\mathbf{F}^{\prime}\mathbf{F})^{-1}\), or those in Joseph (2006) for factorial designs. For the parameters \(\mathbf{\eta}\), we choose the conjugate prior derived in Chen and Ibrahim (2003). It takes the form \[\mathbf{\eta}\sim D(s,\mathbf{b})\propto\exp\left\{\sum_{i=1}^{n}s\left(b_{i}\mathbf{f}( \mathbf{x}_{i})^{\prime}\mathbf{\eta}-\log(1+e^{\mathbf{f}(\mathbf{x}_{i})^{\prime}\mathbf{\eta}}) \right)\right\}, \tag{20}\] where \(D(s,\mathbf{b})\) is the distribution with parameters \((s,\mathbf{b})\). Here \(s\) is a scalar factor and \(\mathbf{b}\in(0,1)^{n}\) is the marginal mean of \(\mathbf{z}\) as shown in Diaconis and Ylvisaker (1979). The value of \(\mathbf{b}\) can be interpreted as a prior prediction (or guess) for \(\mathbb{E}(\mathbf{Z})\). Based on the priors for \((\mathbf{\beta}^{(1)},\mathbf{\beta}^{(2)},\mathbf{\eta})\) we can derive the posteriors as follows. **Proposition 3**.: _For priors \(\mathbf{\beta}^{(i)}\sim N(\mathbf{0},\tau^{2}\mathbf{R}_{i})\), \(i=1,2\), and \(\mathbf{\eta}\sim D(s,\mathbf{b})\), the posterior distributions of \(\mathbf{\beta}^{(1)}\), \(\mathbf{\beta}^{(2)}\) and \(\mathbf{\eta}\) are independent of each other with the following forms._ \[\mathbf{\beta}^{(i)}|\mathbf{y},\mathbf{z},\mu_{0},\sigma^{2},\mathbf{X} \sim N\left(\mathbf{H}_{i}^{-1}\mathbf{F}^{\prime}\mathbf{V}_{i}(\mathbf{y}-\mu_{0 }\mathbf{1}),\sigma^{2}\mathbf{H}_{i}^{-1}\right),\qquad\text{for $i=1,2$.} \tag{21}\] \[\mathbf{\eta}|\mathbf{z},\mathbf{X} \sim D\left(1+s,\frac{\mathbf{z}+s\mathbf{b}}{1+s}\right), \tag{22}\] _where \(\mathbf{H}_{i}=\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}\) with \(\rho=\frac{\sigma^{2}}{\tau^{2}}\)._ The proof of Proposition 3 can be derived following the standard Bayesian framework, thus is omitted. To derive the Bayesian \(D\)-optimal design criterion, we take the posterior distributions in Proposition 3 to (6) and set \(\psi(\cdot)=\log(\cdot)\). The derivation is very similar to that in (9), and thus we obtain the exact design criterion as \[\Psi(\mathbf{X}|\mu_{0},\sigma^{2}) =\mathbb{E}_{\mathbf{z},\mathbf{\eta}}\left\{\log(p(\mathbf{\eta}|\mathbf{z},\mathbf{ X}))\right\} \tag{23}\] \[+\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\mathbf{\eta}}\mathbb{E}_{\mathbf{z }|\mathbf{\eta}}\left\{\log\det(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1 })\right\}+\text{constant}.\] As the integration of \(\mathbb{E}_{\mathbf{z},\mathbf{\eta}}\left\{\log(p(\mathbf{\eta}|\mathbf{z},\mathbf{X}))\right\}\) is not tractable, we adopt the same normal approximation of the posterior distribution \(p(\mathbf{\eta}|\mathbf{z},\mathbf{X})\) as in (10). A straightforward calculation leads to getting Fisher information matrix \(\mathbf{I}(\mathbf{\eta}|\mathbf{X})=(1+s)\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\). Thus we have \[\mathbb{E}_{\mathbf{z},\mathbf{\eta}}\left\{\log(p(\mathbf{\eta}|\mathbf{z},\mathbf{X}))\right\} \approx\mathbb{E}_{\mathbf{\eta}}\{\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F})\}+ \text{constant}. 
\tag{24}\] Disregarding the constant, we can approximate the exact \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) by \[\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\approx\mathbb{E}_{\mathbf{\eta}}\{\log\det(\mathbf{F}^ {\prime}\mathbf{W}_{0}\mathbf{F})\}+\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\mathbf{\eta}} \mathbb{E}_{\mathbf{z}|\mathbf{\eta}}\left\{\log\det(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F}+ \rho\mathbf{R}_{i}^{-1})\right\},\] The following Theorem 2 gives an upper bound of the approximated criterion \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) to avoid the integration with respect to \(\mathbf{z}\), which plays the same role as Theorem 1. **Theorem 2**.: _Assume that the prior distributions of \(\mathbf{\beta}^{(i)}\) are \(\mathbf{\beta}^{(i)}\sim N(\mathbf{0},\tau^{2}\mathbf{R}_{i})\) for \(i=1,2\) and \(\mathbf{\eta}\) has either the conjugate prior \(\mathbf{\eta}\sim D(s,\mathbf{b})\) or the noninformative prior \(p(\mathbf{\eta})\propto 1\). If \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) is nonsingular, an upper bound of the approximated \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) is_ \[Q(\mathbf{X})=\mathbb{E}_{\mathbf{\eta}}\left(\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F})+ \frac{1}{2}\sum_{i=1}^{2}\log\det(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i }^{-1})\right). \tag{25}\] For the same argument as in Section 3, we use the upper bound in (25) as the optimal design criterion. Note that since \(\rho\mathbf{R}_{i}^{-1}\) is added to \(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}\), \(\mathbf{F}^{\prime}\mathbf{V}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}\) and \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}\) are nonsingular. The derivation of \(\Psi(\mathbf{X}|\mu_{0},\sigma^{2})\) and \(Q(\mathbf{X})\) only needs \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) to be nonsingular, which requires \(m\geq q\) and \(\mathbf{\eta}\) to be finitely bounded as in Section 3. ### Interpretation Note that the criterion in (25) has a similar formulation with \(Q(\mathbf{X})\) in (12). The only difference is that (12) does not involve \(\rho\mathbf{R}_{i}^{-1}\). For consistency, we use the formula (25) as the design criterion \(Q(\mathbf{X})\) for both cases. When noninformative priors for \(\mathbf{\beta}^{(1)}\) and \(\mathbf{\beta}^{(2)}\) are used, we set \(\rho=0\). From another point of view, as \(\tau^{2}\to\infty\), \(\rho\to 0\), the variances in the priors \(p(\mathbf{\beta}_{1})\) and \(p(\mathbf{\beta}_{2})\) diffuse and result in a noninformative priors. The criterion \(Q(\mathbf{X})\), consisting of three additive terms, can be interpreted intuitively. The first additive term \(\mathbb{E}_{\mathbf{\eta}}\{\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F})\}\) is known as the Bayesian \(D\)-optimal criterion for logistic regression and \(\mathbb{E}_{\mathbf{\eta}}\{\log\det(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{ i}^{-1})\}\) is the Bayesian \(D\)-optimal criterion for the linear regression model of \(Y\). To explain the weights, we rewrite \(Q(\mathbf{X})\) as follows. 
\[Q(\mathbf{X})=1\cdot\mathbb{E}_{\mathbf{\eta}}\left(\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0 }\mathbf{F})\right)+1\cdot\left(\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\mathbf{\eta}}( \log\det(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}))\right)\] Since there are equal numbers of binary and continuous response observations, the design criterion puts the same weight (equal to 1) on the design criteria for \(Z\) and for \(Y\). For the two criteria for the linear regression models, the same weight \(1/2\) is used. This is also reasonable: because we assume \(\pi_{i}\in(0,1)\), none of the diagonal entries of \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are zero, so the two terms split the total weight 1 assigned to the entire linear regression part. Therefore, even though \(Q(\mathbf{X})\) is derived analytically, all the additive terms and their weights make sense intuitively. ### Prior Parameters Note that the conjugate prior \(p(\mathbf{\eta})\) requires the prior parameters \((s,\mathbf{b})\) to be specified. Moreover, the prior distribution (20) contains \(\mathbf{f}(\mathbf{x}_{i})\), which depends on the design points. When sampling \(\mathbf{\eta}\) from the prior (20), it does not matter whether the \(\mathbf{f}(\mathbf{x}_{i})\)'s are actually from the design points. If relevant historical data are available, we can simply sample \(\mathbf{\eta}\) from the likelihood of the data. Alternatively, one can adopt the method in Chen and Ibrahim (2003) to estimate the parameters \((s,\mathbf{b})\). Without relevant data, we would use the noninformative prior for \(\mathbf{\eta}\), i.e., \(p(\mathbf{\eta})\propto 1\) on the bounded region for \(\mathbf{\eta}\). The design criterion \(Q(\mathbf{X})\) contains some unknown parameters, including the noise-to-signal ratio \(\rho=\sigma^{2}/\tau^{2}\) and the correlation matrices \(\mathbf{R}_{i}\) for \(i=1,2\). The value of \(\rho\) has to be specified either from historical data or from domain knowledge. Typically we would assume \(\rho<1\), so that the measurement error has a smaller variance than the signal. The setting of \(\mathbf{R}_{i}\) can also be specified flexibly. If historical data are available, \(\mathbf{R}_{i}\) can be set as the estimated correlation matrix of \(\mathbf{\beta}^{(i)}\). Otherwise, we can use the correlation matrix in Joseph (2006) and Kang and Joseph (2009), which is targeted at factorial designs. Specifically, let \(\mathbf{\beta}\) be the unknown coefficients of the linear regression model with prior distribution \(\mathbf{\beta}\sim N(\mathbf{0},\tau^{2}\mathbf{R})\). For a 2-level factor coded as \(-1\) and \(1\), Joseph (2006) suggests that \(\mathbf{R}\) is a diagonal matrix and that the priors for the individual \(\beta_{j}\) are \[\beta_{0} \sim N(0,\tau^{2}), \tag{26}\] \[\beta_{j} \sim N(0,\tau^{2}r),\qquad j=1,\ldots,p,\] \[\beta_{j} \sim N(0,\tau^{2}r^{2}),\qquad j=p+1,\ldots,p+\binom{p}{2},\] \[\vdots\] \[\beta_{2^{p}-1} \sim N(0,\tau^{2}r^{p}),\] where \(\beta_{j}\), \(j=1,\ldots,p\), are the main effects, \(\beta_{j}\), \(j=p+1,\ldots,p+\binom{p}{2}\), are the 2-factor interactions, and so on up to the \(p\)-factor interaction \(\beta_{2^{p}-1}\). The variance of \(\beta_{j}\) decreases geometrically in the order of the corresponding effect by a factor \(r\in(0,1)\); thus it incorporates the _effects hierarchy principle_ (Wu and Hamada, 2011).
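A sketch of the diagonal prior correlation implied by (26), for a full \(2^{p}\) model with effects ordered by interaction order (Python with NumPy assumed; the helper name is ours, not from the paper):

```python
import numpy as np
from itertools import combinations

def hierarchical_R(p, r):
    """Diagonal R in (26): an order-k effect has prior variance tau^2 * r^k."""
    orders = [0] + [k for k in range(1, p + 1)
                    for _ in combinations(range(p), k)]
    return np.diag([r ** k for k in orders])

R = hierarchical_R(p=3, r=1 / 3)   # 2^3 = 8 effects, intercept first
```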
Joseph (2006) showed that if \(\mathbf{f}(\mathbf{x})\) contains all the \(2^{p}\) effects of all \(p\) orders, \(\tau^{2}\mathbf{R}\) can be represented alternatively by Kronecker product as \(\tau^{2}\mathbf{R}=\varsigma^{2}\bigotimes_{j=1}^{p}\mathbf{F}_{j}(x_{j})^{-1}\mathbf{ \Psi}_{j}(x_{j})(\mathbf{F}_{j}(x_{j}))^{-1}\). The model matrix for the 2-level factor and the correlation matrix are \[\mathbf{F}_{j}(x_{j})=\left(\begin{array}{cc}1&-1\\ 1&1\end{array}\right)\ \ \text{and}\ \ \mathbf{\Psi}_{j}(x_{j})=\left(\begin{array}{cc}1& \zeta\\ \zeta&1\end{array}\right). \tag{27}\] To keep the two different presentations equivalent, let \(\zeta=\frac{1-r}{1+r}\) and \(\tau^{2}=(\frac{1+\zeta}{2})^{p}\varsigma^{2}\). For the mixed-level of 2- and 3-level experiments, Kang and Joseph (2009) have extended the 2-level case to the format \[\tau^{2}\mathbf{R}=\varsigma^{2}\quad\bigotimes_{j=1}^{p_{2}+p_{3,c}+p_{3,q}}\quad \mathbf{F}_{j}(x_{j})^{-1}\mathbf{\Psi}_{j}(x_{j})(\mathbf{F}_{j}(x_{j})^{-1})^{\prime}, \tag{28}\] where \(p_{2}\) is the number of 2-level factors, \(p_{3,c}\) is the number of 3-level qualitative (categorical) factors, and \(p_{3,q}\) is the number of 3-level quantitative factors. For all the 3-level factors, the model matrix is \[\mathbf{F}_{j}(x_{j})=\left(\begin{array}{ccc}1&-\sqrt{\frac{3}{2}}&\sqrt{\frac {1}{2}}\\ 1&0&-\sqrt{2}\\ 1&\sqrt{\frac{3}{2}}&\sqrt{\frac{1}{2}}\end{array}\right). \tag{29}\] An isotropic correlation function is recommended for the 3-level qualitative factors and a Gaussian correlation function for quantitative factors. Thus, the correlation matrices for the 3-level qualitative and quantitative factors are \[\mathbf{\Psi}_{j}(x_{j})=\left(\begin{array}{ccc}1&\zeta&\zeta\\ \zeta&1&\zeta\\ \zeta&\zeta&1\end{array}\right)\ \ \text{and}\ \ \mathbf{\Psi}_{j}(x_{j})=\left( \begin{array}{ccc}1&\zeta&\zeta^{4}\\ \zeta&1&\zeta\\ \zeta^{4}&\zeta&1\end{array}\right), \tag{30}\] respectively. To keep the covariance (28) consistent with the 2-level case we still set \(\zeta=\frac{1-r}{1+r}\). To keep the variance of the intercept equal to \(\tau^{2}\)(Kang and Joseph, 2009), we set \[\tau^{2}=\varsigma^{2}\left(\frac{1+\zeta}{2}\right)^{p_{2}}\left(\frac{1+2 \zeta}{3}\right)^{p_{3,c}}\left(\frac{3+4\zeta+2\zeta^{4}}{9}\right)^{p_{3,q}},\] and thus \[\mathbf{R}=\left(\left(\frac{1+\zeta}{2}\right)^{p_{2}}\left(\frac{1+2\zeta}{3} \right)^{p_{3,c}}\left(\frac{3+4\zeta+2\zeta^{4}}{9}\right)^{p_{3,q}}\right)^{- 1}\times\bigotimes_{j=1}^{p_{2}+p_{3,c}+p_{3,q}}\mathbf{F}_{j}(x_{j})^{-1}\mathbf{ \Psi}_{j}(x_{j})(\mathbf{F}_{j}(x_{j})^{-1})^{\prime}.\] It is straightforward to prove that \(\mathbf{R}\) is a diagonal matrix if only 2-level and 3-level qualitative factors are involved, but not so if any 3-level quantitative factors are involved, and the first diagonal entry of \(\mathbf{R}\) is always 1. To specify different prior distributions for \(\mathbf{\beta}^{(1)}\) and \(\mathbf{\beta}^{(2)}\), we only need to use different values \(r_{1}\) (or \(\zeta_{1}\)) and \(r_{2}\) (or \(\zeta_{2}\)) to construct the prior correlation matrix. If the prior knowledge assumes that the two responses \(Z\) and \(Y\) are independent, one can set \(r_{1}=r_{2}=r\) so that the two correlation matrices \(\mathbf{R}_{i}\)'s are the same, denoted as \(\mathbf{R}\). Kang and Joseph (2009) has used \(r=1/3\) (equivalently \(\zeta=1/2\)) according to a meta-analysis of 113 data sets from published experiments (Li et al., 2006). Thus we also use \(r=1/3\) in all the examples. 
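The Kronecker construction (28)-(30) and the stated properties of \(\mathbf{R}\) (diagonal when no 3-level quantitative factor is present, first diagonal entry equal to 1) can be checked directly. A minimal sketch with one 2-level factor and one 3-level categorical factor, assuming Python with NumPy:

```python
import numpy as np
from functools import reduce

r = 1 / 3
zeta = (1 - r) / (1 + r)                       # zeta = 1/2

F2 = np.array([[1., -1.], [1., 1.]])           # 2-level model matrix, (27)
Psi2 = np.array([[1., zeta], [zeta, 1.]])
F3 = np.array([[1., -np.sqrt(1.5),  np.sqrt(0.5)],   # 3-level model matrix, (29)
               [1.,  0.,           -np.sqrt(2.0)],
               [1.,  np.sqrt(1.5),  np.sqrt(0.5)]])
Psi3c = np.full((3, 3), zeta) + (1 - zeta) * np.eye(3)  # isotropic, (30)

def factor_block(F, Psi):
    Finv = np.linalg.inv(F)
    return Finv @ Psi @ Finv.T

scale = ((1 + zeta) / 2) * ((1 + 2 * zeta) / 3)  # p2 = 1, p3c = 1, p3q = 0
R = reduce(np.kron, [factor_block(F2, Psi2), factor_block(F3, Psi3c)]) / scale

print(np.allclose(R, np.diag(np.diag(R))), R[0, 0])   # True 1.0 (up to rounding)
```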
The readers can specify different values for \(r_{1}\) and \(r_{2}\) if needed. In computation, we construct \(\mathbf{R}\) using the Kronecker product in (28). But such an \(\mathbf{R}\) is for \(\mathbf{f}(\mathbf{x})\) containing effects of all possible orders. Usually, we would assume the model contains just lower-order effects, so we pick the rows and columns that correspond to the lower-order effects assumed in the model as the correlation matrix. ## 5 Design Search Algorithm In this work, we focus on the construction of optimal designs based on factorial designs, which is suited to the prior distribution introduced in Section 4.3. For optimizing the design criterion \(Q(\mathbf{X})\) we consider two cases. First, for a fixed \(\mathbf{\eta}\) value, we develop a point-exchange algorithm to construct a _local optimal design_ that maximizes the criterion \(Q(\mathbf{X}|\mathbf{\eta})\). Second, we construct a _global optimal design_ based on the prior distribution of \(\mathbf{\eta}\). Specifically, we construct the local optimal designs for different \(\mathbf{\eta}\)'s sampled from the prior distribution; the global optimal continuous design is then obtained by accumulating the frequencies of the design points selected into those local optimal designs. ### Local Optimal Design for Fixed \(\mathbf{\eta}\) For a fixed \(\mathbf{\eta}\), we adapt the point-wise exchange algorithm to maximize the criterion \[Q(\mathbf{X}|\mathbf{\eta})=\log\det(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F})+\frac{1}{2} \sum_{i=1}^{2}\log\det(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}).\] The point-wise exchange algorithm is commonly used to construct \(D\)-optimal designs. It was first introduced by Fedorov (1972) and then widely used in many works (Cook and Nachtsheim, 1980; Nguyen and Miller, 1992). The point-wise exchange algorithm finds the optimal design from a candidate set; here the candidate set is chosen to be the full factorial design without replicates. For now, we develop the method for 2- and 3-level factors, but it can be generalized to factors with more levels. Recall that \(p_{2}\), \(p_{3,c}\), and \(p_{3,q}\) denote the numbers of 2-level, 3-level categorical, and 3-level quantitative factors, respectively. The total number of full factorial design points is \(N=2^{p_{2}}3^{p_{3,c}+p_{3,q}}\), which can be large if the experiment involves many factors. To make the algorithm efficient, we filter out the candidate points that are unlikely to be optimal design points. Following the suggestion of Dror and Steinberg (2005), we exclude the candidate design points whose corresponding probabilities \(\pi(\mathbf{x},\mathbf{\eta})\) are outside of \([0.15,0.85]\). This range is used because the approximate variance of \(\log\left(\frac{\pi_{i}}{1-\pi_{i}}\right)\) is nearly constant for \(\pi_{i}\in(0.2,0.8)\) but increases rapidly if \(\pi_{i}\) is outside that range (Wu, 1985). Denote the reduced candidate set as \(\mathbf{X}_{c}\) with size \(N^{\prime}\). Next we construct the initial design of size \(n\) such that \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) is nonsingular and, if \(\rho=0\), so are \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}\) for \(i=1,2\). If \(N^{\prime}\geq q\), we construct the initial design by reduction. Starting from \(\mathbf{X}_{c}\) as the initial design, we remove the design points one by one until there are \(q\) points left.
The remaining \(n-q\) design points are then sampled from these \(q\) initial design points with probabilities proportional to the lower bounds in the sufficient condition of Proposition 1. To remove a design point, we select the one with the smallest value of the deletion function \(d(\mathbf{x})\) defined in (31). Shortcut formulas are developed in the Appendix for updating the inverses of the matrices \(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\) and \(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}\) for \(i=1,2\) after one design point is removed. If \(N^{\prime}<q\), we restore the candidate set back to the full factorial design and construct the initial design in the same reduction fashion. To simplify the notation for \(d(\mathbf{x})\), we define \(v_{i}(\mathbf{x}_{1},\mathbf{x}_{2})=\mathbf{f}(\mathbf{x}_{1})^{\prime}\mathbf{M}_{i}\mathbf{f}(\mathbf{ x}_{2})\) and \(v_{i}(\mathbf{x})=\mathbf{f}(\mathbf{x})^{\prime}\mathbf{M}_{i}\mathbf{f}(\mathbf{x})\) for \(i=0,1,2\), where \(\mathbf{M}_{0}=\left(\mathbf{F}^{\prime}\mathbf{W}_{0}\mathbf{F}\right)^{-1}\) and \(\mathbf{M}_{i}=\left(\mathbf{F}^{\prime}\mathbf{W}_{i}\mathbf{F}+\rho\mathbf{R}_{i}^{-1}\right)^{-1}\) for \(i=1,2\). Denote by \(\mathbf{X}\) the current design and by \(\mathbf{X}_{-i}\) the design \(\mathbf{X}\) with the \(i\)th row removed. Then the deletion function can be derived as \[d(\mathbf{x}_{i}) =Q(\mathbf{X}|\mathbf{\eta})-Q(\mathbf{X}_{-i}|\mathbf{\eta}) \tag{31}\] \[=-\left\{\log\left[1-\pi(\mathbf{x}_{i},\mathbf{\eta})(1-\pi(\mathbf{x}_{i}, \mathbf{\eta}))v_{0}(\mathbf{x}_{i})\right]+\frac{1}{2}\log\left[1-\pi(\mathbf{x}_{i},\bm {\eta})v_{1}(\mathbf{x}_{i})\right]\right.\] \[\left.+\frac{1}{2}\log\left[1-(1-\pi(\mathbf{x}_{i},\mathbf{\eta}))v_{2}( \mathbf{x}_{i})\right]\right\}.\] The smaller \(d(\mathbf{x}_{i})\) is, the less the corresponding point contributes to the overall objective \(Q(\mathbf{X}|\mathbf{\eta})\). One key step of the point-wise exchange algorithm is to compute \(\Delta(\mathbf{x},\mathbf{x}_{i})=Q(\mathbf{X}^{*}|\mathbf{\eta})-Q(\mathbf{X}|\mathbf{\eta})\), the change in the criterion after the candidate design point \(\mathbf{x}\) replaces \(\mathbf{x}_{i}\) in the current design \(\mathbf{X}\). Here \(\mathbf{X}^{*}\) is the new design matrix after the exchange. To compute \(\Delta(\mathbf{x},\mathbf{x}_{i})\) efficiently, we can obtain the following formula, \[\Delta(\mathbf{x},\mathbf{x}_{i}) =Q(\mathbf{X}^{*}|\mathbf{\eta})-Q(\mathbf{X}|\mathbf{\eta}) \tag{32}\] \[=\log\Delta_{0}(\mathbf{x},\mathbf{x}_{i})+\frac{1}{2}\sum_{j=1}^{2}\log \Delta_{j}(\mathbf{x},\mathbf{x}_{i}),\] where \(\Delta_{j}(\mathbf{x},\mathbf{x}_{i})\) for \(j=0,1,2\) are derived in the Appendix. The matrices \(\mathbf{M}_{i}\) for \(i=0,1,2\) need to be updated after the exchange of design points. Denote the updated matrices as \(\mathbf{M}_{i}^{*}\) for the updated design \(\mathbf{X}^{*}\); shortcut formulas to easily compute \(\mathbf{M}_{i}^{*}\) for \(i=0,1,2\) are also given in the Appendix. Given the initial design, we iteratively exchange the current design points with candidate design points to improve the objective \(Q(\mathbf{X}|\mathbf{\eta})\). The details are listed in Algorithm 1, a four-step point-exchange procedure. One feature of the algorithm is that it samples the design point for an exchange instead of deterministically picking the "worst" point.
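A sketch of the deletion scores (31) used in the reduction step, given the current model matrix, the probabilities, and the inverses \(\mathbf{M}_{0},\mathbf{M}_{1},\mathbf{M}_{2}\) defined above (Python with NumPy assumed; the function name is ours):

```python
import numpy as np

def deletion_scores(F, pi, M0, M1, M2):
    """d(x_i) in (31) for each current design point; remove the smallest."""
    d = np.empty(len(F))
    for i, f in enumerate(F):
        v0, v1, v2 = f @ M0 @ f, f @ M1 @ f, f @ M2 @ f
        d[i] = -(np.log(1.0 - pi[i] * (1.0 - pi[i]) * v0)
                 + 0.5 * np.log(1.0 - pi[i] * v1)
                 + 0.5 * np.log(1.0 - (1.0 - pi[i]) * v2))
    return d
```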
Another feature, different from some other point-exchange algorithms, is that the candidate set here remains the same through Steps 1-4, since no points are deleted once they are selected into the design. This enables the resultant optimal design to have replicated design points. ### Global Optimal Design Based on Algorithm 1 for the local \(D\)-optimal design, we can use the following Algorithm 2 to construct the global optimal design. ```
Step 0: If \(p(\mathbf{\eta})\) is informative, simulate \(\mathbf{\eta}_{j}\sim p(\mathbf{\eta})\) for \(j=1,\ldots,B\). Otherwise, \(\mathbf{\eta}\) is uniformly distributed in a rectangular high-dimensional space.
Step 1: For each \(\mathbf{\eta}_{j}\), call Algorithm 1 to construct the local optimal design \(\mathbf{X}_{j}\).
Step 2: For each point in the candidate set, count its frequency of being selected in the local optimal designs. The continuous optimal design is formed by the normalized frequency as a discrete distribution.
Step 3: To obtain a discrete optimal design, sample \(n\) design points from the continuous optimal design.
``` **Algorithm 2** Algorithm for Global \(D\)-Optimal Design In Step 0, when generating \(\mathbf{\eta}\) uniformly, we can use uniform design (Fang et al., 2000), maximin Latin hypercube design (Morris and Mitchell, 1995), or other space-filling design methods (Joseph and Hung, 2008; Lin et al., 2010; Qian, 2012) to select the samples \(\mathbf{\eta}_{j}\) for \(j=1,\ldots,B\). From Algorithm 2, it is likely that the discrete design obtained in Step 3 has some design points with \(n_{i}=1\). When experimenters prefer to have replications at every design point, they can choose a saturated design by sampling \(m=q\) unique design points in Step 3, then sample some \(\mathbf{\eta}\) values as in Step 0, compute the lower bounds for \(n_{i}\) for every \(\mathbf{\eta}\) sample according to Proposition 1, and use the averaged lower bounds to set \(n_{i}\). If \(\sum_{i=1}^{m}n_{i}\) exceeds \(n\), the experimenters have to either increase the experimental budget or reduce the \(\kappa\) value. ## 6 Examples In this section, we use two examples to demonstrate the proposed Bayesian \(D\)-optimal design and the construction method. For both examples, we set \(r=1/3\) (equivalently \(\zeta=1/2\)) as explained in Section 4.3. Since there are few existing works on experimental design for continuous and binary responses, we compare the proposed method with three alternative designs: the optimal design for the quantitative-only response, the optimal design for the binary-only response, and the naively combined design mentioned in Example 1. ### Artificial Example In this artificial experiment, there are three 2-level factors \(x_{1}\), \(x_{2}\), \(x_{3}\), one 3-level categorical factor \(x_{4}\), and one 3-level quantitative factor \(x_{5}\). The underlying model assumed is the complete quadratic model, and \(\mathbf{f}(\mathbf{x})\) contains \(q=22\) model effects including the intercept and the following model effects.
First order effects: \[x_{1},x_{2},x_{3},x_{4,1},x_{4,2},x_{5,l},\] Second order effects: \[x_{1}x_{2},x_{1}x_{3},x_{1}x_{4,1},x_{1}x_{4,2},x_{1}x_{5,l},x_{2}x_{3},x_{2}x_{4,1},x_{2}x_{4,2},\] \[x_{2}x_{5,l},x_{3}x_{4,1},x_{3}x_{4,2},x_{3}x_{5,l},x_{4,1}x_{5,l},x_{4,2}x_{5,l},x_{5,\text{quad}}.\] Here, for the 3-level factors \(x_{4}\) and \(x_{5}\), the effects \(x_{4,1}\) (1st comparison) and \(x_{5,l}\) (linear effect) take the values \(\left\{-\sqrt{\frac{3}{2}},0,\sqrt{\frac{3}{2}}\right\}\), and \(x_{4,2}\) (2nd comparison) and \(x_{5,\text{quad}}\) (quadratic effect) take the values \(\left\{\sqrt{\frac{1}{2}},-\sqrt{2},\sqrt{\frac{1}{2}}\right\}\), consistently with the model matrix (29). For the 2-level factors, the effects \(x_{i}\), \(i=1,2,3\), have the same values as the design settings \(\{-1,1\}\). We consider an independent uniform distribution for each \(\eta_{i}\). Specifically, \(\eta_{i}\sim\text{Uniform}[-1,1]\) for the intercept and the first order effects, and \(\eta_{i}\sim\text{Uniform}[-0.5,0.5]\) for the second order effects. The ranges of the \(\eta_{i}\)'s satisfy the effect hierarchy principle. We set the experimental run size to be \(n=66\). Table 1 illustrates the values of a randomly chosen \(\mathbf{\eta}\). Using Algorithm 1 with this \(\mathbf{\eta}\), we construct the proposed local \(D\)-optimal designs for the QQ model, \(D_{QQ}\), for \(\rho=0\) and \(\rho=0.3\), respectively. For comparison, we also generate three alternative designs via the R package _AlgDesign_ developed by Wheeler (2014). Specifically, they are (i) the 66-run classic \(D\)-optimal design for the linear regression model, denoted as \(D_{L}\), (ii) the 66-run local \(D\)-optimal design for the logistic regression model given the \(\mathbf{\eta}\), denoted as \(D_{G}\), and (iii) the naively combined design of a 44-run local \(D\)-optimal design for the logistic regression model and a 22-run \(D\)-optimal design for the linear regression model, denoted as \(D_{C}\). The details of these designs can be found in Table A1 in the Appendix. To evaluate the performance of the proposed design in comparison with the alternative designs, we consider the efficiency between two designs (Woods et al., 2006) as \[\text{eff}(D_{1},D_{2}|\mathbf{\eta})=\exp\left\{\frac{1}{q}\left(Q(D_{1}|\mathbf{\eta })-Q(D_{2}|\mathbf{\eta})\right)\right\}. \tag{33}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Effect & \(\eta\) & Effect & \(\eta\) & Effect & \(\eta\) & Effect & \(\eta\) \\ \hline Intercept & -0.0153 & \(x_{1}\) & -0.6067 & \(x_{2}\) & 0.7212 & \(x_{1}x_{2}\) & 0.0080 \\ \hline \(x_{3}\) & -0.1682 & \(x_{1}x_{3}\) & 0.0010 & \(x_{2}x_{3}\) & 0.1349 & \(x_{4,1}\) & 0.0283 \\ \hline \(x_{1}x_{4,1}\) & 0.0594 & \(x_{2}x_{4,1}\) & -0.1719 & \(x_{3}x_{4,1}\) & 0.1492 & \(x_{4,2}\) & -0.1468 \\ \hline \(x_{1}x_{4,2}\) & 0.0553 & \(x_{2}x_{4,2}\) & -0.0634 & \(x_{3}x_{4,2}\) & -0.2629 & \(x_{5,l}\) & -0.0660 \\ \hline \(x_{1}x_{5,l}\) & -0.1054 & \(x_{2}x_{5,l}\) & -0.0857 & \(x_{3}x_{5,l}\) & -0.0807 & \(x_{4,1}x_{5,l}\) & -0.1198 \\ \hline \(x_{4,2}x_{5,l}\) & -0.0292 & \(x_{5,\text{quad}}\) & -0.1336 & & & & \\ \hline \end{tabular} \end{table} Table 1: An example of the \(\mathbf{\eta}\) value for the local \(D\)-optimal design. Table 2 reports the efficiency of \(D_{QQ}\) compared with \(D_{L}\), \(D_{G}\), and \(D_{C}\), respectively. The proposed QQ optimal design \(D_{QQ}\) is more efficient than all three alternative designs. It appears that the combined design \(D_{C}\) is the second best.
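The efficiency (33) itself is a one-liner; reusing the Q_criterion sketch from Section 3, a local comparison at a fixed \(\mathbf{\eta}\) might read as follows (F_qq and F_c are hypothetical model matrices of two candidate designs):

```python
import numpy as np

def efficiency(Q1, Q2, q):
    """Relative efficiency (33); values above 1 favor the first design."""
    return float(np.exp((Q1 - Q2) / q))

# eff = efficiency(Q_criterion(F_qq, [eta]), Q_criterion(F_c, [eta]), q=22)
```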
Next, we focus on the comparison of \(D_{QQ}\) with \(D_{C}\) under different \(\mathbf{\eta}\) values. We generate a maximin Latin hypercube design of \(B=500\) runs (R package _lhs_ by Carnell (2012)) for \(\mathbf{\eta}\) with the lower and upper bounds specified earlier. For each of the \(\mathbf{\eta}\) values, we construct a local QQ optimal design \(D_{QQ}\) and the combined design \(D_{C}\). Figure 2 shows the histogram of \(\text{eff}(D_{QQ},D_{C}|\mathbf{\eta})\) for the different \(\mathbf{\eta}\) values. All \(\text{eff}(D_{QQ},D_{C}|\mathbf{\eta})\) values are larger than 1, indicating that the local QQ optimal design outperforms the combined design. Based on Algorithm 2, we accumulate the frequencies of the locally optimal designs and obtain the global \(D\)-optimal designs shown in Figure 3. Denote by \(d_{QQ}\) and \(d_{C}\) the proposed global optimal design for the QQ model and the global optimal combined design, respectively. The bar plots show the normalized frequencies for all the candidate points, with the largest 22 frequencies colored blue. From Figure 3, for \(d_{QQ}\), the points in the middle have much smaller frequencies than the other points. These points in the middle correspond to the points with \(x_{5}=0\) in Table A1. Note that these points are only necessary for estimating the coefficient of \(x_{5,\text{quad}}\), whose variance is the smallest in the prior for the \(\mathbf{\beta}^{(i)}\)'s based on the effects hierarchy principle. In contrast, such a pattern is not observed for points with \(x_{4}=0\). The reason is that \(x_{4}\) is a categorical variable and \(x_{4}=-1,0,1\) are equally necessary to estimate the effects \(x_{4,1}\) and \(x_{4,2}\). For \(d_{C}\), the points with the largest 22 frequencies correspond to the 22-run \(D\)-optimal design for the linear regression model, which is independent of \(\mathbf{\eta}\) and remains the same every time. The points with \(x_{5}=1\) and \(-1\) only have slightly higher frequencies than the ones with \(x_{5}=0\), due to the way we specify the prior distribution of \(\mathbf{\eta}\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\rho\) & \(\text{eff}(D_{QQ},D_{L}|\mathbf{\eta})\) & \(\text{eff}(D_{QQ},D_{G}|\mathbf{\eta})\) & \(\text{eff}(D_{QQ},D_{C}|\mathbf{\eta})\) \\ \hline 0 & 1.08 & 1.11 & 1.05 \\ \hline 0.3 & 1.10 & 1.14 & 1.07 \\ \hline \end{tabular} \end{table} Table 2: Design efficiency between the proposed local design \(D_{QQ}\) and three alternative designs. Figure 2: Artificial example: the efficiency between each local \(D_{QQ}\) and \(D_{C}\) under different \(\mathbf{\eta}\) values: (a) \(\rho=0\) (b) \(\rho=0.3\). To compare the performances of the global designs, the design efficiency in (33) is used with \(Q(d|\mathbf{\eta})\) adapted as \[Q(d|\mathbf{\eta}) =\log\det\left(n\sum_{i=1}^{N}d(\mathbf{x}_{i})\pi(\mathbf{x}_{i},\mathbf{ \eta})(1-\pi(\mathbf{x}_{i},\mathbf{\eta}))\mathbf{f}(\mathbf{x}_{i})\mathbf{f}(\mathbf{x}_{i})^{\prime}\right)\] \[+\frac{1}{2}\log\det\left(n\sum_{i=1}^{N}d(\mathbf{x}_{i})\pi(\mathbf{x} _{i},\mathbf{\eta})\mathbf{f}(\mathbf{x}_{i})\mathbf{f}(\mathbf{x}_{i})^{\prime}+\rho\mathbf{R}\right)\] \[+\frac{1}{2}\log\det\left(n\sum_{i=1}^{N}d(\mathbf{x}_{i})(1-\pi(\mathbf{x}_{i},\mathbf{\eta}))\mathbf{f}(\mathbf{x}_{i})\mathbf{f}(\mathbf{x}_{i})^{\prime}+\rho\mathbf{R}\right)\] for a global optimal design \(d\) given the \(\mathbf{\eta}\) value.
Here \(d(\mathbf{x}_{i})\) is the probability frequency for candidate design point \(\mathbf{x}_{i}\) specified by the design \(d\), and \(\sum_{i=1}^{N}d(\mathbf{x}_{i})=1\). For the global optimal designs \(d_{QQ}\) and \(d_{C}\) obtained previously, Figure 4 shows the histograms of the \(\text{eff}(d_{QQ},d_{C}|\mathbf{\eta})\) values, where the \(\mathbf{\eta}\) values are generated from another 100-run maximin Latin hypercube design. It is clear that \(d_{QQ}\) is universally better than \(d_{C}\); thus the proposed design is more robust to different values of \(\mathbf{\eta}\). Figure 3: Artificial example: global QQ optimal designs for (a) \(\rho=0\) and (b) \(\rho=0.3\) and (c) the global combined design. ### Etching Experiment In the etching process described in Section 1, the etchant is circulated at a certain flow rate. The wafers are rotated and swung horizontally and vertically. Meanwhile, air is blown into the etchant at a certain pressure. There are five factors involved in the etching process: the wafer rotation speed (\(x_{1}\)), the pressure for blowing the bubbles (\(x_{2}\)), the horizontal and vertical frequencies for swinging the wafers (\(x_{3}\), \(x_{4}\)), and the flow rate of the circulating etchant (\(x_{5}\)). The engineers intend to run an experiment to study the relationship between these factors and the two QQ responses. Because the process is newly developed, historical data on similar processes are not directly applicable to this experiment. Based on some exploratory analysis, we set \(\rho=0.5\). Both domain knowledge and data have shown that the wafer appearance is worst when both the rotation speed (\(x_{1}\)) and the bubble pressure (\(x_{2}\)) are low. Accordingly, we set the prior of \(\mathbf{\eta}\) as follows. For the intercept, \(\eta_{0}\sim\text{Uniform}[0,6]\). The linear effects of rotation speed and bubble pressure follow \(\text{Uniform}[1,5]\), the other linear effects follow \(\text{Uniform}[-1,1]\), and the second order interactions and quadratic effects follow \(\text{Uniform}[-0.3,0.3]\). The experimental run size is set to be \(n=21\times 6=126\), i.e., 6 times the number of effects \(q=21\). We generate a maximin Latin hypercube design of \(B=500\) runs for the \(\mathbf{\eta}\) values. For each \(\mathbf{\eta}\) value, we obtain the local optimal designs \(D_{QQ}\) and \(D_{C}\). Here the local combined design \(D_{C}\) has \(2/3\) of the runs generated from the local \(D\)-optimal design for logistic regression and \(1/3\) of the runs from the \(D\)-optimal design for linear regression. The efficiency between each pair of local designs \(D_{QQ}\) and \(D_{C}\) is reported in Figure 6(a). We can see that almost every local design \(D_{QQ}\) is better than \(D_{C}\). Moreover, we obtain the global optimal designs \(d_{QQ}\) and \(d_{C}\) by accumulating the frequencies of the local designs. To compare \(d_{QQ}\) and \(d_{C}\), we generate another 100-run maximin Latin hypercube design for the \(\mathbf{\eta}\) values and compute the efficiencies between \(d_{QQ}\) and \(d_{C}\) under different \(\mathbf{\eta}\) values, which are shown in Figure 6(b). Clearly, \(d_{QQ}\) is universally better and more robust to \(\mathbf{\eta}\) than \(d_{C}\). Fractional factorial designs (Wu and Hamada, 2011) are another class of designs commonly used in practice. We compare the proposed design with a \(3^{5-2}\) minimum aberration (MA) fractional factorial design defined by the contrast subgroup \(I=ABD^{2}=AB^{2}CE^{2}=AC^{2}DE=BCDE^{2}\).
Each design point is replicated 5 times, so the overall run size is \(3^{5-2}\times 5=135\). Figure 6(c) shows the histogram of the efficiencies between \(d_{QQ}\) and the MA design; the proposed global optimal design is still superior. Figure 4: Artificial example: the efficiency between each global QQ optimal design and the combined design under different \(\mathbf{\eta}\) values: (a) \(\rho=0\) (b) \(\rho=0.3\). Figure 5: Etching experiment: (a) the global Bayesian QQ \(D\)-optimal design for \(\rho=0.5\) and (b) the global combined design. Figure 6: Etching experiment: histograms of the efficiencies (a) efficiencies between local designs \(D_{QQ}\) and \(D_{C}\) for different \(\mathbf{\eta}\)'s; (b) efficiencies between global designs \(d_{QQ}\) and \(d_{C}\); (c) efficiencies between global design \(d_{QQ}\) and the \(3^{5-2}\) MA fractional factorial design. ## 7 Discussion In this paper, we propose the Bayesian \(D\)-optimal design criterion for QQ responses. The adoption of the Bayesian approach allows us to consider both noninformative priors, which correspond to the frequentist approach, and informative priors when domain knowledge or historical data are available. A new point-exchange algorithm is developed for efficiently constructing the proposed designs. This algorithm can also be used to construct non-factorial designs when the candidate set is not a full factorial design. Moreover, the proposed method can be directly generalized to sequential design with QQ responses. In the following, we discuss some other scenarios for the proposed method that were not considered in detail previously. **Non-conjugate Prior for \(\boldsymbol{\eta}\)** Other than the conjugate prior \(p(\boldsymbol{\eta})\), we can also use a non-conjugate prior distribution \(\boldsymbol{\eta}\sim N(\boldsymbol{0},\tau_{0}^{2}\boldsymbol{R}_{0})\). In this situation, one can consider the normal approximation of the posterior distribution \(p(\boldsymbol{\eta}|\boldsymbol{z})\). Then the design criterion for the binary response becomes (Chaloner and Verdinelli, 1995) \(\mathbb{E}_{\boldsymbol{z},\boldsymbol{\eta}}\{\log(p(\boldsymbol{\eta}| \boldsymbol{z}))\}\approx\mathbb{E}_{\boldsymbol{\eta}}\left\{\log\det( \boldsymbol{F}^{\prime}\boldsymbol{W}_{0}\boldsymbol{F}+\rho_{0}\boldsymbol{R }_{0}^{-1})\right\}\). The overall design criterion \(Q(\boldsymbol{X})\) can be updated as \[Q(\boldsymbol{X})=\mathbb{E}_{\boldsymbol{\eta}}\left\{\log\det\left( \boldsymbol{F}^{\prime}\boldsymbol{W}_{0}\boldsymbol{F}+\rho_{0}\boldsymbol{R}_{0}^{-1}\right)\right\}+\frac{1}{2}\sum_{i=1}^{2}\mathbb{E}_{\boldsymbol{\eta}}\left\{\log\det \left(\boldsymbol{F}^{\prime}\boldsymbol{W}_{i}\boldsymbol{F}+\rho_{i} \boldsymbol{R}_{i}^{-1}\right)\right\},\] where \(\rho_{i}=\sigma^{2}/\tau_{i}^{2}\), and \(\tau_{i}^{2}\) and \(\boldsymbol{R}_{i}\) are the prior variance and correlation of \(\boldsymbol{\eta}\) (\(i=0\)), \(\boldsymbol{\beta}^{(1)}\) (\(i=1\)), and \(\boldsymbol{\beta}^{(2)}\) (\(i=2\)), respectively. The proposed design construction algorithm can still be applied with minor modifications. **Multiple QQ Responses** In this paper, we focus on optimal designs for one quantitative response and one qualitative response. The proposed method can also be generalized to accommodate QQ models with multiple quantitative responses \(Y_{1},\ldots,Y_{l}\) and binary responses \(Z_{1},\ldots,Z_{k}\). For example, a multi-level qualitative response can be transformed into a set of dummy binary responses. One idea is to generalize the QQ models in (1) and (2) for both \(l\geq 1\) and \(k\geq 1\).
For \(l\geq 1\) and \(k=1\), we generalize the QQ model by introducing the correlation matrix between \(Y_{1},\ldots,Y_{l}\) as in standard multi-response regression (Breiman and Friedman, 1997). Then the corresponding optimal design can be established by studying its likelihood function. For \(l\geq 1\) and \(k>1\) with multiple binary responses, considering all \(2^{k}\) conditional models for \((Y_{1},\ldots,Y_{l}|Z_{1}=1,\ldots,Z_{k}=1)\),..., \((Y_{1},\ldots,Y_{l}|Z_{1}=0,\ldots,Z_{k}=0)\) only works for a small \(k\). Moreover, the construction algorithm can be more complicated, as it needs to involve the multi-logit model (McCullagh, 1980) for modeling the multiple binary responses. When \(k\) is relatively large, we plan to pursue an alternative QQ model and develop its corresponding optimal design method as a future research topic. **Continuous Design** The point-exchange algorithm constructs exact discrete optimal designs, which are different from theoretical continuous optimal designs. As described in Sections 5 and 6, the way of generating the frequency-based continuous design from the local optimal designs is heuristic. The rigorous definition of the local continuous \(D\)-optimal design is the probability measure \(\xi\) on the design space \(\Omega\) that maximizes \[Q(\xi|\boldsymbol{\eta})=\log\det\left(\int\pi( \boldsymbol{x})(1-\pi(\boldsymbol{x}))\boldsymbol{f}(\boldsymbol{x})\boldsymbol{f}(\boldsymbol{x})^{ \prime}\,d\xi(\boldsymbol{x})\right)\] \[+\frac{1}{2}\log\det\left(\int\pi\left(\boldsymbol{x}\right)\boldsymbol{f}(\boldsymbol{x })\boldsymbol{f}(\boldsymbol{x})^{\prime}\,d\xi(\boldsymbol{x})+\rho\boldsymbol{R}_{1}^{-1}\right)+\frac{1}{2}\log\det\left(\int(1-\pi(\boldsymbol{x}))\boldsymbol{f}( \boldsymbol{x})\boldsymbol{f}(\boldsymbol{x})^{\prime}\,d\xi( \boldsymbol{x})+\rho\boldsymbol{R}_{2}^{-1}\right).\] Yang et al. (2013) developed a method to obtain the optimal \(\xi\) for nonlinear models. It will be interesting to extend their framework and develop a method to obtain the optimal \(\xi\) for QQ models. **Different QQ Models** The proposed design is not restricted to the logit model for the binary response. For example, if the probit model is used, the Bayesian \(D\)-optimal design criterion can be directly obtained by replacing the logit transformation with the probit transformation in both \(p(\mathbf{z}|\mathbf{\eta})\) and \(p(\mathbf{\eta})\). The design criterion can be derived similarly with minor modifications. The criterion formula remains the same with the following different diagonal matrices, \[\mathbf{W}_{0} =\mathrm{diag}\left\{\frac{\phi^{2}(\mathbf{f}(\mathbf{x}_{1})^{\prime} \mathbf{\eta})}{\Phi(\mathbf{f}(\mathbf{x}_{1})^{\prime}\mathbf{\eta})\left(1-\Phi(\mathbf{f}(\mathbf{ x}_{1})^{\prime}\mathbf{\eta})\right)},\ldots,\frac{\phi^{2}(\mathbf{f}(\mathbf{x}_{n})^{ \prime}\mathbf{\eta})}{\Phi(\mathbf{f}(\mathbf{x}_{n})^{\prime}\mathbf{\eta})\left(1-\Phi(\bm {f}(\mathbf{x}_{n})^{\prime}\mathbf{\eta})\right)}\right\},\] \[\mathbf{W}_{1} =\mathrm{diag}\left\{\Phi\left(\mathbf{f}(\mathbf{x}_{1})^{\prime}\mathbf{ \eta}\right),\ldots,\Phi\left(\mathbf{f}(\mathbf{x}_{n})^{\prime}\mathbf{\eta}\right) \right\},\quad\mathbf{W}_{2}=\mathbf{I}-\mathbf{W}_{1},\] where \(\Phi\) and \(\phi\) are the CDF and PDF of the standard normal distribution. It is worth pointing out that the design criterion in this work is based on the QQ model constructed from the joint model of \(Z\) in (1) and \(Y|Z\) in (2). Kang et al.
(2021) created a new QQ model based on \(Z|U\), where \(U\) is a latent continuous variable assumed to be correlated with the observed continuous response variable \(Y\). Besides the conditional model structures, other model structures such as mixed graphical models (Yang et al., 2014) can also be used, as long as the corresponding \(D\)-optimal criterion can be derived. **Acknowledgement** The authors were partly supported by the U.S. National Science Foundation for this research project. Dr. Lulu Kang was supported by grants CMMI-1435902, DMS-1916467, and DMS-2153029, Dr. Xinwei Deng by CMMI-1233571 and CMMI-1435996, and Dr. Ran Jin by CMMI-1435996.
2303.12926
Stability for the logarithmic Sobolev inequality
This paper is devoted to stability results for the Gaussian logarithmic Sobolev inequality, with explicit stability constants.
Giovanni Brigati, Jean Dolbeault, Nikita Simonov
2023-03-22T21:48:56Z
http://arxiv.org/abs/2303.12926v3
# Stability for the logarithmic Sobolev inequality ###### Abstract This paper is devoted to stability results for the Gaussian logarithmic Sobolev inequality. Our approach covers several cases involving the strongest possible norm with the optimal exponent, under constraints. Explicit constants are obtained. The strategy of proof relies on entropy methods and the Ornstein-Uhlenbeck flow. keywords: logarithmic Sobolev inequality, stability, log-concavity, heat flow, entropy, carre du champ. Msc: [2020] Primary: 39B62; Secondary: 47J20, 49J40, 35A23, 35K85. ## 1 Introduction and main results Let us consider the _Gaussian logarithmic Sobolev inequality_ \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}\geq\frac{1}{2}\int_{ \mathbb{R}^{d}}|u|^{2}\log\left(\frac{|u|^{2}}{\|u\|_{\mathrm{L}^{2}(\mathbb{R }^{d},d\gamma)}^{2}}\right)d\gamma\quad\forall\;u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma) \tag{1}\] where \(d\gamma=\gamma(x)\,dx\) is the normalized Gaussian probability measure with density \[\gamma(x)=(2\pi)^{-\frac{d}{2}}\,e^{-\frac{1}{2}\,|x|^{2}}\quad\forall\;x\in \mathbb{R}^{d}\,.\] In this paper we are interested in stability results, that is, in estimating the difference of the two terms in (1) from below by a distance to the set of optimal functions. According to [20, 4], equality in (1) is achieved by functions in the manifold \[\mathcal{M}:=\left\{\,w_{a,c}:(a,c)\in\mathbb{R}^{d}\times\mathbb{R}\right\}\] where \[w_{a,c}(x)=c\,e^{-a\cdot x}\quad\forall\;x\in\mathbb{R}^{d}\] and only by these functions. The ultimate goal of _stability estimates_ is to find a notion of distance \(\mathsf{d}\), an explicit constant \(\beta>0\) and an explicit exponent \(\alpha>0\), which may depend on \(\mathsf{d}\), such that \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{ \mathbb{R}^{d}}|u|^{2}\,\log\left(\frac{|u|^{2}}{\|u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}}\right)d\gamma\geq\beta\inf_{w\in\mathcal{M}}\mathsf{d}(u,w)^{\alpha} \tag{\(\mathcal{S}\)}\] for any given \(u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\). In this paper we consider the slightly simpler question of finding a specific \(w_{u}\in\mathcal{M}\) such that \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{ \mathbb{R}^{d}}|u|^{2}\,\log\left(\frac{|u|^{2}}{\|u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}}\right)d\gamma\geq\beta\,\mathsf{d}(u,w_{u})^{\alpha}\,, \tag{\(\star\)}\] which provides us with no more than an estimate for (\(\mathcal{S}\)): any estimate of \(\alpha\) and \(\beta\) for (\(\star\)) is also an estimate for (\(\mathcal{S}\)). In order to illustrate the difference between the two questions, let us consider the following elementary example. Assume that \(d=1\) and consider the functions \(u_{\varepsilon}(x)=1+\varepsilon\,x\) in the limit as \(\varepsilon\to 0\).
With \(\mathsf{d}(u,w)=\|u^{\prime}-w^{\prime}\|_{\mathrm{L}^{2}(\mathbb{R},d\gamma)}\), which is the strongest possible notion of distance that we can expect to control in (\(\star\)), elementary computations show that the _deficit_ of the logarithmic Sobolev inequality, _i.e._, the left hand-side in (\(\star\)), is \[\|\nabla u_{\varepsilon}\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u_{\varepsilon}|^{2}\,\log\left(\frac{|u_{\varepsilon}|^{2}}{\|u_{\varepsilon}\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}^{2}}\right)d\gamma=\frac{1}{2}\varepsilon^{4}+O\big{(}\varepsilon^{6}\big{)}\,,\] while, using the test function \(w_{a_{\varepsilon},c_{\varepsilon}}\in\mathcal{M}\) where \(a_{\varepsilon}=2\,\varepsilon\) and \(c_{\varepsilon}=e^{-a_{\varepsilon}^{2}/4}\), we obtain \[\mathsf{d}(u_{\varepsilon},1)^{2}=\|u_{\varepsilon}^{\prime}\|_{\mathrm{L}^{2}(\mathbb{R},d\gamma)}^{2}=\varepsilon^{2}\quad\text{and}\quad\inf_{w\in\mathcal{M}}\mathsf{d}(u_{\varepsilon},w)^{2}\leq\mathsf{d}\left(u_{\varepsilon},w_{a_{\varepsilon},c_{\varepsilon}}\right)^{2}=\frac{1}{2}\,\varepsilon^{4}+O\big{(}\varepsilon^{6}\big{)}\,.\] In practice we will consider only the case \[w_{u}=1\] in (\(\star\)) and the above example shows that the best we can hope for without additional restriction is \(\alpha\geq 4\). Similar examples in higher dimensions can be obtained by considering for an arbitrary given \(\nu\in\mathbb{S}^{d-1}\) the functions \(u_{\varepsilon}(x)=1+\varepsilon\,x\cdot\nu\) in the limit as \(\varepsilon\to 0\). This is not a surprise in view of [33, 30, 39], and also of the detailed Taylor expansions of [36, 17, 18]. Still with \(w_{u}=1\), we can expect to have \(\alpha=2\) in (\(\star\)) _under additional conditions_, including for \(\mathsf{d}(u,w)=\|\nabla u-\nabla w\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\), while it is otherwise banned as shown for instance by [39, Theorem 1.2 (2)], or simply from considering the above example. Before entering the details, let us mention a recent stability result for (\(\mathcal{S}\)) with \(\alpha=2\) involving a constructive although very delicate expression for \(\beta>0\) and \(\mathsf{d}(u,w)=\|u-w\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\) that appeared in [27]. Here we aim at stronger estimates under additional constraints, with \(w_{u}=1\), which is a different point of view. Let us start with a first stability result. **Proposition 1**.: _For all \(u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\) such that \(\left\|u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\) and \(\left\|x\,u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}^{2}\leq d\), we have_ \[\left\|\nabla u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\geq\frac{1}{2\,d}\,\bigg{(}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\bigg{)}^{2} \tag{2}\] _and, with \(\psi(s):=s-\frac{d}{4}\log\big{(}1+\frac{4}{d}\,s\big{)}\), we also have the stronger estimate_ \[\left\|\nabla u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\geq\psi\left(\left\|\nabla u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}\right). \tag{3}\] Similar results are already known in the literature (see for instance [14, 33, 30, 39]) and we claim no originality for the results. Also see references to earlier proofs at the end of the introduction. Our method is based on the _carre du champ_ method.
Even if some ideas go back to [2], it is elementary, new as far as we know, and of some use for our other results. Coming back to (\(\star\)), we may notice that there is no loss of generality in imposing the condition \(\left\|u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\), as we can always replace \(u\) by \(u/\left\|u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}\). Because of the Csiszar-Kullback-Pinsker inequality \[\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\geq\frac{1}{4}\,\big{\|}\,|u|^{2}-1\big{\|}_{\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)}^{2} \tag{4}\] and \(\left|\left|u\right|-1\right|=\left|\left|u\right|^{2}-1\right|\big{/}\big{|}\left|u\right|+1\big{|}\leq\left|\left|u\right|^{2}-1\right|\), we find that (2) implies a stability result of (\(\star\)) type with \(\mathsf{d}(u,w)=\left\|u-w\right\|_{\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)}\) for nonnegative functions \(u\), \(\alpha=4\), and \(\beta=1/(32\,d)\). For functions far away from the optimal functions, say such that \(\left\|\nabla u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\geq A\) under the conditions of Proposition 1, Inequality (3) provides us with an even stronger stability result of (\(\star\)) type with \(\alpha=2\) and \(\mathsf{d}(u,w)=\left\|\nabla u-\nabla w\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\), but with a positive constant \(\beta\) which depends on \(A>0\). Again, notice that (\(\star\)) with such a distance cannot hold without constraints. Next we aim at explicit results with \(\alpha=2\), under other constraints. Let \[\mathcal{C}_{\star}=1+\frac{1}{1728}\approx 1.0005787\,.\] **Theorem 2**.: _For all \(u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\) such that \(u^{2}\,\gamma\) is log-concave and such that_ \[\int_{\mathbb{R}^{d}}\left(1,x\right)\left|u\right|^{2}\,d\gamma=(1,0)\quad\text{and}\quad\int_{\mathbb{R}^{d}}|x|^{2}\left|u\right|^{2}\,d\gamma\leq d\,, \tag{5}\] _we have_ \[\left\|\nabla u\right\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{\mathcal{C}_{\star}}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\geq 0\,. \tag{6}\] The condition \(\int_{\mathbb{R}^{d}}|x|^{2}\left|u\right|^{2}\,d\gamma\leq d\) in (5) is a simplifying assumption. A result like (6) also holds if \(\int_{\mathbb{R}^{d}}|x|^{2}\left|u\right|^{2}\,d\gamma>d\), but with a constant that differs from \(\mathcal{C}_{\star}\) and actually depends on \(\int_{\mathbb{R}^{d}}|x|^{2}\left|u\right|^{2}d\gamma\). We refer to Section 3.5: see Proposition 7 for an extension of Theorem 2, and also for further comments on the extension of Proposition 1. The constant \(\mathcal{C}_{\star}\) in (6) relies on an estimate of [12].
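For the reader's convenience, the elementary chain behind the (\(\star\))-type consequence of (2) and the Csiszar-Kullback-Pinsker inequality (4) noted above can be made explicit (this rewriting is ours, not an additional claim): for nonnegative \(u\) with \(\|u\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\), \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\overset{(2)}{\geq}\frac{1}{2\,d}\left(\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\right)^{2}\overset{(4)}{\geq}\frac{1}{32\,d}\,\big{\|}\,|u|^{2}-1\big{\|}_{\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)}^{4}\geq\frac{1}{32\,d}\,\|u-1\|_{\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)}^{4}\,,\] which is exactly (\(\star\)) with \(w_{u}=1\), \(\alpha=4\) and \(\beta=1/(32\,d)\).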
Inequality (6) with improved constant \(\mathcal{C}_{\star}>1\) compared to (1) can be recast in the form of a stability inequality of type (\(\star\)) around the normalised Gaussian as \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u|^{2}\log|u|^{2}\,d\gamma\geq\frac{1}{2}\left(\mathcal{C}_{\star}-1\right)\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\] for all functions \(u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\) such that \(\|u\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\), which covers the case \(\alpha=2\), \(\beta=(\mathcal{C}_{\star}-1)/8\) and \(\mathsf{d}(u,w)=\|u-w\|_{\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)}\) in (\(\star\)) for nonnegative functions by (4), or even in the stronger \(\dot{\mathrm{H}}^{1}(\mathbb{R}^{d},d\gamma)\) semi-norm, as \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}-\frac{1}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\geq\frac{\mathcal{C}_{\star}-1}{\mathcal{C}_{\star}}\,\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}\] for all functions \(u\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\) such that \(\|u\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\), which corresponds to \(\alpha=2\), \(\beta=(\mathcal{C}_{\star}-1)/\mathcal{C}_{\star}\) and \(\mathsf{d}(u,w)=\|\nabla(u-w)\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\) in (\(\star\)). By the Gaussian Poincare inequality, notice that the case of (\(\star\)) with \(\alpha=2\), \(\beta=(\mathcal{C}_{\star}-1)/\mathcal{C}_{\star}\) and the standard distance \(\mathsf{d}(u,w)=\|u-w\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}\) is also covered. Log-concavity might appear as a rather restrictive assumption, but this is useful because a function which is compactly supported at time \(t=0\) evolves through the diffusion flow into a logarithmically concave function after some finite time that can be estimated by the heat flow estimates of [48]. This is enough to produce a stability result with an explicit constant. Compact support is in fact too restrictive a condition and we have the following result. **Theorem 3**.: _Let \(d\geq 1\). For any \(\varepsilon>0\), there is some explicit \(\mathcal{C}>1\) depending only on \(\varepsilon\) such that, for any \(u\in\mathrm{H}^{1}\big{(}\mathbb{R}^{d},d\gamma\big{)}\) satisfying (5) and_ \[\int_{\mathbb{R}^{d}}|u|^{2}\,e^{\varepsilon\,|x|^{2}}\,d\gamma<\infty\,, \tag{7}\] _we have_ \[\|\nabla u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}\geq\frac{\mathcal{C}}{2}\int_{\mathbb{R}^{d}}|u|^{2}\,\log|u|^{2}\,d\gamma\,. \tag{8}\] _Additionally, if \(u\) is compactly supported in a ball of radius \(R>0\), then (8) holds with_ \[\mathcal{C}=1+\frac{\mathcal{C}_{\star}-1}{1+\mathcal{C}_{\star}\,R^{2}}\,.\] This expression of the constant \(\mathcal{C}\) in (8) is given in the proof, in Section 3.4. The simpler estimate in terms of \(R\) relies on Theorem 2. Let us conclude this introduction with a review of the literature. The _logarithmic Sobolev inequality_ historically appeared in [54, 11], in a form that was later rediscovered as an equivalent scale-invariant form of the Euclidean version of the inequality in [56]. We refer to [37] for the Gaussian version (1) of the inequality, and also to [34] for an equivalent result. The optimality case in the inequality has been characterized in [20] but can also be deduced from [4]. Also see [55] for a short introductory review with an emphasis on _information theoretical_ aspects.
The logarithmic Sobolev inequality can be viewed as a limit case of a family of Gagliardo-Nirenberg-Sobolev (GNS) inequalities, as observed in [25], in the Euclidean setting, or as a large dimension limit of the Sobolev inequality according to [9]. See [27, 18] for recent developments and further references. We refer to [1, 38, 52, 5] for reference books on the topic. In a classical result on stability in functional inequalities, Bianchi and Egnell proved in [10] that the deficit in the Sobolev inequality measures the \(\dot{\mathrm{H}}^{1}(\mathbb{R}^{d},dx)\) distance to the manifold of the Aubin-Talenti functions. The estimate has been made constructive in [27] where a new \(\mathrm{L}^{2}(\mathbb{R}^{d},dx)\) stability result for the logarithmic Sobolev inequality is also established (also see [39] for further results in strong norms). Still in the Euclidean setting, a first stability result in strong norms for the logarithmic Sobolev inequality appears in [33], where the authors give deficit estimates in various distances for functions satisfying a Poincare inequality. Under the condition \(\|x\,u\|_{\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)}=\sqrt{d}\), a stability result measured by an entropy is given in [30]. For sequential stability results in strong norms, we refer to [40] when assuming a bound on \(u\) in \(\mathrm{L}^{4}(\mathbb{R}^{d},d\gamma)\) and to [39] when assuming a bound on \(|x|^{2}\,u\) in \(\mathrm{L}^{2}(\mathbb{R}^{d},d\gamma)\). Stability according to other notions of distance has been studied in [47, 46, 35]. To our knowledge, the first result of stability for the logarithmic Sobolev inequality is a reinforcement of the inequality due to E. Carlen in [20], where he introduces an additional term involving the Wiener transform. Stability in the logarithmic Sobolev inequality is related to the deficit in Gaussian isoperimetry, and we refer to [14] for an introduction to early results in this direction, to [7] for a sharp, dimension-free quantitative Gaussian isoperimetric inequality, and to [31] for recent results and further references. The results of Proposition 1 are known from [14, Theorem 1.1], where it is deduced from the HWI inequality due to F. Otto and C. Villani [51]. Such estimates have even been refined in [31]. There are several other proofs. In [33], M. Fathi, E. Indrei and M. Ledoux use a Mehler formula for the Ornstein-Uhlenbeck semigroup and Poincare inequalities. The proof in [30] is based on simple scaling properties of the Euclidean form of the logarithmic Sobolev inequality, which also apply to Gagliardo-Nirenberg inequalities. Various stability results have been proved in Wasserstein's distance: we refer to [41, 14, 33, 40, 43, 15, 31, 39]. A key argument for Theorem 2 is the fact that the heat flow preserves log-concavity according, _e.g._, to [53], which is a pretty natural property in this framework: see for instance [24]. In this paper, we carefully distinguish stability results of type (\(\mathcal{S}\)), where stability is measured w.r.t. \(\mathcal{M}\), and of type (\(\star\)), where the distance to a given function is estimated. Even if this function is normalized and centered, this is not enough, as shown in [39]. Many counter-examples to stability are known, involving Wasserstein's distance for instance in [41, 14, 33, 40, 43], weaker distances like \(p\)-Wasserstein, or stronger norms like \(\mathrm{L}^{p}\) or \(\mathrm{H}^{1}\): see for instance [40, 39].
The main counter-examples which we might try to apply to our setting are [43, Theorem 1.3] and [31, Theorem 4] but, as already noted in [40], they are based on the fact that the second moment diverges along a sequence of test functions, which is forbidden in our assumptions. This paper is organized as follows. Section 2 is devoted to the standard _carre du champ_ method and a proof of Proposition 1. Theorem 2 is proved in Section 3.3, under a log-concavity assumption. Using properties of the heat flow, the method is extended to the larger class of functions of Theorem 3 in Section 3.4. ## 2 Entropy methods and entropy - entropy production stability estimates This section is devoted to the proof of Proposition 1. ### Definitions and preliminary results Consider the _Ornstein-Uhlenbeck equation_ on \(\mathbb{R}^{d}\) \[\frac{\partial h}{\partial t}=\mathcal{L}h\,,\quad h(\,t=0,\cdot\,)=h_{0}\,,\quad(\,t,x)\in\mathbb{R}^{+}\times\mathbb{R}^{d}\,, \tag{9}\] where \(\mathcal{L}h:=\Delta h-x\cdot\nabla h\) denotes the _Ornstein-Uhlenbeck_ operator. Let us recall some classical results. If \(h_{0}\in\mathrm{L}^{1}(\mathbb{R}^{d},d\gamma)\) is nonnegative, then there exists a unique nonnegative weak solution to (9) (see for instance [32]). The two key properties of the Ornstein-Uhlenbeck operator are \[\int_{\mathbb{R}^{d}}v_{1}\left(\mathcal{L}v_{2}\right)d\gamma=-\int_{\mathbb{R}^{d}}\nabla\,v_{1}\cdot\nabla\,v_{2}\,d\gamma\quad\text{and}\quad[\nabla,\mathcal{L}]\,v=-\,\nabla\,v\,.\] As a consequence, we obtain the two identities \[\int_{\mathbb{R}^{d}}\left(\mathcal{L}v\right)^{2}d\gamma=\int_{\mathbb{R}^{d}}\|\mathrm{Hess}\,v\|^{2}\,d\gamma+\int_{\mathbb{R}^{d}}|\nabla\,v|^{2}\,d\gamma \tag{10}\] and \[\int_{\mathbb{R}^{d}}\mathcal{L}v\,\frac{\left|\nabla\,v\right|^{2}}{v}\,d\gamma=-2\int_{\mathbb{R}^{d}}\mathrm{Hess}\,v\,\colon\frac{\nabla\,v\otimes\nabla\,v}{v}\,d\gamma+\int_{\mathbb{R}^{d}}\frac{\left|\nabla\,v\right|^{4}}{v^{2}}\,d\gamma\,, \tag{11}\] where \(\mathrm{Hess}\,v=\nabla\otimes\nabla\,v\) is the _Hessian matrix_ of \(v\). Here we use the following notations. If \(a\) and \(b\) take values in \(\mathbb{R}^{d}\), \(a\otimes b\) denotes the matrix \((\,a_{i}\,b_{j}\,)_{1\leq i,j\leq d}\). With matrix-valued \(m\) and \(n\), we define \(m:n=\sum_{i,j=1}^{d}m_{i,j}\,n_{i,j}\) and \(\|m\|^{2}=m:m\). If \(h\) is a nonnegative solution of (9), notice that \(v=\sqrt{h}\) solves \[\frac{\partial v}{\partial t}=\mathcal{L}v+\frac{\left|\nabla\,v\right|^{2}}{v}\,. \tag{12}\] Let us fix \(\|v\|_{\mathrm{L}^{2}(\mathbb{R}^{d})}=1\); then the _entropy_ and the _Fisher information_, respectively defined by \[\mathcal{E}[\,v\,]:=\int_{\mathbb{R}^{d}}\left|v\right|^{2}\log\left|v\right|^{2}d\gamma\quad\text{and}\quad\mathcal{I}[\,v\,]:=\int_{\mathbb{R}^{d}}\left|\nabla\,v\right|^{2}d\gamma\,,\] evolve along the flow according to \[\frac{d}{dt}\mathcal{E}[\,v\,]=-\,4\,\mathcal{I}[\,v\,]\quad\text{and}\quad\frac{d}{dt}\mathcal{I}[\,v\,]=-\,2\int_{\mathbb{R}^{d}}\left((\mathcal{L}v\,)^{2}+\mathcal{L}v\,\frac{\left|\nabla\,v\right|^{2}}{v}\,\right)d\gamma\] if \(v\) solves (12). Using (10) and (11), we obtain the classical expression of the _carre du champ_ method \[\frac{d}{dt}\mathcal{I}[\,v\,]+2\,\mathcal{I}[\,v\,]=-\,2\int_{\mathbb{R}^{d}}\left\|\mathrm{Hess}\,v-\frac{\nabla\,v\otimes\nabla\,v}{v}\,\right\|^{2}d\gamma \tag{13}\] as for instance in [3; 28; 5].
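These evolution laws can be illustrated on an explicit solution of (9). The following minimal Python sketch (ours, not part of the paper; the parameter values are arbitrary) checks numerically, in dimension \(d=1\), that the family \(h(t,x)=e^{a\,e^{-t}x-a^{2}e^{-2t}/2}\), which evolves the square of an optimizer profile \(w_{a,c}\), satisfies \(\mathcal{E}=a^{2}e^{-2t}/2\) and \(\mathcal{I}=\mathcal{E}/2\), so that \(\frac{d}{dt}\mathcal{E}=-4\,\mathcal{I}\) and equality holds in (1).

```python
# Numerical sanity check (ours, not from the paper) of dE/dt = -4 I along the
# Ornstein-Uhlenbeck flow (9), on the explicit solution
# h(t,x) = exp(a e^{-t} x - a^2 e^{-2t} / 2) in dimension d = 1.
import numpy as np
from scipy.integrate import quad

a, t = 0.7, 0.3                          # arbitrary test values
b = a * np.exp(-t)                       # effective tilt at time t
gamma = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
h = lambda x: np.exp(b * x - b**2 / 2)   # normalized: int h dgamma = 1

# Entropy E = int h log h dgamma; Fisher information I of v = sqrt(h),
# where |v'|^2 = (b/2)^2 h.
E = quad(lambda x: h(x) * np.log(h(x)) * gamma(x), -np.inf, np.inf)[0]
I = quad(lambda x: (b / 2)**2 * h(x) * gamma(x), -np.inf, np.inf)[0]

print(E, b**2 / 2)   # E = a^2 e^{-2t} / 2, hence dE/dt = -a^2 e^{-2t} = -4 I
print(I, E / 2)      # equality case of the logarithmic Sobolev inequality (1)
```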
By writing that \[\frac{d}{dt}\left(\mathcal{I}[v]-\frac{1}{2}\,\mathcal{E}[v]\right)\leq 0\quad\text{and}\quad\lim_{t\to+\infty}\left(\mathcal{I}[v(t,\cdot)]-\frac{1}{2}\,\mathcal{E}[v(t,\cdot)]\right)=0\,,\] we recover the standard proof of the _entropy - entropy production_ inequality \[\mathcal{I}[v]-\frac{1}{2}\,\mathcal{E}[v]\geq 0\,, \tag{14}\] _i.e._, of (1) by the method of [4]. Several of the above expressions can be rephrased in terms of the _pressure variable_ \[P:=-\log h=-2\log v\] using the following elementary identities \[\nabla\,v=-\frac{1}{2}\,e^{-P/2}\,\nabla P\,,\quad\frac{\nabla\,v\otimes\nabla\,v}{v}=\frac{1}{4}\,e^{-P/2}\,\nabla P\otimes\nabla P\,,\] \[\text{Hess}\,v=-\frac{1}{2}\,e^{-P/2}\,\text{Hess}\,P+\frac{1}{4}\,e^{-P/2}\,\nabla P\otimes\nabla P\,,\] so that, by taking into account \(v\,\nabla P=-2\,\nabla\,v\) and \(h=v^{2}\), we have \[\mathcal{I}[v]=\frac{1}{4}\int_{\mathbb{R}^{d}}|\nabla P|^{2}\,h\,d\gamma\quad\text{and}\quad\int_{\mathbb{R}^{d}}\left\|\text{Hess}\,v-\frac{\nabla\,v\otimes\nabla\,v}{v}\right\|^{2}d\gamma=\frac{1}{4}\int_{\mathbb{R}^{d}}\left\|\text{Hess}\,P\right\|^{2}h\,d\gamma\,.\] ### Improvements under moment constraints In standard computations based on the _carre du champ_ method, one usually drops the right-hand side in (13), which results in the standard exponential decay of \(\mathcal{I}[v(t,\cdot)]\) if \(v\) solves (12) and, after integration on \(t\in\mathbb{R}^{+}\), proves (1). Keeping track of the right-hand side in (13) provides us with improvements as shown in [2; 26; 29] in various interpolation inequalities but generically fails in the case of the logarithmic Sobolev inequality. We remedy this issue by introducing moment constraints. This is not a very difficult result but, as far as we know, it is new in the framework of the _carre du champ_ method. **Lemma 4**.: _With the notations of Section 2.1, if \(v\in\mathrm{H}^{2}(\mathbb{R}^{d},d\gamma)\) is a positive function such that \(\int_{\mathbb{R}^{d}}|x|^{2}\,|v|^{2}\,d\gamma\leq d\), then_ \[4\,\mathcal{I}[v]\leq\int_{\mathbb{R}^{d}}\left(\Delta P\right)h\,d\gamma\leq\sqrt{d\int_{\mathbb{R}^{d}}\left\|\text{Hess}\,P\right\|^{2}h\,d\gamma}\,.\] _Proof._ Using \(h\,\nabla P=-\,\nabla\,h\), we obtain \[4\,\mathcal{I}[\,v]=\int_{\mathbb{R}^{d}}|\nabla P|^{2}\,h\,d\gamma=-\int_{\mathbb{R}^{d}}\nabla P\cdot\nabla h\,d\gamma=\int_{\mathbb{R}^{d}}h\left(\mathcal{L}P\right)d\gamma\,.\] After recalling that \(\mathcal{L}P=\Delta P-x\cdot\nabla P\), using an integration by parts we deduce that \[-\int_{\mathbb{R}^{d}}h\,x\cdot\nabla P\,d\gamma=\int_{\mathbb{R}^{d}}x\cdot\nabla h\,d\gamma=\int_{\mathbb{R}^{d}}h\left(|x|^{2}-d\right)d\gamma=\int_{\mathbb{R}^{d}}|v|^{2}\left(|x|^{2}-d\right)d\gamma\leq 0\] which proves the first inequality. The second inequality follows from a Cauchy-Schwarz inequality and the arithmetic-geometric inequality \[\left(\Delta P\right)^{2}\leq d\left\|\operatorname{Hess}P\right\|^{2}.\] Proof of Proposition 1.: Let \(h=v^{2}\) be the solution of (9) with initial datum \(h_{0}=u^{2}\).
Since \(x\mapsto\left(|x|^{2}-d\right)\) is an eigenfunction of \(\mathcal{L}\) with corresponding eigenvalue \(-2\) and \(\mathcal{L}\) is self-adjoint on \(\operatorname{L}^{2}(\mathbb{R}^{d},d\gamma)\), we have \[\frac{d}{dt}\int_{\mathbb{R}^{d}}\left(|x|^{2}-d\right)h\,d\gamma=\int_{\mathbb{R}^{d}}\left(|x|^{2}-d\right)\left(\mathcal{L}h\right)d\gamma\\ =\int_{\mathbb{R}^{d}}h\,\mathcal{L}\left(|x|^{2}-d\right)d\gamma=-2\int_{\mathbb{R}^{d}}\left(|x|^{2}-d\right)h\,d\gamma\,. \tag{15}\] The sign of \(t\mapsto\int_{\mathbb{R}^{d}}\left(|x|^{2}-d\right)h(t,x)\,d\gamma\) is conserved and in particular we have that \(\int_{\mathbb{R}^{d}}|x|^{2}\,|v|^{2}\,d\gamma\leq d\) for any \(t\geq 0\). For any \(i=1,2,\ldots,d\), we notice that \(x\mapsto x_{i}\) is also an eigenfunction of \(\mathcal{L}\) with corresponding eigenvalue \(-1\) so that \[\frac{d}{dt}\int_{\mathbb{R}^{d}}x\,h\,d\gamma=-\int_{\mathbb{R}^{d}}x\,h\,d\gamma\] and, as a consequence, \(\int_{\mathbb{R}^{d}}x\,h(t,\cdot)\,d\gamma=0\) for all \(t\geq 0\) because \(\int_{\mathbb{R}^{d}}x\,h_{0}\,d\gamma=0\). For smooth enough solutions, we deduce from Lemma 4, (13) and (14) that \[\frac{d}{dt}\mathcal{I}\left[v\right]+2\mathcal{I}\left[v\right]\leq-\frac{8}{d}\mathcal{I}^{2}\left[v\right]\leq\frac{1}{2\,d}\,\frac{d}{dt}\left(\mathcal{E}\left[v\right]\right)^{2}\] and obtain by considering the limit as \(t\to+\infty\) that \[\mathcal{I}\left[v\right]\geq\frac{1}{2}\,\mathcal{E}\left[v\right]+\frac{1}{2\,d}\left(\mathcal{E}\left[v\right]\right)^{2}\,.\] This provides us with (2). In the general case, one can get rid of the \(\operatorname{H}^{2}(\mathbb{R}^{d},d\gamma)\) regularity of Lemma 4 by a standard approximation scheme, which is classical and will not be detailed here. As in [17], a better estimate is achieved as follows. Let \[\phi(s):=\frac{d}{4}\left(e^{\frac{2}{d}s}-1\right)\quad\forall\,s\geq 0\,.\] Using \(\frac{d}{dt}\mathcal{E}[v]=-4\mathcal{I}[v]\), we notice that \[\frac{d}{dt}\Big{(}\mathcal{I}[v]-\phi\big{(}\mathcal{E}[v]\big{)}\Big{)}\leq-\frac{8}{d}\,\mathcal{I}[v]\,\Big{(}\mathcal{I}[v]-\phi\big{(}\mathcal{E}[v]\big{)}\Big{)}\,.\] Since \(\lim_{t\to+\infty}\mathcal{I}[v(t,\cdot)]=0\) as can be deduced from a Gronwall estimate relying on \(\frac{d}{dt}\mathcal{I}[v]\leq-2\mathcal{I}[v]\) and \(\lim_{t\to+\infty}\mathcal{E}[v(t,\cdot)]=0\) as a consequence of (1), one knows that \[\lim_{t\to+\infty}\Big{(}\mathcal{I}[v(t,\cdot)]-\phi\big{(}\mathcal{E}[v(t,\cdot)]\big{)}\Big{)}=0\,.\] Moreover, Gronwall estimates show that \(\mathcal{I}[v(t,\cdot)]-\phi\big{(}\mathcal{E}[v(t,\cdot)]\big{)}\) cannot change sign and an asymptotic expansion as \(t\to+\infty\) as in [17, Appendix B.4] is enough to obtain that \(\mathcal{I}[v(t,\cdot)]-\phi\big{(}\mathcal{E}[v(t,\cdot)]\big{)}\) takes nonnegative values for \(t>0\) large enough. Altogether, we conclude that \[\mathcal{I}[v(t,\cdot)]-\phi\Big{(}\mathcal{E}[v(t,\cdot)]\Big{)}\geq 0\] for any \(t\geq 0\) and, as a particular case, at \(t=0\) for \(v(0,\cdot)=u\).
The function \(\phi\) is convex increasing and, as such, invertible, so that we can also write \[\phi^{-1}\big{(}\mathcal{I}[u]\big{)}-\mathcal{E}[u]\geq 0\,.\] This completes the proof of (3) with the convex monotone increasing function \[\psi(s):=s-\frac{1}{2}\phi^{-1}(s)\,.\] ## 3 Stability results ### Log-concave measures and Poincare inequality According to [6], given a Borel probability measure \(\mu\) on \(\mathbb{R}^{d}\), its _isoperimetric constant_ is defined as \[h(\mu):=\inf_{A}\frac{\mathrm{P}_{\mu}(A)}{\min\big{\{}\mu(A),1-\mu(A)\big{\}}}\] where the infimum is taken over arbitrary Borel subsets \(A\) of \(\mathbb{R}^{d}\), with \(\mu\)-perimeter or surface measure \(\mathrm{P}_{\mu}(A):=\lim_{\varepsilon\to 0_{+}}\big{(}\mu(A_{\varepsilon})-\mu(A)\big{)}/\varepsilon\) and \(A_{\varepsilon}:=\{x\in\mathbb{R}^{d}:|x-a|<\varepsilon\text{ for some }a\in A\}\). Here and in what follows, we shall say that a measure \(\mu\) with density \(e^{-\psi}\) with respect to Lebesgue's measure is a _log-concave probability measure_ if \(\psi\) is a convex function, and denote by \(\lambda_{1}(\mu)\) the first positive eigenvalue of \(-\mathcal{L}_{\psi}\) where \(\mathcal{L}_{\psi}\) is the _Ornstein-Uhlenbeck operator_\(\mathcal{L}_{\psi}:=\Delta-\nabla\psi\cdot\nabla\). In that case, we learn from [45, Ineq. (5.8)] that \[\frac{1}{4}\,h(\mu)^{2}\leq\lambda_{1}(\mu)\leq 36\,h(\mu)^{2}\] where the lower bound is J. Cheeger's inequality that goes back to [22] for Riemannian manifolds and also to earlier works by V.G. Maz'ya [49, 50]. This bound was later improved in [19, 44]. The characterization of \(h(\mu)\) has been actively studied, but it is out of the scope of the present paper. We learn from [12, Theorem 1.2] and [12, Ineq. (3.4)] that \[h(\mu)\geq\frac{1}{6\sqrt{3\int_{\mathbb{R}^{d}}|x-x_{\mu}|^{2}\,d\mu}}\quad\text{where}\quad x_{\mu}=\int_{\mathbb{R}^{d}}x\,d\mu\] for any log-concave probability measure \(\mu\). This estimate is closely related to the results by R. Kannan, L. Lovasz and M. Simonovits in [42] and their conjecture, which again lies out of the scope of the present paper (see for instance [13] for a recent work on the topic). Altogether, if \(\mu\) is a log-concave probability measure with \(d\mu=e^{-\psi}\,dx\) such that \(\int_{\mathbb{R}^{d}}|x|^{2}\,d\mu\leq d\), then we have the _Poincare inequality_ \[\int_{\mathbb{R}^{d}}|\nabla f|^{2}\,d\mu\geq\frac{1}{432}\int_{\mathbb{R}^{d}}|f|^{2}\,d\mu\quad\forall\,f\in\mathrm{H}^{1}(\mathbb{R}^{d},d\mu)\text{ such that }\int_{\mathbb{R}^{d}}f\,d\mu=0\,. \tag{16}\] We refer to [21] and references therein for further estimates on \(\lambda_{1}(\mu)\). ### Time evolution, log-concave densities and Poincare inequality **Lemma 5**.: _Let us consider a solution \(h\) of (9) with initial datum \(h_{0}=v^{2}\) and assume that \(\mu_{0}:=h_{0}\gamma\) is log-concave. Then \(\mu_{t}:=h(t,\cdot)\gamma\) is log-concave for all \(t\geq 0\)._ Proof.: The function \(g:=h\gamma\) solves the Fokker-Planck equation \[\frac{\partial g}{\partial t}=\Delta g+\nabla\cdot(x\,g)\,.\] The function \(f\) such that \[f(t,x):=(1+2\,t)^{-\frac{d}{2}}\,g\left(\frac{1}{2}\log(1+2\,t),\frac{x}{\sqrt{1+2\,t}}\right)\quad\forall\,(t,x)\in\mathbb{R}^{+}\times\mathbb{R}^{d}\] solves the heat equation and can be represented using the heat kernel. According for instance to [53, 8], log-concavity is preserved under convolution, which completes the proof.
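The change of variables used in this proof can be checked symbolically on Gaussian data. The sketch below is ours, not from the paper: the time-dependent prefactor \((1+2t)^{-d/2}\) is the normalization discussed above (without it the expression does not vanish), and we take \(g(s,\cdot)\) to be the Fokker-Planck evolution of a Gaussian datum \(\mathcal{N}(m,\sigma_{0}^{2})\) in dimension \(d=1\), which has mean \(m\,e^{-s}\) and variance \(1+(\sigma_{0}^{2}-1)\,e^{-2s}\).

```python
# Symbolic check (ours, not from the paper) of the change of variables in the
# proof of Lemma 5, in dimension d = 1: the rescaled function f solves the
# heat equation df/dt = d^2 f/dx^2.
import sympy as sp

t = sp.symbols('t', positive=True)
x, m = sp.symbols('x m', real=True)
s0 = sp.symbols('sigma0', positive=True)

s = sp.log(sp.sqrt(1 + 2*t))                   # s = (1/2) log(1 + 2t)
y = x / sp.sqrt(1 + 2*t)
mean = m * sp.exp(-s)                          # Fokker-Planck evolution of
var = 1 + (s0**2 - 1) * sp.exp(-2*s)           # a Gaussian datum N(m, sigma0^2)
g = sp.exp(-(y - mean)**2 / (2*var)) / sp.sqrt(2*sp.pi*var)
f = g / sp.sqrt(1 + 2*t)                       # f(t,x) = (1+2t)^{-1/2} g(s, y)

print(sp.simplify(sp.diff(f, t) - sp.diff(f, x, 2)))  # prints 0
```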
The log-concavity property becomes true under the action of the flow of (9) after some delay \(t_{\star}\) for large classes of initial data. With the notation of Lemma 5, for any \(R>0\), we read from [48, Theorem 5.1] by K. Lee and J.-L. Vazquez that \(\mu_{t}\) is log-concave for any \[t\geq t_{\star}\left(R\right):=\log\left(\sqrt{R^{2}+1}\right)\,, \tag{17}\] if \(v\) is compactly supported in a ball of radius \(R>0\), by reducing the problem to the heat flow as in the above proof. As a consequence, we know that (16) holds for any \(t\geq t_{\star}\left(R\right)\). Alternatively, under Assumption (7), we learn from a recent paper [23, Theorem 2] by H.-B. Chen, S. Chewi, and J. Niles-Weed that the _Poincare inequality_ \[\int_{\mathbb{R}^{d}}\left|\nabla f\right|^{2}d\mu_{t}\geq\lambda_{1}(\mu_{t})\int_{\mathbb{R}^{d}}\left|f\right|^{2}d\mu_{t}\quad\forall\,f\in\mathrm{H}^{1}(\mathbb{R}^{d},d\mu_{t})\text{ such that }\int_{\mathbb{R}^{d}}f\,d\mu_{t}=0 \tag{18}\] holds for all \(t\geq t_{\star}^{\varepsilon}\) with \[t_{\star}^{\varepsilon}:=\log\left(\sqrt{1+\varepsilon^{-1}}\right),\quad\frac{1}{\lambda_{1}(\mu_{t})}\leq\tau\left(\frac{\varepsilon\tau}{\varepsilon\tau-1}+A^{\frac{1}{\varepsilon\tau-1}}\right)\quad\text{and}\quad\tau=\frac{1}{2}\left(e^{2t}-1\right).\] ### Explicit stability results for log-concave densities Let us start by an elementary observation. **Lemma 6**.: _If \(h\in\mathrm{H}^{1}(\mathbb{R}^{d},d\gamma)\) is such that \(\int_{\mathbb{R}^{d}}x\,h\,d\gamma=0\) and \(P=-\log h\) is the pressure variable, then_ \[\int_{\mathbb{R}^{d}}\nabla P\,h\,d\gamma=0\,.\] Proof.: The result follows from \(\int_{\mathbb{R}^{d}}\nabla P\,h\,d\gamma=-\int_{\mathbb{R}^{d}}\nabla h\,d\gamma=\int_{\mathbb{R}^{d}}x\,h\,d\gamma=0\). With this result in hand, we can now prove our first main result. Proof of Theorem 2.: The function \(h=v^{2}\) is such that \(\int_{\mathbb{R}^{d}}x\,h\,d\gamma=0\) and Lemma 6 applies. Since \(h\gamma\) is log-concave, we can apply (16) with \(f=\partial P/\partial x_{i}\) for any \(i=1,2,\ldots,d\) and obtain \[\int_{\mathbb{R}^{d}}\left\|\mathrm{Hess}\,P\right\|^{2}h\,d\gamma\geq\frac{1}{432}\,\int_{\mathbb{R}^{d}}\left|\nabla P\right|^{2}h\,d\gamma\,.\] It follows from (13) that \[\frac{d}{dt}\int_{\mathbb{R}^{d}}\left|\nabla\,v\right|^{2}d\gamma+2\int_{\mathbb{R}^{d}}\left|\nabla\,v\right|^{2}d\gamma\leq-\frac{1}{864}\int_{\mathbb{R}^{d}}\left|\nabla\,v\right|^{2}d\gamma\,,\] and the stability result is obtained as in the proof of Proposition 1. ### Extension by entropy methods and flows This section is devoted to the proof of Theorem 3. The key idea is to evolve the function by the Ornstein-Uhlenbeck equation, so that the solution after an _initial time layer_ has the log-concavity property of Theorem 2, at least if the initial datum has compact support. To some extent, the strategy is similar to the one used in [16]. During the initial time layer, we use an improved version of the entropy - entropy production inequality which arises as a consequence of the _carre du champ_ method. Proof of Theorem 3.: Let \(h\) be the solution to (9) with initial datum \(h_{0}=u^{2}\) and define \[\mathcal{Q}(t):=\frac{\mathcal{I}\left[\sqrt{h(t,\cdot)}\right]}{\mathcal{E}\left[\sqrt{h(t,\cdot)}\right]}\quad\forall\,\,t\geq 0\,.\] Let us assume first that \(u\) has compact support. With no loss of generality, we can assume that \(u\) is supported in \(B(0,R)\) for some \(R>0\).
With \(t_{\star}=t_{\star}\left(R\right)\) given by (17), we know from [48, Theorem 5.1] that \(\mu_{t}\) is log-concave at \(t=t_{\star}\) and Theorem 2 applies: \[\mathcal{Q}(t_{\star})\geq\frac{\mathcal{C}_{\star}}{2}\,.\] With an estimate similar to [16, Lemma 2.9], we learn from Section 2 that \[\frac{d\mathcal{Q}}{dt}\leq 2\,\mathcal{Q}\left(2\,\mathcal{Q}-1\right). \tag{19}\] An integration on \(\left(0,t_{\star}\right)\) shows that \[\mathcal{Q}(0)\geq\frac{1}{2}\left(1+\frac{2\,\mathcal{Q}(t_{\star})-1}{1+2\,\mathcal{Q}(t_{\star})\left(e^{2\,t_{\star}}-1\right)}\right)\geq\frac{1}{2}\left(1+\frac{\mathcal{C}_{\star}-1}{1+\mathcal{C}_{\star}\,R^{2}}\right)=\frac{\mathcal{C}}{2}\,.\] Under the more general assumption (7), we rely on (18) and obtain with the same notations as above and \(t_{\star}=t_{\star}^{\varepsilon}\) that \[\int_{\mathbb{R}^{d}}\left\|\operatorname{Hess}P\right\|^{2}h(t,\cdot)\,d\gamma\geq\lambda_{1}(\mu_{t})\,\int_{\mathbb{R}^{d}}\left|\nabla P\right|^{2}h(t,\cdot)\,d\gamma\quad\forall\,\,t\geq t_{\star}\,.\] Moreover, for some explicit \(t_{0}=t_{0}(\varepsilon)>t_{\star}\), we notice that \(t\mapsto\lambda_{1}(\mu_{t})\) is nonincreasing on \((t_{0},+\infty)\). Hence, after an integration by parts, we deduce from \[\mathcal{I}\left[\,v(t_{0},\cdot)\right]-\frac{1}{8}\int_{t_{0}}^{+\infty}\left(4+\lambda_{1}(\mu_{s})\right)\mathcal{E}^{\prime}[\,v(s,\cdot)\,]\,ds\geq 0\] that \[\mathcal{Q}(t_{0})\geq\frac{1}{2}\left(1+\frac{1}{4}\,\lambda_{1}(\mu_{t_{0}})\right)=:\mathcal{C}_{0}\,.\] Using (19), we obtain \[\mathcal{C}=1+\frac{\mathcal{C}_{0}-1}{1+\mathcal{C}_{0}\left(e^{2\,t_{0}}-1\right)}\,.\] This concludes the proof. ### Normalization issues If we do not assume that \(\left\|\,u\right\|_{\operatorname{L}^{2}(\mathbb{R}^{d},d\gamma)}=1\) and \(\left\|\,x\,u\right\|_{\operatorname{L}^{2}(\mathbb{R}^{d},d\gamma)}^{2}\leq d\), it is still possible to state the analogue of Theorem 2, but the price to be paid is a dependence on \[\kappa[\,u]:=\frac{\left\|\,u\right\|_{\operatorname{L}^{2}(\mathbb{R}^{d},d\gamma)}}{\max\left\{\sqrt{d},\left\|\,(x-x_{0})\,u\right\|_{\operatorname{L}^{2}(\mathbb{R}^{d},d\gamma)}\right\}}\quad\text{where}\quad x_{0}=\int_{\mathbb{R}^{d}}x\,h_{0}\,d\gamma\,,\] which goes as follows. **Proposition 7**.: _For all \(u\in\mathrm{H}^{1}\big{(}\mathbb{R}^{d},(1+|x|^{2})\,d\gamma\big{)}\) such that \(\int_{\mathbb{R}^{d}}x\,u^{2}\,d\gamma=0\), and \(u^{2}\,\gamma\) is log-concave, we have_ \[\|\nabla u\|_{\mathrm{L}^{2}\big{(}\mathbb{R}^{d},d\gamma\big{)}}^{2}-\frac{1}{2}\,\big{(}1+(\mathcal{C}_{\star}-1)\,\kappa\big{[}u\big{]}\big{)}\int_{\mathbb{R}^{d}}|u|^{2}\,\log\left(\frac{|u|^{2}}{\|u\|_{\mathrm{L}^{2}\big{(}\mathbb{R}^{d},d\gamma\big{)}}^{2}}\right)d\gamma\geq 0\,.\] Proof.: We learn from (15) that \[\int_{\mathbb{R}^{d}}|x|^{2}\,h(t,x)\,d\gamma=d\,\|u\|_{\mathrm{L}^{2}\big{(}\mathbb{R}^{d},d\gamma\big{)}}^{2}+e^{-2\,t}\,\int_{\mathbb{R}^{d}}\big{(}|x|^{2}-d\big{)}\,h_{0}\,d\gamma\quad\forall\,t\geq 0\,.\] Hence [12, Theorem 1.2] and [12, Ineq. (3.4)] apply with \[h(\mu)\geq\frac{\kappa\big{[}u\big{]}}{6\sqrt{3}}\,,\] and the remainder of the proof of Theorem 2 is unchanged. A similar extension of Theorem 3 can be done on the same basis. Details are left to the reader. As for Proposition 1, we can make the following observations. The case \(\int_{\mathbb{R}^{d}}|x|^{2}\,|v|^{2}\,d\gamma\leq d\) is already covered in Lemma 4. If \[A:=\int_{\mathbb{R}^{d}}|u|^{2}\,\big{(}|x|^{2}-d\big{)}\,d\gamma\] is positive, let us consider the solution \(h\) of (9) with initial datum \(h_{0}=u^{2}\).
We know from (15) that \[\int_{\mathbb{R}^{d}}h(t,x)\,\big{(}|x|^{2}-d\big{)}\,d\gamma=A\,e^{-2\,t}\] and deduce as in the proof of Lemma 4 that \[4\,\mathcal{I}\big{[}\,v\big{]}=\int_{\mathbb{R}^{d}}|\nabla P|^{2}\,h\,d\gamma=\int_{\mathbb{R}^{d}}h\,(\mathcal{L}P)\,d\gamma+A\,e^{-2\,t}\leq\sqrt{d\int_{\mathbb{R}^{d}}\|\mathrm{Hess}\,P\|^{2}\,h\,d\gamma}+A\,e^{-2\,t}\] with \(P=-\log h\). Hence by (13), we learn that \[\frac{d}{dt}\mathcal{I}\big{[}\,v\big{]}+2\,\mathcal{I}\big{[}\,v\big{]}=-\,\frac{1}{2}\int_{\mathbb{R}^{d}}\|\mathrm{Hess}\,P\|^{2}\,h\,d\gamma\leq-\,\frac{1}{2\,d}\,\big{(}4\,\mathcal{I}\big{[}\,v\big{]}-A\,e^{-2\,t}\big{)}^{2}\] and this estimate can be rephrased with \(z(t):=e^{2\,t}\,\mathcal{I}\big{[}\,v(t,\cdot)\,\big{]}\) as \[z^{\prime}\leq-\,\frac{e^{-2\,t}}{2\,d}\,(4\,z-A)^{2}\,.\] The fact that \(z^{\prime}<0\), which improves on the decay rate \(\mathcal{I}\big{[}\,v(t,\cdot)\,\big{]}\leq\mathcal{I}\big{[}\,u\big{]}\,e^{-2\,t}\), can be rephrased as an improved entropy - entropy production inequality for \(A>0\). ## Acknowledgements The authors thank Max Fathi and Pierre Cardaliaguet for fruitful discussions and Emanuel Indrei for stimulating interactions. G.B. has been funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 754362.
2305.08428
Legal Extractive Summarization of U.S. Court Opinions
This paper tackles the task of legal extractive summarization using a dataset of 430K U.S. court opinions with key passages annotated. According to automated summary quality metrics, the reinforcement-learning-based MemSum model is best and even out-performs transformer-based models. In turn, expert human evaluation shows that MemSum summaries effectively capture the key points of lengthy court opinions. Motivated by these results, we open-source our models to the general public. This represents progress towards democratizing law and making U.S. court opinions more accessible to the general public.
Emmanuel Bauer, Dominik Stammbach, Nianlong Gu, Elliott Ash
2023-05-15T08:13:00Z
http://arxiv.org/abs/2305.08428v1
# Legal Extractive Summarization of U.S. Court Opinions ###### Abstract This paper tackles the task of legal extractive summarization using a dataset of 430K U.S. court opinions with key passages annotated. According to automated summary quality metrics, the reinforcement-learning-based MemSum model is best and even out-performs transformer-based models. In turn, expert human evaluation shows that MemSum summaries effectively capture the key points of lengthy court opinions. Motivated by these results, we open-source our models to the general public. This represents progress towards democratizing law and making U.S. court opinions more accessible to the general public. ## 1 Introduction How many people have argued about _Dobbs v. Jackson_, the 2022 U.S. Supreme Court abortion case, without having read the associated judicial opinions? Even among trained attorneys, it is not trivial to read a set of opinions spanning 213 pages. While many news outlets have provided more accessible summaries of the majority opinion, it is not easy to find a decent summary of the dissenting opinion. And countless other judicial opinions in U.S. courts have never been put into a more digestible format. A promise of recently developed AI technologies for natural language processing is to automatically analyze and summarize technical documents, such as judicial opinions. This paper focuses on the task of _extractive summarization_ - that is, extracting key passages from a long document that concisely summarize its main points. We use a labeled dataset of 430K judicial opinions from U.S. courts to train a set of neural-net summarizers that try to reproduce the human labels. In a held-out test set of unseen documents, a reinforcement-learning-based architecture called MemSum out-performs other baselines, including a long-context transformer architecture. Qualitatively, the produced summaries are of extremely high quality. Consider for example the summaries for _Dobbs_, shown in Table 1. For both the majority (top panel) and dissent (bottom panel), the key legal and policy arguments are effectively extracted and reproduced, with a 100\(\times\) compression ratio in terms of document length. In a more extensive blinded human validation task where a lawyer-annotator compared human summaries to machine summaries, we further substantiate the quality of the machine-generated summaries. Motivated by these evaluations, we open-source our models on github.1 Our models will now be available for use by the legal community and the broader public. Footnote 1: [https://github.com/bauerem/legal_memsum](https://github.com/bauerem/legal_memsum) On the academic side, these contributions add to the applied NLP field on legal document summarization (Jain et al., 2021). The early work in this area used a combination of rule-based heuristics and unsupervised clustering approaches (among others Moens and Uyttendaele, 1997; Farzindar and Lapalme, 2004; Hachey and Grover, 2006; Yousfi-Monod et al., 2010; Kim et al., 2012), on a range of small-scale datasets in various countries and domains. The closest recent work on extractive summarization of U.S. legal documents is Kornilova and Eidelman (2019), who introduce BillSum, a new dataset of legislation with annotated summaries, and provide some extractive summary baselines (see also Jain et al., 2021). We produce complementary baselines for U.S. court opinions, by experimenting with two different dedicated long-document summarization approaches.
Legal document summarization is part of the broader literature on legal NLP, which has seen rapid progress in recent years (see e.g., Zhong et al., 2020; Chalkidis et al., 2020; Peric et al., 2020; Chalkidis et al., 2022). Many legal tasks are natural-language tasks, and massive legal corpora have become increasingly available for computational analysis (e.g. Henderson et al., 2022). An interesting challenge in the legal domain is the relatively long document length (Chalkidis et al., 2022; Mamakas et al., 2022), which poses unique challenges for a range of tasks (e.g. Koh et al., 2022; Jain et al., 2021). Our solution to that problem in the case of summarization - a reinforcement-learning-based approach that easily scales to hundreds or even thousands of sentences - could be useful to legal NLP more generally. In terms of applied NLP methodology, our research is useful in showing a context where a relatively light-weight reinforcement-learning-based model (Gu et al. 2022; see also Luo et al. 2019; Narayan et al. 2018; Nguyen et al. 2021) outperforms a more heavy-duty transformer-based model. One of the original motivations for approaches like MemSum is that they can scale to longer documents than BERT-based models. Now that long-context transformers like LongFormer have relaxed the document-length constraint, we might expect those models to out-perform MemSum. At least in our setting, they do not. Beyond its place in the academic literature, we hope that our models will help democratize legal practice by making the primary sources - i.e., judicial opinions - more accessible to attorneys, to policymakers, and to the broader public (Carroll, 2006). Given recent work showing that judges use Wikipedia articles about precedents in their opinions (Thompson et al., 2022), there is proven value in making such models available. ## 2 Methods ### Data We have data on 436,889 judicial opinions published by courts in the United States (both state and federal) from the years 1755 through 2016. Each document (a majority opinion) is accompanied by human annotations of key passages, which we take as an extractive summary. These annotations are used by lawyers to understand the distinctive features of a case and the most relevant points of law made by the associated ruling. The average opinion contains 86 sentences, while the average summary contains 6 sentences, making for an average compression ratio of 15.8% (Appendix Table A.1). We train the model on most of the data - 410,732 opinions - with 13,146 opinions used for validation and 13,011 held out for test evaluation. ### Summarization Models The average opinion in our dataset contains 2'745 words (Appendix Table A.1), which translates to 3,500 tokens, about seven times the maximum sequence length (512 tokens) of typical transformer models such as BERT (Devlin et al., 2018). Further, about 27% of opinion documents are longer than 4096 tokens, the maximum sequence length of standard long-document transformers using sparse attention (Beltagy et al., 2020; Zaheer et al., 2020). \begin{table} \begin{tabular}{c c c} \multicolumn{3}{c}{**Majority Opinion by Justice Alito (915 sentences in original)**} \\ Rank & Position & Extracted sentence \\ \hline 7 & 77 & \\ 8 & 99 & Except in a medical emergency or in the case of a severe fetal abnormality, a person shall not intentionally or knowingly perform... or induce an abortion of an unborn human being if the probable gestational age of the unborn human being has been determined to be greater than fifteen (15) weeks.
\\ 5 & 150 & Finally, we consider whether a right to obtain an abortion is part of a broader entrenched right that is supported by other precedents. \\ 1 & 155 & The Constitution makes no express reference to a right to obtain an abortion, and therefore those who claim that it protects such a right must show that the right is somehow implicit in the constitutional text. \\ 2 & 182 & The regulation of a medical procedure that only one sex can undergo does not trigger heightened constitutional scrutiny unless the regulation is a "mere pretext[] designed to effect an invidious discrimination against members of one sex or the other. \\ 3 & 185 & And as the Court has stated, the "goal of preventing abortion" does not constitute "invidiously discriminatory animus" against women. \\ 4 & 187 & Accordingly, laws regulating or prohibiting abortion are not subject to heightened scrutiny. \\ 6 & 188 & Rather, they are governed by the same standard of review as other health and safety measures. \\ \multicolumn{3}{c}{**Dissenting Opinion by Justices Breyer, Kagan, and Sotomayor (1,128 sentences in original)**} \\ Rank & Position & Extracted sentence \\ \hline 10 & 60 & As of today, this Court holds, a State can always force a woman to give birth, prohibiting even the earliest abortions. \\ 3 & 91 & _Stare decisis_ is the Latin phrase for a foundation stone of the rule of law: that things decided should stay decided unless there is a very good reason for change. \\ 4 & 92 & It is a doctrine of judicial modesty and humility. \\ 8 & 93 & Those qualities are not evident in today's opinion. \\ 2 & 104 & _Stare decisis_, this Court has often said, "contributes to the actual and perceived integrity of the judicial process" by ensuring that decisions are "founded in the law rather than in the proclivities of individuals. \\ 6 & 123 & We believe in a Constitution that puts some issues off limits to majority rule. \\ 7 & 124 & Even in the face of public opposition, we uphold the right of individuals—yes, including women—to make their own choices and chart their own futures. \\ 9 & 160 & Recognizing that "arguments [against _Roe_] continue to be made," we responded that the doctrine of _stare decisis_ "demands respect in a society governed by the rule of law." \\ 5 & 163 & And we avowed that the "vitality" of "constitutional principles cannot be allowed to yield simply because of disagreement with them." \\ 1 & 182 & "It is settled now," the Court said—though it was not always so—that "the Constitution places limits on a State's right to interfere with a person's most basic decisions about family and parenthood, as well as bodily integrity." \\ \end{tabular} \end{table} Table 1: Extractive Summaries: _Dobbs v. Jackson_'s Majority and Dissenting Opinions Our summarization modeling choices have to address this issue of unusually long documents (Koh et al., 2022). We experiment with two types of models: (1) a neural architecture built to extract sentences from long documents using reinforcement learning, and (2) an extra-long-document transformer encoder which can handle input up to a sequence length of 16'384 tokens. MemSum.The first extractive summarizer model we use is MemSum - "Multi-step Episodic Markov decision process extractive SUMmarizer" (Gu et al., 2022). This model uses reinforcement learning to train a policy network to iteratively select sentences from text.
At each extraction step, the network considers the local context of each candidate sentence (encoded by bi-LSTM), the global context of the whole document (also encoded by bi-LSTM), and the historical context of previously extracted sentences (encoded by a mini-transformer). The extractor (a feedforward network) takes these concatenated encodings as input and decides which sentence to add, or else to stop extracting. The model learns by rewarding the agent for a higher ROUGE score (which is not differentiable, hence the use of reinforcement learning). See Appendix B for more details. LongFormer Encoder.The second model uses a sparse-attention transformer architecture based on the LongFormer encoder from Beltagy et al. (2020). While the original LongFormer accommodates sequence lengths up to 4,096 tokens, this variant handles sequence lengths up to 16'384 tokens - enough for almost all court opinions in our dataset. Similar to the discriminator objective in Clark et al. (2020), we implement extractive summarization as a binary token-level prediction task - that is, to predict for each token whether it is part of the extractive summary or not. At inference time, a sentence is selected as part of the summary if the average score across a sentence's tokens is greater than 0.5. We present two flavors of this approach. First, "LongFormer" is the long-context version of Beltagy et al. (2020) fine-tuned on the token prediction task using our dataset. Second, "LawFormer" is similar, but also pre-trained on a legal BART objective (Lewis et al., 2019) using 6 million U.S. court opinions (see Appendix C for details). ### Evaluation Models are assessed by the textual overlap of the machine-selected summary sentences with the human-selected gold summary sentences in the held-out test set. This overlap is computed by ROUGE scores, with ROUGE-1 measuring unigram overlap, ROUGE-2 measuring bigram overlap, and ROUGE-L measuring the longest common subsequence. We report ROUGE F1 scores, computed as the harmonic mean of ROUGE recall (how much of the gold summary features are covered by the machine summary) and ROUGE precision (how much of the machine summary features are contained in the gold summary). As an upper bound on performance, we compute these ROUGE scores for the sentences extracted by an "Oracle" model that approximates scores for the ground truth. See Appendix D for more details. ## 3 Results ### Quantitative Results The main results for summarizer model performance are reported in Table 2. The columns indicate the different ROUGE score variants (unigram, bigram, and longest subsequence), while the rows indicate the models/baselines. The cell values are ROUGE F1 scores reported in % (0-100). The first row, Lead-10, is a lower-bound baseline where the extracted summary is the first ten sentences of the opinion. The bottom row, Oracle, is the upper-bound baseline proxying for the ground truth. As desired, these rows respectively give the lowest and highest values for each column. \begin{table} \begin{tabular}{l c c c} Model & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline Lead-10 & 30.5 & 12.0 & 26.7 \\ \hline LongFormer & 54.0 & 46.7 & 39.9 \\ LawFormer & 56.0 & 48.4 & 41.1 \\ MemSum & **62.8** & **55.3** & **61.1** \\ \hline Oracle & 85.5 & 80.2 & 84.5 \\ \hline \end{tabular} \end{table} Table 2: Extractive Summarization Model Performance The second row reports results for LongFormer. That model achieves a substantial increase over the Lead-10 baseline in all metrics, including a 4x increase for ROUGE-2 F1.
Looking at LawFormer in row 3, we see that further pre-training on U.S. court opinions does help, but only marginally. Finally, in row 4, we consider the MemSum model. It achieves the best overall results. MemSum consistently improves over the other models by at least 6% in every metric measured. That includes an increase of 1.5x in ROUGE-L over the second best model. Hence, for our qualitative validation and open-sourced summaries, we use MemSum. ### Qualitative Results As already discussed in the introduction, an initial sense of the summarizer's quality can be seen in Table 1's summaries of the opinions in _Dobbs_. The majority opinion (top panel) is compressed from 915 sentences to 8 sentences, while the dissenting opinion (bottom panel) is compressed from 1,128 sentences to 10 sentences. The first column, Rank, indicates the ordering in which MemSum extracts sentences, and can be used as a rough ranking of which sentences are most important in an opinion. To generalize this qualitative validation, we used a list of 25 U.S. Supreme Court opinions that had been cited most often by law review articles. Of these notable opinions, 14 were not part of the model's training set. For this subset, the generated summaries are listed in Appendix E. We feel that overall, the summaries are of high quality. Next, we arranged blind human annotation by a trained U.S. lawyer who was familiar with these 14 cases. The lawyer observed the ground-truth (human-generated) summaries and the machine-generated summaries, presented in random order without labels. For each case, the annotator ranked the pair on a five-point scale: A clearly better than B (-2), A slightly better than B (-1), could not decide (0), B slightly better than A (+1), and B clearly better than A (+2). Qualitatively, the annotator could not easily tell which summaries were machine-generated, and felt both summaries were satisfactory in almost all cases. That is reflected in the relative scores, reported in Figure 1. Out of the fourteen cases, the ground-truth summary was slightly better for ten cases, the machine summary was slightly better for two cases, and they were subjectively equal for two cases. The mean relative score for the machine summary is -0.57. See Appendix E for further details. Note that these prominent Supreme Court cases are not a representative sample. They are relatively lengthy and complex, dealing with unusual issues and making novel holdings. Thus, this is one of the most difficult sets of cases for the machine summarizer to deal with. Yet still, the resulting summaries are of impressive quality and almost as good as the human-generated summaries according to our trained lawyer. We hope that more comprehensive assessments of subjective summary quality are produced in the future.
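To make the evaluation pipeline concrete, the following self-contained sketch shows one standard way to build an extractive oracle. This is our illustration, not the paper's code: the paper's Oracle is specified in its Appendix D and may differ in details, and the toy ROUGE here is a simple unigram F1 rather than a full ROUGE implementation.

```python
# A common greedy construction of an extractive "Oracle" (our illustrative
# sketch; the paper's Oracle is described in its Appendix D and may differ).
# Sentences are added one at a time as long as they improve a simple
# unigram ROUGE-1 F1 score against the gold summary.
from collections import Counter

def _tokens(s):
    return s.lower().replace(".", " ").replace(",", " ").split()

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between two strings."""
    c, r = Counter(_tokens(candidate)), Counter(_tokens(reference))
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def greedy_oracle(doc_sentences, gold_summary, max_sentences=10):
    """Greedily select sentence indices approximately maximizing ROUGE-1 F1."""
    selected, best = [], 0.0
    while len(selected) < max_sentences:
        candidates = [i for i in range(len(doc_sentences)) if i not in selected]
        if not candidates:
            break
        scored = [(rouge1_f1(" ".join(doc_sentences[j]
                                      for j in sorted(selected + [i])),
                             gold_summary), i) for i in candidates]
        score, i = max(scored)
        if score <= best:   # stop once no remaining sentence improves the score
            break
        best, selected = score, selected + [i]
    return sorted(selected)

doc = ["The court held the statute valid.",
       "Procedural history is omitted.",
       "Rational basis review applies to such laws."]
gold = "The court held the statute valid under rational basis review."
print(greedy_oracle(doc, gold))   # -> [0, 2]
```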
Figure 1: Human Validation: Relative Quality of Machine Summaries ### Limitations Extractive Summarization. Extractive summarization of long documents has known limitations, e.g., such summaries tend to be lengthy, incoherent, disconnected, and verbose. While abstractive summarization does not suffer from these limitations, these models are known to hallucinate and produce non-factual content - which is absolutely not acceptable in the case of summarizing court opinions. In a perfect world, we would tackle summarizing court opinions using faithful abstractive summaries that we can guarantee do not hallucinate. However, the current technology does not support this, hence our decision to approach summarization in an extractive manner. Limitations of Models. While we believe that our summaries are of high quality, in our blind human evaluation, in 70% of the cases the annotator preferred the human summary over our machine-generated summary - indicating that there is still room for improvement. Summarizing long legal documents poses many challenges and the current state of NLP is not ready yet to tackle all of them. Given the tremendous amount of progress in summarizing long documents, we expect to revisit summarizing U.S. court opinions in the future, using more advanced technology. Limitations of our Dataset. Due to licensing issues, we cannot release the dataset we trained our model on. Also, we are only concerned with summarizing U.S. court opinions. In future work, we want to explore how much our models generalize to, e.g., other jurisdictions and other types of legal documents. We have already gotten promising results on summarizing legislative documents. Computational Budget. Given our computational budget and the time it takes to fine-tune models on 400K long court opinions, we were not able to conduct more experiments than the ones presented in the paper. Given infinite time and compute budget, we would have (1) replaced MemSum's RNN backbone with a transformer architecture, (2) explored other long-document transformer models, and (3) experimented with various hyper-parameters in more detail. ### Ethics Statement License and Intended Use. Our code and trained models are available as open source software at [https://github.com/bauerem/legal_memsum](https://github.com/bauerem/legal_memsum). We believe our models are a step towards democratizing U.S. law, thus our decision to release them for non-commercial use. The intended use of our models is legal research and legal journalism. Further, we allow the general public access to key information in U.S. court opinions and speed up the work of legal practitioners. The training data cannot be released due to license restrictions, but the trained models are released for public/social good. Misuse Potential and Risks of this Work. While we believe our models are useful, they will still produce machine-generated text. There is the risk that our models miss key sentences in the opinion, thus we advise practitioners using our models to always consult the original opinion as well. Furthermore, although our models are of extractive nature and thus by construction cannot hallucinate, it is conceivable that the key findings generated by our summaries are presented out of context and thus might be misleading. Even though we have not found this in our human evaluation study, we do not exclude the risk of this happening at scale. Model Bias. It is widely known that ML models suffer from picking up spurious correlations from data.
Furthermore, it has been shown that language models (e.g., the word embeddings we use for MemSum and the LED encoder for our extractive model) suffer from inherent biases present in the pre-training data, leading to biased models - and we believe our models presented in this work also suffer from these biases. Data Privacy. The data used in this work are publicly available U.S. court opinions. There is no user-related data or private data involved.
2304.07432
Revisiting electromagnetic response of superconductors in mean-field approximation
In the standard mean-field treatment of superconductors, the electron-electron interactions are assumed to be written in terms of local density operators. However, more general interactions, such as pair-hopping interactions, may exist or may be generated in a low-energy effective Hamiltonian. In this work, we study the effect of correlated hopping interactions on the electromagnetic response of superconductors. When only the Hamiltonian after the mean-field approximation is provided, one cannot unambiguously determine its electromagnetic response whenever such interactions are allowed. This work demonstrates that such interactions induce additional terms in the current operator, leading to modifications in the Meissner weight and optical conductivities that deviate from conventional expectations. These results underscore the need for caution when incorporating gauge fields into the BdG Hamiltonian.
Chang-geun Oh, Haruki Watanabe
2023-04-14T23:51:09Z
http://arxiv.org/abs/2304.07432v2
# Electromagnetic response of superconductors in mean-field approximation

###### Abstract

In the standard mean-field treatment of superconductors, the electron-electron interactions are assumed to be written in terms of local density operators. However, more general interactions, such as pair-hopping interactions, may exist or may be generated in a low-energy effective Hamiltonian. In this work, we study the effect of correlated hopping interactions toward the electromagnetic response of superconductors. When only the Hamiltonian after the mean-field approximation is provided, one cannot unambiguously determine its electromagnetic response whenever such interactions are allowed. We find that additional terms in the current operator are generated and the Meissner weight and the optical conductivities are modified from the textbook results.

_Introduction.--_ One of the most remarkable features of superconductors is the Meissner effect, which is the expulsion of an applied magnetic field from the bulk of the sample [1]. The Bardeen-Cooper-Schrieffer (BCS) theory successfully explained the mechanism based on the mean-field approximation [1]. Although this treatment apparently breaks the \(U(1)\) symmetry, the vertex correction restores the gauge invariance of the response kernel [2]. In the study of superconductors at the mean-field level, one often starts with the Bogoliubov-de Gennes (BdG) Hamiltonian without specifying the Hamiltonian before the mean-field approximation. The BdG Hamiltonian is not invariant under \(U(1)\) phase rotation and thus its coupling to the gauge field is not uniquely determined. This results in ambiguities in the electromagnetic response of superconductors described by the BdG Hamiltonian.

To see the problems in detail, let us consider superconductors described by the BdG Hamiltonian \(\hat{H}^{\mathrm{BdG}}\coloneqq\alpha\sum_{\mathbf{k}}\hat{\Psi}_{\mathbf{k}}^{\dagger}H_{\mathbf{k}}^{\mathrm{BdG}}\hat{\Psi}_{\mathbf{k}}+C\) with \[H_{\mathbf{k}}^{\mathrm{BdG}}\coloneqq\begin{pmatrix}H_{\mathbf{k}}&\Delta_{\mathbf{k}}\\ \Delta_{\mathbf{k}}^{\dagger}&-H_{-\mathbf{k}}^{T}\end{pmatrix}. \tag{1}\] Here, \(H_{\mathbf{k}}\) describes band dispersions for the normal phase, \(\Delta_{\mathbf{k}}\) is the gap function that satisfies \(\Delta_{-\mathbf{k}}=-\beta\Delta_{\mathbf{k}}^{T}\), and \(C\) is a constant. The parameters \(\alpha\), \(\beta\) and the Nambu spinor \(\hat{\Psi}_{\mathbf{k}}\) are given by \(\alpha=1\), \(\beta=-1\), \(\hat{\Psi}_{\mathbf{k}}\coloneqq(\hat{c}_{\mathbf{k}\uparrow},\hat{c}_{-\mathbf{k}\downarrow}^{\dagger})^{T}\) for spinful electrons, and \(\alpha=1/2\), \(\beta=1\), \(\hat{\Psi}_{\mathbf{k}}\coloneqq(\hat{c}_{\mathbf{k}},\hat{c}_{-\mathbf{k}}^{\dagger})^{T}\) for spinless electrons. The particle-hole symmetry \(P\) satisfies \(P^{2}=\beta\mathbb{1}\). To examine the electromagnetic response of the superconductor, it is customary to introduce the gauge field \(\mathbf{A}\) by replacing \(H_{\mathbf{k}}^{\mathrm{BdG}}\) with (see, for example, Refs.
[3; 4; 5]) \[H_{\mathbf{k}}^{\mathrm{BdG}}(\mathbf{A})\overset{(?)}{=}\begin{pmatrix}H_{\mathbf{k}+\mathbf{A}}&\Delta_{\mathbf{k}}\\ \Delta_{\mathbf{k}}^{\dagger}&-H_{-\mathbf{k}+\mathbf{A}}^{T}\end{pmatrix} \tag{2}\] and define the paramagnetic current operator and the kinetic energy operator by derivatives with respect to \(\mathbf{A}\): \[\hat{J}_{i}^{\mathrm{BdG}}\coloneqq\partial_{A_{i}}\hat{H}_{\mathbf{k}}^{\mathrm{BdG}}(\mathbf{A})|_{\mathbf{A}=\mathbf{0}}, \tag{3}\] \[\hat{K}_{ij}^{\mathrm{BdG}}\coloneqq\partial_{A_{i}}\partial_{A_{j}}\hat{H}_{\mathbf{k}}^{\mathrm{BdG}}(\mathbf{A})|_{\mathbf{A}=\mathbf{0}}. \tag{4}\]

However, the \(\mathbf{A}\)-dependence of \(H_{\mathbf{k}}^{\mathrm{BdG}}(\mathbf{A})\) in Eq. (2) is puzzling in two ways: \(\mathbf{A}\) does not entirely appear in the form of \(\mathbf{k}+\mathbf{A}\), which normally follows from the minimal coupling, and \(\Delta_{\mathbf{k}}\) does not depend on \(\mathbf{A}\) even though it may have a nontrivial \(\mathbf{k}\)-dependence. The purpose of this work is to revisit these points and clarify the subtlety behind the BdG Hamiltonian. We argue that the gap function \(\Delta_{\mathbf{k}}\) and the constant \(C\) may depend on \(\mathbf{A}\) and contribute to the paramagnetic current operator and the kinetic operator whenever the electron-electron interactions are not solely given in terms of the electron density \(\hat{n}_{\mathbf{x}\sigma}\coloneqq\hat{c}_{\mathbf{x}\sigma}^{\dagger}\hat{c}_{\mathbf{x}\sigma}\). As a consequence of these contributions, we find that the Meissner weight and the optical conductivity get modified from the standard results. Furthermore, the BdG Hamiltonian for \(\mathbf{A}=\mathbf{0}\) is not sufficient to determine the \(\mathbf{A}\) dependence of \(\Delta_{\mathbf{k}}\) and \(C\) unless the Hamiltonian before the mean-field approximation is provided. This means that the electromagnetic response of a superconductor described by a BdG Hamiltonian alone is not completely well-defined. We clarify these points through the discussion of two illuminating examples at zero temperature \(T=0\).

Figure 1: Schematic illustration of the electromagnetic response of superconductors. Given the \(U(1)\) symmetric Hamiltonian \(\hat{H}\), one can unambiguously introduce the gauge field \(\mathbf{A}\) and derive the electromagnetic response with or without the mean-field approximation. In contrast, given the BdG Hamiltonian \(\hat{H}^{\mathrm{BdG}}\) alone, one cannot uniquely determine \(\hat{H}^{\mathrm{BdG}}(\mathbf{A})\) and cannot discuss its electromagnetic response without ambiguities.
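To make the prescription of Eqs. (2)-(4) concrete, here is a minimal numerical sketch. The one-dimensional dispersion, the constant gap, and all parameter values are our own illustrative assumptions, not taken from this paper.

```python
import numpy as np

# Minimal sketch (our own 1D toy model, not from the paper): the standard
# A-dependence of Eq. (2) with a k-independent gap, and the operators of
# Eqs. (3)-(4) obtained by numerical differentiation with respect to A.
t, mu, Delta = 1.0, -0.5, 0.4         # illustrative parameter values

def xi(k):
    return -2.0 * t * np.cos(k) - mu  # normal-state dispersion xi_k

def H_bdg(k, A):
    # Eq. (2): only the diagonal blocks see the gauge field; Delta is
    # kept A-independent, as the standard prescription assumes.
    return np.array([[xi(k + A), Delta],
                     [Delta,    -xi(-k + A)]])

def J_op(k, h=1e-5):
    # Eq. (3): paramagnetic current matrix, dH/dA at A = 0
    return (H_bdg(k, h) - H_bdg(k, -h)) / (2.0 * h)

def K_op(k, h=1e-4):
    # Eq. (4): kinetic energy matrix, d^2 H/dA^2 at A = 0
    return (H_bdg(k, h) - 2.0 * H_bdg(k, 0.0) + H_bdg(k, -h)) / h**2

k = 0.7
print(J_op(k))                # diagonal entries: band velocity 2 t sin k
print(2.0 * t * np.sin(k))    # analytic check of the band velocity
print(K_op(k))                # diagonal entries: +/- 2 t cos k
```

The surrounding discussion argues precisely that this standard prescription is incomplete: once interactions beyond density-density terms are allowed, \(\Delta\) and \(C\) acquire their own \(\mathbf{A}\)-dependence and contribute extra terms to these operators.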
\(s\)_-wave superconductor with pair-hopping.--_ As the simplest example, let us consider a BCS type superconductor for single-band spinful electrons on a \(d\)-dimensional hypercubic lattice: \[\hat{H}(\mathbf{A})\coloneqq-\sum_{\mathbf{x}}\sum_{\sigma=\uparrow,\downarrow}\sum_{i=1}^{d}t(e^{-iA_{i}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\sigma}\hat{c}_{\mathbf{x}\sigma}+\text{h.c.})-\sum_{\mathbf{x}}\sum_{i=1}^{d}\frac{J}{2}(e^{-2iA_{i}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\uparrow}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\downarrow}\hat{c}_{\mathbf{x}\downarrow}\hat{c}_{\mathbf{x}\uparrow}+\text{h.c.})-\sum_{\mathbf{x}}\sum_{\sigma=\uparrow,\downarrow}\mu\hat{n}_{\mathbf{x}\sigma}-\sum_{\mathbf{x}}U_{0}\hat{n}_{\mathbf{x}\uparrow}\hat{n}_{\mathbf{x}\downarrow}, \tag{5}\] where the \(\hat{c}_{\mathbf{k}\sigma}\)'s are annihilation operators of electrons satisfying \(\{\hat{c}_{\mathbf{k}\sigma},\hat{c}^{\dagger}_{\mathbf{k}^{\prime}\sigma^{\prime}}\}=\delta_{\mathbf{k},\mathbf{k}^{\prime}}\delta_{\sigma,\sigma^{\prime}}\), \(t\) (\(t>0\)) is the nearest-neighbor hopping, \(\mu\) (\(|\mu|<2td\)) is the chemical potential that might be seen as the \(\nu=0\) component of the gauge field \(A_{\nu}\), \(U_{0}\) is the onsite density-density interaction, and \(J\) is the nearest-neighbor pair-hopping interaction. This model for the \(U_{0}=A_{i}=0\) case is called the Penson-Kolb model [6; 7] and its electromagnetic response is studied in Refs. [8; 9]. We assume the periodic boundary condition with the length \(L_{i}\) in the \(i\)-th direction. The continuum version of this model is included in Appendix A.

The Hamiltonian has the U(1) symmetry associated with the electron density \(\hat{n}_{\mathbf{x}}\coloneqq\sum_{\sigma=\uparrow,\downarrow}\hat{n}_{\mathbf{x}\sigma}\), which is necessary to fix the \(\mathbf{A}\) dependence. The local current operator for the link between \(\mathbf{x}\) and \(\mathbf{x}+\mathbf{e}_{i}\) is given by \[\hat{j}_{\mathbf{x},\mathbf{x}+\mathbf{e}_{i}}\coloneqq\sum_{\sigma=\uparrow,\downarrow}t\big{(}ie^{-iA_{i}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\sigma}\hat{c}_{\mathbf{x}\sigma}+\text{h.c.}\big{)}+J\big{(}ie^{-2iA_{i}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\uparrow}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}\downarrow}\hat{c}_{\mathbf{x}\downarrow}\hat{c}_{\mathbf{x}\uparrow}+\text{h.c.}\big{)} \tag{6}\] and satisfies the continuity equation \(i\big{[}\hat{n}_{\mathbf{x}},\hat{H}(\mathbf{A})\big{]}=\sum_{i=1}^{d}\big{(}\hat{j}_{\mathbf{x},\mathbf{x}+\mathbf{e}_{i}}-\hat{j}_{\mathbf{x}-\mathbf{e}_{i},\mathbf{x}}\big{)}\). The second term in the current operator originates from the pair-hopping interaction. The model also possesses the spin rotation symmetry and, when \(A_{i}=0\), the time-reversal symmetry and the inversion symmetry.

The superconducting order can be characterized by \[\phi\coloneqq\langle\hat{c}_{\mathbf{x}\downarrow}\hat{c}_{\mathbf{x}\uparrow}\rangle. \tag{7}\] In this work, \(\phi\) is assumed to be position-independent at least when \(\mathbf{A}=\mathbf{0}\). However, \(\phi\) may depend on \(\mathbf{x}\) when \(A_{i}\neq 0\). In fact, the large gauge transformation \(\hat{U}=e^{-2\pi i\sum_{i=1}^{d}m_{i}\hat{n}_{\mathbf{x}}x_{i}/L_{i}}\) (\(m_{i}\in\mathbb{Z}\)) maps \(\phi\) to \(e^{-4\pi i\sum_{i=1}^{d}m_{i}x_{i}/L_{i}}\phi\) and \(A_{i}\) to \(A_{i}+2\pi m_{i}/L_{i}\).
This means that, even if \(\phi\) for \(A_{i}=0\) is position-independent, \(\phi\) for \(A_{i}=2\pi m_{i}/L_{i}\propto L_{i}^{-1}\) must depend on \(\mathbf{x}\) and have the winding \(m_{i}=(2\pi)^{-1}i\int_{0}^{L_{i}}dx_{i}\,(\phi^{\prime}/|\phi^{\prime}|)^{*}\partial_{i}(\phi^{\prime}/|\phi^{\prime}|)\). As we are interested in the large \(L_{i}\) limit, for the consistency of our assumption we will keep \(|A_{i}|\) much smaller than \(2\pi/L_{i}\) and will set \(A_{i}=0\) at the end of the calculation.

The Hamiltonian \(\hat{H}(\mathbf{A})\) in Eq. (5) can be converted to the BdG form in Eq. (2) by the mean-field approximation followed by the Fourier transformation \(\hat{c}_{\mathbf{x}\sigma}\coloneqq V^{-1/2}\sum_{\mathbf{k}}\hat{c}_{\mathbf{k}\sigma}e^{i\mathbf{k}\cdot\mathbf{x}}\). We find that \(H^{\text{BdG}}_{\mathbf{k}}(\mathbf{A})\) is given by the band dispersion \(\xi_{\mathbf{k}}\coloneqq-\sum_{i=1}^{d}2t\cos k_{i}-\mu\) and the gap function \(\Delta_{\mathbf{k}}\) is given by \(\Delta(\mathbf{A})\coloneqq-U(\mathbf{A})\phi\) with \(U(\mathbf{A})\coloneqq U_{0}+J\sum_{i=1}^{d}\cos(2A_{i})\). The constant \(C(\mathbf{A})=\sum_{\mathbf{k}}\xi_{\mathbf{k}+\mathbf{A}}+VU(\mathbf{A})|\phi|^{2}\) also depends on \(\mathbf{A}\) whenever \(J\neq 0\). When \(\mathbf{A}=\mathbf{0}\), the self-consistent equation for \(\phi\) at \(T=0\) reads as \[\frac{U_{\text{tot}}}{2V}\sum_{\mathbf{k}}\frac{1}{E_{\mathbf{k}}}=1, \tag{8}\] where \(U_{\text{tot}}\coloneqq U_{0}+Jd\) is the renormalized interaction strength, \(E_{\mathbf{k}}\coloneqq\sqrt{\xi_{\mathbf{k}}^{2}+|\Delta|^{2}}\) is the excitation energy of the Bogoliubov quasi-particle, and \(\Delta\coloneqq-U_{\text{tot}}\phi\) is the gap function for \(\mathbf{A}=\mathbf{0}\). The self-consistent equation contains only \(U_{\text{tot}}\), not \(U_{0}\) nor \(J\) separately. Hence, if only \(\hat{H}^{\text{BdG}}\) is given without \(\hat{H}\), one cannot judge if \(U_{\text{tot}}\) comes from the density-density interaction \(U_{0}\) or the pair-hopping interaction \(J\).

\(p+ip\) _topological superconductor in two dimensions.--_ The above discussions are not restricted to \(s\)-wave superconductors. As a more nontrivial example, let us discuss a single-band model of spinless electrons \[\hat{H}(\mathbf{A})\coloneqq-\sum_{\mathbf{x}}\sum_{i=1,2}t(e^{-iA_{i}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{i}}\hat{c}_{\mathbf{x}}+\text{h.c.})-\sum_{\mathbf{x}}\frac{J}{4}\Big{[}(ie^{i(A_{2}-A_{1})}\hat{n}_{\mathbf{x}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{1}}\hat{c}_{\mathbf{x}+\mathbf{e}_{2}}+\text{h.c.})+(ie^{-i(A_{1}+A_{2})}\hat{n}_{\mathbf{x}}\hat{c}^{\dagger}_{\mathbf{x}+\mathbf{e}_{2}}\hat{c}_{\mathbf{x}-\mathbf{e}_{1}}+\text{h.c.})+(ie^{i(A_{1}-A_{2})}\hat{n}_{\mathbf{x}}\hat{c}^{\dagger}_{\mathbf{x}-\mathbf{e}_{1}}\hat{c}_{\mathbf{x}-\mathbf{e}_{2}}+\text{h.c.})+(ie^{i(A_{1}+A_{2})}\hat{n}_{\mathbf{x}}\hat{c}^{\dagger}_{\mathbf{x}-\mathbf{e}_{2}}\hat{c}_{\mathbf{x}+\mathbf{e}_{1}}+\text{h.c.})\Big{]}-\sum_{\mathbf{x}}\mu\hat{n}_{\mathbf{x}}-\sum_{\mathbf{x}}\sum_{i=1,2}\frac{U_{0}}{2}\hat{n}_{\mathbf{x}}\hat{n}_{\mathbf{x}+\mathbf{e}_{i}}, \tag{9}\] where \(U_{0}\) describes the density-density interaction and \(J\) is a correlated hopping term that favors the \(p+ip\) order. The model has both \(U(1)\) symmetry and the four-fold rotation symmetry. The band dispersion is still given by \(\xi_{\mathbf{k}}\) above.
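Before turning to the response of these models, note that the \(s\)-wave self-consistency condition, Eq. (8), is easy to check numerically: its left-hand side decreases monotonically in \(|\Delta|\), so the gap follows by bisection. The sketch below uses purely illustrative parameter values (our own choices, not from the paper); the \(p+ip\) condition, Eq. (11) below, only changes the weight in the momentum sum.

```python
import numpy as np

# Minimal sketch (illustrative parameter values, not from the paper):
# solve the T = 0 self-consistent equation, Eq. (8),
#   (U_tot / 2V) * sum_k 1/E_k = 1,  E_k = sqrt(xi_k^2 + Delta^2),
# on a d = 2 hypercubic lattice by bisection in Delta.
t, mu, U_tot, L = 1.0, -0.5, 3.0, 200

k = 2.0 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k)
xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu     # band dispersion xi_k

def lhs(delta):
    # left-hand side of Eq. (8); V = L**2 is the number of sites
    return (U_tot / (2.0 * L**2)) * np.sum(1.0 / np.sqrt(xi**2 + delta**2))

lo, hi = 1e-8, 20.0            # lhs is monotonically decreasing in delta
for _ in range(60):            # bisection
    mid = 0.5 * (lo + hi)
    if lhs(mid) > 1.0:         # gap still too small
        lo = mid
    else:
        hi = mid
print("Delta =", 0.5 * (lo + hi))
```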
The current operator and its continuity equation are summarized in Appendix B. Suppose that \(\phi_{i}\coloneqq\langle\hat{c}_{\mathbf{x}+\mathbf{e}_{i}}\hat{c}_{\mathbf{x}}\rangle\) is nonzero and position independent. The constant [...] thermal Hall conductance [10]. Hence, the correlated hopping term (or some equivalent interaction) is necessary for the desired nontrivial topology. Assuming this type of order, we find Eq. (2) with \(\Delta_{\mathbf{k}}(\mathbf{A})=(\sin k_{1}-i\sin k_{2})U(\mathbf{A})\phi\) and \(U(\mathbf{A})=U_{0}+2J\cos A_{1}\cos A_{2}\). The self-consistent equation is modified to \[\frac{U_{\rm tot}}{2V}\sum_{\mathbf{k}}\frac{1}{E_{\mathbf{k}}}\frac{\sin^{2}k_{1}+\sin^{2}k_{2}}{2}=1. \tag{11}\] We stress that \(\mathbf{A}\) _does not_ enter these quantities through the replacement of \(\mathbf{k}\) with \(\mathbf{k}+\mathbf{A}\), even though it _is_ introduced by the minimal coupling in the U(1) symmetric Hamiltonian.

_Electromagnetic response.--_ As we have seen, when the starting Hamiltonian with U(1) symmetry contains interactions not solely written in terms of density operators, the superconducting gap \(\Delta_{\mathbf{k}}\) and the constant \(C\) in the BdG Hamiltonian generally depend on the gauge field \(\mathbf{A}\). Such dependence gives rise to additional terms in the current operator and the kinetic energy operator. For example, the current operator with a finite momentum \(\mathbf{q}\) is given by \[\hat{J}^{\rm BdG}_{i,\mathbf{q}}=\sum_{\mathbf{x}}e^{-i\mathbf{q}\cdot(\mathbf{x}+\frac{1}{2}\mathbf{e}_{i})}\hat{j}_{\mathbf{x},\mathbf{x}+\mathbf{e}_{i}}=\sum_{\mathbf{k}}\hat{\Psi}^{\dagger}_{\mathbf{k}}\gamma_{i,\mathbf{k}+\mathbf{q},\mathbf{k}}\hat{\Psi}_{\mathbf{k}+\mathbf{q}}+\delta_{\mathbf{q},\mathbf{0}}\partial_{A_{i}}C(\mathbf{A})|_{\mathbf{A}=\mathbf{0}} \tag{12}\] with \(\gamma_{i,\mathbf{k}+\mathbf{q},\mathbf{k}}\coloneqq v_{i,\mathbf{k}+\frac{\mathbf{q}}{2}}\sigma_{0}-2J\phi\sin\frac{q_{i}}{2}i\sigma_{2}\), which contains the contribution from the pair-hopping interaction in the off-diagonal part, in addition to the standard band velocity term \(v_{i,\mathbf{k}}\coloneqq\partial_{k_{i}}\xi_{\mathbf{k}}\) in the diagonal part. On the other hand, the charge density operator \[\hat{J}^{\rm BdG}_{0,\mathbf{q}}=\sum_{\mathbf{x}}e^{-i\mathbf{q}\cdot\mathbf{x}}\hat{n}_{\mathbf{x}}=\sum_{\mathbf{k}}\hat{\Psi}^{\dagger}_{\mathbf{k}}\gamma_{0,\mathbf{k}+\mathbf{q},\mathbf{k}}\hat{\Psi}_{\mathbf{k}+\mathbf{q}}+\delta_{\mathbf{q},\mathbf{0}}V \tag{13}\] with \(\gamma_{0,\mathbf{k}+\mathbf{q},\mathbf{k}}\coloneqq\sigma_{3}\) is not affected by \(J\). In the remainder of this work, we discuss the physical consequences of these additional terms using the spinful electron model in Eq. (5). Without loss of generality, we set \(\Delta\) to be real using the U(1) symmetry.

_Meissner Weight.--_ Let us consider the linear response kernel of the current operator toward the gauge field with a frequency \(\omega=q_{0}\) and a momentum \(\mathbf{q}\): \[j_{\mu}(q)=\sum_{\nu=0}^{d}\mathcal{K}_{\mu\nu}(q)A_{\nu}(q). \tag{14}\] Here and hereafter we write \(q=(q_{0},\mathbf{q})\) for short.
According to the linear response theory, the response kernel is given by \[\mathcal{K}^{\rm BdG}_{\mu\nu}(q)=\mathcal{M}^{\rm BdG}_{\mu\nu}+\mathcal{R}^{\rm BdG}_{\mu\nu}(q), \tag{15}\] where \[\mathcal{R}^{\rm BdG}_{\mu\nu}(q)\coloneqq-\frac{i}{V}\int dte^{i\omega t}\theta(t)\langle[\hat{J}^{\rm BdG}_{\mu,\mathbf{q}}(t),\hat{J}^{\rm BdG}_{\nu,-\mathbf{q}}(0)]\rangle \tag{16}\] is the retarded current correlation function in the mean-field approximation and \[\mathcal{M}^{\rm BdG}_{ij}\coloneqq\frac{1}{V}\langle\hat{K}^{\rm BdG}_{ij}\rangle \tag{17}\] is the diamagnetic contribution giving the Meissner weight. We find \[\mathcal{M}^{\rm BdG}_{ij}=\frac{1}{V}\sum_{\mathbf{k}}\langle\hat{n}_{\mathbf{k}}\rangle\partial_{k_{i}}\partial_{k_{j}}\xi_{\mathbf{k}}-|\phi|^{2}\partial_{A_{i}}\partial_{A_{j}}U(\mathbf{A})|_{\mathbf{A}=\mathbf{0}} \tag{18}\] and \(\mathcal{M}^{\rm BdG}_{\mu\nu}=0\) if \(\mu\) or \(\nu\) is \(0\). This result also applies to the spinless model and hence generalizes the result in Refs. [8; 9]. In addition to the usual term associated with the electron density \(\langle\hat{n}_{\mathbf{k}}\rangle=\alpha(1-\xi_{\mathbf{k}}/E_{\mathbf{k}})\) and the band curvature \(\partial_{k_{i}}\partial_{k_{j}}\xi_{\mathbf{k}}\), the second term arises from the \(\mathbf{A}\) dependence of \(\Delta_{\mathbf{k}}\) and \(C\). If the standard form in Eq. (2) were assumed instead, this term would be missed. This effect might be measured through the penetration depth \(\lambda\) of an external magnetic field.

_Response kernel with vertex correction.--_ As is well-known, the kernel \(\mathcal{K}^{\rm BdG}_{\mu\nu}(q)\) in Eq. (15) in the mean-field treatment does not respect the gauge invariance. That is, the induced current \(\sum_{\nu=0}^{d}\mathcal{K}^{\rm BdG}_{\mu\nu}(q)A_{\nu}(q)\) is not invariant under the gauge transformation \(A_{0}(q)\to A_{0}(q)-\omega\chi(q)\) and \(A_{i}(q)\to A_{i}(q)+2\sin\frac{q_{i}}{2}\chi(q)\). To obtain the gauge-invariant optical conductivity \(\sigma_{ij}(q)\) towards the electric field \(E_{j}(q)=-i\omega A_{j}(q)\), let us take into account the vertex correction following the steps in Refs. [1; 2]. First, we define the vertex function \(\Gamma_{\mu}\) by \[\langle\mathcal{T}\hat{n}_{\mathbf{z}}(t_{z})\hat{\Psi}_{\mathbf{x}}(t_{x})\hat{\Psi}^{\dagger}_{\mathbf{y}}(t_{y})\rangle=-\sum_{\mathbf{x}^{\prime},\mathbf{y}^{\prime}}\int dt^{\prime}_{x}dt^{\prime}_{y}G(x,x^{\prime})\Gamma_{0}(x^{\prime},y^{\prime},\mathbf{z},t_{z})G(y^{\prime},y) \tag{19}\] and \[\langle\mathcal{T}\hat{j}_{\mathbf{z},\mathbf{z}+\mathbf{e}_{i}}(t_{z})\hat{\Psi}_{\mathbf{x}}(t_{x})\hat{\Psi}^{\dagger}_{\mathbf{y}}(t_{y})\rangle=-\sum_{\mathbf{x}^{\prime},\mathbf{y}^{\prime}}\int dt^{\prime}_{x}dt^{\prime}_{y}G(x,x^{\prime})\Gamma_{i}(x^{\prime},y^{\prime},\mathbf{z}+\tfrac{1}{2}\mathbf{e}_{i},t_{z})G(y^{\prime},y). \tag{20}\] Here, \[G_{q}\coloneqq\int dte^{i\omega t}(-i)\langle\mathcal{T}\hat{\Psi}_{\mathbf{q}}(t)\hat{\Psi}^{\dagger}_{\mathbf{q}}\rangle=\frac{\omega\sigma_{0}+\xi_{\mathbf{q}}\sigma_{3}+\Delta\sigma_{1}}{\omega^{2}-E_{\mathbf{q}}^{2}+i\delta} \tag{21}\] is the time-ordered Green function. The continuity equation for the current operator [1; 2] implies the generalized Ward identity (GWI): \[\sum_{i=1}^{d}2\sin\tfrac{q_{i}}{2}\Gamma_{i,k+q,k}-\omega\Gamma_{0,k+q,k}=\sigma_{3}G_{k}^{-1}-G_{k+q}^{-1}\sigma_{3}. \tag{22}\] For the reader's convenience, we review the derivation of Refs. [1; 2] in Appendix A.
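Before assembling the vertex-corrected kernel, note that the modified Meissner weight, Eq. (18), is straightforward to evaluate numerically for the spinful \(s\)-wave model. The sketch below uses our own illustrative parameter values; it only spells out that \(U(\mathbf{A})=U_{0}+J\sum_{i}\cos(2A_{i})\) gives \(\partial_{A_{i}}\partial_{A_{j}}U(\mathbf{A})|_{\mathbf{A}=\mathbf{0}}=-4J\delta_{ij}\), so the pair-hopping interaction adds \(+4J|\phi|^{2}\) to \(\mathcal{M}^{\rm BdG}_{ii}\).

```python
import numpy as np

# Minimal sketch (illustrative parameter values): evaluate the Meissner
# weight of Eq. (18) for the spinful s-wave model on a d = 2 lattice.
# <n_k> = 1 - xi_k/E_k (alpha = 1),  d^2 xi / dk_x^2 = 2 t cos k_x,  and
# d^2 U/dA_x^2 |_{A=0} = -4 J, so the J-term contributes +4 J |phi|^2.
t, mu, Delta, U_tot, J, L = 1.0, -0.5, 0.4, 3.0, 0.5, 200

k = 2.0 * np.pi * np.arange(L) / L
kx, ky = np.meshgrid(k, k)
xi = -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu
E = np.sqrt(xi**2 + Delta**2)

n_k = 1.0 - xi / E                    # electron density <n_k>
phi2 = (Delta / U_tot)**2             # |phi|^2 from Delta = -U_tot * phi

M_xx_standard = np.mean(n_k * 2.0 * t * np.cos(kx))  # density term only
M_xx = M_xx_standard + 4.0 * J * phi2                 # full Eq. (18)
print(M_xx_standard, M_xx)   # the J-term is what Eq. (2) would miss
```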
The vertex function can be obtained by solving the Bethe-Salpeter equation \[\Gamma_{\mu,k+q,k}=\gamma^{(0)}_{\mu,\mathbf{k}+\mathbf{q},\mathbf{k}}+\mathcal{C}_{\mu,q}, \tag{23}\] \[\mathcal{C}_{\mu,q}\coloneqq U_{\rm tot}\int\frac{dk_{0}}{2\pi i}\frac{1}{V}\sum_{\mathbf{k}}\sigma_{3}G_{k+q}\Gamma_{\mu,k+q,k}G_{k}\sigma_{3} \tag{24}\] with \(\gamma^{(0)}_{0,\mathbf{k}+\mathbf{q},\mathbf{k}}=\sigma_{3}\) and \(\gamma^{(0)}_{i,\mathbf{k}+\mathbf{q},\mathbf{k}}\coloneqq v_{i,\mathbf{k}+\frac{\mathbf{q}}{2}}\sigma_{0}\). Once \(\Gamma_{\mu}\) is obtained, the time-ordered current correlation function \(\mathcal{P}_{\mu\nu}(q)\) can be expressed as \[\mathcal{P}_{\mu\nu}(q)\coloneqq-\frac{i}{V}\int dte^{i\omega t}\langle\mathcal{T}\hat{J}_{\mu,\mathbf{q}}(t)\hat{J}_{\nu,-\mathbf{q}}(0)\rangle=\int\frac{dk_{0}}{2\pi i}\frac{1}{V}\sum_{\mathbf{k}}\text{tr}[\gamma_{\mu,\mathbf{k}+\mathbf{q},\mathbf{k}}G_{k+q}\Gamma_{\nu,k+q,k}G_{k}]e^{ik_{0}\delta}. \tag{25}\] Then the retarded correlation function \(\mathcal{R}_{\mu\nu}(q)\) is given by \(\text{Re}\mathcal{R}_{\mu\nu}(q)=\text{Re}\mathcal{P}_{\mu\nu}(q)\) and \(\text{Im}\mathcal{R}_{\mu\nu}(q)=\text{sgn}\,\omega\,\text{Im}\mathcal{P}_{\mu\nu}(q)\) (see Appendix C). The optical conductivity \(\sigma_{ij}(q)\) including the vertex correction is then given by \[\sigma_{ij}(q)=\frac{i}{\omega}\mathcal{K}_{ij}(q)=\frac{i}{\omega}[\mathcal{M}^{\rm BdG}_{ij}+\mathcal{R}_{ij}(q)]. \tag{26}\] Using the GWI (22) and the self-consistent equation (8), we have \[\sum_{j=1}^{d}\mathcal{P}_{ij}(q)(2\sin\frac{q_{j}}{2})-\mathcal{P}_{i0}(q)\omega=\int\frac{dk_{0}}{2\pi i}\frac{1}{V}\sum_{\mathbf{k}}\text{tr}[(\sigma_{3}\gamma_{i,\mathbf{k}-\mathbf{q},\mathbf{k}}-\gamma_{i,\mathbf{k},\mathbf{k}+\mathbf{q}}\sigma_{3})G_{k}]=-\sum_{j=1}^{d}\mathcal{M}^{\rm BdG}_{ij}(2\sin\frac{q_{j}}{2}), \tag{27}\] implying that the gauge invariance is restored: \(\sum_{j=1}^{d}\mathcal{K}_{ij}(q)(2\sin\frac{q_{j}}{2})-\mathcal{K}_{i0}(q)\omega=0\).

In Fig. 2, we show our numerical results for the optical conductivity \(\sigma_{ij}(q)\) in Eq. (26) for \(q_{2}=\cdots=q_{d}=0\). In single-band models with inversion and time-reversal symmetry, the optical conductivity vanishes when \(\mathbf{q}=0\) [3]. Hence, here we assume a spatially modulating field with \(q_{1}\neq 0\) and focus on the coefficient of \(q_{1}^{2}\) in the Taylor series of \(\text{Re}[\sigma_{11}(\omega,q_{1})]\) with respect to \(q_{1}\). We compare the results for the \(J=0.5U_{\rm tot}\) case (red squares) and the \(J=0\) case (blue circles) for fixed \(U_{\rm tot}\) and \(\Delta\). We found that nonzero \(J\) significantly affects the bare conductivities [(a),(d)] and the vertex corrections [(b),(e)] separately, although such differences are mostly suppressed in the sum \(\sigma_{ij}(q)\coloneqq\sigma_{ij}^{(0)}(q)+\sigma_{ij}^{\rm VC}(q)\) [(c),(f)]. In particular, the contribution from \(J\) usually decays with a smaller power of \(\omega\), as shown in the insets. However, the cancellation is
not perfect and there are still finite differences originating from nonzero \(J\).

Figure 2: The optical conductivities \(\text{Re}[\sigma_{11}(\omega,q_{1})]\) towards a nonuniform field with [(c),(f)] or without [(a),(d)] the vertex correction [(b),(e)] for \(t=1.2\), \(\mu=-2t\), \(\Delta=1\) (\(U_{\rm tot}\) is fixed by the self-consistent equation). We expand \(\text{Re}[\sigma_{11}(\omega,q_{1})]\) in the Taylor series of \(q_{1}^{2}\) and here we show the coefficient of the \(q_{1}^{2}\) term. Red squares are for \(J=0.5U_{\rm tot}\) and blue circles are for \(J=0\). Panels (a)–(c) are for \(d=3\) and (d)–(f) are for \(d=2\). The insets in panels (a)-(d) are the log-log plots of the absolute value \(|\text{Re}[\sigma_{11}(\omega,q_{1})/q_{1}^{2}]|\). Gray lines are obtained by fitting to determine the power of the decay \(\text{Re}[\sigma_{11}(\omega,q_{1})/q_{1}^{2}]\propto\omega^{-n}\).

_Conclusion.--_ In this work, we revisited the electromagnetic response of superconductors. In general, the correspondence of the microscopic models and the BdG Hamiltonians after the mean-field approximation is "many to one": in the case of the spinful electron model in Eq. (5), as far as the renormalized parameter \(U_{\rm tot}=U_{0}+Jd\) is fixed, models with different choices of the on-site Coulomb interaction \(U_{0}\) and the pair-hopping interaction \(J\) lead to the same BdG Hamiltonian. However, we found that the Meissner weight and optical conductivities are sensitive to the specific value of \(J\) (Eq. (18) and Fig. 2). This means that the response toward the U(1) gauge field has ambiguities unless the microscopic model with U(1) symmetry is provided. Our results call for caution about the standard practice of introducing the gauge field \(\mathbf{A}\) into the BdG Hamiltonian as in Eq. (2).

###### Acknowledgements.

We thank Seishiro Ono, Yohei Fuji, Kazuaki Takasan, Naoto Tsuji, and Rina Tazai for useful discussions. The work of H.W. is supported by JSPS KAKENHI Grant No. JP20H01825 and JP21H01789.
2306.09266
A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception
Intelligent Transportation Systems (ITS) allow a drastic expansion of the visibility range and decrease occlusions for autonomous driving. To obtain accurate detections, detailed labeled sensor data for training is required. Unfortunately, high-quality 3D labels of LiDAR point clouds from the infrastructure perspective of an intersection are still rare. Therefore, we provide the A9 Intersection Dataset, which consists of labeled LiDAR point clouds and synchronized camera images. Here, we recorded the sensor output from two roadside cameras and LiDARs mounted on intersection gantry bridges. The point clouds were labeled in 3D by experienced annotators. Furthermore, we provide calibration data between all sensors, which allow the projection of the 3D labels into the camera images and an accurate data fusion. Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes. With ten object classes, it has a high diversity of road users in complex driving maneuvers, such as left and right turns, overtaking, and U-turns. In experiments, we provided multiple baselines for the perception tasks. Overall, our dataset is a valuable contribution to the scientific community to perform complex 3D camera-LiDAR roadside perception tasks. Find data, code, and more information at https://a9-dataset.com.
Walter Zimmer, Christian Creß, Huu Tung Nguyen, Alois C. Knoll
2023-06-15T16:39:51Z
http://arxiv.org/abs/2306.09266v1
# A9 Intersection Dataset: All You Need for Urban 3D Camera-LiDAR Roadside Perception

###### Abstract

Intelligent Transportation Systems (ITS) allow a drastic expansion of the visibility range and decrease occlusions for autonomous driving. To obtain accurate detections, detailed labeled sensor data for training is required. Unfortunately, high-quality 3D labels of LiDAR point clouds from the infrastructure perspective of an intersection are still rare. Therefore, we provide the A9 Intersection Dataset, which consists of labeled LiDAR point clouds and synchronized camera images. Here, we recorded the sensor output from two roadside cameras and LiDARs mounted on intersection gantry bridges. The point clouds were labeled in 3D by experienced annotators. Furthermore, we provide calibration data between all sensors, which allow the projection of the 3D labels into the camera images and an accurate data fusion. Our dataset consists of 4.8k images and point clouds with more than 57.4k manually labeled 3D boxes. With ten object classes, it has a high diversity of road users in complex driving maneuvers, such as left and right turns, overtaking, and U-turns. In experiments, we provided multiple baselines for the perception tasks. Overall, our dataset is a valuable contribution to the scientific community to perform complex 3D camera-LiDAR roadside perception tasks. Find data, code, and more information at [https://a9-dataset.com](https://a9-dataset.com).

Dataset, 3D Perception, Camera, LiDAR, Intelligent Transportation Systems, Autonomous Driving

## I Introduction

The roadside deployment of high-tech sensors to detect road traffic participants offers significant added value for intelligent and autonomous driving. This technology allows the vehicle to react to events and situations that are not covered by the vehicle's internal sensor range. Thus, the advantage is the drastic expansion of the field of view and the reduction of occlusions. For this reason, we can observe a continuous increase in Intelligent Transportation Systems (ITS) worldwide. It is noticeable that cameras and increasingly LiDARs are used to create a live digital twin of road traffic [15]. To obtain accurate detections with such sensor systems, labeled sensor data is required for training. Numerous datasets in the field of intelligent and autonomous driving have already been created. Datasets like [1, 2, 4, 5] are taken from the vehicle perspective. In contrast, [3, 7, 8, 9, 16] are recorded from a very steep elevated view from a drone or a high building, so they are more suitable for trajectory prediction and tracking tasks. They are less suitable for 3D object detection because vehicles are far away and are only observed from above. Recently, a few datasets [10, 11, 12, 13, 14] have been acquired from a roadside perspective and are thus suitable for improving perception algorithms for ITS. However, some datasets have deficiencies in their labeling quality, which harm the training of the algorithms (e.g., censored image areas with filled rectangles), or they lack certain vehicle classes (e.g., missing trucks and buses), or the datasets are too small in terms of 3D box labels and attributes. According to the work mentioned, high-quality 3D box labels of LiDAR point clouds from the roadside perspective with a wide diversity of traffic participants and scenarios are still rare. Therefore, our A9 Intersection (A9-I) Dataset provides LiDAR point clouds and camera images from a road intersection.
The \(4.8\)k labeled point cloud frames, which were labeled by experts, contain complex driving maneuvers such as left and right turns, overtaking maneuvers, and U-turns. With its ten object classes, our dataset has a high variety of road users, including vulnerable road users. Furthermore, we provide synchronized camera images and the extrinsic calibration data between the LiDARs and the cameras. These matrices allow the projection of the 3D box labels to the camera images. All in all, our A9-I offers \(4.8\)k synchronized images and \(4.8\)k point clouds with \(57.4\)k manually labeled 3D box labels with track IDs. In this work, we show additional comprehensive statistics and the effectiveness of our dataset. Over and beyond, we would like to emphasize that A9-I is an extension of our previous debut, the A9 Dataset [14], which covers highway traffic scenarios. Thus, we extend the existing A9 Dataset with additional traffic scenarios at a crowded intersection and scale it up from \(15\)k 3D box labels to \(57.4\)k, including vulnerable road users. In evaluation experiments, we provide multiple baselines for the 3D perception task of 3D object detection with a monocular camera, a LiDAR sensor, and a multi-modal camera-LiDAR setup. Last but not least, we offer our dataset in OpenLABEL format under the Creative Commons License CC BY-NC-ND 4.0 so that it can be widely used by the scientific research community. In summary, our **contributions** are:

* A detailed and diverse dataset of \(4.8\)k camera images as well as \(4.8\)k labeled LiDAR point cloud frames. Thereby, we used two synchronized cameras and LiDARs, which cover an intersection from an elevated view of an ITS.
* Extrinsic calibration data between cameras and LiDARs that allow an early and late fusion of objects.
* An extensive A9-Devkit to load, transform, split, evaluate, and visualize the data.
* \(57.4\)k high-quality manually labeled 3D boxes with \(273\)k attributes for both LiDARs, resulting in \(38\)k 3D box labels after data fusion.
* Comprehensive statistics and analysis of the labels, number of points, occlusions, and tracks on the dataset, and the distribution of ten different object classes of road traffic.
* Multiple baselines for the 3D perception task of 3D object detection with a monocular camera, a LiDAR sensor, and a multi-modal camera-LiDAR setup.

## II Related Work

As part of the development in the field of autonomous driving and intelligent vehicles, the number of datasets is increasing rapidly. The most popular datasets in this field are KITTI [1], nuScenes [4], Cityscapes [2], and the Waymo Open dataset [5]. Except for Cityscapes, these datasets provide labeled camera images and LiDAR point clouds. These datasets are used to train perception algorithms. Unfortunately, these valuable datasets only contain data from a vehicle's perspective. Therefore, this ego perspective is suboptimal for transfer learning: networks trained on a dataset from the vehicle's perspective do not perform well on data obtained, e.g., from a roadside perspective. Another sensor perspective is, for example, the elevated view. With this, the scene can ideally be viewed without occlusions.
| Name | Year | Perspective | # Point Clouds | # Images | # Classes | # 3D Labels | # Tracks | License |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| KITTI [1] | 2013 | V | 15.4k | 15k | 8 | 80k | - | CC BY-NC-SA 3.0 |
| Cityscapes [2] | 2016 | V | - | 25k | **30** | - | - | Non-Commercial Use |
| highD [3] | 2018 | SE | - | 1.4M | 2 | - | 20k | Custom |
| nuScenes [4] | 2020 | V | **400k** | 1.4M | 23 | 1.4M | - | CC BY-NC-SA 4.0 |
| Waymo [5] | 2020 | V | 200k | 1M | 4 | **12.6M** | **7.6M** | Custom |
| inD [6] | 2020 | SE | - | 0.9M | 5 | - | 11.5k | Custom |
| rounD [7] | 2020 | SE | - | 0.5M | 8 | - | 13.7k | Custom |
| exiD [8] | 2022 | SE | - | 1.4M | 3 | - | 69k | Custom |
| MONA [9] | 2022 | SE | - | **11.7M** | 2 | - | 702k | Custom |
| DAIR-V2X\* [10] | 2022 | V, R | 71k | 71k | 10 | 1.2M | - | Non-Commercial Use |
| IPS300+\*\* [11] | 2022 | R | 14.1k | 14.1k | 7 | 4.5M | - | CC BY-NC-SA 4.0 |
| Rope3D [12] | 2022 | R | - | 50k | 13 | 1.5M | - | Non-Commercial Use |
| LUMPI [13] | 2022 | R | 90k | 200k | 6 | - | - | CC BY-NC 3.0 |
| A9 | 2022 | R | 5.3k | 5.4k | 10 | 71.9k | 506 | CC BY-NC-ND 4.0 |
| - A9 [14] | 2022 | R | 0.5k | 0.6k | 9 | 14.5k | - | CC BY-NC-ND 4.0 |
| - **A9-I (Ours)** | 2023 | R | 4.8k | 4.8k | 10 | 57.4k | 506 | CC BY-NC-ND 4.0 |

TABLE I: Comparison of popular autonomous driving datasets. Here we compare the perspective, the number of frames, the number of classes, the number of labeled objects, the number of tracks, and the license terms. The datasets cover different view perspectives: the vehicle view (V), the steep elevated view (SE), and the roadside view (R).

\* 40% of the data is from the roadside perspective, 60% from the vehicle perspective.

\*\* Trucks and buses are sparsely represented, which can lead to a limited perception performance.

To achieve a high level of perception for this elevated view, training with appropriate datasets is necessary. The focus of the drone dataset family highD [3], inD [16], rounD [7], and exiD [8] is the trajectory of road users in the city as well as in the freeway area. The datasets were recorded by a drone and provide a vast top-down view of the scene. The main limitation is the limited recording time in challenging weather conditions. To overcome this drone-related issue, the MONA [9] dataset provides data that was created with a camera mounted on a building. On the one hand, these datasets are ideal for trajectory research, because they were recorded from a very steep angle to the road. On the other hand, they are less suitable for 3D object detection, because of the missing 3D dimensions. A dataset which contains data from an elevated view of an ITS with an angle that is not too steep is DAIR-V2X [10]. The main focus of DAIR-V2X is the support of 3D object detection tasks. It consists of \(71\)k labeled camera images and LiDAR point clouds, \(40\%\) of which are from a roadside infrastructure. For this purpose, the dataset covers city roads, highways, and intersections in different weather and lighting conditions. Unfortunately, no exact statistics for this variation or exact sensor specifications are available. As a last point, the quality of the data is further compromised by filled rectangles over privacy-sensitive image areas (e.g., license plates), which can lead to problems during training for object detection.
Another dataset from the roadside infrastructure perspective with a camera and LiDAR combination is IPS300+ [11]. The dataset includes \(14\)k data frames, with an average of \(319\) labels per frame. They used \(1\) LiDAR and \(2\) cameras as a stereo setup with a lens focal length of \(4.57\) mm. The dataset was recorded several times a day at one intersection and provides seven different object categories: car, cyclist, pedestrian, tricycle, bus, truck, and engineer car. According to the statistics, the classes of trucks and buses are unfortunately underrepresented, so the recognition of these classes will probably be poor. The Roadside Perception 3D dataset (Rope3D) [12] provides \(50\)k images including 3D box labels from a monocular infrastructure camera at an intersection. The missing 3D information of the detected objects in the 2D camera image was added with a LiDAR, which was mounted on a vehicle. In total, the images contain over \(1.5\)M labeled 3D boxes and \(670\)k 2D bounding boxes, in various scenes at different times (daytime, night, dawn/dusk), different weather conditions (sunny, cloudy, rainy), and different traffic densities. Furthermore, the objects are divided into \(13\) classes with several attributes. Another roadside infrastructure dataset is LUMPI [13], which was recorded at an intersection in Hanover, Germany. For this purpose, a total of \(200\)k images as well as \(90\)k point clouds were acquired. Three different cameras and five different LiDARs provide several fields of view on the scene. Here, different sensor configurations were used for the recordings. The sensor perspective is from a vehicle as well as from the roadside infrastructure. Unfortunately, the number of labels and other detailed information about the labeled objects was not provided. A further contribution in the field of roadside infrastructure data for training perception algorithms is the A9-Dataset [14]. It is our preliminary work and includes \(642\) camera images and \(456\) LiDAR point clouds. In total, this dataset consists of about 1k labeled frames. The charm is that most camera images contain the same traffic scene from four different viewpoints. Here, we labeled 14k 3D boxes. Moreover, the frames contain \(13.17\) 3D box labels on average. For this purpose, we supported the common classes of car, trailer, truck, van, pedestrian, bicycle, bus, and motorcycle in the domain of a highway. The main limitations in our previous work were, firstly, the small number of labeled LiDAR point clouds and, secondly, that we only had a simple highway scenario. For this reason, we present an extension to our dataset that addresses these weaknesses.

## III A9 Intersection Dataset

Fig. 3: Two cameras and two LiDARs are used to create the A9 Intersection Dataset. The sensors are mounted on a sign gantry and thus record the central intersection area. Then, the Data Fusion Unit (DFU) processes the data streams, which results in a fused digital twin of the road traffic. Furthermore, the coordinate systems of the individual sensors and the road coordinate system, which was defined at the northern stem of the bridge, can be taken from the figure.

In this section, we present the A9 Intersection Dataset. It is an extension of our previous work, the A9-Dataset [14], which covers the highway domain. We describe the sensor setup at our intersection, the data selection and annotation process, and the data structure used.
Last, this section contains comprehensive statistics and an introduction to our A9-Devkit.

### _Sensor Setup_

The A9-I Dataset is recorded on the ITS testbed, which was established as part of the Providentia++ project [17, 18]. Here, roadside sensors are set up on a gantry located at the intersection of Schleissheimer Strasse (B471) and Zeppelinstrasse in Garching near Munich, Germany. For this dataset, we use two cameras and two LiDARs with the following specifications:

* **Camera:** Basler ace acA1920-50gc, \(1920\times 1200\), Sony IMX174, global shutter, color, GigE with 8 mm lenses.
* **LiDAR:** Ouster OS1-64 (gen. 2), 64 vertical layers, \(360^{\circ}\) FOV, below-horizon configuration, \(120\) m range, \(1.5-10\) cm accuracy.

The sensors are mounted side by side on the gantry, as shown in Figure 3. Here, the sensors detect the traffic in the center of the intersection from a height of \(7\) m. It is worth mentioning that the cameras and LiDARs are spatiotemporally calibrated. For the temporal calibration, we synchronized the sensors with a Network Time Protocol (NTP) time server; for the extrinsic calibration between the cameras and the LiDARs, we used a targetless extrinsic calibration method, which was inspired by [19].

### _Data Selection and Annotation_

We select the data based on interesting and challenging traffic scenarios like left, right, and U-turns, overtaking maneuvers, tail-gate events, and lane merge scenarios. Furthermore, we take highly diverse and dense traffic situations into account, so that we get an average of over \(15\) road users per frame. To cover diverse weather and light conditions in our A9-I Dataset, it consists of \(25\%\) nighttime data including heavy rain, and \(75\%\) daytime data with sunny and cloudy weather conditions. This enables a good performance of the detector even in challenging weather conditions. We record camera data at \(25\) Hz and LiDAR data at \(10\) Hz into _rosbag_ files. Then we extract the raw data and synchronize the camera and LiDAR frames at \(10\) Hz based on timestamps. Based on the raw data of the LiDAR point clouds, 3D box labels were created by experts. As all four sensors are cross-calibrated, we can also use these 3D box labels from the point cloud to evaluate monocular 3D object detection algorithms. Since the labeling quality of the test sequence is very important, it was reviewed multiple times by us. Here, we improve the labeling quality by using our preliminary _proAnno_ labeling toolbox [20].

### _Data Structure_

Our dataset is divided into subsets _S1_ through _S4_, which contain continuous camera and labeled LiDAR recordings. Sets _S1_ and _S2_ are each 30 seconds long and demonstrate a daytime scenario at dusk. A 120-second long sequence during daytime and sunshine can be found in sequence _S3_. Sequence _S4_ contains a 30-second data recording at night and in heavy rain. All labeled data is in _OpenLABEL_ format [21]. _OpenLABEL_ files are stored in _json_ format. One file contains all labeled objects of a single frame with 32-bit long unique identifiers (UUIDs), the position, dimensions, rotation, and attributes like the occlusion level, the body color, the number of trailers, the specific object type, and the number of 3D points. Furthermore, a frame contains properties like the exact epoch timestamp, the weather type, the time of day, and the corresponding image and point cloud file names.
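As a quick illustration of this structure, the following Python sketch reads one label file and prints every 3D box. The file name and the exact field layout (nesting keys, the cuboid value order) are assumptions based on the generic OpenLABEL schema rather than verified A9 specifics; the A9-Devkit described below provides the authoritative loader.

```python
import json

# Minimal sketch: iterate over the labeled 3D boxes of one OpenLABEL file.
# All field names are assumptions based on the generic OpenLABEL schema and
# are not verified against the A9 release; use the A9-Devkit in practice.
with open("frame.json") as f:                      # hypothetical file name
    labels = json.load(f)

for frame_id, frame in labels["openlabel"]["frames"].items():
    for uuid, obj in frame.get("objects", {}).items():
        for cuboid in obj["object_data"]["cuboid"]:
            # cuboid["val"] is assumed to hold the position, rotation,
            # and dimensions of one 3D box label
            print(frame_id, uuid, cuboid["val"])
```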
In _OpenLABEL_ the label files also contain the calibration data: intrinsic and extrinsic information. We suggest a split into training (\(80\%\)), validation (\(10\%\)), and test set (\(10\%\)). The test set is made up of a continuous sequence with track IDs, as well as randomly sampled frames from four different scenarios and daytimes. We sample frames using stratified sampling to create a balanced dataset among sensor types, weather scenarios, and day times. To prevent overfitting, we do not publish our test set labels.

### _Data Statistics_

In total, we provide \(4,800\) labeled LiDAR point cloud frames sampled from four different sequences. Here, \(57,406\) 3D objects (\(506\) unique objects) were annotated with \(273,861\) object attributes. After fusing the labels from both LiDARs we get \(38,045\) registered 3D objects (\(482\) unique objects) with \(171,045\) attributes. The following statistics refer to the fusion result with the complete dataset inclusive of training, validation, and test set. In Table II, we can see an overview of the registered 3D box labels. A deep dive into the distribution of the labels of our A9-I Dataset is provided in Figure 4. Here, the distribution of the ten object classes is shown. The vehicle class _CAR_ is dominant, followed by the classes _TRUCK_, _TRAILER_, _VAN_, and _PEDESTRIAN_, which occur in roughly the same order of magnitude. The classes _MOTORCYCLE_, _BUS_, _BICYCLE_, _EMERGENCY VEHICLES_, and _OTHER_ are present in a slightly smaller number. Since we have annotated the occlusion level for each 3D box label, we come to the result that \(78.2\%\) were classified as _NOT_OCCLUDED_, \(16.1\%\) as _PARTIALLY_OCCLUDED_, \(0.8\%\) as _MOSTLY_OCCLUDED_, and \(4.9\%\) as _UNKNOWN_ (not labeled). It can also be seen that most of the labeled frames contain between \(15\) and \(20\) labeled 3D boxes. In 100 frames, there are even between \(45\) and \(50\) labeled 3D objects. Furthermore, the A9-I includes significantly more variations in the maneuvers of the road users at the intersection, as compared to our previous work [14]. We can see three peaks where vehicles are moving in the south, north, and east directions of the intersection. Vehicles moving between south and north are indicated by the peaks around \(90\) and \(270\) degrees. The smaller peaks adjacent to the main peaks correspond to turning maneuvers, such as right or left turns. The labels are based on the LiDAR point clouds. In Figure 5, we performed a detailed analysis of the number of points per labeled class, of the distances of the points per class, and of the overall distribution of the points.

Fig. 4: The A9-I Dataset consists of \(38,045\) 3D box labels after data fusion, where the _CAR_ class is dominant and \(78.2\%\) of the objects are occlusion-free. The 3D box labels show different rotations, which is due to the road traffic in an intersection.

Fig. 5: As expected, there is a causality between the dimensions of road users and the number of points. Most classes have the highest number of points at a distance between \(10\) to \(30\) meters. On average, each 3D box label contains \(73.52\) points.

Fig. 6: The 3D box label of each traffic participant has always received the same tracking ID in successive frames in a sequence. The average track length is \(24.18\) m. In total, our 3D box labels have a track length of \(12.23\) km.
Firstly, as expected, the correlation between the average number of points and the average size of the class can be observed. Here, the _TRAILER_ class, which has the highest height, also has the highest average number of points, followed by the _BUS_ class, which is the longest. Conversely, the _PEDESTRIAN_ class, which has the smallest size, has the lowest average number of points. Second, in general, due to the elevated position of the LiDARs, the field of view only starts to have an effect from about \(10\) m onwards. Most classes have the highest number of points at a distance between \(10\) m and \(30\) m. Interestingly, the class _TRAILER_ has the highest average number of points at a distance between \(15\) m and \(20\) m. With increasing distance, the average number of points naturally declines. Lastly, all 3D box labels have a total of \(2,797,112\) points. According to the distribution of the number of points per 3D box label, most of the boxes have a maximum of about \(50\) points. However, the 3D box labels have on average \(73.52\) points per object.

In addition to the statistics about the labels and the underlying point clouds, we also analysed the calculated tracks, see Figure 6. We were able to determine these trivially since the same tracking ID was selected for each consecutive frame when marking the 3D box labels. The average track length in our A9-I Dataset is \(24.18\) m. Here, the class _BUS_ is very dominant with an average track length of \(75\) m. The reason for this is that, firstly, the buses are very visible and, secondly, they completely cross the intersection. All in all, the full dataset contains \(506\) unique objects (3D box labels) with a total track length of \(12.23\) km and a maximum track length of \(162.87\) m. Thus, our A9 Intersection Dataset can also be used to handle issues regarding tracking that are addressed by [22].

### _A9-Devkit_

To work with our A9-I Dataset, we provide the A9 Development Kit: [https://github.com/providentia-project/a9-dev-kit](https://github.com/providentia-project/a9-dev-kit). It contains a dataset loader to load images, point clouds, labels, and calibration data. Furthermore, we provide a converter from _OpenLABEL_ to multiple different dataset formats like _KITTI_, _COCO_, and _YOLO_, and the other way round. We follow the _json_-based _OpenLABEL_ standard [21] from the ASAM organization for the label structure. Some pre-processing scripts transform and filter the raw point cloud _.pcd ASCII_ data into binary data to reduce the file size and make it compatible with point cloud loaders. In addition, a point cloud registration module can merge multiple point clouds to increase the point density. Finally, we provide a data visualization module to project the point clouds and labels into the camera images.

## IV Evaluation

In our study, we conducted a comparative analysis of monocular camera and LiDAR 3D object detection with early and late fusion. In our first evaluation experiment, we used our _MonoDet3D_ [23] 3D object detector that takes camera images as input. It transforms the 2D instance masks into 3D bottom contours by using extrinsic calibration data. Our augmented _L-Shape-Fitting_ algorithm extracts the dimensions and calculates the rotation for each object. In our second experiment, we used _PointPillars_ [24] and trained the model from scratch on all classes for our camera fields of view _Camera_south1_, _Camera_south2_, and _full_.
In the last experiment, we evaluate our multi-modal _InfraDet3D_ [23] detector, which incorporates a late fusion approach, leveraging the _Hungarian_ algorithm to establish correspondences between detections obtained from the _MonoDet3D_ and _PointPillars_ baselines. For all these experiments, we provide post-processing scripts in our A9-Devkit for early data fusion and for cropping the point cloud labels to fit the mentioned field of view. We evaluated each detector on three difficulty levels _Easy_, _Moderate_, and _Hard_, see Table III. The _Hard_ category contains objects with a distance over \(50\) m, objects that are mostly occluded, or objects that have fewer than 20 points within the 3D box. Partially occluded objects with a distance of \(40\) to \(50\) m, and 20 to 50 points, are part of the _Moderate_ category. Lastly, the _Easy_ category contains objects that are not occluded, are less than \(40\) m away, and contain more than 50 points. As a quantitative metric, we used the mean Average Precision (mAP) to evaluate the performance. The _overall_ mAP is the average of _Easy_, _Moderate_, and _Hard_. The advantage of using a monocular setup is a better detection of small objects such as pedestrians. On the other side, a LiDAR detector can detect objects during nighttime. The combination of LiDAR and the camera through late fusion techniques can significantly enhance the overall performance. In this work, we were able to confirm this assumption in our evaluation.

| Class | #Labels | ∅ Length | ∅ Width | ∅ Height | ∅ Points |
| --- | --- | --- | --- | --- | --- |
| Car | **22,773** | 4.27 | 1.91 | 1.59 | 34.03 |
| Truck | 2,704 | 3.11 | 2.90 | 3.43 | 116.87 |
| Trailer | 3,177 | 10.19 | **3.12** | **3.65** | **328.36** |
| Van | 4,353 | 6.35 | 2.52 | 2.47 | 86.11 |
| Motorcycle | 734 | 1.90 | 0.83 | 1.60 | 21.23 |
| Bus | 908 | **12.65** | 2.95 | 3.27 | 222.36 |
| Pedestrian | 2,507 | 0.80 | 0.73 | 1.72 | 14.98 |
| Bicycle | 663 | 1.57 | 0.74 | 1.72 | 20.95 |
| Emergency Vehicle | 142 | 6.72 | 2.35 | 2.35 | 58.95 |
| Other | 84 | 5.28 | 1.92 | 1.90 | 128.17 |
| Total | 38,045 | - | - | - | 103.20 |

TABLE II: The total number of labels, average dimensions in meters, and the average number of 3D LiDAR points among all classes.
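The difficulty assignment and the overall score described above reduce to a few lines of code. The sketch below is our reading of the stated rules, not the official A9-Devkit implementation; the occlusion strings reuse the attribute values from the label statistics above.

```python
# Minimal sketch: our reading of the stated difficulty rules (this is not
# the official A9-Devkit implementation). Occlusion strings reuse the
# attribute values from the label statistics above.
def difficulty(distance_m, occlusion, num_points):
    if distance_m > 50 or occlusion == "MOSTLY_OCCLUDED" or num_points < 20:
        return "Hard"
    if occlusion == "NOT_OCCLUDED" and distance_m < 40 and num_points > 50:
        return "Easy"
    return "Moderate"   # e.g. partially occluded, 40-50 m, 20-50 points

def overall_map(easy, moderate, hard):
    # the 'overall' mAP is the plain average over the difficulty levels
    return (easy + moderate + hard) / 3.0

print(difficulty(35.0, "NOT_OCCLUDED", 120))   # -> Easy
print(overall_map(76.19, 34.58, 30.00))        # ~46.92, cf. Table III
```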
As all sensors were calibrated to each other, we can use the 3D bounding box point cloud labels to perform Monocular 3D object \begin{table} \begin{tabular}{l l c c c c} \hline **FOV** & **Model** & **Sensor Modality** & \multicolumn{3}{c}{\(mAP_{3D}\)} \\ & & Easy & Mod. & Hard & Overall \\ \hline Camera\_S1 & MonoDet3D [23] & Camera\_S1 & 43.27 & 13.28 & 2.16 & 19.57 \\ Camera\_S1 & PointPillars\({}^{*}\)[24] & LiDAR\_N & **76.19** & 34.58 & 30.00 & 46.93 \\ Camera\_S1 & PointPillars\({}^{*}\)[24] & LiDAR\_S & 46.35 & 41.05 & 24.16 & 37.18 \\ Camera\_S1 & PointPillars\({}^{*}\)[24] & EF(LiDAR\_N + LiDAR\_S) & 275.81 & **47.66** & **42.16** & **55.21** \\ Camera\_S1 & InfraDet3D [23] & LF(Camera\_S1 + EF(LiDAR\_N + LiDAR\_S)) & 67.08 & 31.38 & 35.17 & 44.55 \\ \hline Camera\_S2 & MonoDet3D [23] & Camera\_S2 & 16.82 & 27.87 & 26.67 & 23.78 \\ Camera\_S2 & PointPillars\({}^{*}\)[24] & LiDAR\_N & 45.26 & 27.26 & 26.24 & 32.92 \\ Camera\_S2 & PointPillars\({}^{*}\)[24] & LiDAR\_S & 26.27 & 38.24 & 13.16 & 25.89 \\ Camera\_S2 & PointPillars\({}^{*}\)[24] & EF(LiDAR\_N + LiDAR\_S) & 38.92 & **46.60** & **43.86** & **43.13** \\ Camera\_S2 & InfraDet3D [23] & LF(Camera\_S2 + EF(LiDAR\_N + LiDAR\_S)) & **58.38** & 19.73 & 33.08 & 37.06 \\ \hline Camera\_full & MonoDet3D [23] & LF(Camera\_S1 + Camera\_S2) & 19.05 & 24.12 & 25.55 & 22.91 \\ Camera\_full & PointPillars\({}^{*}\)[24] & LiDAR\_N & **76.04** & 26.03 & 20.60 & 40.89 \\ Camera\_full & PointPillars\({}^{*}\)[24] & LiDAR\_S & 38.82 & 32.83 & 10.93 & 27.53 \\ Camera\_full & PointPillars\({}^{*}\)[24] & EF(LiDAR\_N + LiDAR\_S) & 70.53 & **44.20** & **39.04** & **51.25** \\ Camera\_full & InfraDet3D [23] & LF(LF(Camera\_S1+Camera\_S2) + LF(LiDAR\_N+LiDAR\_S)) & 47.27 & 26.15 & 19.71 & 31.04 \\ Camera\_full & InfraDet3D [23] & LF(LF(Camera\_S1+Camera\_S2) + EF(LiDAR\_N+LiDAR\_S)) & 64.30 & 23.83 & 26.05 & 38.06 \\ \hline \multicolumn{6}{l}{\({}^{*}\)PointPillars inference score threshold is set to 0.3.} \\ \end{tabular} \end{table} TABLE III: Evaluation results on the A9-I Dataset test set (N=North, S=South, EF=Early Fusion, LF=Late Fusion). Fig. 7: Qualitative results on the test sequence of the three baselines: MonoDet3D (camera-only), PointPillars (LiDAR-only) and InfraDet3D (fusion) during the day (top row) and the night time (bottom row). Detections for _MonoDet3D_ and _PointPillars_ are colored by their class color. The fusion results of the _InfraDet3D_ model are shown in red (fused detections), green (unmatched camera detections), and blue (unmatched LiDAR detections). detection. In total, our dataset contains \(4.8\)k RGB images and \(4.8\)k LiDAR point cloud frames with \(57.4\)k high-quality labeled 3D boxes, partitioned into ten object classes of traffic participants. We offered a comprehensive statistics of the labels including their occlusion levels, the number of points grouped by class category and distance, and an extensive analysis of the labeled tracks. In our evaluation experiments, we provided three baselines for the perception task of 3D object detection: A camera, a LiDAR and a multi-modal camera-LiDAR combination. With these experiments, we were able to show the potential of our dataset for your 3D perception tasks. For future work, we plan to create and publish more ground truth labels based on the presented camera images which can support more evaluation methods for our data fusion algorithm. Furthermore, the publication of further labeled sensor data with specific traffic scenarios, e.g. 
## Acknowledgment This research was supported by the Federal Ministry of Education and Research in Germany within the project _AUTOtech.agil_, Grant Number: 01IS22088U. We thank Venkatnarayanan Lakshminarasimhan and Leah Strand for the collective work on the A9 Intersection (A9-I) Dataset.
2305.11179
Spectral form factors of unconventional superconductors
We show that spectral form factors of unconventional gapped superconductors have singularities occurring periodically in time. These are the superconductors whose gap function vanishes somewhere in momentum space (Brillouin zone) but whose fermionic excitation spectrum is fully gapped. Many, although not all, of these superconductors are topologically nontrivial. In contrast, conventional fully gapped superconductors have featureless spectral form factors which are analytic in time. Some gapless superconductors may also have singularities in their spectral form factors, but they are not as ubiquitous and their appearance may depend on the details of the interactions among fermionic particles which form the superconductor and on the underlying lattice where the particles move. This work builds on the prior publication [1] where Loschmidt echo of topological superconductors, related but not identical to spectral form factors, was studied. It follows that spectral form factors could be used as a test of the structure of the superconducting gap functions.
Sankalp Gaur, Victor Gurarie
2023-05-16T02:19:07Z
http://arxiv.org/abs/2305.11179v1
# Spectral form factors of unconventional superconductors ###### Abstract We show that spectral form factors of unconventional gapped superconductors have singularities occurring periodically in time. These are the superconductors whose gap function vanishes somewhere in momentum space (Brillouin zone) but whose fermionic excitation spectrum is fully gapped. Many, although not all, of these superconductors are topologically nontrivial. In contrast, conventional fully gapped superconductors have featureless spectral form factors which are analytic in time. Some gapless superconductors may also have singularities in their spectral form factors, but they are not as ubiquitous and their appearance may depend on the details of the interactions among fermionic particles which form the superconductor and on the underlying lattice where the particles move. This work builds on the prior publication [1] where Loschmidt echo of topological superconductors, related but not identical to spectral form factors, was studied. It follows that spectral form factors could be used as a test of the structure of the superconducting gap functions. The spectral form factor is a way to characterize the energy spectrum of quantum systems. It is defined as a trace of the evolution operator \[{\cal Z}={\rm tr}\,e^{-i\hat{H}t}=\sum_{n}e^{-iE_{n}t}, \tag{1}\] and can be written as a sum over all the energy levels \(E_{n}\) of a quantum system. It is closely related to the thermal partition function of a quantum system, coinciding with its formal analytic continuation to complex values of temperature. The Fourier transform of the absolute value squared of the spectral form factor produces the correlations between the energy levels of the system \[\int dt\,e^{i\omega t}\left|{\cal Z}\right|^{2}=2\pi\sum_{nm}\delta\left(\omega-E_{n}+E_{m}\right). \tag{2}\] The behavior of spectral form factors in quantum many-body systems has attracted attention recently [2; 3]. Here we show that spectral form factors in unconventional gapped superconductors have singularities which occur periodically in time, while their conventional counterparts have featureless spectral form factors. More precisely, consider the gap function \(\Delta({\bf p})\), where \({\bf p}\) is the (quasi)momentum. In unconventional and especially in topological superconductors it cannot be nonzero everywhere. Rather, it vanishes at points, lines or surfaces in the momentum space or in the Brillouin zone. Consider the spectrum of Bogoliubov quasiparticles \(E({\bf p})\) for those values \({\bf p}\) where \(\Delta({\bf p})=0\). Suppose \(E_{-}\) is the minimum of \(E({\bf p})\) over all such \({\bf p}\). If \(\Delta({\bf p})\) vanishes at a single point, then there is nothing to minimize, and we instead take \(E_{-}\) to be equal to \(E({\bf p})\) calculated at this point. Then the singularities occur at times \(t_{n}=\pi(1+2n)/(2E_{-})\) where \(n\) is an arbitrary integer. We show that the nature of the singularities depends on the dimensionality of space and is given by \[\frac{\partial\ln{\cal Z}}{\partial t}\sim\ln|t-t_{n}| \tag{3}\] in two dimensional superconductors and \[\frac{\partial\ln{\cal Z}}{\partial t}\sim\sqrt{|t-t_{n}|} \tag{4}\] in three dimensional superconductors. For superconductors whose underlying fermionic particles move on a lattice, the singularities could also occur at \(t_{n}=\pi(1+2n)/(2E_{+})\), where \(E_{+}\) is the maximum of the excitation spectrum computed where \(\Delta({\bf p})=0\) (if \(E_{+}\) is different from \(E_{-}\)). The existence of these singularities is not as ubiquitous as that of the ones associated with \(E_{-}\).
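As a quick numerical illustration of the definition in Eq. (1), the snippet below evaluates \(\mathcal{Z}(t)\) for an arbitrary toy spectrum. The energy levels are made-up values, used only to show the definition in action; nothing here is specific to superconductors.

```python
import numpy as np

E = np.array([0.3, 1.1, 1.7, 2.4, 3.0])   # toy (made-up) energy levels E_n
t = np.linspace(0.0, 20.0, 2001)           # times at which to evaluate Z(t)

# Z(t) = sum_n exp(-i E_n t); |Z|^2 then encodes level correlations, Eq. (2).
Z = np.exp(-1j * np.outer(t, E)).sum(axis=1)
print(abs(Z[0]))   # at t = 0, Z equals the number of levels: 5.0
```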
Let us now demonstrate this by first studying the example of the two dimensional \(p_{x}+ip_{y}\) (often abbreviated as \(p+ip\)) superconductor. Specifically, we have in mind attractively interacting identical fermions with the Hamiltonian [4] \[H=\sum_{{\bf p}}\epsilon(p)\hat{a}_{\bf p}^{\dagger}\hat{a}_{\bf p}-\frac{\lambda}{V}\sum_{{\bf p},{\bf k},{\bf q}}{\bf k}\cdot{\bf p}\,\hat{a}_{\frac{\bf q}{2}+{\bf k}}^{\dagger}\hat{a}_{\frac{\bf q}{2}-{\bf k}}^{\dagger}\hat{a}_{\frac{\bf q}{2}-{\bf p}}\hat{a}_{\frac{\bf q}{2}+{\bf p}}. \tag{5}\] Here \[\epsilon(p)=\frac{p^{2}}{2m}-\mu \tag{6}\] is the kinetic energy of these interacting spinless fermions, \(\lambda\) is the interaction constant and \(V\) is the volume of the system. These fermions are known to form a \(p_{x}+ip_{y}\) paired fermionic superfluid, which for brevity we will refer to as a \(p\)-wave superconductor. It is a class D superconductor [5] which is topological if \(\mu>0\) and has a gap as long as \(\mu\neq 0\). The Bogoliubov-de-Gennes (BdG) Hamiltonian of this superconductor takes the following standard form \[\hat{H}=\sum_{{\bf p},\;p_{y}>0}\left(\hat{a}_{\bf p}^{\dagger}\;\;\hat{a}_{-\bf p}\right)\left(\begin{matrix}\epsilon(p)&\Delta({\bf p})\\ \bar{\Delta}({\bf p})&-\epsilon(p)\end{matrix}\right)\left(\begin{matrix}\hat{a}_{\bf p}\\ \hat{a}_{-\bf p}^{\dagger}\end{matrix}\right). \tag{7}\] Here \(\Delta({\bf p})=\left(p_{x}+ip_{y}\right)\Delta_{p}\) and \(\bar{\Delta}({\bf p})=\left(p_{x}-ip_{y}\right)\bar{\Delta}_{p}\) are the gap functions. \(\Delta_{p}\) and \(\bar{\Delta}_{p}\) are the magnitudes of the gap functions (the subscript \(p\) emphasizes that these are \(p\)-wave gap functions). To avoid double counting, the summation over \(\mathbf{p}\) is restricted to \(p_{y}>0\). Below all the sums over \(\mathbf{p}\) for \(p\)-wave superconductors will be restricted in this way. Let us use the BdG Hamiltonian to calculate the spectral form factor. To do that, we diagonalize the BdG Hamiltonian for each \(\mathbf{p}\). Its eigenvalues \(\omega_{\pm}(p)\) are \[\omega_{\pm}(p)=\pm E(p), \tag{8}\] where \[E(p)=\sqrt{\epsilon(p)^{2}+p^{2}\bar{\Delta}_{p}\Delta_{p}}. \tag{9}\] Therefore the trace of its evolution operator is \[\mathcal{Z}=\prod_{\mathbf{p}}S_{\mathbf{p}},\ S_{\mathbf{p}}=e^{-itE(p)}+e^{itE(p)}=2\cos\left(tE(p)\right). \tag{10}\]
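The dispersion (9) and the factors entering Eq. (10) are easy to evaluate numerically. The sketch below does so on a radial momentum grid; the numerical values of \(m\), \(\mu\), and the (complex) product \(\bar{\Delta}_{p}\Delta_{p}\) are illustrative assumptions, since in the full treatment the latter follows from the gap equation discussed next.

```python
import numpy as np

m, mu = 1.0, 0.5
DbarD = 0.2 + 0.1j   # assumed complex value of \bar{Delta}_p Delta_p

def E(p):
    eps = p**2 / (2 * m) - mu                # Eq. (6)
    return np.sqrt(eps**2 + p**2 * DbarD)    # Eq. (9), complex in general

p = np.linspace(1e-4, 3.0, 400)              # radial momentum grid
t = 2.0
S = 2 * np.cos(t * E(p))                     # factors S_p of Eq. (10)

# ln Z is a sum of ln S_p (restricted to p_y > 0 in the full 2D sum); here
# we just check how close any S_p comes to zero at this time.
print(np.min(np.abs(S)))
```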
Before proceeding to study \(\mathcal{Z}\), let us briefly discuss its analytic properties. Each factor \(S_{\mathbf{p}}\) is obviously an analytic function of time \(t\). However, if \(S_{\mathbf{p}}\) vanishes for some values of \(\mathbf{p}\) at some critical time \(t=t_{c}\), with all \(S_{\mathbf{p}}\) remaining nonzero as \(t\) deviates from \(t_{c}\), this could make \(\mathcal{Z}\) nonanalytic at \(t_{c}\) (we postpone the discussion of whether \(S_{\mathbf{p}}\) can indeed behave in this way until later). Indeed, suppose \(S_{\mathbf{p}}\) vanishes at \(t=t_{c}\) if \(\mathbf{p}=\mathbf{p}_{c}\). Quite generally we should expect that in the vicinity of \(\mathbf{p}=\mathbf{p}_{c}\) and \(t=t_{c}\), \(S_{\mathbf{p}}\) has the following expansion \[S_{\mathbf{p}}\approx C\left(t-t_{c}+\alpha\left|\mathbf{p}-\mathbf{p}_{c}\right|^{2}\right), \tag{11}\] where \(\alpha\) and \(C\) are some complex constants (we will see later that, even though it may not be obvious right now, the factors \(S_{\mathbf{p}}\) are generally complex valued). This immediately leads to \[\frac{\partial\ln\mathcal{Z}}{\partial t}=\sum_{\mathbf{p}}\frac{\partial\ln S_{\mathbf{p}}}{\partial t}\approx\sum_{\mathbf{p}}\frac{1}{t-t_{c}+\alpha\left|\mathbf{p}-\mathbf{p}_{c}\right|^{2}}. \tag{12}\] On the right-hand side we substituted the approximate expression for \(S_{\mathbf{p}}\), valid for \(\mathbf{p}\) in the vicinity of \(\mathbf{p}_{c}\) and small \(t-t_{c}\). The sum above is obviously a singular function of time at \(t=t_{c}\), with the details of the singularity dependent on the dimensionality of space and on whether \(\mathbf{p}_{c}\) is zero or nonzero. This makes \(\ln\mathcal{Z}\), as well as \(\mathcal{Z}\) itself, a nonanalytic function of time at \(t=t_{c}\) (note an obvious similarity between the thermal free energy and \(\ln\mathcal{Z}\) introduced above). Let us now go back to Eq. (10). For a Hamiltonian (7) with given \(\Delta_{p}\), \(\bar{\Delta}_{p}\), and \(\epsilon(p)\), Eq. (10) gives the answer for its spectral form factor. However, in a superconductor, \(\Delta_{p}\) and \(\bar{\Delta}_{p}\) are not fixed beforehand but must be determined self-consistently, by matching the Hamiltonian (5) with the BdG Hamiltonian (7). To understand how to do it, let us recall that to calculate the thermal partition function \(\mathrm{tr}\ \exp(-\hat{H}/(k_{B}T))\), we must determine \(\Delta_{p}\) and \(\bar{\Delta}_{p}\) by solving the gap equation. In a \(p\)-wave superconductor, it takes the form \[\frac{1}{V}\sum_{\mathbf{p}}\frac{p^{2}\ \mathrm{tanh}\left[\frac{E(p)}{2k_{B}T}\right]}{E(p)}=\frac{1}{\lambda}, \tag{13}\] where \(T\) is the temperature and \(k_{B}\) is the Boltzmann constant. This equation is solved for the product \(\bar{\Delta}_{p}\Delta_{p}\) which enters \(E(p)\). The solution to this equation can be used, for example, to calculate the thermal partition function of the superconductor. In order to adapt this to calculating the spectral form factor, we replace \(1/(k_{B}T)\to it\), with the result \[\frac{i}{V}\sum_{\mathbf{p}}\frac{p^{2}\ \mathrm{tan}\left[\frac{tE(p)}{2}\right]}{E(p)}=\frac{1}{\lambda}. \tag{14}\] This should be understood as an equation to determine \(\bar{\Delta}_{p}\Delta_{p}\), which should then be substituted into Eq. (10). In principle, there could be many solutions of the equation (14). To find the one we should use, we first find the solution of equation (13) at temperatures \(T\) where the solution \(\bar{\Delta}_{p}\Delta_{p}\) is nonzero, and then analytically continue it to imaginary values of \(T\). Now observe that this equation predicts that \(\bar{\Delta}_{p}\Delta_{p}\) must not be real. Indeed, if it were real, the left-hand side of this equation would necessarily be imaginary, while the right-hand side is real. A similar situation occurs in the evaluation of the Loschmidt echo, where one also finds [1] that \(\bar{\Delta}\) and \(\Delta\) are not complex conjugates of each other. With \(\bar{\Delta}_{p}\Delta_{p}\) being complex, \(E(p)\) is also generally complex. As a result, the factors \(\cos\left(tE(p)\right)\) generally do not vanish at any \(t\). The exception to that is \(p=0\), where \(E(0)=\left|\epsilon(0)\right|=\left|\mu\right|\) is independent of \(\bar{\Delta}_{p}\Delta_{p}\).
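A hedged numerical sketch of the analytically continued gap equation (14) is shown below, with the sum replaced by a radial integral and the equation solved for the complex product \(x=\bar{\Delta}_{p}\Delta_{p}\). The grid, the UV cutoff, and the coupling \(\lambda\) are illustrative assumptions, and overall measure constants are absorbed into \(\lambda\); convergence of the root finder depends on the initial guess, and in the actual calculation the physical branch is selected by continuation from real temperature, as described above.

```python
import numpy as np
from scipy.optimize import fsolve

m, mu, t, lam = 1.0, 0.5, 1.3, 0.4
p = np.linspace(1e-3, 3.0, 2000)       # radial grid with an assumed UV cutoff
dp = p[1] - p[0]

def residual(xy):
    x = xy[0] + 1j * xy[1]             # complex \bar{Delta}_p Delta_p
    E = np.sqrt((p**2 / (2 * m) - mu) ** 2 + p**2 * x)
    # Discretised version of Eq. (14); 2D measure constants folded into lam.
    lhs = 1j * np.sum(p * p**2 * np.tan(t * E / 2) / E) * dp / (4 * np.pi)
    r = lhs - 1.0 / lam
    return [r.real, r.imag]

sol = fsolve(residual, x0=[0.1, 0.1])
print("DbarD =", sol[0] + 1j * sol[1], " residual:", residual(sol))
```

Note that the root is sought in the complex plane (two real unknowns), consistent with the observation above that \(\bar{\Delta}_{p}\Delta_{p}\) cannot be real.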
Quite remarkably, this takes us to the previously discussed scenario given by Eq. (11) with \(\mathbf{p}_{c}=0\). Specifically, for \(t\) close to any of the values \(t_{n}\) given by \[t_{n}=\frac{\pi}{2\left|\mu\right|}\left(1+2n\right), \tag{15}\] with an arbitrary integer \(n\), we can write \[S_{\mathbf{p}}=2\cos(tE(p))\approx C\left(t-t_{n}+\alpha p^{2}\right). \tag{16}\] Here \[C=2(-1)^{n+1}\left|\mu\right|, \tag{17}\] and \[\alpha=\pi\left(\frac{1}{2}+n\right)\frac{\bar{\Delta}_{p}\Delta_{p}m-\mu}{2\mu^{2}|\mu|m}. \tag{18}\] Importantly, \(\alpha\) is complex due to \(\bar{\Delta}_{p}\Delta_{p}\) being complex. Note that this matches the conjectured form (11). Working in the large \(V\) limit and replacing summation over \({\bf p}\) with integration, we find \[\frac{1}{V}\frac{\partial\ln{\cal Z}}{\partial t}=\frac{1}{4\pi}\int\frac{p\,dp}{t-t_{n}+\alpha p^{2}}. \tag{19}\] The integral above is taken over \(p\) varying from \(0\) to infinity, although we must remember that the integrand written above is only the approximation valid at small \(p\). In particular, that means the integral can be cut off at some momentum scale, avoiding any divergences at large \(p\). It is then straightforward to see that the leading singularity is \[\frac{1}{V}\frac{\partial\ln{\cal Z}}{\partial t}\approx-\frac{1}{8\pi\alpha}\ln|t-t_{n}|\,. \tag{20}\] The expression here is approximate, valid when \(t\) is in the vicinity of \(t_{n}\). Therefore we arrive at the conclusion advertised earlier. The spectral form factor of the 2D \(p\)-wave chiral superconductor has periodic logarithmic singularities which occur at the times \(t_{n}\) defined above in Eq. (15). It is important for this argument that \(\alpha\) is complex rather than real, which in turn is related to \(\bar{\Delta}_{p}\Delta_{p}\) being complex. Let us contrast this behavior with that of a conventional \(s\)-wave superconductor. Its Bogoliubov-de-Gennes Hamiltonian takes the form \[\hat{H}=\sum_{\bf p}\left(\hat{a}^{\dagger}_{{\bf p}\uparrow}\ \ \hat{a}_{-{\bf p}\downarrow}\right)\begin{pmatrix}\epsilon(p)&\Delta_{s}\\ \bar{\Delta}_{s}&-\epsilon(p)\end{pmatrix}\begin{pmatrix}\hat{a}_{{\bf p}\uparrow}\\ \hat{a}^{\dagger}_{-{\bf p}\downarrow}\end{pmatrix}. \tag{21}\] Here \(\Delta_{s}\), \(\bar{\Delta}_{s}\) are momentum-independent \(s\)-wave gap functions. The spectral form factor takes the same form (10) but with the spectrum \[E_{s}(p)=\sqrt{\epsilon(p)^{2}+\bar{\Delta}_{s}\Delta_{s}}. \tag{22}\] Here \(\bar{\Delta}_{s}\Delta_{s}\) is controlled by a gap equation almost identical to the one for the \(p\)-wave superconductor, given by \[\frac{i}{2V}\sum_{\bf p}\frac{\tan\left[\frac{tE_{s}(p)}{2}\right]}{E_{s}(p)}=\frac{1}{\lambda}. \tag{23}\] The main point is that, just as in the case of Eq. (14), the solution of this equation necessarily corresponds to \(\bar{\Delta}_{s}\Delta_{s}\) being complex. As a result, \(E_{s}(p)\) is complex. Unlike in the case of the \(p\)-wave superconductor, \(E_{s}(p)\) is complex for all \(p\) without exceptions. As a result, none of the factors \(S_{\bf p}\) defined in Eq. (10) vanish at any time \(t\), and the spectral form factor \({\cal Z}\) is analytic at all times. We see that the key distinction between the \(s\)-wave and 2D \(p\)-wave superconductors is the presence, in the case of the latter, of a point \({\bf p}=0\) where the gap function \(\Delta({\bf p})=(p_{x}+ip_{y})\Delta_{p}\) vanishes.
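Returning to the 2D \(p\)-wave result, the logarithmic singularity of Eqs. (19)-(20) can be checked directly by evaluating the integral near \(t_{n}\). In the sketch below, the value of \(\alpha\) and the UV cutoff are illustrative assumptions; the point is only that the numerically evaluated integral tracks \(-\ln|t-t_{n}|/(8\pi\alpha)\) as \(t\to t_{n}\).

```python
import numpy as np

alpha = 0.3 + 0.2j                      # complex, as required by Eq. (18)
p = np.linspace(1e-5, 2.0, 200000)      # cutoff at p = 2 (arbitrary)
dp = p[1] - p[0]

def dlnZ_dt(dt):                        # dt = t - t_n, Eq. (19)
    return np.sum(p / (dt + alpha * p**2)) * dp / (4 * np.pi)

for dt in [1e-2, 1e-3, 1e-4]:
    # The two columns differ only by a dt-independent constant as dt -> 0.
    print(dt, dlnZ_dt(dt), -np.log(abs(dt)) / (8 * np.pi * alpha))
```

Because \(\alpha\) has a nonzero imaginary part, the denominator never vanishes on the real-\(p\) axis, so the integral itself is finite; only its \(dt\)-dependence is singular.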
Furthermore, despite having to analytically continue the solution of the gap equation (13) to imaginary temperature \(1/T\to it\), we expect that the analytically continued gap function must also vanish as \({\bf p}\to 0\). Indeed, from the structure of the Hamiltonian (5) the \(p\)-wave gap function must satisfy \[\Delta({\bf p})=-\Delta(-{\bf p}). \tag{24}\] This enforces that the gap function must always vanish at \({\bf p}=0\), even if it is a solution of the analytically continued gap equation (14). More generally, the key necessary condition for a nonanalytic spectral form factor is a gap function \(\Delta({\bf p})\) which vanishes at certain values of \({\bf p}\), not only at finite temperature, but also when analytically continued to imaginary values of temperature. A good second example of a \(p\)-wave superconductor is the class DIII 3D topological superconductor [5] (the helium-3 B phase) with the Bogoliubov-de-Gennes Hamiltonian \[\hat{H}=\sum_{\bf p}\left(\hat{a}^{\dagger}_{\bf p}\ \ \hat{a}_{-{\bf p}}\right)\begin{pmatrix}\epsilon(p)&ip_{\mu}\sigma^{y}\sigma^{\mu}\Delta_{p}\\ -ip_{\mu}\sigma^{\mu}\sigma^{y}\bar{\Delta}_{p}&-\epsilon(p)\end{pmatrix}\begin{pmatrix}\hat{a}_{\bf p}\\ \hat{a}^{\dagger}_{-{\bf p}}\end{pmatrix}, \tag{25}\] where \(\sigma^{y}\) and \(\sigma^{\mu}\) are Pauli matrices acting on the spin indices of the operators \(\hat{a}_{\bf p}\) and \(\hat{a}^{\dagger}_{\bf p}\). Its spectrum is also given by Eq. (9), but with \({\bf p}\) being a 3D vector. By analogy with the previous analysis leading to Eq. (19), we immediately find \[\frac{1}{V}\frac{\partial\ln{\cal Z}}{\partial t}=\frac{1}{2\pi^{2}}\int\frac{p^{2}\,dp}{t-t_{n}+\alpha p^{2}}\sim\sqrt{|t-t_{n}|}. \tag{26}\] On the other hand, let us examine the 2D spin-singlet chiral \(d\)-wave superconductor, which belongs to the symmetry class C. The corresponding Bogoliubov-de-Gennes Hamiltonian is \[\hat{H}=\sum_{\bf p}\left(\hat{a}^{\dagger}_{{\bf p}\uparrow}\ \ \hat{a}_{-{\bf p}\downarrow}\right)\begin{pmatrix}\epsilon(p)&\Delta({\bf p})\\ \bar{\Delta}({\bf p})&-\epsilon(p)\end{pmatrix}\begin{pmatrix}\hat{a}_{{\bf p}\uparrow}\\ \hat{a}^{\dagger}_{-{\bf p}\downarrow}\end{pmatrix}, \tag{27}\] with \(\Delta({\bf p})=(p_{x}+ip_{y})^{2}\Delta_{d}\), \(\bar{\Delta}({\bf p})=(p_{x}-ip_{y})^{2}\bar{\Delta}_{d}\). What sets this example apart from the others is that while the gap function vanishes at \({\bf p}=0\), it is not automatically obvious that the gap function analytically continued to imaginary temperature would still vanish in this limit. To elucidate this further, we suppose that the gap function consists of both \(d\)-wave and \(s\)-wave pieces, \(\Delta({\bf p})=\Delta_{s}+(p_{x}+ip_{y})^{2}\Delta_{d}\), \(\bar{\Delta}({\bf p})=\bar{\Delta}_{s}+(p_{x}-ip_{y})^{2}\bar{\Delta}_{d}\). With rotationally invariant interactions, the gap equation should decouple into two separate equations for \(\Delta_{s}\), \(\bar{\Delta}_{s}\) and for \(\Delta_{d}\), \(\bar{\Delta}_{d}\). If \(\Delta_{s}\) is zero at every temperature \(T\), its analytic continuation to imaginary values of \(T\) should also be zero. At the same time, just as earlier, \(\bar{\Delta}_{d}\Delta_{d}\) becomes a complex number, with the spectrum given by \(E(p)=\sqrt{\epsilon^{2}(p)+p^{4}\bar{\Delta}_{d}\Delta_{d}}/2\).
This leads to the following singularity in the spectral form factor (below \(\beta\) is real, while \(\alpha\) is complex): \[\frac{1}{V}\frac{\partial\ln\mathcal{Z}}{\partial t}=\frac{1}{2\pi}\int\frac{p\,dp}{t-t_{n}+\beta p^{2}+\alpha p^{4}}\sim\ln\left|t-t_{n}\right|. \tag{28}\] However, if \(\Delta_{s}\) is nonzero in some range of temperature, then it may still be nonzero after the analytic continuation \(1/(k_{B}T)\to it\). Then the superconductor will have a non-singular spectral form factor. To decide whether a particular superconductor of this form will have singularities in its structure factor, we need to examine the original Hamiltonian of the interacting fermions which led to this superconductor and see if any \(s\)-wave pairing is possible in addition to the \(d\)-wave pairing. Therefore, the singularities in this case are not as ubiquitous as in the \(p\)-wave case. All superconductors that we looked at so far were gapped to fermionic excitations. Let us now look at a gapless superconductor. As an example, consider a 3D \(p\)-wave spin-triplet superconductor which has a Bogoliubov-de-Gennes Hamiltonian of the form (7) with a gap function which behaves as \[\Delta(\mathbf{p})=\left(p_{x}+ip_{y}\right)\Delta_{p}. \tag{29}\] This gap function vanishes if \(p_{x}=p_{y}=0\), for all \(p_{z}\). Furthermore, given \(\epsilon(p)=p^{2}/(2m)-\mu\) with \(\mu>0\), the excitation spectrum \[E(\mathbf{p})=\sqrt{\left(\frac{p^{2}}{2m}-\mu\right)^{2}+\left(p_{x}^{2}+p_{y}^{2}\right)\bar{\Delta}_{p}\Delta_{p}} \tag{30}\] vanishes at \(p_{x}=p_{y}=0\), \(p_{z}=\sqrt{2m\mu}\). Suppose that, just as in the previous examples, once the temperature is made imaginary, \(\bar{\Delta}_{p}\Delta_{p}\) becomes complex, but otherwise no other terms appear in the gap function. However, unlike in the previous examples of gapful superconductors, setting \(p_{x}=p_{y}=0\), we find that \(E(p_{z})\) now ranges from zero to infinity. As a result, the spectral form factor \(\mathcal{Z}(t)\) is now an analytic function of time \(t\). Now it is further possible to imagine that the fermions which formed this superconductor move on a lattice, as opposed to in continuous space. If so, then \(E(p_{z})\), at \(p_{x}=p_{y}=0\), now has a maximum somewhere as \(p_{z}\) is varied. Denoting this maximum \(E_{+}\), it is straightforward to see that this would lead to singularities in the spectral form factor occurring at times \(t_{n}=\pi(2n+1)/(2E_{+})\). These arguments show that singularities are possible even in gapless superconductors, but they are not as ubiquitous and their existence requires some assumptions. Note however that if \(\mu<0\), then the resulting superconductor is gapful, although not topological. It will still have singularities controlled by \(E_{-}=\left|\mu\right|\).
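The continuum-versus-lattice distinction above can be made concrete by scanning the quasiparticle energy along the line \(p_{x}=p_{y}=0\) where the gap vanishes. In the continuum \(E\) is unbounded, so no singular times arise; on a lattice \(E\) is bounded and its maximum \(E_{+}\) fixes the singular times. The tight-binding dispersion below is an illustrative assumption, not a specific lattice model from the text.

```python
import numpy as np

def E_continuum(pz, m=1.0, mu=0.5):
    return np.abs(pz**2 / (2 * m) - mu)     # Eq. (30) at p_x = p_y = 0

def E_lattice(pz, thop=1.0, mu=0.5):
    # Hypothetical tight-binding epsilon(p_z), assumed for illustration only.
    return np.abs(-2.0 * thop * np.cos(pz) - mu)

pz = np.linspace(-np.pi, np.pi, 4001)
print("continuum minimum:", E_continuum(np.linspace(0, 6, 4001)).min(),
      "(spectrum unbounded above)")
E_plus = E_lattice(pz).max()                # lattice spectrum is bounded
t_n = np.pi * (2 * np.arange(3) + 1) / (2 * E_plus)
print("lattice E_+ =", E_plus, "-> first singular times t_n =", t_n)
```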
Coming back to the gapful (topological) superconductors, we can rely on the classification of topological superconductors [5] to see that there are five distinct classes of topological superconductors of interest: three in two dimensions and two more in three dimensions. We can summarize the behavior of their spectral form factors in the following table. \begin{tabular}{|l|l|l|} \hline Class & Gap function & Spectral Form Factor \\ \hline D, \(2D\) & \(\left(p_{x}+ip_{y}\right)\Delta_{p}\) & \(\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\ln\left|t-t_{n}\right|\) \\ \hline C, \(2D\) & \(\left(p_{x}+ip_{y}\right)^{2}\Delta_{d}\) & \(\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\ln\left|t-t_{n}\right|\) \\ \hline DIII, \(2D\) & \(\left(\sigma^{z}p_{x}+ip_{y}\right)\Delta_{p}\) & \(\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\ln\left|t-t_{n}\right|\) \\ \hline DIII, \(3D\) & \(ip_{\mu}\sigma^{y}\sigma^{\mu}\Delta_{p}\) & \(\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\sqrt{\left|t-t_{n}\right|}\) \\ \hline CI, \(3D\) & vanishes on surfaces & \(\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\sqrt{\left|t-t_{n}\right|}\) \\ \hline \end{tabular} The first three entries in the table were already worked out above. In particular, class D and class DIII superconductors are \(p\)-wave, and the singularities in their spectral form factors are ubiquitous. The class C superconductor may have singularities in its spectral form factor if its gap equation excludes the possibility of an additional \(s\)-wave gap function. The last entry refers to the class CI topological spin-singlet superconductor in three dimensions [6]. It is in the same class as the conventional \(s\)-wave spin-singlet superconductor and therefore will have singularities in the spectral form factor only if its gap equation excludes the possibility of an additional \(s\)-wave gap function. If this is excluded, then working out its singularities relies on the understanding that its gap function vanishes on 2D surfaces in its 3D Brillouin zone. Starting from the point on the surface where \(E(p)\) has its minimum, and following the arguments given here, it is easy to see that \[\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\int\frac{d^{2}q_{1}dq_{2}}{t-t_{n}-\alpha q_{1}^{2}-\beta q_{2}^{2}}. \tag{31}\] Here \(q_{1}\) parametrizes the surface and \(q_{2}\) is the direction perpendicular to the surface; \(\alpha\) is real while \(\beta\) is complex. By analogy with (26) we find \[\frac{\partial\ln\mathcal{Z}}{\partial t}\sim\sqrt{\left|t-t_{n}\right|}, \tag{32}\] just as stated in the table above. Therefore, we see that the type of singularity in the spectral form factor of a topological superconductor depends only on the dimensionality of space. Finally, we would like to remark that spectral form factors are nowadays accessible to experimental measurement using the techniques of atomic physics. For example, if the superconductor is realized by means of cold ions [7], its spectral form factor could in principle be measured by directly evolving a random initial product state up to some time, projecting it back onto the initial state, and averaging over the random initial state. Therefore, spectral form factors can be used as a probe of the structure of the superconducting order parameter. Acknowledgement: We would like to thank E. Yuzbashyan for inspiring discussions. VG was supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440).
2306.10737
Stellar spots cause measurable variations in atmospheric metallicity
To accurately measure a star's atmospheric parameters and chemical abundances, it is crucial to have high-quality spectra. Analysing the detailed chemical abundances of groups of stars can help us better understand nucleosynthesis, galactic chemical enrichment, and stellar evolution. In this study, we explored whether stellar spots can affect a star's inferred metallicity and, if so, where the impact is the strongest. To investigate this, we created synthetic infrared spectra that included stellar spots for a sample of main-sequence stars younger than the sun. We then applied two models to the data: one that accounted for spots and one that did not. From this, we can determine the bias introduced when fitting spotted spectra with a non-spotted model and how this bias varies with different parameters. Our findings revealed that fitting spotted spectra with a non-spotted model can introduce a scatter of up to 0.05 dex in the inferred metallicity, especially for stars with high levels of spot coverage. This bias is similar in magnitude to other relevant effects, such as atomic diffusion, radiative levitation, or non-local thermodynamic equilibrium. We also found that the effect is most pronounced in young stars and decreases with age. These results suggest that stellar spots can introduce a systematic uncertainty in metallicity that is not currently accounted for in spectroscopic analysis. This could potentially limit scientific inferences for population-level studies of young stars and differential abundance analyses.
Tanner A. Wilson, Andrew R. Casey
2023-06-19T07:15:01Z
http://arxiv.org/abs/2306.10737v1
# Stellar spots cause measurable variations in atmospheric metallicity ###### Abstract To accurately measure a star's atmospheric parameters and chemical abundances, it is crucial to have high-quality spectra. Analysing the detailed chemical abundances of groups of stars can help us better understand nucleosynthesis, galactic chemical enrichment, and stellar evolution. In this study, we explored whether stellar spots can affect a star's inferred metallicity and, if so, where the impact is the strongest. To investigate this, we created synthetic infrared spectra that included stellar spots for a sample of main-sequence stars younger than the sun. We then applied two models to the data: one that accounted for spots and one that did not. From this, we can determine the bias introduced when fitting spotted spectra with a non-spotted model and how this bias varies with different parameters. Our findings revealed that fitting spotted spectra with a non-spotted model can introduce a scatter of up to 0.05 dex in the inferred metallicity, especially for stars with high levels of spot coverage. This bias is similar in magnitude to other relevant effects, such as atomic diffusion, radiative levitation, or non-local thermodynamic equilibrium. We also found that the effect is most pronounced in young stars and decreases with age. These results suggest that stellar spots can introduce a systematic uncertainty in metallicity that is not currently accounted for in spectroscopic analysis. This could potentially limit scientific inferences for population-level studies of young stars and differential abundance analyses. keywords: stars: abundances - stars:starspots - stars: rotation ## 1 Introduction It is widely assumed that the elemental abundances in a star's atmosphere accurately reflect the abundances of the material from which the star formed (Gibson et al., 2003; Pagel, 2009; Salaris and Cassisi, 2017). This assumption is critical for chemical tagging (Anders et al., 2016; Randich et al., 2022), understanding galactic formation (Gibson et al., 2003), and the synthesis of elements across cosmic time (McWilliam and Rauch, 2004; Johnson et al., 2020). Precise measurements of elemental abundances are essential in many areas of astrophysics. For example, chemical tagging allows us to track the history of the galaxy, which would be impossible with biased measures of abundance. Differential abundance techniques (Onehag et al., 2011; Melendez et al., 2014; Reggiani et al., 2016; Maia et al., 2019; Liu et al., 2020; Nissen et al., 2020; Spina et al., 2021) - employed for solar twins and planet-hosting stars - claim very precise abundance measurements, which are essential for probing planet formation (Tayar et al., 2022). Similarly, when determining cluster ages (Bensby et al., 2004; Pont and Eyer, 2004), the turn-off age of a star is particularly useful for this purpose because a small change in colour/magnitude, which depends on metallicity, indicates a relatively large change in age compared to the main-sequence. Recognising that surface abundances may change over a star's evolution is important. The surface abundances can change due to numerous processes. Atomic diffusion and radiative levitation introduce surface abundance variations on the scale of 0.05 dex, with a magnitude and bias that depends on the element and the stellar effective temperature (Onehag et al., 2014). Enhanced mixing can also cycle material to the surface. 
Nuclear reactions, such as lithium depletion (Pinsonneault et al., 2002) or CNO cycling (Crowther, 2007), enhance and deplete specific surface abundances and isotopic ratios. Accretion can enhance surface metallicity and vary particular elemental abundances for a short time, depending on the companion type (Pasquini et al., 2007; Maldonado et al., 2019; Laughlin and Adams, 1997). Mass loss can strip away H-rich surface regions in Wolf-Rayet stars (Crowther, 2007), increasing the observed stellar metallicity, or carry away surface metals, which has a small to negligible effect on surface metal abundances on the main sequence. These effects are usually ignored when estimating a star's stellar parameters and chemical abundances. Most spectroscopic analyses adopt some simplifying assumptions to make the computation time tractable. For example, we usually assume the stellar photosphere can be represented in one dimension (1D) and that baryonic matter can be described by thermal distributions in small regions (local thermal equilibrium; LTE). These assumptions can particularly influence the measured stellar parameters (e.g., Blanco-Cuaresma, 2019). Both can lead to an overestimate of the temperature gradient in the atmosphere and an underestimate of the density, which can result in an overestimate of the abundance. We also typically ignore magnetic activity, but recently Spina et al. (2020) showed it has a measurable impact on the chemical abundances of young, fast-rotating stars. While these assumptions simplify inference, it is important to consider their effects when reaching conclusions. Stars have spots, which are important indicators of the rotational rate of stars, especially along the main sequence (McQuillan et al., 2014; Santos et al., 2021). The properties of stellar spots and their effect on the observed properties of a star vary with age, rotation rate, mass, and metallicity (Mathur et al., 2014; Karoff et al., 2018; Nichols-Fleming & Blackman, 2020). For example, as the rotation rates of stars decrease with age, the average magnetic activity likewise decreases. This results in smaller, short-lived spots that cover only a small fraction of the stellar surface (Cao & Pinsonneault, 2022). Spot properties can be characterised by their coverage across the stellar surface, their temperature difference relative to the surrounding photosphere, and their occurrence pattern. Cao & Pinsonneault (2022) recently quantified the spot parameters of stars in the Pleiades and M67. They found that young or fast-rotating stars tend to be more magnetically active and have a greater spot coverage than their older, slower counterparts. The spotted areas of the star can be thousands of degrees cooler than the surrounding areas; solar spots, for example, can be 500-2000 K cooler than the surrounding photosphere (Berdyugina, 2005; Herbst et al., 2020). A spotted star's spectrum is more complex than that of a non-spotted counterpart (Morris et al., 2019). Accurate inference of stellar parameters requires a model that reflects the stellar spectra well. In this work, we quantify the effect of fitting spotted spectra with non-spotted models and identify the parts of the main sequence where the effect is most prevalent. In Section 2, we outline the generative model for stellar spectra with spots and describe our choices of stellar parameters before outlining the fitting procedure used. In Section 3, we present the difference in the recovered stellar parameters with the spotted and non-spotted models.
We discuss the parts of the main sequence where the effect is most prevalent. Finally, in Section 4, we place those results in the context of other significant effects on measured stellar metallicity and provide recommendations for high-precision spectroscopic investigations in specific regions of stellar evolution. ## 2 Method ### Stellar parameters for a population of fake stars We prepare a sample of stellar spectra that spans the main sequence to estimate the impact that stellar spots can have on the accuracy of inferred stellar parameters. This sample is intended to be indicative of a possible population of main-sequence stars but not intended to represent which stars would, or would not, have spots. We generate 1500 spectra of main-sequence and early post-main-sequence stars with various values of mass, age, metallicity, \(v\sin i\), \(f_{\rm spot}\), and \(x_{\rm spot}\) across the HR diagram. We drew masses from a Salpeter initial mass function (Salpeter, 1955) between 0.5 and 1.5 \(M_{\odot}\) with \(\alpha=2.35\). This mass range extends up to stars with a radiative surface and a convective core, reaching beyond the Kraft break (Kraft, 1967). Metallicity is drawn from a distribution chosen to approximately reflect what is observed in the Milky Way. Specifically, we defined a variable \(\phi\) to be drawn from a Beta distribution \[\phi\sim\mathcal{B}\left(\alpha=10,\beta=2\right) \tag{1}\] and applied a transform from \(\phi\) to [Fe/H] by requiring the metallicities be bounded between \(\mathrm{[Fe/H]_{min}}=-2\) and \(\mathrm{[Fe/H]_{max}}=+0.5\). We also required that the mode of \(\phi\), defined as \(\frac{\alpha-1}{\alpha+\beta-2}\) for a Beta distribution, occurs at Solar metallicity. This leads to the transform: \[\mathrm{[Fe/H]}=\left(\ \mathrm{[Fe/H]_{max}}-\mathrm{[Fe/H]_{min}}\right)\left(\phi-\frac{\alpha-1}{\alpha+\beta-2}\right)\quad. \tag{2}\] The stars we generate mock data for in this work span from the zero-age main sequence (ZAMS) to low-luminosity subgiants. We draw equivalent evolutionary phase (EEP) values from a uniform distribution \(\mathrm{EEP}\sim\mathcal{U}(200,450)\), where \(\mathcal{U}\left(x,y\right)\) denotes a uniform prior between \(x\) and \(y\). The bounds of this range (200 and 450) represent the ZAMS and the low-luminosity subgiant phase, respectively. Using the EEP, mass and metallicity, we interpolate a position along the MIST stellar isochrones (Morton, 2015) to calculate the expected \(T_{\rm eff}\) and \(\log g\) for each random star. We also obtain the star's age (post-ZAMS), which we can use in conjunction with the other stellar parameters to determine rotational properties (see below). We have limited the age of the stars we consider in this work to the age of the Sun. This is the range over which rotational rates and convective turnover timescales are available from the sources we draw on in this work. This limits the post-MS stars we consider to more massive stars. We briefly discuss biases this may introduce in Section 4. The surface rotation period is interpolated from stellar cluster-tuned rotational isochrones given the stellar age and mass (Table A1 in Spada et al. (2016)). Rotational broadening \(v\sin i\) can then be calculated by combining the rotational period, the radius from the interpolated isochrone model, and an inclination angle. We have drawn inclination from a uniform distribution in \(\cos i\sim\mathcal{U}(0,1)\).
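A minimal sketch of these parameter draws is shown below: Salpeter masses via inverse-CDF sampling, the Beta-distribution metallicity transform of Eqs. (1)-(2), uniform EEP values, and isotropic inclinations. The random seed and sample statistics printed at the end are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1500

# Salpeter IMF, dN/dM ~ M^(-2.35), truncated to 0.5-1.5 Msun (inverse CDF).
a, lo, hi = 2.35, 0.5, 1.5
u = rng.uniform(size=n)
mass = (lo**(1 - a) + u * (hi**(1 - a) - lo**(1 - a))) ** (1 / (1 - a))

# Metallicity: phi ~ Beta(10, 2), shifted so the mode maps to [Fe/H] = 0.
alpha, beta = 10.0, 2.0
phi = rng.beta(alpha, beta, size=n)
feh_min, feh_max = -2.0, 0.5
mode = (alpha - 1) / (alpha + beta - 2)
feh = (feh_max - feh_min) * (phi - mode)          # Eq. (2)

eep = rng.uniform(200, 450, size=n)               # ZAMS to low-lum. subgiants
cos_i = rng.uniform(0, 1, size=n)                 # isotropic inclinations
print(mass.mean(), feh.mean(), eep.mean())
```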
\(f_{\rm spot}\) is related to the Rossby number, \(R_{o}\), which is defined as the ratio of the surface rotational period to the convective turnover timescale (\(\tau_{\rm conv}\)). \(\tau_{\rm conv}\) is interpolated from Table 1 in Landin et al. (2010) given the stellar age and mass. Combining this value with the rotational period, we obtain \(R_{o}\). \(f_{\rm spot}\) is then calculated from the relationship between \(f_{\rm spot}\) and \(R_{o}\) identified in Cao & Pinsonneault (2022) (Eq. 5): \[f_{\rm spot}=\left\{\begin{array}{ll}0.278,&\log R_{o}\leq-0.729\\ 0.0623~{}R_{o}^{-0.881},&\log R_{o}>-0.729\\ \end{array}\right\}. \tag{3}\] There is some scatter in \(f_{\rm spot}\) which is not accounted for by this relation (see the left panel of Figure 7 in Cao & Pinsonneault (2022)). For this reason, we add random noise to our calculated \(f_{\rm spot}\), drawn from a normal distribution with a standard deviation of 0.1. We assume \(x_{\rm spot}\) is drawn from a uniform distribution \(x_{\rm spot}\sim\mathcal{U}(0.8,1.0)\). This represents the limits set when fitting \(x_{\rm spot}\) in Cao & Pinsonneault (2022), motivated by temperature bounds which they discuss in more detail in their Section 2.2. \(x_{\rm spot}\) does not appear to have a clear relationship with other stellar parameters, but it - and \(f_{\rm spot}\) - may vary on multiple periodic timescales as they do for the Sun. The stochastic nature of stellar observations - and the admittedly simple nature of the model - means that \(f_{\rm spot}\) and \(x_{\rm spot}\) are random draws from the possible stellar spot parameters. We will eventually find that \(x_{\rm spot}\) has little effect on the bias introduced by fitting spotted spectra with a non-spotted model, so we move forward with the knowledge that we have good coverage when modelling over the range of possible parameters. ### Spotted spectrum generative model We build upon the work of Cao & Pinsonneault (2022), where a forward model is developed to model the effect of starspots and to estimate the fractional spot coverage of stars in the Pleiades and M67. Their model assumes that the spectrum of a spotted star can be broken into spotted and non-spotted components. These two components have the same \(\log g\), [Fe/H], microturbulent velocity, and surface rotational velocity (\(v\sin i\)), but they differ in temperature. The spot and ambient temperatures (\(T_{\rm spot}\) and \(T_{\rm amb}\)) are related by \(T_{\rm spot}=x_{\rm spot}T_{\rm amb}\), and are coupled to the effective temperature of the star following the approach of Somers & Pinsonneault (2015): \[T_{\rm eff}=T_{\rm amb}(1-f_{\rm spot}+f_{\rm spot}x_{\rm spot}^{4})^{1\over 4}, \tag{4}\] where \(f_{\rm spot}\) is the fractional surface spot coverage. From these relations, the set \(\{T_{\rm eff},\,x_{\rm spot},f_{\rm spot}\}\) defines a pair of ambient and spot temperatures that preserve the stellar luminosity. We calculated a grid of synthetic spectra, which we interpolate between to generate the predicted spectra for a spotted or non-spotted model. The list of atomic and molecular transitions is from Shetrone et al. (2015) and Smith et al. (2021).
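The two relations above translate directly into code. The sketch below implements the piecewise Eq. (3) with the Gaussian scatter described in the text, and inverts the flux-preserving Eq. (4) to obtain the ambient and spot temperatures; the example input values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_spot_from_rossby(Ro, rng):
    # Piecewise relation of Eq. (3), plus the 0.1-sigma Gaussian scatter;
    # clipped so the coverage fraction stays physical.
    base = 0.278 if np.log10(Ro) <= -0.729 else 0.0623 * Ro**(-0.881)
    return np.clip(base + rng.normal(0.0, 0.1), 0.0, 1.0)

def spot_temperatures(T_eff, f_spot, x_spot):
    # Invert Eq. (4): T_eff = T_amb (1 - f + f x^4)^(1/4).
    T_amb = T_eff / (1 - f_spot + f_spot * x_spot**4) ** 0.25
    return T_amb, x_spot * T_amb          # (ambient, spot) temperatures

Ro = 0.15                                  # fast rotator: log Ro < -0.729
f = f_spot_from_rossby(Ro, rng)
T_amb, T_spot = spot_temperatures(5500.0, f, x_spot=0.9)
print(f, T_amb, T_spot)
```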
We used a grid of plane-parallel MARCS (Gustafsson et al., 2008) model photospheres that span dimensions in effective temperature, surface gravity, and metallicity.1 Microturbulence was kept fixed at 1.15 km s\({}^{-1}\) for main-sequence stars and we assumed that [\(\alpha\)/H] scales with [Fe/H] (i.e., the so-called "standard" composition in MARCS). The abundance dimensions [C/M] and [N/M] were kept fixed at zero. We used Korg (Wheeler et al., 2022) to synthesise all model spectra at high resolution, which we then convolved and down-sampled to match the (uniform in log) pixel spacing used in the APOGEE data reduction pipeline (Holtzman et al., 2018). The convolution kernel includes two components that enter multiplicatively: one assuming a constant spectral resolution \(R=\lambda/\Delta\lambda\) of 22,500, and another representing rotational broadening \(v\sin i\). We convolved each spectrum with a grid of \(v\sin i\) values that were uniformly spaced in \(\log v\sin i\) from 0-100 km s\({}^{-1}\) in order to match the setup for the APOGEE analysis pipeline. Naturally, for low \(v\sin i\) values, the line spread function of the instrument will dominate. Footnote 1: We calculated spectra using spherical models as well, but in practice, only spectra from plane-parallel models (i.e., main-sequence stars) are used in this work. With this grid of spectra and some given spectral parameters \(\{T_{\rm eff},\,\log g,\,[{\rm Fe/H}],\,\log v\sin i\}\), we interpolate the spotted and ambient spectra and combine them fractionally at each wavelength, as if they were separate black-body spectra, in order to produce a flux-preserving combined spectrum: \[B(T_{\rm eff},\lambda)\ =\ f_{\rm spot}B(T_{\rm spot},\lambda)+(1-f_{\rm spot})B(T_{\rm amb},\lambda)\quad. \tag{5}\] In total, our forward model for predicting spotted spectra includes six parameters: \(T_{\rm eff}\), \(\log g\), [Fe/H], \(\log v\sin i\), \(x_{\rm spot}\), and \(f_{\rm spot}\). This model is equally capable of predicting non-spotted spectra by fixing \(f_{\rm spot}\) to zero or \(x_{\rm spot}\) to unity. Using the 1500 sets of parameters outlined in Section 2.1, we generated synthetic spotted stellar spectra. We also apply realistic noise at each pixel, drawn from a Gaussian distribution with a standard deviation of 0.01, assuming a signal-to-noise ratio of 100. Continuum normalisation is performed using a running mean of the spectra, and during fitting this procedure is applied both to the fake spectrum (data) and to the model spectrum. We now have the tools to determine the effect of fitting spotted spectra with non-spotted models. We do this by finding the best-fitting stellar parameters given the synthetic spectra fitted twice: first with the model described in Section 2.2 and then with a non-spotted model (e.g., \(f_{\rm spot}\) fixed at 0 and \(x_{\rm spot}\) fixed at 1). Here we have performed least-squares fitting through the Levenberg-Marquardt algorithm implemented in SciPy. We found that fitting the spotted parameters can be non-trivial. The likelihood surfaces are multimodal and degenerate, requiring informed choices about the initialisation of the fitting. To resolve this issue, we performed a coarse evaluation of parameters (on a grid) before starting optimisation.
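The following is a hedged, self-contained sketch of this experiment in miniature: a toy two-line "spectrum" stands in for the interpolated synthetic grid, the spotted flux is built from Eqs. (4)-(5), noise at a signal-to-noise ratio of 100 is added, and the spectrum is then fit with the non-spotted model by least squares. The toy line depths and all numerical values are assumptions for illustration, not the Korg/MARCS machinery.

```python
import numpy as np
from scipy.optimize import least_squares

wave = np.linspace(15100, 15200, 400)          # toy wavelength grid (Angstrom)

def toy_spectrum(T, feh):
    """Hypothetical stand-in for grid interpolation: a continuum with two
    absorption lines of different temperature/metallicity sensitivity."""
    d1 = 0.3 * (1 + feh) * (5800.0 / T)
    d2 = 0.2 * (5800.0 / T) ** 2
    return (1.0
            - d1 * np.exp(-0.5 * ((wave - 15130.0) / 2.0) ** 2)
            - d2 * np.exp(-0.5 * ((wave - 15170.0) / 2.0) ** 2))

def spotted_flux(T_eff, feh, f_spot, x_spot):
    # Flux-preserving decomposition (Eq. 4) and combination (Eq. 5).
    T_amb = T_eff / (1 - f_spot + f_spot * x_spot**4) ** 0.25
    return (f_spot * toy_spectrum(x_spot * T_amb, feh)
            + (1 - f_spot) * toy_spectrum(T_amb, feh))

data = spotted_flux(5600.0, 0.0, f_spot=0.3, x_spot=0.85)
data += np.random.default_rng(1).normal(0, 0.01, wave.size)   # S/N ~ 100

# Fit with the *non-spotted* model (f_spot = 0, x_spot = 1):
res = least_squares(lambda th: spotted_flux(th[0], th[1], 0.0, 1.0) - data,
                    x0=[5700.0, 0.1])
print(res.x)   # (T_eff, [Fe/H]) recovered by the mismatched model
```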
## 3 Results We began by confirming that we could accurately recover the injected parameters. The best-fit parameters from fitting the synthetic spotted spectra with a spotted model are shown in Figures 2 and 3. \(T_{\rm eff}\), \(\log g\), [Fe/H], and \(v\sin i\) are recovered accurately for every injected parameter set. While we identify scatter in the recovered \(f_{\rm spot}\), this appears not to affect the accuracy of the recovery of the traditional stellar parameters. We move forward confident that any difference in the recovered parameters between fitting with the spotted and non-spotted models results from the model differences rather than from the fitting procedure employed in this work. Figure 1: HR diagram of the 1500 sets of stellar parameters drawn from physically motivated distributions of mass, metallicity and age, coloured by [Fe/H]. We now identify systematic effects in the recovered parameters when we fit the spotted spectra with an incorrect, non-spotted model. The difference between the recovered parameters fitted with a spotted and a non-spotted model of the stellar atmosphere is shown in Figure 4. A consistent scatter is introduced on each parameter when a non-spotted model is used to perform inference on a spotted spectrum. We calculate each parameter's average bias and scatter to quantify the effect. The injected parameters are separated into 10 bins, and we take the median and median absolute deviation of the difference between the spotted and non-spotted model's inferred parameters for each bin. We take the median as a measure of the average bias and the median absolute deviation (MAD) as a proxy for the scatter. In Figure 5 we show the effect of the injected parameters on the stellar spot spectra through the difference between the recovered spot- and non-spot-model \(T_{\rm eff}\). Fitting a spotted spectrum with a small \(x_{\rm spot}\) using a non-spotted model introduces a consistent bias to the inferred \(T_{\rm eff}\) of about \(-25\) K: a non-spotted model tends to underestimate the true effective temperature of a spotted spectrum. A scatter is also introduced to \(T_{\rm eff}\) on the scale of \(\sim\)50 K for spectra with significant spot coverage (low \(x_{\rm spot}\) and large \(f_{\rm spot}\)). The other injected parameters do not appear to have any strong correlations or effects on the recovered non-spot \(T_{\rm eff}\). Their median values are zero, and the MAD appears consistent at \(\sim\)25 K. Figure 6 shows the effects of fitting spotted spectra with a non-spotted model on \(\log g\) (orange) and \(\log\,v\sin i\) (red). There appears to be no statistically significant bias introduced to either of the inferred parameters, as the median of each bin of injected parameters is consistently about zero. However, a consistent scatter is introduced to both parameters. The MADs of \(\Delta\log g\) and \(\Delta\log v\sin i\) in each injected parameter bin have an average value of \(\sim\)0.025 dex - corresponding to an average scatter on \(v\sin i\) of \(\sim\)1 km s\({}^{-1}\). The scatter peaks for both recovered parameters at \(\sim\)0.05 dex for stars with significant spot coverage - which corresponds to a maximum scatter on \(v\sin i\) of \(\sim\)2 km s\({}^{-1}\). The scatter on recovered \(v\sin i\) and \(\log g\) is otherwise constant with the other injected parameters. The effect of fitting spotted spectra with a non-spotted model is significant in the recovery of metallicity. This is seen in Figure 6 (green), where we compare the recovered [Fe/H] with a spotted and a non-spotted model of the stellar atmosphere against the injected parameters of our spotted spectra.
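The binned bias and scatter statistics described above are straightforward to reproduce. The sketch below bins a parameter difference by an injected quantity and reports the median (bias proxy) and median absolute deviation (scatter proxy) per bin; the input arrays are random stand-ins, not the paper's fit results.

```python
import numpy as np

rng = np.random.default_rng(3)
f_spot = rng.uniform(0, 1, 1500)
d_feh = rng.normal(0.0, 0.05 * f_spot)     # toy: scatter grows with f_spot

edges = np.linspace(0, 1, 11)              # 10 bins in the injected f_spot
for k in range(10):
    sel = (f_spot >= edges[k]) & (f_spot < edges[k + 1])
    med = np.median(d_feh[sel])            # average bias in this bin
    mad = np.median(np.abs(d_feh[sel] - med))   # scatter proxy (MAD)
    print(f"{edges[k]:.1f}-{edges[k+1]:.1f}: bias={med:+.3f}, MAD={mad:.3f}")
```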
This process does not introduce a bias to the inferred metallicity of the spectra but does introduce a significant scatter to the recovered value, representing an intrinsic 'minimum floor' of systematic uncertainty if the effects of spots are not included (see Section 4). The scatter introduced to [Fe/H] by fitting spotted spectra with a non-spotted model increases with injected \(f_{\rm spot}\). As \(f_{\rm spot}\) approaches 1, the MAD of \(\Delta\)[Fe/H] reaches a maximum of about 0.05 dex. Comparatively, as \(x_{\rm spot}\) decreases, the MAD of \(\Delta\)[Fe/H] increases, peaking at 0.04 dex. As the scatter in the other injected parameters is relatively constant, there is no significant relation between the other spectral parameters and \(\Delta\)[Fe/H]. The scatter introduced to [Fe/H] is dominated by the spot parameters of the spectra. ## 4 Discussion The results in Section 3 indicate that using a non-spotted model to fit spotted stellar spectra introduces a systematic bias of up to \(-25\) K in effective temperature and no substantial bias in other parameters. In their study of fitting a spotted stellar model to APOGEE spectra of members in the Pleiades and M67, Cao and Pinsonneault (2022) find a systematic 0.1 dex enhancement in observed [Fe/H]. The lack of bias we find here could be attributed to the different stellar populations (e.g., some stars are biased in one direction, but in our population that effect is mitigated by biases in the opposite direction). We find that the effects of model mismatch (i.e., using a non-spotted model to fit a spotted spectrum) can also introduce a scatter (measured by the median absolute deviation) of about 50 K in effective temperature and 0.05 dex in other parameters. If we assume that the spot model we adopt is representative of reality, then these scatter values would represent a minimum systematic uncertainty in these parameters if the wrong model (a non-spotted model) is used. These deviations are comparable to the typical random uncertainties reported by the APOGEE survey (150 K, 0.13 dex and 0.1 dex; Hegeddis et al. 2022), although these random uncertainties will vary with signal-to-noise. Systematic uncertainties (like model mismatches) will dominate at high signal-to-noise ratios, and the level of scatter we find in metallicity (0.05 dex) is comparable to the effects of radiative levitation, atomic diffusion (Onehag et al. 2014), and magnetic broadening of absorption lines (Spina et al., 2020). Figure 2: Recovered traditional stellar parameters (\(T_{\rm eff}\), \(\log g\), [Fe/H] and \(v\sin i\)) from fitting synthetic spotted spectra with a spotted model of the stellar atmosphere against the corresponding injected parameters. We consistently and accurately recover each injected value when a spotted model of the stellar atmosphere is employed to fit the spotted synthetic spectra.
While the two stars are selected to be extremely similar in order to mitigate systematic effects, those two stars could have very different coverages of stellar spots, which would introduce a systematic uncertainty floor. ### Imperfect models The results we show here are limited in their applicability. When generating the mock data, only a fraction of randomly drawn stellar parameters could be used to synthesise spectra, either because of limitations of stellar isochrones, the spectral grid, or limits in the procedure in estimating an appropriate rotational velocity and Rossby number. We also limit the ages of the stellar sample to 4.6 Gyr - the maximum ages of both the models used to determine the convective turnover timescale and grid of rotational periods set by observations. As a result, our sample is limited to relatively young stars, and there are hints of a bias in injected parameters towards higher \(f_{\rm spot}\). We have extensively probed the region where the effect should be most prevalent in terms of the scatter it introduces, but this is not intended to be a complete and representative population of main-sequence stars. The quantitative results may not be perfectly accurate for some regions of the HR diagram, and might vary with photosphere geometry. However, by assuming spots are present everywhere across the main sequence, our analysis shows where the consequential effects are most or least prevalent. The treatment of stellar spots in this work requires some discussion. Stellar spots are highly complex regions on the surface of stars. The position of spots relative to the observer, their temporal evolution, and the inherent magnetic activity and faculae surrounding stellar spots, would all introduce complexity to the emergent spectra from these regions. The spotted model employed in this work is a first-order approximation of the average effect of spots on stellar spectra. The functional form of the temporal evolution of the stellar spots in stars other than the Sun is not well known. For a given \(f_{\rm spot}\) we could assume that \(x_{\rm spot}\) varies on some periodic or temporal scale, even if we don't know the functional form of that variability. In this scenario with our model, \(x_{\rm spot}\) is drawn from a uniform prior, which implicitly assumes that we are observing the star at some random time. This modelling of \(x_{\rm spot}\) is relatively crude since, in principle, \(x_{\rm spot}\) could vary as a function of other stellar parameters. Investigations of the evolution of fractional spot coverage of stars is a developing field. For example, recent works have shown an enhancement in \(f_{\rm spot}\) for stars undergoing core-envelope recoupling (Cao et al., 2023). For this reason, our results are only indicative rather than prescriptive. Applying this model to more stars APOGEE samples and time series spectroscopic observations of stars could elucidate the relationship between the parameters. Cao & Pinsonneault (2022) suggest that young, magnetically active stars - stars with Rossby numbers \(<\) 0.4 - have \(f_{\rm spot}\) greater than \(>\) 0.1, saturating at \(f_{\rm spot}\)\(\sim\) 0.3, with significant scatter, when \(R_{o}\)\(<\) 0.2. There is also a significant scatter in \(f_{\rm spot}\) for these stars. The use of the Rossby number to reflect the magnetic/spot activity of stars should be treated with some care. 
The Sun expresses periodic evolution of its magnetic activity (time scale on the order of decades) and stellar spot expression (time scale on the order of years). The range of fractional spot coverages we observe in the Sun is on the order of [0, 0.12] without variations in the Rossby number. As a result, we draw the injected \(f_{\rm spot}\)from relations with Rossby number and add a random scatter drawn from a Gaussian distribution with a standard deviation of 0.1. Employing a non-spotted spectra model to fit spotted spectra can introduce a scatter to recovered parameters, but fitting a spotted model to non-spotted spectra has little to no effect on the recovered parameters. We recommend that a spotted model, if only as simple as the one used in this work, will consistently recover stellar parameters better than a non-spotted model while also providing a measure of the spot parameters of stars. ### When should a spotted model of the stellar atmosphere be employed? The scatter introduced to the recovered stellar parameters increases with fractional spot coverage. Fractional spot coverage is inversely related to the rotation rate of stars through \(R_{o}\). Further, the rotation rate of stars decreases with time, owing to magnetic braking. As a result, the fractional spot coverage of stars is expected to decrease with age. We can probe when the scatter introduced to the recovery of stellar parameters by stellar spots is most prominent by calculating the Figure 3: Recovered spot parameters (\(x_{\rm spot}\) and \(f_{\rm spot}\)) from the synthetic spotted spectra fitted with a spotted model of the stellar spectra against the injected parameters of the synthetic spectra. We identify that the spot parameters are not always accurately recovered through the fitting procedure. The recovered spot parameters are notably more inaccurate as \(x_{\rm spot}\) approaches 1. scatter in \(\Delta\)[Fe/H] with bins of age. In Figure 7, we show the bias and scatter introduced to \(\Delta\)[Fe/H] with respect to stellar age. The introduced scatter is greatest for stars younger than \(\sim\) 2Gyr is \(\sim\)0.02, while the bias, measured through the median, is zero for stars in this age range. The scatter decreases for older stars (\(>\)3Gyr) to \(\sim\)0.01, but the median \(\Delta\)[Fe/H] appears to increase with increasing age. The increase in the median value is most likely not indicative of a trend and rather the result of the low number of stars in the larger age bins. The trends that we identify in this work are only qualitative - though they do allow us to make recommendations for future work. Our results indicate that fitting the spotted spectra of a star with a non-spotted model when \(f_{\rm spot}\)? 0.1 will introduce a scatter to bias the recovered parameters. We suggest using a spotted model if a star is significantly photometrically variable due to stellar spots. McQuillan et al. (2014) calculated the rotation periods of low-mass main-sequence stars that are photometrically variable due to stellar spots. They were able to determine the rotation rates of stars across a wide mass range (0.6 \(<M_{\odot}<\) 1.1) at multiple points along the main sequence. These stars must therefore express stellar spots and may have the measured stellar parameters influenced by the effect we identify in this work. They estimated that \(\sim\)23% of main-sequence stars exhibit definite rotational modulation from stellar spots, a lower bound due to observational effects. 
Given that estimate, we believe at least a quarter of main-sequence stars may be affected by this bias.

## 5 Conclusions

Here we have shown that stellar spots can introduce measurable systematic bias and variance to inferred stellar parameters when a non-spotted model is used. The results demonstrate that spectra with strong spot features can introduce a scatter in inferred metallicity of order 0.05 dex. This emphasises the need for caution when performing spectroscopic analysis on stars with visible spots, particularly young, fast-rotating stars. Our findings highlight the importance of incorporating the effect of spots into spectroscopic models to ensure accurate and precise results. The magnitude of this effect is comparable to others that plague stellar spectroscopy, including atomic diffusion, radiative levitation, and non-local thermodynamic equilibrium. However, the impact of this effect will vary depending on the scientific context. Turn-off ages of clusters are likely to be only minimally impacted, as the metallicity bias for old, slowly rotating stars is less than 0.01 dex. In contrast, a systematic error floor of 0.05 dex caused by spots on the main sequence would critically limit the capacity of strong chemical tagging (Casamiquela et al., 2021). Similarly, star spots could limit any inferences from differential abundance analyses of Sun-like stars, where the typical reported uncertainty is 0.01-0.02 dex (e.g., Melendez et al., 2014; Nissen, 2015; Reggiani et al., 2016; Maia et al., 2019; Liu et al., 2020; Nissen et al., 2020; Spina et al., 2021). While we have focused on the impact on overall metallicity and not on individual abundances, it will be important to examine these effects more closely at a per-element level. These results provide valuable insights for future studies on stars and their properties and underscore the need for continued research on the impact of spots on spectroscopic inference.

Figure 4: The difference between the recovered traditional stellar spectral parameters (\(T_{\rm eff},\ \log g,\ [{\rm Fe/H}]\) and \(v\sin i\)) from the synthetic spotted spectra fitted with both a spotted and a non-spotted model of the stellar spectra against the injected parameters of the synthetic spectra (spotted-model minus non-spotted-model recovered parameter). We identify scatter introduced to each of the stellar parameters when fitting spotted spectra with a non-spotted model of the stellar atmosphere.

## Acknowledgements

We thank the reviewer for their constructive feedback which helped improve the paper. We thank Prof. Ilya Mandel for his helpful feedback and our fruitful discussions. A. R. C. thanks Jon Holtzman and Adam Wheeler for helpful discussions. A. R. C. is supported in part by the Australian Research Council through a Discovery Early Career Researcher Award (DE190100656). Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D) through project number CE170100013.

## Data Availability

The data and models underlying this article are available upon request to the corresponding author.
2306.17058
Episodic fluid venting from sedimentary basins fuelled by pressurised mudstones
Subsurface sandstone reservoirs sealed by overlying, low-permeability layers provide capacity for long-term sequestration of anthropogenic waste. Leakage can occur if reservoir pressures rise sufficiently to fracture the seal. Such pressures can be generated within the reservoir by vigorous injection of waste or, over thousands of years, by natural processes. In either case, the precise role of intercalated mudstones in the long-term evolution of reservoir pressure remains unclear; these layers have variously been viewed as seals, as pressure sinks or as pressure sources. Here, we use the geological record of episodic fluid venting in the Levant Basin to provide striking evidence for the pressure-source hypothesis. We use a Bayesian framework to combine recently published venting data, which record critical subsurface pressures since $\sim$2~Ma, with a stochastic model of pressure evolution to infer a pressure-recharge rate of $\sim$30~MPa/Myr. To explain this large rate, we quantify and compare a range of candidate mechanisms. We find that poroelastic pressure diffusion from mudstones provides the most plausible explanation for these observations, amplifying the $\sim$3~MPa/Myr recharge caused primarily by tectonic compression. Since pressurised mudstones are ubiquitous in sedimentary basins, pressure diffusion from mudstones is likely to promote seal failure globally.
Luke M. Kearney, Richard F. Katz, Christopher W. MacMinn, Chris Kirkham, Joe Cartwright
2023-06-29T16:04:56Z
http://arxiv.org/abs/2306.17058v2
# Episodic fluid venting from sedimentary basins fuelled by pressurised mudstones

###### Abstract

**Subsurface sandstone reservoirs sealed by overlying, low-permeability layers provide capacity for long-term sequestration of anthropogenic waste. Leakage can occur if reservoir pressures rise sufficiently to fracture the seal. Such pressures can be generated within the reservoir by vigorous injection of waste or, over thousands of years, by natural processes. In either case, the precise role of intercalated mudstones in the long-term evolution of reservoir pressure remains unclear; these layers have variously been viewed as seals, as pressure sinks or as pressure sources. Here, we use the geological record of episodic fluid venting in the Levant Basin to provide striking evidence for the pressure-source hypothesis. We use a Bayesian framework to combine recently published venting data, which record critical subsurface pressures since \(\sim\)2 Ma, with a stochastic model of pressure evolution to infer a pressure-recharge rate of \(\sim\)30 MPa/Myr. To explain this large rate, we quantify and compare a range of candidate mechanisms. We find that poroelastic pressure diffusion from mudstones provides the most plausible explanation for these observations, amplifying the \(\sim\)3 MPa/Myr recharge caused primarily by tectonic compression. Since pressurised mudstones are ubiquitous in sedimentary basins, pressure diffusion from mudstones is likely to promote seal failure globally.**

Sedimentary successions often include high-permeability sandstone units enveloped by thick, low-permeability mudstone units. Because the surrounding mudstones can act as barriers to fluid leakage, these sandstones are often viewed as sealed reservoirs and therefore as targets for the large-scale sequestration of waste or storage of sustainable fuels (Krevor et al., 2023; Heinemann et al., 2021; Ringrose et al., 2021). However, fluid injection can pressurise such a reservoir to the point of triggering hydraulic fractures that breach the mudstone seal, enabling rapid depressurisation by fluid venting. It is widely believed that pressures below this failure threshold will dissipate by poroelastic diffusion through sealing mudstones over thousands of years (Muggeridge et al., 2004, 2005; Luo and Vasseur, 2016). However, this slow depressurisation relies on the assumption that the mudstones themselves will remain at low pressure over these long timescales, whereas a variety of natural mechanisms are known to gradually pressurise the entire sedimentary column (Osborne and Swarbrick, 1997). Luo and Vasseur (2016) showed that overpressured mudstones can, in theory, act as a pressure source rather than as a pressure sink, re-pressurising a sandstone reservoir after natural fluid venting. They proposed that this mechanism could fuel further episodes of venting. Kearney et al. (2023) recently developed a poroelastic model of episodic venting that supports and extends this basic concept. However, the predictions of these theoretical studies are difficult to test against observational evidence due to the long timescale associated with mudstone pressure evolution. Here, we test the hypothesis that mudstones can act as sources of pressure, fuelling fluid venting from sedimentary basins. The geological record of episodic fluid venting in the Levant Basin (Fig. 1a) provides a rare opportunity to elucidate the role of mudstones in the pressure evolution of sedimentary basins.
These vents release overpressure in localised fluid-expulsion events that transport fluid through kilometres of low-permeability rock via cylindrical conduits known as fluid-escape pipes (Cartwright et al., 2018). Fluid-escape pipes are generally interpreted to form by hydraulic fracturing (Cartwright and Santamarina, 2015), providing a high-permeability pathway to the surface. At the surface, pipes terminate as pockmarks, each recording a discrete episode of venting. Hydraulic fracturing typically requires the pore pressure to exceed the local compressive stress. Therefore, fluid-escape pipes record subsurface pressures, acting as geological piezometers. Furthermore, with stratigraphic estimates of the time of each venting episode, fluid-escape pipes can constrain the rate of pressure recharge between episodes. In the North Levant Basin, located in the Eastern Mediterranean (Fig. 1b), more than 300 fluid-escape pipes have been documented, recording episodic fluid venting from 13 fixed locations across the region. For one of these locations, named Oceanus (Fig. 1c), Cartwright et al. (2021) calculated that the initiation of venting via hydraulic fracturing requires \(\sim\)30 MPa of overpressure. Tectonic compression and marginal uplift have been proposed as the main overpressure mechanisms in the region (Oppo et al., 2021; Cartwright et al., 2021). The Levant Basin resides within a compressive tectonic regime stemming from the collision of the African and Eurasian plates. We estimate the strain at Oceanus to be less than 10% (Supplementary Material S1). Within the Levant Basin is a \(\sim\)3 km-thick Oligo-Miocene clastic succession consisting of turbiditic sandstones of Late Oligocene to Early Miocene age that are encased by mudstone. Many of these sandstone reservoirs host biogenic methane accumulations in NE-SW trending anticlines. The Levant pipes source methane and water from these anticline reservoirs and terminate at the seafloor as pockmarks (Fig. 1a). The pipes penetrate through a \(\sim\)1.5 km-thick layer of salt deposited during the Messinian Salinity Crisis (Ryan, 2009). Recent activity of the Levant Fracture System has been uplifting the eastern margin of the basin, leading to gravity-driven, basinward salt flow since \(\sim\)2 Ma, contemporaneous with the formation of pipes in the area. Each pipe forms vertically but the basinward viscous flow of salt advects existing pipes away from their initial positions, such that subsequent venting from the same reservoir requires the formation of a replacement pipe (Fig. 1c). Repetition of this process leads to the 13 observed trails of pipes in the North Levant Basin (Cartwright et al., 2018; Oppo et al., 2021). Thus, each pipe trail records episodic fluid venting from a single reservoir, suggesting that these reservoirs are repeatedly repressurised. From the spatial distributions of pockmarks within each pipe trail, the time of formation of each pipe can be estimated (Fig. 1d) using the methods of Oppo et al. (2021) and Cartwright et al. (2021).

Figure 1: Fluid escape pipe trails in the Levant Basin. (a) Overview of base-salt surface, showing sub-salt anticlines and the elevated margin platform, adjacent to the normally faulted deeper basin; adapted from Oppo et al. (2021), where lighter colours indicate larger depth. (b) Study area located on the North Levant Basin margin, offshore Lebanon. (c) General mechanism for fluid escape pipe trail formation, adapted from Cartwright et al. (2018), with (_i_) as the formation of the initial pipe at 1.7 Ma and (_ii_) as the present-day arrangement. (d) Pipe trails labelled 1–12 and Oceanus from panel (a) when corrected for relative salt translation rates (Oppo et al., 2021).
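The essence of this time-transformation is geometric: if pipes are advected at a roughly constant trail-specific salt velocity, a pockmark's along-trail distance from the currently active vent maps directly to a formation age. The sketch below illustrates this with made-up numbers; the actual methods of Oppo et al. (2021) and Cartwright et al. (2021) account for spatially varying salt translation rates.

```python
import numpy as np

# Hypothetical along-trail distances (km) of pockmarks from the active vent
# and an assumed constant salt translation rate; age = distance / velocity.
s_km = np.array([0.0, 0.3, 0.55, 0.9, 1.2])
v_salt_km_per_myr = 2.7                      # illustrative value only
age_ma = s_km / v_salt_km_per_myr            # pockmark formation ages (Ma)
dt_kyr = np.diff(age_ma) * 1e3               # inter-event times, ~100 kyr scale
```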
These approaches reveal that for each trail, pipe formation typically occurs every \(\sim\)100 kyr. Since fluid-escape pipes record critical subsurface pressures, the Levant pipe trails enable us to distinguish between theories for pressure redistribution between sedimentary layers. The timings of the pockmarks of the isolated Oceanus pipe trail are particularly well constrained as it is situated in a less tectonically-active region of the basin. Oceanus is therefore less susceptible to local stress changes that might affect the recharge mechanics. We thus focus our analysis on the Oceanus trail. The remaining 12 trails are distributed along the active basin margin and are used to extend our inferences from Oceanus to a more complex system. To test the pressure-source hypothesis, we develop a novel stochastic model of reservoir pressure evolution and use it to invert the Levant pipe trail data under a Bayesian framework for model parameters such as the pressure-recharge rate. Using basic physical arguments, we then estimate recharge rates for each candidate overpressure mechanism and compare with the inferred rates. In particular, Kearney et al. (2023) showed that pressure diffusion from mudstones amplifies the rate of pressure recharge generated by tectonic compression. In mudstone-dominated basins like the Levant Basin, pressure-recharge rates can be amplified by a factor of \(\sim\)10. Therefore, if this hypothesis is correct, we expect the inferred recharge rate to be a factor of \(\sim\)10 greater than that predicted for tectonic compression alone.

We assume that a fluid-escape pipe forms via hydraulic fracturing when the pore pressure exceeds the critical fracture pressure \(p_{f}\), which is the sum of the minimum horizontal compressive stress \(\sigma_{\rm min}\) and tensile strength \(\sigma_{T}\) of the overlying mudstone (Price and Cosgrove, 1990; Scandella et al., 2011), \[p_{f}=\sigma_{\rm min}+\sigma_{T}, \tag{1}\] where we take compression to be positive. Once venting begins, the pressure drops rapidly until the pathway closes, which we assume occurs when the pressure reaches \(\sigma_{\rm min}\). Once closed, we expect fractures to self-seal via swelling and mineral precipitation (Bock et al., 2010). Roberts and Nunn (1995) predict fluid venting durations of order years, which may be considered instantaneous relative to recharge times, of order 100 kyr. Over the latter timescale, pressure will become spatially uniform within a high-permeability reservoir. We thus assume that reservoir pressure depends on time only. For multiple venting episodes to be sourced from the same reservoir, the reservoir pressure must recharge between episodes. We consider generic pressure recharge at an average rate \(\Gamma\), such that the corresponding time \(\Delta t\) between events is \[\Delta t=\frac{p_{f}-\sigma_{\rm min}}{\Gamma}=\frac{\sigma_{T}}{\Gamma}. \tag{2}\] Fractures exploit pre-existing rock weaknesses that change over geologic time, such that \(\sigma_{T}\) will vary between events. We model this variability by asserting that \(\sigma_{T}\) is a normally distributed random variable with mean \(\overline{\sigma_{T}}\) and standard deviation \(s_{T}\). Equation (2) then implies that \(\Delta t\sim\mathcal{N}(\overline{\sigma_{T}}/\Gamma,s_{T}/\Gamma)\).
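A minimal numerical sketch of this stochastic model, assuming illustrative parameter values near the posterior means reported below (\(\Gamma\sim 30\) MPa/Myr, \(\overline{\sigma_T}\sim 2\) MPa): tensile strengths are drawn from the normal distribution and converted to inter-event times via equation (2).

```python
import numpy as np

def simulate_vent_times(gamma, mean_st, sd_st, t_first_ma=1.7, n_events=17, seed=0):
    """Event ages (Ma) of a synthetic pipe trail.

    gamma   : recharge rate (MPa/Myr)
    mean_st : mean tensile strength sigma_T (MPa)
    sd_st   : standard deviation of sigma_T (MPa)
    """
    rng = np.random.default_rng(seed)
    sigma_t = np.maximum(rng.normal(mean_st, sd_st, n_events), 0.0)
    dt_myr = sigma_t / gamma                  # equation (2)
    return t_first_ma - np.cumsum(dt_myr)     # ages decreasing towards the present

# sigma_T ~ 2 +/- 1 MPa and Gamma ~ 30 MPa/Myr give dt ~ 70 kyr on average
ages_ma = simulate_vent_times(30.0, 2.0, 1.0)
```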
Thus, the mean and standard deviation of the inter-event times of a trail of pockmarks can be used to infer the underlying recharge rate. As this is a limited dataset that has been produced by an inherently stochastic process, Bayesian inference is used to invert the pipe trail data for the full probability distribution of each parameter and quantify their uncertainty. Our prior estimates of each parameter (\(\Gamma,\overline{\sigma_{T}},s_{T}\)) are updated by evaluating the data with a likelihood function to recover the posterior probability distributions of each parameter. The likelihood function provides a statistical measure of model-data agreement by calculating the probability of observing the data given a set of model parameters. The simplicity of our physical model enables the likelihood function to be expressed analytically (Supplementary Material S2). We apply a conservative Gaussian prior for \(\overline{\sigma_{T}}\ \sim\ \mathcal{N}(2.0,1.0)\) MPa, since mudstone tensile strengths are typically a few MPa (Okland et al., 2002; Raen et al., 2006); in particular, Roberts and Nunn (1995) predict a pressure drop of \(\sim\)2 MPa from venting. The posterior distributions of each parameter are sampled using the Metropolis-Hastings algorithm (Hastings, 1970); a minimal sketch of this sampling procedure is given below.

From the posterior distributions inferred for the isolated Oceanus trail (Supplementary Material S4), we use the mean posterior parameter values as input for our stochastic model to simulate an instance of linearised pressure evolution (Fig. 2a). The qualitative similarity between the pockmark data and the model output is apparent (Fig. 2a, lower panel). For statistical comparison, we use samples from the posterior parameter distributions to calculate a range of posterior time interval distributions (Fig. 2b) that agree well with the data; variations between samples indicate the level of uncertainty in the inference. We note that as we have inferred the time-averaged recharge rate, this linearised pressure evolution resembles the sawtooth behaviour that is predicted for recharge from tectonic compression only (Cartwright et al., 2021; Kearney et al., 2023). However, our statistical model makes no physical assumptions regarding the mechanism or dynamics of pressure recharge between venting episodes.

To the east of Oceanus are 12 other trails distributed along the basin margin (Oppo et al., 2021). Some of these trails originate from the same anticline, separated only by \(\sim\)1 km, and thus may be in hydraulic communication. To account for this possibility, we introduce pressure coupling as a feature of the model. For a coupled system of pipes, after any one pipe vents, the pressures of all pipes coupled to it reset to \(\sigma_{\min}\) and a new \(\sigma_{T}\) is sampled for each. Therefore, the pipe that vents pressure temporarily inhibits any coupled pipes from venting. Consequently, the pipes in a coupled system form complementary pockmark series (Supplementary Material S5). If a group of pipes is instead uncoupled, each pipe behaves independently. This contrast between independent and complementary venting behaviour is a qualitative diagnostic for pressure coupling. To evaluate whether a pair of adjacent trails are coupled, we calculate the Bayes factor of the coupled model \(M_{c}\) and uncoupled model \(M_{u}\) (Supplementary Material S2).
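Returning to the single-trail inversion, here is a minimal sketch of the Metropolis-Hastings sampling of \((\Gamma,\overline{\sigma_T},s_T)\) given observed inter-event times, using the Gaussian likelihood \(\Delta t\sim\mathcal{N}(\overline{\sigma_T}/\Gamma,\,s_T/\Gamma)\) and the \(\mathcal{N}(2.0,1.0)\) MPa prior on \(\overline{\sigma_T}\); the flat positive priors on \(\Gamma\) and \(s_T\), the proposal step sizes and the starting point are illustrative assumptions.

```python
import numpy as np

def log_posterior(theta, dt_myr):
    """theta = (Gamma [MPa/Myr], mean sigma_T [MPa], sd of sigma_T [MPa])."""
    gamma, m, s = theta
    if gamma <= 0.0 or m <= 0.0 or s <= 0.0:
        return -np.inf                        # flat positive priors on Gamma, s_T
    mu, sd = m / gamma, s / gamma             # dt ~ N(m/Gamma, s/Gamma)
    loglik = -0.5 * np.sum(((dt_myr - mu) / sd) ** 2) - dt_myr.size * np.log(sd)
    logprior = -0.5 * ((m - 2.0) / 1.0) ** 2  # sigma_T_bar ~ N(2.0, 1.0) MPa
    return loglik + logprior

def metropolis(dt_myr, n_steps=50_000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.array([30.0, 2.0, 1.0])            # starting point (assumed)
    lp = log_posterior(x, dt_myr)
    step = np.array([2.0, 0.2, 0.1])          # proposal widths (assumed)
    chain = np.empty((n_steps, 3))
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_posterior(prop, dt_myr)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```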
The Bayes factor \(B_{cu}\) of two models \(M_{c}\) and \(M_{u}\) is given by the ratio of probabilities of observing the data \(\mathbf{t}\) given each model, i.e., \[B_{cu}=\frac{\mathbb{P}(\mathbf{t}\,|\,M_{c})}{\mathbb{P}(\mathbf{t}\,|\,M_{u})}. \tag{3}\] For example, if \(B_{cu}>1\) then \(M_{c}\) is preferred over \(M_{u}\). Kass and Raftery (1995) state that Bayes factor magnitudes between 10 and 100 are 'strong' and above 100 are 'decisive'. We use this interpretation to assess the couplings of the pipe trails. For the Levant margin pipe data (Fig. 3a), we infer similar recharge rates to those inferred for Oceanus, although mean recharge rates range up to 66 MPa/Myr for pipe trail 8 (Fig. 3b). Fig. 3c shows Bayes factors of pairwise analysis of adjacent trails. Triple-wise analysis leads to the same conclusions, but has been omitted to simplify the interpretation (Supplementary Material S6). The model identifies all adjacent pipes that are greater than 10 km apart as decisively uncoupled. Furthermore, the inverted model indicates hydraulic connectivity between pipes 3, 4 and 5, each located along the same anticline, as well as trails 7 and 8 (Fig. 3c). The inferences for pressure coupling are in agreement with the qualitative diagnostic behaviour. For example, the complementary venting behaviour of trails 3, 4 and 5 is visually evident. Conversely, trails 10, 11 and 12 are statistically inferred to be uncoupled and exhibit independent venting behaviour. Bayes factors with magnitudes below 10 exist for trail pairs {1, 2}, {2, 3} and {5, 6}, indicating a lack of preference for either coupling or not. We attribute this neutrality to features in the data that obscure the underlying recharge mechanics. These features are likely due to local stress variations caused by, for example, faulting.

Figure 2: Results of Bayesian inference applied to the Oceanus pockmark trail. (a) Lower panel: Time-transformed pockmark data (blue) and stochastic model instances of venting history using inferred posterior mean (dark green) and sample parameters (translucent green). Upper panel: Stochastic instance of linearised pressure evolution using inferred posterior mean parameters (dark green), with pressures in excess of the minimum compressive stress corresponding to the mean tensile strength (dashed line) and corresponding to tensile strength values within one standard deviation of the mean (grey shaded area). (b) Posterior mean (dark green) and sample (translucent green) time interval distributions compared with Oceanus data.

Nonetheless, since the majority of results do have strong preferences for one model or another, we assert that the physical model captures the main pressure behaviour, both spatially and temporally. This result lends support to our statistical inferences of pressure-recharge rates. The venting observations could plausibly be explained by various overpressure mechanisms that have been previously proposed. We next show that these mechanisms are inconsistent with our inferred recharge rates. Tectonic compression has been proposed as a major contributor to overpressure in the region (Cartwright et al., 2021). Previous numerical modelling of tectonic compression indicates that overpressures of 11-14 MPa in total can be generated from 10% strain (Obradors-Prats et al., 2017). At Oceanus, the strain accumulated since at least the Messinian Salinity Crisis, 5-6 Ma, is less than 10%.
This implies a maximum recharge rate of \(\sim\)3 MPa/Myr from tectonic compression, which is insufficient to reproduce the observations (Fig. 4). However, Kearney et al. (2023) showed that pressure diffusion from mudstones amplifies the tectonic pressure-recharge rate in adjacent sandstones by a factor of \((1+\nu/\gamma)\). This factor is termed the venting frequency multiplier, and \(\nu/\gamma\) is a ratio of dimensionless numbers that quantifies the relative effects of diffusion and compression. The dimensionless quantity \(\gamma\) measures the tectonic pressure-recharge rate of the sandstone relative to that of the mudstone; \(\nu\) is the hydraulic capacitance of the mudstone relative to that of the sandstone. The hydraulic capacitance of a layer is the product of compressibility and thickness. Typically, \(\nu/\gamma\gg 1\) in basins composed primarily of mudstone (Kearney et al., 2023), like the North Levant Basin. Calculating the recharge rate from the combined effect of diffusion and compression using prior distributions of each constituent parameter gives a probability distribution that largely overlaps with the inferred recharge rates (Fig. 4). Other candidate mechanisms predict much lower recharge rates than those inferred from the data (Fig. 4). The details of how we estimate the pressure-recharge rates for each mechanism are found in Supplementary Material S7. Oppo et al. (2021) proposed that marginal uplift generates significant overpressures at the basin margin by driving lateral fluid migration from the highly overpressured deep basin. If pressure were transferred laterally along a connected, high-permeability sandstone unit, the venting periods would be several orders of magnitude lower than are observed. However, it is likely that there is poor lateral reservoir connectivity in the area (Cartwright et al., 2021), so the only pathway for lateral fluid migration is via mudstones, thus implicitly requiring pressure diffusion from mudstones for reservoir recharge. Marginal uplift may also generate overpressure by flow focusing (Flemings et al., 2002), though this mechanism likely produces insufficient recharge rates (Fig. 4): flow focusing due to fold amplification generates overpressures at a rate of less than \(\sim\)1 MPa/Myr. Disequilibrium compaction has a negligible effect (\(\sim\)1 MPa/Myr) due to the small post-salt sediment thickness of \(\sim\)300 m (Cartwright et al., 2021). Furthermore, hydrocarbon generation likely cannot generate the required recharge rate, since the additional head required is greater than \(\sim\)1 km/Myr and most gas generation was complete by 5-6 Ma (Al-Balushi et al., 2016). Sea-level fluctuations may trigger venting episodes (see Scandella et al., 2011), though this mechanism alone provides no net pressure recharge.

Figure 3: Results of Bayesian inference applied to Levant margin data. (a) Time-transformed data from Oppo et al. (2021). Dashed lines divide pipe clusters that are separated by more than 10 km. (b) Violin plot of posterior recharge rate distributions for each pipe trail. (c) Bayes factors of pairwise pipe analysis, where a positive value implies the coupled model is more likely.

The venting observations from the Levant fluid-escape pipe trails are consistent with predictions deriving from the hypothesis that pressure diffusion from mudstones fuels episodic venting in the region. Therefore, the Levant pipe trails provide strong spatiotemporal evidence supporting this hypothesis.
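As a back-of-the-envelope check of the amplification argument above, assuming the factor-of-ten \(\nu/\gamma\) quoted for mudstone-dominated basins, the combined recharge rate and the implied venting period follow directly (all numbers illustrative).

```python
# Tectonic recharge amplified by mudstone pressure diffusion,
# Gamma = Gamma_tec * (1 + nu/gamma); nu/gamma ~ 10 for mudstone-dominated
# basins (Kearney et al., 2023), the rest are illustrative round numbers.
gamma_tec = 3.0                               # MPa/Myr, tectonic compression alone
nu_over_gamma = 10.0                          # relative capacitance / loading ratio
gamma_total = gamma_tec * (1.0 + nu_over_gamma)   # ~33 MPa/Myr
period_kyr = 2.0 / gamma_total * 1e3          # sigma_T ~ 2 MPa  ->  ~60 kyr
print(gamma_total, period_kyr)
```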
In supporting this hypothesis, the pipe trails also support a more general idea--that pressure diffusion from mudstones plays an important role in pressure redistribution between sedimentary layers--and provide evidence that has hitherto been lacking in the theoretical literature (e.g. Muggeridge et al. (2004, 2005), Luo & Vasseur (2016), Kearney et al. (2023)). It is likely that tectonic compression and marginal uplift slowly pressurised the basin to near-lithostatic pressures by \(\sim\)2 Ma. This initiated basin-wide fluid venting by hydraulic fracturing, sourced by high-permeability pre-salt sandstone reservoirs. Tectonic compression continued to slowly pressurise (\(\sim\)3 MPa/Myr) the entire sedimentary succession while poroelastic pressure diffusion from mudstones recharged the sandstone reservoirs back to failure at a rate of \(\sim\)30 MPa/Myr. This combination of pressure diffusion and tectonic compression led to episodic fluid venting with a typical venting period of \(\sim\)100 kyr. While this is not a universal result for pipes in any basin, pressure diffusion exists wherever the corresponding reservoir unit is encased by highly overpressured, low-permeability rocks. Furthermore, the effect of pressure diffusion is intensified in sedimentary basins composed mostly of mudstone (Kearney et al., 2023), where fluid venting phenomena are commonly observed (Cartwright & Santamarina, 2015). However, the large uncertainty ranges in our inferences are an impetus to improve our mechanistic models of fluid-escape pipe formation. Because understanding subsurface pressure is crucial to prevent unwanted fluid leakage, these results have wider implications for risk assessment during waste sequestration (e.g., CO\({}_{2}\) (Neufeld et al., 2010)) and borehole drilling. Even after several episodes of fluid venting, near-lithostatic reservoir overpressures can be retained due to pressure diffusion from mudstones (Kearney et al., 2023). This highlights the importance of considering this mechanism in assessing basin overpressures. For sequestration sites with evidence of previous fluid venting (e.g., the Sleipner field (Arts et al., 2004; Cavanagh & Haszeldine, 2014)), our results indicate that reservoir pressure monitoring over several millennia may be required to ensure containment.
2303.05282
Hydrodynamic interactions change the buckling threshold of parallel flexible sheets in shear flow
Buckling induced by viscous flow changes the shape of sheet-like nanomaterial particles suspended in liquids. This instability at the particle scale affects collective behavior of suspension flows and has many technological and biological implications. Here, we investigated the effect of viscous hydrodynamic interactions on the morphology of flexible sheets. By analyzing a model experiment using thin sheets suspended in a shear cell, we found that a pair of sheets can bend for a shear rate ten times lower than the buckling threshold defined for a single sheet. This effect is caused by a lateral hydrodynamic force that arises from the disturbance flow field induced by the neighboring sheet. The lateral hydrodynamic force removes the buckling instability but massively enhances the bending deformation. For small separations between sheets, lubrication forces prevail and prevent deformation. Those two opposing effects result in a non-monotonic relation between distances and shear rate for bending. Our study suggests that the morphology of sheet-like particles in suspensions is not purely a material property, but also depends on particle concentration and microstructure.
Hugo Perrin, Heng Li, Lorenzo Botto
2023-03-09T14:26:56Z
http://arxiv.org/abs/2303.05282v1
# Hydrodynamic interactions change the buckling threshold of parallel flexible sheets in shear flow

###### Abstract

Buckling induced by viscous flow changes the shape of sheet-like nanomaterial particles suspended in liquids. This instability at the particle scale affects collective behavior of suspension flows and has many technological and biological implications. Here, we investigated the effect of viscous hydrodynamic interactions on the morphology of flexible sheets. By analyzing a model experiment using thin sheets suspended in a shear cell, we found that a pair of sheets can bend for a shear rate ten times lower than the buckling threshold defined for a single sheet. This effect is caused by a lateral hydrodynamic force that arises from the disturbance flow field induced by the neighboring sheet. The lateral hydrodynamic force removes the buckling instability but massively enhances the bending deformation. For small separations between sheets, lubrication forces prevail and prevent deformation. Those two opposing effects result in a non-monotonic relation between distances and shear rate for bending. Our study suggests that the morphology of sheet-like particles in suspensions is not purely a material property, but also depends on particle concentration and microstructure.

## I Introduction

Soft biological or synthetic objects, such as cells, lipid bilayers, macromolecules, and nanoparticles, can deform when suspended in sufficiently strong shear or extensional flows [1; 2; 3; 4]. Predicting flow-induced morphological changes is crucial in many fields, ranging from biophysics, where the swimming of micro-organisms relies on fluid-structure interactions [5], to soft matter physics, where the rheological response of a particulate suspension is affected by the instantaneous particle shape [6]. Model studies in canonical flows have provided profound physical insights of general applicability. For example, the theoretical prediction of the coil-stretch transition of polymers in simple shear flow [7; 8] was instrumental in the development of rheological models for dilute polymer solutions [9]. The recent need to develop liquid-based methods to process two-dimensional (2D) nanomaterials [10; 11; 12] has triggered new interest in the effect of flow on the morphology of sheet-like materials [13; 14; 10; 11; 15; 16; 3; 17]. Two-dimensional materials have low bending moduli and therefore can undergo transient or permanent buckling in flow [3]. Recent numerical studies [3; 18] demonstrate that purely mechanical models based on the competition between hydrodynamic compressive forces and elastic bending forces can capture the change of morphology of isolated graphene sheets and 2D polymers suspended in a simple shear flow. This agreement demonstrates that the morphology of a single sheet is determined by a buckling instability whose threshold depends, for a given fluid shear rate and viscosity, only on the bending modulus and length of the sheet. However, the extension of this result to suspensions of many particles is an open question. Because of their relatively large contact area, sheet-like particles are prone to stacking at small inter-particle separations [19; 11]. Hydrodynamic interactions between nearly parallel sheets are thus expected to alter the buckling dynamics predicted for single sheets.
In this study we investigate parallel pairs of flexible sheets in a shear flow as a function of their separation distance, and study how the buckling instability threshold depends on hydrodynamic interactions. By performing model experiments, and interpreting the results with the help of boundary-integral simulations and theoretical modeling, we demonstrate that hydrodynamic interactions can trigger bending far below the buckling threshold of a single sheet. Hydrodynamic interactions cannot therefore be considered second-order effects when predicting the morphology of flexible sheets in flow. More specifically, our simulations and theoretical modeling show that the dipolar disturbance flow field induced by each sheet gives rise to a lateral hydrodynamic force. This lateral force modifies the mechanical response of the sheet pair to the compressive axial hydrodynamic force experienced when the pair is oriented in the compressional quadrant of the shear flow. On the other hand, for small separations, the lubrication forces overcome this dipolar contribution and prevent bending. These two competing effects result in a non-monotonic relation between interparticle distance and critical shear rate for buckling. More generally, our results suggest that the deformation of close sheets in a suspension may not only depend on the mechanical and geometric properties of each sheet but also strongly on the pair-particle separation and thus on the concentration.

## II Experiments

Mylar sheets (Young's modulus \(E\simeq 4\) GPa) of different thicknesses (\(h=23,\ 50\) and \(125\ \mu\)m), width \(w=1\) cm and length \(L\) ranging from \(1\) to \(4\) cm were used. The corresponding sheet bending moduli are \(B\simeq 5.0\times 10^{-6},\ 5.0\times 10^{-5},\ \text{and}\ 8.1\times 10^{-4}\ \text{J}\) (\(B=Eh^{3}/12(1-\nu^{2})\), where \(\nu\simeq 0.5\) is the Poisson's ratio). The shear cell is composed of a belt driven by two co-rotating cylinders of diameter \(6\) cm - see fig.1(a). A motor, connected to one of the cylinders, imposes a controlled shear rate in the range \(\dot{\gamma}=0.4-10\ \text{s}^{-1}\). We considered single sheets, and pairs of parallel sheets with separation distance \(d\) varying in the range \(d/L=0.03-1\). The sheets were placed in glycerol (the viscosity is \(\eta\simeq 1\) Pa.s and the density is \(\rho\simeq 1.2\times 10^{3}\ \text{kg.m}^{-3}\)). The maximum Reynolds number \(Re=\rho\dot{\gamma}L^{2}/\eta\) is of order \(1\) at the maximum shear rate. With the use of tweezers, the sheets were set with their normal vector in the flow-gradient plane. The cylinders were set in motion after the sheets were placed. During the dynamics, the normal vector remains in the flow-gradient plane and thus the dynamics is two-dimensional. Optical measurements with a camera were carried out from the top, i.e. along the vorticity direction of the undisturbed shear flow. We extracted from the images the midpoint orientation angle \(\theta(t)\) by fitting a line to each sheet's profile - see fig.1(b). The mid-point curvature \(\bar{\kappa}(t)\) was obtained by fitting a parabola to each sheet's profile. Measurements were performed with a time resolution of \(0.1\) s and with a spatial resolution of \(25\ \mu\)m/pixel. Even though the maximum \(Re\) is of order \(1\), in analyzing the results we will consider a low-Reynolds-number approximation (Stokes flow). As will appear later, this approximation gives a reasonable agreement between the simulations and the experimental data.
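As a quick consistency check on the quoted material parameters, the plate bending stiffness \(B=Eh^{3}/12(1-\nu^{2})\) can be evaluated directly; with the round values \(E=4\) GPa and \(\nu=0.5\) this reproduces the three quoted moduli to within roughly 10%.

```python
# Bending moduli for the three Mylar thicknesses, B = E h^3 / (12 (1 - nu^2)).
E_young, nu_poisson = 4e9, 0.5                # Pa, dimensionless
for h in (23e-6, 50e-6, 125e-6):              # thicknesses in metres
    B = E_young * h**3 / (12.0 * (1.0 - nu_poisson**2))
    print(f"h = {h * 1e6:.0f} um  ->  B = {B:.1e} J")
# prints ~5.4e-06, ~5.6e-05, ~8.7e-04 J, close to the quoted values
```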
## III Simulation method

We simulated the fluid-structure interaction of thin sheets in Stokes flow by a regularized Stokeslet approach [20; 21; 17; 22]. The regularized Stokeslet method has been used to study a variety of fluid-structure interaction problems at low Reynolds number, including cilia-driven transport [20], flagella synchronization [21], and flow around double helices [22]. As in the experiment the flow and the sheet dynamics are two-dimensional, we simplified the simulation by choosing a two-dimensional description. For a two-dimensional slender body, the approach consists in placing regularized force singularities along the body's centerline. The integral of the regularized force density over each discretization line segment represents the force exerted by that segment of the slender body on the fluid. Owing to the linearity of the Stokes equation, the velocity field \(\mathbf{u}(\mathbf{x},t)\) at position \(\mathbf{x}\) and time \(t\) obeys the following boundary integral equation [23]: \[\mathbf{u}(\mathbf{x},t)=\mathbf{u}^{\infty}(\mathbf{x})+\frac{1}{4\pi\eta}\int_{C}\mathbf{G}_{\epsilon}(\mathbf{x},\mathbf{x}_{0})\cdot\mathbf{f}(\mathbf{x}_{0},t)\mathrm{d}l, \tag{1}\] where \(\mathbf{u}^{\infty}\) is the undisturbed background flow, \(\eta\) is the dynamic viscosity, \(\mathbf{f}(\mathbf{x}_{0},t)\) is the force density exerted on the fluid by the sheet element \(\mathrm{d}l\) centered at \(\mathbf{x}_{0}\) and \(\mathbf{G}_{\epsilon}\) is a 2D regularized Stokeslet for an unbounded flow [24]. Here, we neglected the double-layer potential because of the inextensibility approximation for the sheets [22; 17]. Since the sheets are inertia-less, the hydrodynamic force (\(-\mathbf{f}\)) is balanced by the local internal elastic force. Numerically, we compute the elastic force from the derivative of the bending energy, as done in [25; 21; 17]. The kinematics of each sheet is governed by the no-slip boundary condition on the surface of the sheet. In the slender body approximation, this condition is approximated by a no-slip condition at the centerline of the sheet: \[\frac{\partial\mathbf{X}(s,t)}{\partial t}=\mathbf{u}(\mathbf{X}(s,t)), \tag{2}\] where \(\mathbf{X}(s,t)\) is the position vector along the centerline of the sheet at the curvilinear coordinate \(s\) and time \(t\). In this numerical method the sheet has zero thickness; therefore, in order to observe tumbling and bending, the sheet needs to be initialized at an orientation angle different from \(\pm\pi/2\) and the initial shape set to a perturbation from a straight line [26]. Based on our previous work [17] we chose for the initial orientation \(\theta_{0}=-\pi/2+\pi/10\) and for the initial shape perturbation the first buckling mode \(\kappa(s)=\kappa_{0}\sin(s\pi/L)\) with a small amplitude \(\kappa_{0}=8\times 10^{-3}/L\), where \(\kappa(s)\) is the local curvature at \(s\). At each time step, the velocity field is calculated first by eq.(1), then eq.(2) is advanced in time by a first-order explicit Euler scheme to obtain the sheet's configuration at the new time step.

Figure 1: (a) Schematic of the shear cell. (b) Schematic of a buckled sheet viewed along the vorticity direction of the shear flow and definition of the mid-point orientation angle \(\theta\) and mid-point curvature \(\bar{\kappa}\). The compressional quadrant and the extensional quadrant are shown. In this schematic, the sheet is oriented in the compressional quadrant (\(-\pi/2<\theta<0\)).
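A minimal sketch of one time step of this scheme follows. For brevity we use a crude regularisation of the 2D Oseen tensor rather than the specific regularised Stokeslet of [24], and we take the elastic force density on the nodes as given (in the actual method it derives from the discrete bending energy); everything here is illustrative.

```python
import numpy as np

def reg_oseen_2d(x, x0, eps):
    """Crudely regularised 2D Oseen tensor (stand-in for the blob of [24])."""
    r = x - x0
    r2 = r @ r + eps**2
    return -0.5 * np.log(r2) * np.eye(2) + np.outer(r, r) / r2

def node_velocities(X, F, dl, eta, eps, shear_rate):
    """Equation (1): background shear u_inf = (gamma_dot * y, 0) plus Stokeslet sum."""
    U = np.empty_like(X)
    for i, xi in enumerate(X):
        u = np.array([shear_rate * xi[1], 0.0])
        for x0, f in zip(X, F):
            u += (reg_oseen_2d(xi, x0, eps) @ f) * dl / (4.0 * np.pi * eta)
        U[i] = u
    return U

# one explicit Euler step of equation (2), with F the force density on the fluid:
# X += node_velocities(X, F, dl, eta, eps, shear_rate) * dt
```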
In the simulations, each sheet is discretized by 51 nodes and the time step is \(10^{-5}\dot{\gamma}^{-1}\). Validations of the code on two cases for which asymptotic solutions are known can be found in our previous article [17]. Those two cases are the relaxation of an initially deformed sheet in a quiescent flow and the tumbling dynamics of a single sheet in a shear flow. During the simulation, at each time step the mid-point curvature \(\bar{\kappa}(t)=\kappa(s=L/2,t)\) of a sheet is calculated by fitting a parabola to its center.

## IV Dynamics of a single sheet

For an inextensible flexible sheet of length \(L\), width \(w\) and bending modulus \(B\), the Euler buckling force for axial compression scales proportionally to \(wB/L^{2}\) [27; 28]. The viscous compressive force in a shear flow in the Stokes limit scales as \(\eta\dot{\gamma}Lw\). Its dependence on the orientation angle is \(-2\sin\theta\cos\theta\) [3; 29], which is maximum when the sheet is oriented along the compressional axis \(\theta=-\pi/4\) of the shear flow (see fig.1(b)). The buckling dynamics of a single flexible sheet depends therefore on the elasto-viscous number [3] \[E_{v}=\frac{\eta\dot{\gamma}L^{3}}{B}. \tag{3}\] This non-dimensional number can also be interpreted as the ratio of two time scales: \(1/\dot{\gamma}\), the characteristic time scale of the shear flow, and \(\eta L^{3}/B\), the characteristic time scale of curvature relaxation in a quiescent viscous liquid. We determined the single-sheet buckling threshold by measuring experimentally the sheet curvature \(\bar{\kappa}\) corresponding to different elasto-viscous numbers, placing only one sheet in the shear cell. For small elasto-viscous numbers, the sheet tumbles in the flow and remains straight. For relatively large elasto-viscous numbers, for example \(E_{v}\simeq 21\) (fig.2(a)), the sheet deforms during tumbling. The time dependence of the angle \(\theta(t)\) in fig.2(a) is well described by Jeffery's solution for rigid oblate ellipsoids [3; 30; 31]. This agreement validates the Stokes flow assumption we made for the simulations. This agreement also shows that the tumbling dynamics is not significantly affected by the sheets' deformations for curvatures smaller than \(1/L\). The time-dependent curvature is seen to grow when the sheet is oriented in the compressional quadrant (\(-\pi/2<\theta<0\)), which is the signature of the buckling instability. Then the curvature decays to zero, over a time scale \(1/\dot{\gamma}\), as \(\theta(t)\) spans the extensional quadrant (\(0<\theta<\pi/2\)) - see fig.2(a). To identify the single-sheet buckling threshold, we measured the maximum curvature \(\bar{\kappa}_{max}\) attained during a tumbling cycle for different elasto-viscous numbers (see fig.2(b)). The results lie in two regions sharply separated by a critical elasto-viscous number \(E_{v}^{c}\simeq 11\) above which the sheet deforms with a curvature larger than the experimental resolution. Below this number the sheet's curvature is negligible. To corroborate this observation, we performed numerical simulations of single sheets for different elasto-viscous numbers, see fig.2(c). For elasto-viscous numbers larger than \(E_{v}=8\), the curvature increases in time, a signature of the growth of the buckling instability. For elasto-viscous numbers smaller than \(E_{v}=8\), the curvature decays.
The agreement between the numerical prediction (\(\simeq 8\)) and the experimental value (\(\simeq 11\)) is acceptable considering the finite experimental resolution, which makes a very precise determination of the buckling threshold difficult [29]. In the literature, a mathematical model based on applying Jeffery's solution for the hydrodynamic stress on oblate ellipsoids to a flexible disk predicted a threshold value of \(\simeq 10^{2}\) [32]. Recent simulations of a hexagonal flexible sheet modeled as a collection of beads interacting via long-range hydrodynamic interactions - represented at the Rotne-Prager-Yamakawa level - suggested a critical buckling threshold of about 50 [3]. Since the hydrodynamic compressive force depends on the shape, it is expected that the buckling threshold for rectangular sheets is different from those for ellipsoids and hexagonal sheets, so differences with published work are expected. The experimental determination of the buckling threshold for single rectangular sheets, confirmed by our numerical simulation, is an important step that provides a reference case for the study of pairs of parallel sheets.

## V Dynamics of a pair of parallel sheets

A body formed by two sheets bonded together by adhesion or friction has a larger bending rigidity than a single sheet [28; 33; 34]. Therefore one may intuitively assume that two sheets separated by a layer of viscous liquid would have a larger buckling threshold than a single sheet. In contrast, we found that a pair of parallel sheets can deform for values of \(E_{v}\) below the single-sheet threshold. For example, for \(E_{v}\simeq 3.6\) the single-sheet curvature is negligible (see fig.2(b) and fig.3(a)) while for the same parameter two sheets separated by \(d/L\simeq 0.2\) display a finite curvature. The curvature of the two sheets increases with time, then decreases, changes sign and finally decays to zero at the end of the tumbling motion (see fig.3(a)). In the single-sheet case, for \(E_{v}>E_{v}^{c}\) the curvature relaxes while the sheet is oriented in the extensional quadrant (see fig.2(a)). In contrast, pairs of sheets deform while oriented in the extensional quadrant (fig.3(b), right panel). These two changes of behavior for a pair of sheets, bending below the buckling threshold and bending in the extensional quadrant, are consequences of hydrodynamic interactions between the sheets, as will be demonstrated below.

Figure 2: (a) Normalized mid-point curvature \(\bar{\kappa}L\) (crosses) and mid-point orientation angle \(\theta\) (triangular markers) versus rescaled time \(\dot{\gamma}t\) for \(E_{v}\simeq 21\). Time has been shifted so that \(\dot{\gamma}t=0\) corresponds to the orientation \(\theta=0\). The black line is Jeffery’s prediction \(\theta(t)=\arctan(\dot{\gamma}t)\) [30]. (b) Maximum normalized curvature versus elasto-viscous number for a single sheet. The dark and light grey regions delimit the rigid limit and the buckling region, respectively. The measured critical elasto-viscous number from this diagram is \(E_{v}^{c}\simeq 11\). (c) Normalized curvature versus normalized time from dynamic simulations of a single sheet for different elasto-viscous numbers.

To rationalize the experimental observations, we simulated the dynamics of two parallel flexible sheets. For \(E_{v}=7\) the simulations indicate that the single-sheet dynamics is stable: a small initial curvature decreases in time - see the black line in fig.3(c).
For a pair of parallel sheets separated by a distance \(d=0.1L\) and the same value \(E_{v}=7\), the computed curvature follows qualitatively the experimental dynamics, see the red line in fig.3(c): each sheet of the pair deforms, adopts a concave shape in the compressional quadrant (fig.3(d), left panel), then the curvature changes sign, the sheets adopt a convex shape in the extensional quadrant (fig.3(d), right panel) and finally the deformation relaxes to zero. Because in the simulation only hydrodynamic interactions are accounted for, the simulation results support the hydrodynamic origin of the two changes of behavior discussed above in relation to the experiments.

From the numerical simulations of two sheets we computed the lateral force on one of the two sheets, when oriented in the compressional quadrant and for varying distance \(d/L\), see fig.4(a). The lateral force is non-uniform along the sheet, with a maximum value in the center of the sheet and minima located at the two edges. The force distribution can be described, to a first approximation, as a parabola. As \(d/L\) increases it is seen from fig.4(b) that the amplitude of the parabolic profile decreases, following a power law with an exponent close to \(-1\). To explain and model this lateral force, we quantified the disturbance flow field set up by a sheet. Because the sheet is inertia-less and the flow 2D, the disturbance flow field in the far field is that of a 2D force dipole whose amplitude decreases as \(1/r\) [35], where \(r\) is the distance from the geometric center of the sheet. The flow disturbance induced by a body oriented along the compressional or extensional axis of a shear flow can be approximated by placing an elongated particle in a two-dimensional purely straining flow [36], with the long axis of the particle along the extensional direction. We performed simulations with this simplified flow configuration. The computed vector plots in fig.4(c) illustrate the dipolar characteristics of the flow, where it is seen that the spatial variation of the flow corresponds to the parabolic distribution of the lateral force. The amplitude of the disturbance flow field is reasonably well captured by a \(1/r\) dependence for \(r\) as small as \(0.5L\) (see fig.4(d)). The sign of the background straining flow governs the sign of the dipole: with our convention the sign is positive for a compressional background flow and negative for an extensional background flow. Hence, the simulations show that the presence of one sheet generates a parabolic lateral force on the other sheet; this lateral force originates from the disturbance dipole flow field, its amplitude scales as \(L/d\) and its sign is governed by the background flow field, being positive when the sheets are in the compressional quadrant and negative in the extensional quadrant.

Figure 3: Comparison between single-sheet dynamics and dynamics of a pair of parallel sheets below the single-sheet buckling threshold. (a) Experimental normalized curvature \(\bar{\kappa}L\) versus normalized time \(\dot{\gamma}t\) for a single sheet (in black) and a sheet pair separated by \(d/L\simeq 0.2\) (in red) for \(E_{v}\simeq 3.6\). (b) Images of a pair of parallel sheets at two selected times. (c) Normalized curvature \(\bar{\kappa}L\) versus normalized time \(\dot{\gamma}t\) for simulation of a single sheet (in black) and a pair of sheets separated by \(d/L=0.1\) (in red) for \(E_{v}=7\). (d) Simulated shapes of a pair of parallel sheets corresponding to the two selected times of fig.3(b).
The information above enables us to construct a minimalistic model of flow-induced shape changes that takes into account the dependence on sheet-to-sheet distance. From a balance of forces and moments on an inextensible sheet, in the linear approximation the curvature \(\kappa\) obeys the Euler-Bernoulli equation \[Bw\frac{d^{2}\kappa}{ds^{2}}-T_{t}(s)\kappa(s)-f_{n}(s)=0, \tag{4}\] where \(s\) is the curvilinear coordinate, \(f_{n}\) is the lateral hydrodynamic force per unit length and \(T_{t}\) is the axial tension [28; 37]. The axial tension satisfies \[\frac{dT_{t}}{ds}+f_{t}(s)=0, \tag{5}\] where \(f_{t}\) is the axial hydrodynamic force per unit length [28; 37]. To model \(f_{n}\) and \(f_{t}\) we used a quasi-static approximation that consists of two main assumptions. First, we neglected the effect of the lateral hydrodynamic drag force caused by the time variation of the curvature. Second, we assume that the curvature is only coupled to the orientation \(\theta\) through the amplitude of \(f_{t}\), which we assume to be \(-2\sin\theta\cos\theta\). Considering the two extreme cases \(\theta=-\pi/4\) (orientation at maximum compression) and \(\theta=\pi/4\) (orientation at maximum extension) and modeling the axial force per unit length as an edge force arising from the straining component of the imposed shear rate, we obtain \(T_{t}=-\eta\dot{\gamma}Lw\) for \(\theta=-\pi/4\) and \(T_{t}=\eta\dot{\gamma}Lw\) for \(\theta=\pi/4\). Fitting the results of our numerical simulations, we modeled the lateral force per unit length arising from the dipolar flow field as \[f_{n}(s)=\pm\,w\,\frac{L}{d}\,\eta\dot{\gamma}\,Kg(s) \tag{6}\] where the sign depends on whether the sheet is oriented along the compressional or the extensional axis. The function \(g(s)=\frac{1}{12}-\left(\frac{s}{L}-\frac{1}{2}\right)^{2}\) is a symmetric parabola of zero mean that reproduces the spatial variation of the lateral force seen in fig.4(a) and \(K\) is a numerical pre-factor. We estimated \(K\simeq 0.4\) from the force amplitude computed at the orientation \(\theta_{0}\), see fig.4(b) (by definition \(K=A/\left(g(1/2)\,2\sin\theta_{0}\cos\theta_{0}\right)\), where \(A\) is a fitting parameter). The moment balance then reads \[\frac{d^{2}\tilde{\kappa}}{d\tilde{s}^{2}}\pm E_{v}\left(\tilde{\kappa}(\tilde{s})-\frac{K}{\tilde{d}}g(\tilde{s})\right)\ =\ 0, \tag{7}\] where \(\tilde{\kappa}=\kappa L\), \(\tilde{s}=s/L\) and \(\tilde{d}=d/L\). The "\(+\)" sign corresponds to the maximum compression (\(\theta=-\pi/4\)). The "\(-\)" sign corresponds to the maximum extension (\(\theta=+\pi/4\)).

Figure 4: Hydrodynamic interactions from simulation. (a) Lateral force induced by the first sheet on the second sheet in the case of two parallel sheets in a shear flow, oriented in the compressional quadrant (at \(\theta_{0}=-\pi/2+\pi/10\)), for \(d/L=1,\ 0.7\) and \(0.5\). (b) Magnitude of the lateral force at the center of the sheet versus the separation distance when the sheets are oriented in the compressional quadrant (at \(\theta_{0}=-\pi/2+\pi/10\)). The line is the best fit \(y=Ax^{\alpha}\) with \(A\simeq 0.02\) and \(\alpha\simeq-1.1\). (c) Vector plot of the 2D disturbance flow. The rectangle represents the sheet. (d) Magnitude of the disturbance flow velocity \(u_{dist}\) induced by a sheet in a 2D compressional flow, versus the distance \(r\) measured orthogonally to the sheet.
At maximum compression, for the single-sheet case (\(1/\tilde{d}\to 0\)), this differential equation reduces to the classical Euler-buckling equation for an edge axial load. If \(E_{v}=\pi^{2}\), the Euler-buckling equation admits two solutions verifying the free-end boundary conditions \(\tilde{\kappa}(0)=\tilde{\kappa}(1)=0\). One solution is the trivial solution \(\tilde{\kappa}(\tilde{s})=0\) and the other is the first buckling mode \(\tilde{\kappa}(\tilde{s})=\tilde{\kappa}_{0}\sin{(\pi\tilde{s})}\) for a purely axial load. The value of the buckling threshold, here \(\pi^{2}\), corresponds to a uniform axial tension. However, it can be shown that for a more realistic model of the axial hydrodynamic force \(f_{t}(s)\), i.e. a linear variation along the sheet [29], the axial tension is a parabola and the threshold is reduced by only \(15\%\) with respect to \(\pi^{2}\). Therefore the model of uniform axial tension captures the essential behavior of buckling. The value \(E_{v}^{c}\simeq 11\) we measured experimentally is comparable with the prediction \(\pi^{2}\) of this minimal model. On the other hand, at the maximum compression, for finite \(\tilde{d}\) eq.(7) admits only one solution satisfying the boundary conditions for any given value of \(E_{v}\): \[\tilde{\kappa}(\tilde{s})=\frac{K}{6\tilde{d}E_{v}}\left([E_{v}(-6(\tilde{s}-1)\tilde{s}-1)+12]+(E_{v}-12)\cos{\left(\sqrt{E_{v}}\tilde{s}\right)}+(E_{v}-12)\tan(\sqrt{E_{v}}/2)\sin{\left(\sqrt{E_{v}}\tilde{s}\right)}\right). \tag{8}\] The existence of a unique non-zero solution means that there is no buckling instability in the strict sense. Hydrodynamic interactions remove the buckling instability and the curvature has a finite bending amplitude for all values of \(E_{v}\). To summarize, for a single sheet in pure compression there is a buckling instability, while for a pair of sheets there is no buckling instability but bending deformations do occur. Taking the limits \(E_{v}\to 0\) and \(E_{v}-E_{v}^{c}\to 0\) with \(E_{v}<E_{v}^{c}\), one can derive from eq.(8) the following approximation for the maximum curvature (at \(\theta=-\pi/4\)) of the mid-point of the sheet: \[\bar{\kappa}\sim+K\frac{L}{d}\frac{E_{v}}{E_{v}^{c}-E_{v}}. \tag{9}\] Here we have indicated explicitly the sign of the lateral force, the \("+"\) sign corresponding to the compressional quadrant. In the extensional quadrant, by solving eq.(7), one can show that the curvature scales as \(\bar{\kappa}\sim-KE_{v}\,L/d\). As illustrated in the sketch on the left panel of fig.5, the change of sign of the dipole force as the sheet tumbles explains the change from concave to convex morphologies seen in fig.3(b). Equation (7) is linear, so the scaling in \(L/d\) for the dipole amplitude determines the dependence of the bending curvature with respect to \(d\). To evaluate the ability of the model above to capture essential features of the experimental data, for each value of the parameters \((E_{v},d/L)\), we measured the maximum rescaled curvature \(\bar{\kappa}_{max}L\) during tumbling. For pairs of sheets, the results indicate two regions of behavior (see right panel of fig.5): a first region where each sheet's curvature is lower than the experimental resolution (labelled "Straight" in the figure), and a second region where the sheets deform significantly (labelled "Bent" in the figure).
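To get a feel for the magnitudes predicted by the approximation (9), the snippet below evaluates \(\bar{\kappa}L\) for a few separations and elasto-viscous numbers, using the fitted \(K\simeq 0.4\) and \(E_{v}^{c}=\pi^{2}\) from the uniform-tension model; as discussed below, the quasi-static model overestimates the experimental amplitudes, so these numbers indicate trends rather than measured values.

```python
import numpy as np

# Mid-point curvature from equation (9): kappa*L ~ K (L/d) Ev / (Ev_c - Ev).
K, Ev_c = 0.4, np.pi**2
for d_over_L in (0.1, 0.2, 0.5):
    for Ev in (1.0, 3.6, 7.0):
        kappa_L = (K / d_over_L) * Ev / (Ev_c - Ev)
        print(f"d/L = {d_over_L:.1f}, Ev = {Ev:.1f}  ->  kappa*L ~ {kappa_L:.2f}")
```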
To evaluate the ability of the model above to capture essential features of the experimental data, we measured, for each value of the parameters \((E_{v},d/L)\), the maximum rescaled curvature \(\bar{\kappa}_{max}L\) during tumbling. For pairs of sheets, the results indicate two regions of behavior (see right panel of fig.5): a first region, labeled "Straight" in the figure, where each sheet's curvature is below the experimental resolution, and a second region, labeled "Bent", where the sheets deform significantly. Significant deformations are seen to occur for \(E_{v}\) as small as \(0.8-1\), i.e. approximately ten times smaller than in the single-sheet case. Sheet proximity thus has a strong effect on the morphology. Our simple model provides a criterion for when the curvature becomes larger than the experimental resolution \(\kappa_{r}=0.01/L\). This criterion defines two regions in the morphology diagram delimited by the dashed line of equation \(d/L=KE_{v}/\left(\kappa_{r}L(E_{v}^{c}-E_{v})\right)\) in fig.5. The model predicts a much larger amplitude of deformation than observed in the experiments, but a similar trend with respect to \(E_{v}\). Indeed, the value of \(K\) used in fig.5, \(K=0.01\), is smaller than the one obtained from fitting the lateral force profiles of fig.4(b), \(K\simeq 0.4\). The overestimation of the amplitude of deformation in the model likely originates from two aspects of the quasi-static approximation we used: first, we neglected the hydrodynamic drag force in the direction normal to the sheets, which delays the curvature response; second, we assumed that the compressive force \(f_{t}\) is constant, while in the experiments the sheets tumble and thus are not subjected to a constant force. Accounting for this difference in the value of \(K\), the model is in reasonably good agreement with the experimental data for \(d/L\gtrsim 0.05\), as the dashed line in this interval separates the circles from the square symbols with the correct scaling law in \(L/d\). For \(d/L\lesssim 0.05\), the sheets are observed to remain straight during the tumbling motion for all \(E_{v}\) tested and the \(L/d\) prediction fails. The experimental data of fig.5 thus reveal that the relation between the separation distance and the critical elasto-viscous number needed to observe significant bending is non-monotonic. Lubrication forces between two plates separated by a distance \(d\) scale as \(1/d^{3}\) for a normal displacement [38] and so dominate over the dipolar forces at small distances. The time scale for the growth of the deformation in the case of a steady compressive force and lubrication scales as \((L/d)^{3}\tau\) [38], where \(\tau=\eta L^{3}/B\) is the elasto-viscous time scale. This time scale is much longer than the tumbling time scale \(1/\dot{\gamma}\) for moderate \(E_{v}=\dot{\gamma}\tau\) and small \(d/L\). Thus, lubrication forces dynamically constrain the deformation at very small distances and moderate \(E_{v}\).

## VI Conclusion

In this study we measured for the first time the effective buckling threshold, which we define as the threshold to observe significant bending, for a pair of flexible sheets suspended in a viscous simple shear flow as a function of the sheet-sheet distance. In experiments, we obtain a value of the critical elasto-viscous number for buckling of a single rectangular sheet of \(E_{v}^{c}\simeq 11\). This number is quite close to the one we obtain from 2D simulations, \(E_{v}^{c}\simeq 8\). Our main result is the demonstration of a large reduction, by about a factor of ten, of the elasto-viscous number at which a close pair of parallel sheets bends significantly. This reduction is caused by the dipolar flow disturbance induced by one sheet, which exerts a lateral force on the second sheet. With a minimal model we showed that this lateral force enhances the effect of the compressional force experienced by the pair when oriented along the compressional axis of the shear flow.
Furthermore, we showed that the dipolar flow disturbance induces bending also when the pair is oriented in the extensional quadrant. Experiments and simulations suggest that the amplitude of bending is inversely proportional to the distance between the sheets. For small separations, the lubrication force prevails and limits the dynamical deformation of the sheets. The competition between the dipolar enhancement and lubrication leads to a non-monotonic relation between distance and effective buckling threshold. In the applied context of designing macroscopic materials, for instance nanocomposites, from sheet-like nanoparticles by liquid-based methods (such as ink printing, coating, polymer nano-composite processing and liquid-phase exfoliation [10; 11]), our results suggest that at finite volume fraction hydrodynamic interactions could amplify deformations induced by the shear flow. The effect could alter thermal, optical or electrical properties that depend on the nanoparticle shape. In the context of rheology, by focusing on hydrodynamic pair-interactions our results provide a first step toward understanding the dynamics of flexible sheets in suspension. In particular, it has been shown for suspensions of fibers that buckling produces normal stress differences [26]. Hence, our results suggest that the microstructure of a suspension of sheet-like particles, including the statistics of pair-particle orientation and inter-particle distance, could have a profound influence on the rheology by affecting the instantaneous particle shape. Therefore, the microstructure of suspensions of sheet-like particles should be well-characterized in future rheological studies.

Figure 5: Left panel: sketch of the hydrodynamic force distribution in the single and two sheet cases. The black line segments represent the sheets. The green arrows represent the compressional or extensional tangential forces. The red arrows represent the dipolar lateral forces. For \(E_{v}<E_{v}^{c}\) the single sheet remains straight while the pair of sheets adopts concave and convex shapes. Right panel: morphology diagram. Maximum normalized curvature for different normalized separation distances \(d/L\) and elasto-viscous numbers \(E_{v}=\eta\dot{\gamma}L^{3}/B\). The black squares correspond to deformation below the experimental resolution \(\kappa_{r}=0.01/L\), the colored circles to a finite curvature. The shaded regions are guides for the eye. The experimental data for single sheets of fig.2(b) are reported in correspondence to \(d/L=\infty\). The equation of the dashed line is \(d/L=KE_{v}/\left(\kappa_{r}L(E_{v}^{c}-E_{v})\right)\), with \(E_{v}^{c}=11\), \(\kappa_{r}L=0.01\) and \(K=0.01\). The dashed line is plotted for \(d/L>0.05\).

**Acknowledgment** We thank D. Tam and B. Metzger for discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement No. 715475).
2301.13299
CMS Tracker Alignment Activities during LHC Long Shutdown 2
The strategies for and the performance of the CMS tracker alignment during the 2021-2022 LHC commissioning preceding the Run 3 data-taking period are described. The results of the very first tracker alignment after the pixel reinstallation, performed with cosmic ray muons recorded with the solenoid magnet off are presented. Also, the performance of the first alignment of the commissioning period with collision data events, collected at center-of-mass energy of 900 GeV, is presented. Finally, the tracker alignment effort during the final countdown to LHC Run 3 is discussed.
Sandra Consuegra Rodríguez
2023-01-30T21:34:11Z
http://arxiv.org/abs/2301.13299v1
# CMS Tracker Alignment Activities during LHC Long Shutdown 2

###### Abstract

The strategies for and the performance of the CMS tracker alignment during the 2021-2022 LHC commissioning preceding the Run 3 data-taking period are described. The results of the very first tracker alignment after the pixel reinstallation, performed with cosmic ray muons recorded with the solenoid magnet off are presented. Also, the performance of the first alignment of the commissioning period with collision data events, collected at center-of-mass energy of 900 GeV, is presented. Finally, the tracker alignment effort during the final countdown to LHC Run 3 is discussed.

## 1 CMS tracker detector

The innermost tracking system of the CMS experiment, called the tracker, consists of two devices, the Silicon Pixel and Silicon Strip detectors, within a length of 5.8 m and a diameter of 2.5 m [1]. The closest detector to the interaction point, the Silicon Pixel detector, consists of the barrel pixel sub-detector (BPIX) composed of 4 layers and the forward pixel sub-detector (FPIX) with two forward endcaps, each composed of 3 disks, for a total of 1856 modules in the current configuration (Phase 1). Because of its proximity to the interaction point, the pixel detector is exposed to the highest density of tracks from the proton-proton collisions and therefore suffers the effects of radiation damage most severely. To tackle these effects, the pixel tracker was extracted from the CMS experimental cavern, underwent extensive repairs, was provided with a new innermost layer, and was reinstalled during the LHC Long Shutdown 2 (LS2). The silicon strip detector consists of 15 148 modules and is composed of the Tracker Inner Barrel (TIB), Tracker Inner Disks (TID), Tracker Outer Barrel (TOB), and Tracker EndCaps (TEC).

## 2 Track-based alignment

The tracker was specifically designed to allow for a very accurate determination of the trajectory of charged particles by ensuring an intrinsic spatial resolution of 10-30 microns. From the positions at a number of points registered in the detector in the form of hits, the trajectory of a charged particle can be reconstructed. Once the hits belonging to each trajectory are associated, a fit is performed and the track parameters, such as the track curvature radius, the impact parameter \(d_{xy}\) in the xy plane, the impact parameter \(d_{z}\) along the beam direction, and the angles \(\theta\) and \(\phi\), are obtained. The tracking efficiency, a measure of how efficiently a charged particle passing through the tracker is reconstructed, provides a quantitative estimate of the tracking performance. Given the deflection of charged particles in a magnetic field, their transverse momentum can be computed as the product of the electric charge, the magnetic field, and the curvature radius. Thus, the track parameter uncertainties are propagated to the momentum measurement. Furthermore, the resolution of a reconstructed primary vertex position depends strongly on the number of tracks used to fit the vertex and on the transverse momentum of those tracks. Hence, high-quality track reconstruction paves the way for accurate primary and secondary vertex reconstruction. To ensure good tracking efficiency, as well as a precise momentum measurement and primary vertex reconstruction, the uncertainty on the track parameters needs to be reduced as much as possible.
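As an aside, the momentum-curvature relation mentioned above can be made concrete with a back-of-the-envelope script (values are illustrative only; the 0.3 pre-factor is the standard GeV-tesla-meter conversion for a singly charged particle):

```python
# p_T = q * B * R for a charged particle bending in a solenoidal field;
# for unit charge, p_T [GeV/c] ~= 0.3 * B [T] * R [m].
B = 3.8                                  # CMS solenoid field in tesla
for R in (0.5, 1.0, 10.0):               # curvature radius in meters
    pT = 0.299792458 * B * R
    print(f"R = {R:5.1f} m  ->  pT = {pT:6.2f} GeV/c")
```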
One of the main contributions to this uncertainty comes from the limited precision of the hit position measurements entering the track fit from which the track parameters are determined, which in turn is related to the limited overall knowledge of the position of the detector modules. The latter has two main contributions: the intrinsic spatial resolution of the sensors and the uncertainty related to the limited knowledge of the displacements, rotations, and surface deformations of the tracker modules. For an accurate determination of the track parameters, this second source of uncertainty in the position of the detector modules needs to be reduced to below the intrinsic spatial resolution of the sensors. The correction of the position, orientation, and curvature of the tracker modules to reach a precision better than the intrinsic spatial resolution is the task of tracker alignment.

The so-called track-based alignment consists of fitting a set of tracks with an appropriate track model and computing track-hit residuals, i.e., the difference between the measured hit position and the corresponding prediction obtained from the track fit. Geometry corrections can be derived from the \(\chi^{2}\) minimization of these track-hit residuals. The Millepede and HipPy alignment algorithms are used by CMS to solve the \(\chi^{2}\) minimization problem [2, 3]. The alignment parameters are determined with Millepede in a simultaneous fit of all tracks, involving two types of parameters: the local parameters that characterize the tracks used for the alignment, and nine global parameters per module that describe the position, orientation, and surface deformations of the modules. The local parameters of a single track are only connected to the subset of global parameters related to that track, and they are not directly connected to the local parameters of other tracks. The global parameters of each single module of the detector can be corrected in a single alignment fit if enough tracks are available. On the other hand, when using the HipPy algorithm, the \(\chi^{2}\) of each sensor is minimized with respect to a change in the local alignment of that sensor only, keeping the parameters of every other sensor fixed, in an iterative procedure.

Once the set of alignment constants is obtained, the improvement of the post-alignment track-hit residuals is reviewed. Furthermore, before the new detector geometry is deployed online for data taking or used for data reprocessing, the impact of the new set of alignment constants on the tracking performance, the vertexing performance, and physics observables, such as the mass of the Z boson resonance as a function of pseudorapidity and azimuthal angle, is checked. A simplified version of the offline alignment described above also runs online as part of the Prompt Calibration Loop (PCL). The PCL alignment uses the Millepede algorithm and performs the alignment of the pixel high-level structures at the level of ladders and panels, which ensures that radiation effects in the innermost layer of the barrel pixel detector are already accounted for during data taking. The obtained constants are then used for the reconstruction of the next run if movements are above certain thresholds.
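To illustrate the idea of a simultaneous fit of local track parameters and global alignment parameters, here is a deliberately minimal toy in the spirit of (but far simpler than) Millepede; the 1D straight-track geometry, the numbers, and the gauge choice of two fixed reference modules are all invented for this example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_mod, n_trk, noise = 10, 200, 0.01
z = np.linspace(0.0, 1.0, n_mod)            # module planes along the beam axis
true_off = rng.normal(0.0, 0.1, n_mod)      # unknown module misalignments
true_off[[0, -1]] = 0.0                     # gauge: two reference modules fixed

# Straight track j has local parameters (a_j, b_j): x(z) = a_j + b_j * z.
a, b = rng.normal(0, 1, n_trk), rng.normal(0, 1, n_trk)
hits = (a[:, None] + b[:, None] * z[None, :] + true_off[None, :]
        + rng.normal(0.0, noise, (n_trk, n_mod)))

free = np.arange(1, n_mod - 1)              # global parameters left floating
n_par = 2 * n_trk + len(free)
rows, meas = [], []
for j in range(n_trk):
    for m in range(n_mod):                  # one linear equation per hit
        r = np.zeros(n_par)
        r[2 * j], r[2 * j + 1] = 1.0, z[m]  # local (track) parameters
        if 1 <= m <= n_mod - 2:
            r[2 * n_trk + (m - 1)] = 1.0    # global (alignment) parameter
        rows.append(r)
        meas.append(hits[j, m])

# One linear least-squares solve recovers tracks and misalignments together.
sol, *_ = np.linalg.lstsq(np.array(rows), np.array(meas), rcond=None)
fit_off = sol[2 * n_trk:]
print(np.max(np.abs(fit_off - true_off[free])))   # ~ noise / sqrt(n_trk)
```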
Thus, the online and offline alignments are complementary components of the tracker alignment within CMS, one providing automated online corrections of the pixel high-level structures and the other refining the alignment calibration with the possibility to reach each single module of the detector in a single alignment fit.

## 3 Tracker alignment effort prior to the beginning of Run 3

The first data-taking exercise upon the restart of operations in 2021 consisted of recording cosmic ray muons with the magnetic field off, the "cosmic run at zero Tesla" (CRUZET). The alignment with cosmic ray data has the advantage of allowing the update of the tracker alignment constants before the start of collision data taking. Major shifts in the pixel and strip sub-detectors (e.g., due to magnet cycles and temperature changes) can be identified and the geometry corrected accordingly before beams are injected into the collider and collision data become available. The very first alignment of the pixel detector after reinstallation in the experimental cavern was performed using 2.9M cosmic ray tracks recorded during the CRUZET period, at the level of single modules for the pixel detector and the outer barrel of the strip detector, and of half-barrels and half-cylinders for the rest of the strip partitions. This period was followed by cosmic data-taking with the magnetic field at its nominal value (3.8T), the "cosmic run at four Tesla" (CRAFT). In this case, the geometry was derived using 765k cosmic ray tracks, with the alignment corrections derived at the level of single modules for the barrel pixel and at the level of half-barrels and half-cylinders for the forward pixel and all of the strip sub-detectors. While the geometries derived with CRUZET and CRAFT data constituted relevant updates of the alignment constants starting from a potentially large misalignment, the results are statistically limited by the available number of cosmic ray tracks, especially in the forward pixel endcaps, and systematically limited by the lack of kinematic variety of the track sample. For a further improvement of the alignment calibration, a sample of 255.2M pp collision tracks, accumulated at a center-of-mass energy of 900 GeV and a 3.8T magnetic field, was used. Finally, shortly before the start of pp collisions in 2022, the alignment was updated using 6.3M cosmic ray tracks recorded at 3.8T. Alignment corrections were derived at the level of single modules for the pixel detector and at the level of half-barrels and half-cylinders for the different strip sub-detectors. A comparison of the performance of the different sets of alignment constants obtained with cosmic rays at 0T, cosmic rays at 3.8T, and pp collision tracks during 2021, as well as the alignment performance in 2022 prior to pp collisions at \(\sqrt{s}\)=13.6 TeV, is presented in this section.

### Offline alignment using cosmic-ray and collision tracks (2021)

The distribution of the medians of the track-hit residuals per module (DMRs) constitutes a measure of the tracking performance. The DMRs are monitored for all the tracker substructures, as shown for the barrel and forward pixel sub-detectors in Figure 1. A significant improvement in the track-hit residuals for the alignment with collision data over the alignments with cosmic ray muons only is observed. In the barrel region, DMR distributions can be obtained separately for the pixel barrel modules pointing radially inwards or outwards.
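Schematically, a DMR distribution can be built as in the sketch below (synthetic numbers, not CMS code): compute the median track-hit residual of each module, then study the distribution of these medians, whose Gaussian mean \(\mu\) and width \(\sigma\) quantify the residual bias and spread.

```python
import numpy as np

def dmr(residuals, module_ids):
    # Median of the track-hit residuals of each module; the histogram of
    # these medians over all modules is the DMR distribution.
    residuals, module_ids = np.asarray(residuals), np.asarray(module_ids)
    return np.array([np.median(residuals[module_ids == m])
                     for m in np.unique(module_ids)])

# Synthetic example: 50k residuals on 1000 modules with a 2-micron bias.
rng = np.random.default_rng(1)
ids = rng.integers(0, 1000, 50_000)
res = rng.normal(2e-4, 3e-3, 50_000)        # residuals in cm
medians = dmr(res, ids)
print(f"mu = {medians.mean():.2e} cm, sigma = {medians.std():.2e} cm")
```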
The difference \(\Delta\mu\) between the mean values of these two distributions in the local-x (x') direction, shown in Figure 2 as a function of the delivered integrated luminosity, constitutes a measure of the reduction of Lorentz drift angle effects achieved by the alignment procedure. The effect of the alignment calibration on the reconstruction of physics objects is also studied. The distance between tracks and the unbiased track-vertex residuals are studied, searching for potential biases in the primary vertex reconstruction. The mean value of the unbiased track-vertex residuals is shown in Figure 3 for the longitudinal and transverse planes, with a significant reduction of the bias when collision tracks are included in the alignment procedure.

Figure 1: The distribution of median residuals is shown for the local-x coordinate in the barrel pixel (left) and forward pixel (right). The alignment constants used for the reprocessing of the pp collision data (red) are compared with the ones derived using cosmic rays only, recorded at 0T (green) and 3.8T (blue). The quoted means \(\mu\) and standard deviations \(\sigma\) correspond to parameters of a Gaussian fit to the distributions [4].

Figure 2: Difference between the mean values \(\Delta\mu\) obtained separately for the modules with the electric field pointing radially inwards or outwards. After alignment with cosmic and collision tracks, the mean difference \(\Delta\mu\) is consistently closer to zero [4].

Figure 3: The mean track-vertex impact parameter in the transverse \(d_{xy}\) plane (left) and longitudinal \(d_{z}\) plane (right) in bins of the track azimuthal angle \(\phi\) is shown [4].

## 4 Offline alignment using cosmic-ray tracks (2022)

The alignment constants were updated before the start of Run 3 using cosmic ray muons recorded at 3.8T, to correct for movements caused by the magnet cycle during the 2021-2022 winter break and by repeated temperature cycles due to strip detector maintenance. After the geometry update, the bias in the distribution of median residuals for the forward pixel detector was corrected, as shown in Figure 4, left. Furthermore, the difference in the track impact parameters in the transverse plane \(d_{xy}\) for cosmic ray tracks passing through the pixel detector and split into two halves at their point of closest approach to the interaction region was also reduced, as shown in Figure 4, right.

## 5 Summary

The tracker alignment effort during the Run 3 commissioning period has been presented. The online alignment in the Prompt Calibration Loop and the strategy followed for the alignment calibration, considering the availability of tracks with certain topologies, have been discussed. Finally, the data-driven methods used to derive the alignment parameters and the set of validations that monitor the physics performance after the update of the tracker alignment constants have been presented.
2301.10790
Nitrogen-enriched, highly-pressurized nebular clouds surrounding a super star cluster at cosmic noon
Strong lensing offers a precious opportunity for studying the formation and early evolution of super star clusters that are rare in our cosmic backyard. The Sunburst Arc, a lensed Cosmic Noon galaxy, hosts a young super star cluster with escaping Lyman continuum radiation. Analyzing archival HST images and emission line data from VLT/MUSE and X-shooter, we construct a physical model for the cluster and its surrounding photoionized nebula. We confirm that the cluster is $\lesssim4\,$Myr old, is extremely massive $M_\star \sim 10^7\,M_\odot$ and yet has a central component as compact as several parsecs, and we find a gas-phase metallicity $Z=(0.22\pm0.03)\,Z_\odot$. The cluster is surrounded by $\gtrsim 10^5\,M_\odot$ of dense clouds that have been pressurized to $P\sim 10^9\,{\rm K}\,{\rm cm}^{-3}$, perhaps by stellar radiation, within ten parsecs. These should have large neutral columns $N_{\rm HI} > 10^{22.5}\,{\rm cm}^{-2}$ to survive rapid ejection by radiation pressure. The clouds are likely dusty as they show gas-phase depletion of silicon, and may be conducive to secondary star formation if $N_{\rm HI} > 10^{24}\,{\rm cm}^{-2}$ or if they sink further toward the cluster center. Detecting strong ${\rm N III]}\lambda\lambda$1750,1752, we infer heavy nitrogen enrichment $\log({\rm N/O})=-0.21^{+0.10}_{-0.11}$. This requires efficiently retaining $\gtrsim 500\,M_\odot$ of nitrogen in the high-pressure clouds from massive stars heavier than $60\,M_\odot$ up to 4 Myr. We suggest a physical origin of the high-pressure clouds from partial or complete condensation of slow massive star ejecta, which may have important implications for the puzzle of multiple stellar populations in globular clusters.
Massimo Pascale, Liang Dai, Christopher F. McKee, Benny T. -H. Tsang
2023-01-25T19:01:08Z
http://arxiv.org/abs/2301.10790v3
# Nitrogen-enriched, highly-pressurized nebular clouds surrounding a super star cluster at cosmic noon

###### Abstract

Strong lensing offers a precious opportunity for studying the formation and early evolution of super star clusters that are rare in our cosmic backyard. The Sunburst Arc, a lensed Cosmic Noon galaxy, hosts a young super star cluster with escaping Lyman continuum radiation. Analyzing archival HST images and emission line data from VLT/MUSE and X-shooter, we construct a physical model for the cluster and its surrounding photoionized nebula. We confirm that the cluster is \(\lesssim 4\) Myr old, is extremely massive \(M_{\star}\sim 10^{7}\,M_{\odot}\) and yet has a central component as compact as several parsecs, and we find a metallicity \(Z=(0.22\pm 0.03)\,Z_{\odot}\). The cluster is surrounded by \(\gtrsim 10^{5}\,M_{\odot}\) of dense clouds that have been pressurized to \(P\sim 10^{9}\,\mathrm{K\,cm^{-3}}\), perhaps by stellar radiation, within ten parsecs. These should have large neutral columns \(N_{\mathrm{HI}}>10^{22.5}\,\mathrm{cm^{-2}}\) to survive rapid ejection by radiation pressure. The clouds are likely dusty as they show gas-phase depletion of silicon, and may be conducive to secondary star formation if \(N_{\mathrm{HI}}>10^{24}\,\mathrm{cm^{-2}}\) or if they sink further toward the cluster center. Detecting strong N III]\(\lambda\lambda\)1750,1752, we infer heavy nitrogen enrichment \(\log(\mathrm{N/O})=-0.21^{+0.10}_{-0.11}\). This requires efficiently retaining \(\gtrsim 500\,M_{\odot}\) of nitrogen in the high-pressure clouds from massive stars heavier than \(60\,M_{\odot}\) up to 4 Myr. We suggest a physical origin of the high-pressure clouds from partial or complete condensation of slow massive star ejecta, which may have important implications for the puzzle of multiple stellar populations in globular clusters.

## 1 Introduction

Massive star clusters represent the most prominent mode of star formation across cosmic history (Adamo & Bastian, 2018). In the hierarchical picture of star formation, massive clusters are considered to be the natural precursors of globular clusters (Kruijssen et al., 2019; Krumholz et al., 2019). Understanding the internal dynamics of massive clusters during their formation and evolution episodes is crucial to properly answering many other important questions in astrophysics. Strong stellar feedback from young stars in massive clusters, in the form of winds (Silich et al., 2004; Geen et al., 2021; Lancaster et al., 2021), radiation pressure (Murray et al., 2010; Kim et al., 2018, 2019; Menon et al., 2022, 2020), and supernova explosions (Walch & Naab, 2015), is known to regulate star formation and drive outflows on galactic scales. In addition, the extremely high stellar densities at the cores of massive clusters are conducive to stellar collisions and merging, a potential channel for seeding the formation of supermassive black holes (Portegies Zwart et al., 2004; Davies et al., 2011) and making gravitational wave sources (Banerjee, 2017, 2018; Antonini et al., 2019; Rodriguez et al., 2019). During the epoch of reionization, very young massive star clusters are capable of releasing escaping Lyman continuum radiation to the intergalactic medium (IGM) and might make an important contribution to cosmic reionization (Vanzella et al., 2020). With high-density birth environments being rare in our cosmic backyard, well-resolved dense star clusters a few million years old are uncommon in the Local Group.
A short list of examples includes Westerlund 1 (Portegies Zwart et al., 2010; Gennaro et al., 2017) and Westerlund 2 (Ascenso et al., 2007), the Arches (Espinoza et al., 2009; Harfst et al., 2010), the Quintuplet (Figer et al., 1999), and R136 in the Large Magellanic Cloud (Andersen et al., 2009; Cignoni et al., 2015; Crowther et al., 2016). These have relatively small stellar masses \(M_{\star}\sim 10^{4}\)-\(10^{5}\,M_{\odot}\). More massive systems \(M_{\star}\sim 10^{5}\)-\(10^{6}\,M_{\odot}\), either exposed or obscured by gas, are more frequently found in nearby dwarf or massive starburst galaxies, such as in NGC 5253 (Gorjian et al., 2001; Turner et al., 2015), NGC 1569 (Ho & Filippenko, 1996), M82 (McCrady & Graham, 2007) and Henize 2-10 (Chandar et al., 2003). Recently, super star clusters with masses \(M_{\star}=10^{6}\)-\(10^{7}\,M_{\odot}\) were also observed in the nuclear region of NGC 1365 with the James Webb Space Telescope (Whitmore et al., 2022). Remarkably, extragalactic observations aided by strong gravitational lensing have revealed that very massive \(M_{\star}\gtrsim 10^{6}\)-\(10^{7}\,M_{\odot}\) yet compact young systems appear commonplace in high-redshift star-forming galaxies (Vanzella et al., 2017, 2022). Strongly lensed systems hence offer a unique opportunity for studying clustered star formation at the highest densities. In this work, we scrutinize a young massive star cluster found in the Sunburst Arc at \(z_{s}=2.369\). The lensed galaxy was first discovered by Dahle et al. (2016) behind the galaxy cluster PSZ1 G311.65-18.48 (Planck Collaboration et al., 2014). It is the brightest known gravitationally lensed starburst galaxy at Cosmic Noon. A lensed compact star-forming clump was found on the arc, which shows twelve different lensed avatars arising from an Einstein ring configuration disrupted by secondary lensing caustics cast by multiple cluster member galaxies (Rivera-Thorsen et al., 2019; Pignataro et al., 2021; Diego et al., 2022; Sharon et al., 2022). The clump appears to be a super star cluster of about 3 Myr old (Chisholm et al., 2019). Remarkably, optically thin channels must have been carved out through the surrounding interstellar medium (ISM), as both triple-peaked Ly\(\alpha\) line radiation (Rivera-Thorsen et al., 2017) and escaping Lyman continuum (LyC) radiation (Rivera-Thorsen et al., 2019) were detected from the super star cluster (hence we refer to it as the LyC cluster throughout this paper), with a high escape fraction \(f_{\rm esc}=40\)-90% along the line of sight (Rivera-Thorsen et al., 2019). High signal-to-noise ratio spectroscopy (Mainali et al., 2022) revealed evidence that the system drives outflowing nebular gas at \(\sim 300\,\rm km\,s^{-1}\), more dramatically so than in other Sunburst systems which do not allow LyC photons to escape. Mestric et al. (2023) suggest that a few dozen \(>100\,M_{\odot}\) very massive stars aggregate at the cluster center, from which escaping LyC photons appear to originate, and can account for the observed broad He II\(\lambda\)1640 emission. Since the LyC cluster is a fantastic example of clustered massive star formation and evolution at Cosmic Noon and probably resembles the exposed LyC sources that reionized the intergalactic medium (IGM) at \(z=6\)-15, much effort has been made to understand its physical properties and the host galaxy.
Rest-frame UV spectroscopy compared the LyC cluster of Sunburst to Starburst99 spectral models (Leitherer et al., 1999) and inferred a cluster age \(\sim 3\,\rm Myr\) from the broad He II \(\lambda\)1640 emission, a moderate sub-solar metallicity \(Z=0.5\)-\(0.6\,Z_{\odot}\) from the presence of the C IV \(\lambda\)1550 P Cygni stellar wind feature, and a moderate amount of dust reddening \(\rm E(B-V)=0.15\) (Chisholm et al., 2019). Vanzella et al. (2020) suggest that the LyC cluster spectrally resembles other known compact LyC leaking systems and should make an important contribution to cosmic reionization through holes created by stellar feedback. The cluster is shown to have strong emission in (semi-)forbidden UV lines from O III, C III, Si III and Ne III (Vanzella et al., 2020). Using multi-filter HST images to model the cluster SED and employing a detailed gravitational lensing model, Vanzella et al. (2022) concluded that the cluster is impressively massive \(M_{\star}\approx 10^{7}\,M_{\odot}\) yet compact \(R_{\rm e}\simeq(8\pm 3)\,\rm pc\). From emission line widths they derived a line-of-sight velocity dispersion \(\simeq 40\,\rm km\,s^{-1}\), which suggests that the cluster is gravitationally bound. The authors also suggested that the host galaxy has a high specific star formation rate sSFR\(\gtrsim 10\,\rm Gyr^{-1}\). In this work, we extract physical parameters describing the intriguing LyC cluster system and reliably estimate their uncertainties, by performing an independent spectral analysis using archival HST images and ground-based VLT/MUSE and VLT/X-shooter spectroscopy data. While the datasets we use (more details in Section 2) are the same as those used in the aforementioned studies, we present independent multi-filter photometry and emission line flux measurements. We then construct a physical model for the star cluster and the surrounding photoionization nebula based on the BPASS synthetic spectra (Stanway et al., 2016; Eldridge and Stanway, 2016) for young stellar populations, and perform photoionization calculations using Cloudy (Ferland et al., 2017). Our spectral modeling improves upon previous efforts in the following ways. First, we self-consistently account for the density dependence of nebular observables by imposing the more realistic isobaric condition (Draine, 2011; Kewley et al., 2019) in Cloudy calculations. This is crucial as we find a highly pressurized component (\(n_{\rm e}\sim 10^{5}\,\rm cm^{-3}\)) in the photoionized nebula, as deduced from C III]\(\lambda\lambda\)1906,1908 and Si III]\(\lambda\lambda\)1883,1892 line ratios. Moreover, we account for nebular continuum radiation powered by radiative recombination in the photoionized gas, which is substantial for systems younger than 5 Myr (Reines et al., 2010; Byler et al., 2017). On one hand, the superposition of nebular continuum on top of the direct star light changes the apparent UV spectral slope and is crucial for our conclusion that dust reddening from the host ISM is small, \(\rm E(B-V)<0.05\). On the other hand, the nebular continuum is proportional to the fraction of stellar ionizing radiation absorbed by the photoionized clouds; when combined with nebular line strengths, it enables us to infer an isotropic LyC escape fraction \(f_{\rm esc}=(35\pm 20)\%\) from the cluster vicinity. Furthermore, our inference of a higher line-of-sight escape fraction \((50\pm 20)\%\) is a hint that we are seeing exposed cluster stars through an optically thin channel.
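For orientation, the quoted pressure and density scales are mutually consistent under the isobaric condition; a rough estimate, assuming a typical photoionized-gas temperature \(T\approx 10^{4}\) K and gas-pressure dominance:

```python
# P/k = n_tot * T, with n_tot ~= 2 * n_e for fully ionized hydrogen.
P_over_k = 1e9                 # K cm^-3, the high-pressure component
T = 1e4                        # K, assumed photoionized-gas temperature
n_e = P_over_k / (2.0 * T)
print(f"n_e ~ {n_e:.0e} cm^-3")   # ~5e4, matching the quoted n_e ~ 1e5
```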
Since photoionized clouds may have an anisotropic distribution around the cluster, our model is flexible enough to account for the spectral difference between illuminated cloud faces and shaded faces. This can be important for hydrogen Balmer lines, due to optical depth effects from a large H I column, and for extinction of the UV-to-optical continuum and lines by dust in the cloud interior. This probes the geometric distribution of photoionized clouds located on the near side versus on the far side of the stars relative to our viewing angle. We furthermore use emission lines to measure element abundances for C, N, Ne and Si. From the detection of unusually strong N III]\(\lambda\lambda\)1750,1752, we measure a remarkable N/O ratio enhanced by \(\sim 12\)-\(19\) fold in the highly-pressurized clouds compared to what is expected for ISM at sub-solar metallicities. Knowledge of chemical anomalies informs us of the physical origin of the clouds, and provides clues to whether they might give birth to the chemically self-enriched stellar populations observed in evolved massive star clusters (i.e. globular clusters) (Bastian and Lardo, 2018). We also infer a depletion of gas-phase Si by \(\sim 4\)-\(5\) fold, which indicates internal grain formation. Curiously, but consistent with many other observations of high-\(z\) blue starburst systems, no strong dust reddening along the line of sight is seen from the host galaxy's large-scale ISM. Based on our analysis, we depict the following physical picture for the star cluster and its surroundings, as illustrated by Figure 1. A young cluster of \(t_{\rm age}\lesssim 4\) Myr and sub-solar metallicity \(Z\approx 0.2\)-\(0.3\,Z_{\odot}\) consists of a compact central component and a subdominant extended component. Intense stellar radiation pressurizes nearby photoionized gas to \(P={\rm a\ few}\times 10^{9}\) K cm\({}^{-3}\) at several parsecs, while dust grains form inside these dense clouds. The clouds are significantly enriched with nitrogen, and hence an order-unity mass fraction must have originated from massive star ejecta (Roy et al., 2022), perhaps through a mechanism of slow mass loss. Under irradiation by stars, these clouds glow in strong emission lines, processing \(\sim 40\%\) of the cluster's LyC output. These clouds might linger for a time much longer than the free-fall time in the cluster's gravitational potential well, probably because they have large enough column densities \(N_{\rm H}>10^{22.5}\) cm\({}^{-2}\) to counter ejection by radiation pressure. Some other \(\sim 20\%\) of the cluster's LyC output might be processed by lower-pressure photoionized clouds farther from the cluster center, but those do not appear to have significant nitrogen enrichment. The remaining \(\sim 40\%\) of the LyC radiation, together with the hot cluster gas, streams out of the system without obstruction through preferentially optically thin, low density channels. The rest of the paper is organized as follows. In Section 2 we summarize the archival imaging and spectroscopy data used in our analysis. In Section 3 and Section 4, we provide details on how HST filter magnitudes and emission line fluxes are measured. In Section 5, we employ stellar population synthesis and photoionization calculations to construct a grid of spectral models for the composite emission from the star cluster and its surrounding nebula. We then fit our spectral models to data, and present Bayesian parameter inference results in Section 6.
We give physical interpretation of the results in Section 7, and give concluding remarks in Section 8.

## 2 Data

In this section, we summarize the archival data used in this work: HST imaging data, VLT/MUSE IFU spectroscopy, and VLT/X-shooter slit spectroscopy.

### HST Imaging

Observations for PSZ1 G311.65-18.48 were obtained across multiple programs between 2018 and 2020 (P.I. Dahle, Program ID 15101; P.I. Gladders, Program ID 15949; P.I. Bayliss, Program ID 15377). In this work, we make use of observations publicly available on the MAST archive for 15 filters: WFC3/UVIS F390W, WFC3/UVIS F410M, WFC3/UVIS F555W, WFC3/UVIS F606W, ACS/WFC F814W, WFC3/IR F098M, WFC3/IR F105W, WFC3/IR F125W, WFC3/IR F126N, WFC3/IR F140W, WFC3/IR F153M, WFC3/IR F160W, WFC3/IR F164N, WFC3/IR F167N. All UVIS+WFC images were aligned to a common pixel scale of 30 mas/pixel, while all IR images were aligned to a pixel scale of 60 mas/pixel, using the astropy reproject package in combination with astroalign (Beroiz et al., 2020).

### VLT/MUSE IFU Spectroscopy

We use archived and reduced MUSE IFU science datacubes directly accessed from the ESO archive. Two datacubes are used, which cover the observed wavelength range 475-935 nm at \(R\approx 2600\)-3000 and were calibrated and reduced with the standard reduction pipeline: one from observations conducted on May 13 and Aug 24 in 2016 (Program 107.22SK; PI: E. Vanzella; total exposure 4449 s), and another from observation on Aug 10, 2021 (Program 297.A-5012; PI: N. Aghanim; total exposure 2808 s). All MUSE IFU observations were carried out in the Narrow Field Mode under a typical seeing condition FWHM = 0.5-0.6'', achieving a sky resolution of 1.4-2''.

### VLT/X-shooter Slit Spectroscopy

We use ESO archived science data products of X-shooter slit spectroscopy reduced with the standard reduction pipeline. Data were collected from Program 0103.A-0688 (PI: E. Vanzella). Exposures were obtained under typical seeing conditions FWHM \(\sim\) 0.5-0.6''. Slit spectra covering observed 994-2479 nm (R\(\approx\)8000) are used for extracting fluxes of rest-frame NUV and optical emission lines, while additional spectra covering observed 534-1020 nm (R\(\approx\)11000) are used for cross-checking strong FUV emission lines. For accessing spatially resolved spectral information, we carry out flux calibration of the 2D spectra using the flux-calibrated and flux-uncalibrated 1D spectra, and then extract fluxes at the location of the target objects.

Figure 1: A cartoon illustrating a physical picture for the LyC leaking star cluster of Sunburst Arc derived from joint analysis of photometry and spectroscopy data. The central energy source, a super star cluster \(t_{\rm age}\lesssim 4\) Myr old, consists of a compact central component and an extended component of outskirt stars. Mass and kinetic energy output from both massive star winds and probably the first SN explosions thermalize and create a hot (\(T\sim 10^{6}\)–\(10^{7}\) K) cluster gas that blows out as a supersonic cluster wind. Pressed by stellar radiation (and possibly by the cluster wind), high-pressure (\(n_{\rm e}\sim 10^{5}\) cm\({}^{-3}\) and \(P\sim 10^{9}\)–\(10^{10}\) K cm\({}^{-3}\)) clouds of large column density \(N_{\rm H}\gtrsim 10^{22.5}\) cm\({}^{-2}\) linger close to the cluster at \(\sim 5\)–\(10\) pc, and are subject to a competition between outward pressure force and inward gravitational pull. Photoionized by the stars, these clouds emit strong nebular lines from high-ionization ionic species, are chemically enriched by massive star ejecta, and are dense enough for dust formation. More clouds at lower densities (\(n_{\rm e}\sim 10^{3}\)–\(10^{4}\) cm\({}^{-3}\) and \(P\sim 10^{7-8}\) K cm\({}^{-3}\)) exist at large distances, where nebular lines from low-ionization ionic species are excited. Photoionized clouds do not completely surround the star cluster, and a fraction of the LyC radiation escapes the system via low-density channels.
## 3 Photometry

We measure the PSF of each HST image using the photutils package by stacking isolated, unsaturated stars across the image. An initial catalog of stars is generated using DAOStarFinder following Stetson (1987), and each star is then vetted by eye to be isolated and unsaturated, resulting in a final catalog of dozens of stars. The final star list is extracted into centered cutouts which are normalized and median-subtracted to mitigate the sky background, and the cutouts are stacked into an oversampled PSF using the EPSFBuilder function. PSFs in the ACS filters were oversampled by a factor of 8, while WFC3 PSFs were only oversampled by a factor of 4 due to the lower native resolution. We found that using stars throughout the entire image rather than those local to the LyC image produced more accurate PSFs, owing to the greater number of stars available.
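This PSF-stacking procedure follows the standard photutils effective-PSF workflow; a condensed sketch is given below (the file name, detection threshold, and cutout size are placeholders, and the by-eye vetting of the star catalog is omitted):

```python
from astropy.io import fits
from astropy.nddata import NDData
from astropy.stats import sigma_clipped_stats
from astropy.table import Table
from photutils.detection import DAOStarFinder
from photutils.psf import EPSFBuilder, extract_stars

data = fits.getdata("image_f814w.fits")      # placeholder file name
_, median, std = sigma_clipped_stats(data, sigma=3.0)
data = data - median                         # mitigate the sky background

# Initial star catalog (Stetson 1987 algorithm); the by-eye isolation and
# saturation vetting performed in the paper is omitted in this sketch.
sources = DAOStarFinder(fwhm=3.0, threshold=10.0 * std)(data)
stars_tbl = Table({"x": sources["xcentroid"], "y": sources["ycentroid"]})

# Extract centered cutouts and stack them into an oversampled effective PSF.
stars = extract_stars(NDData(data=data), stars_tbl, size=25)
epsf, fitted_stars = EPSFBuilder(oversampling=4, maxiters=10)(stars)
print(epsf.data.shape)
```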
### Image 9

There are a total of 12 lensed images of the LyC cluster available for analysis (Rivera-Thorsen et al., 2019). These have different magnification factors and are contaminated to various degrees by other sources on the arc. We perform PSF photometry on Image 9 (following the designation of Rivera-Thorsen et al. (2019)) using Galfit (Peng et al., 2002). We choose Image 9 due to its separation from other bright LyC images and neighboring galaxies, as well as its moderate brightness relative to other LyC images. The brightest lensed images, which are the most magnified, are generally poorly approximated by a PSF (Image 10 is an example), whereas the fainter images have lower SNRs; Image 9 offers a compromise between these two effects. For Image 9, we simultaneously model both the LyC cluster and the neighboring lensed knots as point-like objects in Galfit, and approximate the underlying diffuse arc background as having a flat surface brightness. To mitigate large-scale variations in the diffuse background, we first mask out fainter pixels with brightness \(<N\sigma\) to fit only the lensed arc, where \(\sigma\) is the noise standard deviation of the local sky background and \(N\) is a factor of order dozens chosen by eye on an image-to-image basis. We found that different choices for \(N\) caused small changes in the measured flux, and adjust the errors accordingly. We assign minimum errors of 0.05 and 0.1 magnitudes in the WFC3/UVIS+ACS/WFC and WFC3/IR bands, respectively, to broadly account for systematics. Our photometry results are reported in Table 1.

### Image 10

Because of the higher magnification factor, an extended continuum component of the LyC cluster appears resolved in Image 10 (Vanzella et al., 2020; Mainali et al., 2022), in addition to a bright central component. While the central component appears well-approximated by the PSF, the extended component extends \(\sim 0.4\arcsec\) along the direction of the arc. The magnification in the radial direction, \(\mu_{r}\), is expected to be of order unity (see SS7.1), such that the extended component is only resolved in the tangential direction. Hence we choose a 1D Gaussian convolved with the PSF to approximate the light profile of the extended component. Retaining the same masking approach for the arc background used for Image 9, we spatially fit a 1D Gaussian for the extended component, an unresolved central component, and the arc background simultaneously using PyMultinest (Feroz et al., 2009; Buchner et al., 2014). This allows for separate photometric measurements of both the central and extended components, which we perform only on the WFC3/UVIS and ACS/WFC bands due to their sharper PSFs when compared to the IR bands. This is done for F555W, F606W and F814W, for which the spatial resolution is good and the continuum light is unaffected by Lyman alpha radiation transfer. Similar to the Image 9 photometry, we assign minimum errors of 0.05 and 0.1 magnitudes in the WFC3/UVIS+ACS/WFC and WFC3/IR bands, respectively.

### Magnification ratio between Images 10 and 9

Comparing the flux of Image 10 (sum of central and extended components) to that of the unresolved Image 9, we derive that the magnification ratio is \(\mu_{10}/\mu_{9}=4.0-4.1\), consistent between F555W, F606W and F814W. However, we measure a magnification ratio \(\mu_{10}/\mu_{9}=3.4\pm 0.1\) in the F275W filter, which probes escaping LyC radiation. Interestingly, the extended component is undetected in this filter in Image 10 (Mainali et al., 2022). The discrepancy in the magnification ratio suggests that the extended component has a higher magnification than the central component. Such magnification variation across a small angular scale \(\sim 0.5\arcsec\) could be caused by a nearby invisible dark matter subhalo. After all, the large apparent magnification ratio \(\mu_{10}/\mu_{9}=3-4\) between Image 10 and Image 9 is surprising in light of lens modeling predictions that they should have similar magnifications (Pignataro et al., 2021; Sharon et al., 2022). This is a hint of spatial fine structure in the lens model near Images 9 and 10. In this work, we do not delve into the physical cause of this effect, but use separate magnification ratios as priors for subsequent analyses: \(\mu_{10}/\mu_{9}=3.4\pm 0.1\) for the central component, and \(\mu_{10}/\mu_{9}=5.6\pm 0.5\) for the extended component. We shall assume that the unresolved Image 9 has a single magnification factor \(\mu_{9}\) that applies to both components.

## 4 Spectroscopy

### VLT/MUSE

In order to constrain physical properties of the LyC cluster and its surrounding nebula using line ratio information, we extract rest-frame UV emission line fluxes from the VLT/MUSE IFU data. Limited by seeing, MUSE IFU imaging has poor spatial resolution compared to HST images, and it is difficult to separate the lensed images of the LyC cluster from nearby sources on the arc. Since Image 10 is the brightest among the 12 lensed images of the cluster (Rivera-Thorsen et al., 2019), we apply aperture photometry to it to robustly extract emission line fluxes. Using any of the other 11 fainter lensed images would degrade the accuracy of the line flux measurements due to reduced SNRs. At a pixel scale of \(0.2^{\prime\prime}\), we extract fluxes from a \(4\times 4\) aperture placed on top of Image 10. The aperture size is chosen so that line contamination from nearby (unresolved) sources on the arc is small while a significant fraction (but not the entirety) of the LyC cluster's flux is included. Sky background is then subtracted by extracting fluxes from an identical aperture placed at an empty location off the arc.
Aperture correction is estimated by placing the same aperture at a nearby isolated star and comparing the extracted flux to that extracted from a sufficiently large aperture that includes all of the star's flux. Within the MUSE wavelength coverage, the aperture correction factor varies from \(\sim 2.6\) to \(\sim 2.3\) with a mild wavelength dependence. A number of rest-frame FUV lines are robustly detected (Vanzella et al., 2020), all appearing to have the same systemic redshift. We fit each line to a Gaussian line profile on top of the local continuum, and the flux uncertainty is estimated based on local flux fluctuations. Two MUSE IFU datacubes are used, one from combined exposures on May 13, 2016 and Aug 24, 2016, and another from the observation on Aug 10, 2021. We found that all detected lines have fluxes consistent between the two datacubes (corresponding to an observed baseline of 5 years, or 1.5 years in the source frame) within the noise level, thus finding no evidence of spectral variability. We then combine measurements from both datacubes using inverse variance weighting. We report isotropic line luminosities multiplied by magnification in Table 2, which are aperture corrected but not dust reddening corrected.

### VLT/X-shooter

Additional rest-frame NUV and optical lines have been detected from slit spectroscopy with VLT/X-shooter (Vanzella et al., 2020). In the 2D spectra (for a \(0.8^{\prime\prime}\times 11^{\prime\prime}\) slit), we apply a wavelength-dependent flux calibration factor calculated as the ratio between the flux-calibrated 1D spectrum and the reduced 1D spectrum. Flux is extracted from a \(0.8^{\prime\prime}\times 1.27^{\prime\prime}\) segment on the slit centered at Image 10. For each line, we fit a Gaussian line profile on top of the continuum. Since it is difficult to empirically determine the slit correction factor based on the available data, we estimate a theoretical flux correction factor of 1.2 by modeling the effect of seeing as a Moffat PSF profile (Trujillo et al., 2001) calculated at the mean seeing FWHM=\(0.54^{\prime\prime}\), and approximate it as wavelength independent. Inferred line luminosities multiplied by magnification but without dust reddening correction are reported in Table 3.
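For concreteness, the two numerical steps used repeatedly above, fitting a Gaussian line profile on top of a local continuum and combining the two epochs with inverse-variance weighting, can be sketched as follows (the function names and the linear-continuum choice are ours, not the authors', and the error propagation neglects parameter covariances):

```python
import numpy as np
from scipy.optimize import curve_fit

def line_model(wav, amp, mu, sig, c0, c1):
    # Gaussian emission line on top of a locally linear continuum.
    return amp * np.exp(-0.5 * ((wav - mu) / sig) ** 2) + c0 + c1 * (wav - mu)

def fit_line_flux(wav, flux, err, mu0, sig0):
    p0 = [flux.max() - np.median(flux), mu0, sig0, np.median(flux), 0.0]
    popt, pcov = curve_fit(line_model, wav, flux, p0=p0, sigma=err,
                           absolute_sigma=True)
    amp, sig = popt[0], abs(popt[2])
    flux_int = amp * sig * np.sqrt(2.0 * np.pi)   # integrated line flux
    # First-order error propagation, neglecting the amp-sigma covariance.
    rel = np.hypot(np.sqrt(pcov[0, 0]) / amp, np.sqrt(pcov[2, 2]) / sig)
    return flux_int, abs(flux_int) * rel

def ivw_combine(values, errors):
    # Inverse-variance weighted mean of measurements from the two datacubes.
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    return np.sum(w * np.asarray(values)) / np.sum(w), np.sqrt(1.0 / np.sum(w))
```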
## 5 Spectral Modeling

We jointly fit HST photometry as well as line fluxes and ratios from MUSE and X-shooter to spectral models that incorporate both direct stellar radiation and nebula-reprocessed (continuum and line) radiation from photoionized clouds. In this way, we can infer physical parameters that characterize the star cluster and the surrounding nebula. Below we discuss how we construct the spectral models.

### Star Cluster SED

We use incident stellar spectra from the population synthesis model BPASS (v2.0) (Stanway et al., 2016; Eldridge and Stanway, 2016) as the input for photoionization calculations. Our default choice is that the LyC cluster can be modeled as an instant burst of star formation, and we include output radiation from binaries. A stellar initial mass range \(0.1<m/M_{\odot}<300\) is used with an IMF slope \(\xi(m)\propto m^{-1.3}\) for \(m<0.5\,M_{\odot}\) and a slope \(\xi(m)\propto m^{-2.35}\) for \(m>0.5\,M_{\odot}\). We later refer to this as the standard IMF. We choose a high mass cutoff \(m<300\,M_{\odot}\) because the system is very massive \(M\gtrsim 10^{6}\,M_{\odot}\) (Vanzella et al., 2020). The incident stellar spectrum is parameterized by cluster age \(t_{\rm age}\) and metallicity \(Z\). While BPASS uses a solar metallicity value \(Z_{\odot}=0.02\), to which early stellar atmosphere models were calibrated, the solar nebular metal abundance we shall use later for photoionization calculations with Cloudy is \(Z_{\odot}=0.014\), which is the preferred value according to Asplund et al. (2021).

### Photoionization Calculations

Given the incident stellar radiation, we use the photoionization code Cloudy (v17) (Ferland et al., 2017) to compute nebular continuum and line emission, as well as the attenuation of star light through the gas. For all except three elements, we set their nebular abundances to be a factor \(Z/Z_{\odot}\) of Cloudy's default solar abundance (Table 7.1 of the Cloudy manual HAZY 1). For our purposes, those are consistent with the more recent compilation in Asplund et al. (2021), and they correspond to \(Z_{\odot}=0.014\). We will use \(Z_{\odot}=0.014\) when reporting gas metallicity in units of solar metallicity. Baseline abundances for helium, carbon and nitrogen are adjusted as a function of \(Z/Z_{\odot}\) to account for nucleosynthesis (Dopita et al., 2006). For helium, we correct for primary nucleosynthesis in addition to the primordial abundance (Russell and Dopita, 1992; Pagel et al., 1992), He/H=\(0.0737+0.024\,(Z/Z_{\odot})\). For carbon and nitrogen, we adopt the enrichment model of Dopita et al. (2006), using C/H=\(6\times 10^{-5}\,(Z/Z_{\odot})+2\times 10^{-4}\,(Z/Z_{\odot})^{2}\) and N/H=\(1.1\times 10^{-5}\,(Z/Z_{\odot})+4.9\times 10^{-5}\,(Z/Z_{\odot})^{2}\), respectively. Through our analysis, we find significant depletion of gas-phase silicon, which indicates grain formation. Hence, we reduce the baseline nebular silicon abundance by a fiducial factor of six in our models. Additionally, we find a strikingly elevated nitrogen abundance \(\log(\rm{N/O})\simeq-0.21\), which is more dramatic than local nitrogen enrichment in H II regions around low-metallicity young starbursts (Izotov et al., 2006).

\begin{table}
\begin{tabular}{c c c c c} \hline \hline Filter & Img. 9 & Img. 10 central & Img. 10 extended & Comment \\ \hline
F275W & \(26.62\pm 0.05\) & \(25.27\pm 0.05\) & \(>28.6\) & Rest-frame LyC; no extended component and not included in analysis \\
F390W & \(23.03\pm 0.05\) & \(22.09\pm 0.05\) & \(22.74\pm 0.05\) & Affected by Ly\(\alpha\) and damping wings; not included in analysis \\
F410M & \(22.88\pm 0.05\) & \(22.04\pm 0.05\) & \(22.29\pm 0.05\) & Strongly affected by Ly\(\alpha\) and damping wings; not included in analysis \\ \hline
F555W & \(22.95\pm 0.05\) & \(21.91\pm 0.05\) & \(22.56\pm 0.05\) & \\
F606W & \(22.91\pm 0.05\) & \(21.92\pm 0.05\) & \(22.46\pm 0.05\) & \\
F814W & \(23.10\pm 0.05\) & \(22.14\pm 0.05\) & \(22.57\pm 0.05\) & \\
F098M & \(23.10\pm 0.07\) & — & — & \\
F105W & \(23.01\pm 0.10\) & — & — & \\
F125W & \(23.05\pm 0.10\) & — & — & \\
F126N & \(22.50\pm 0.20\) & — & — & Strongly enhanced by [O II]\(\lambda\lambda\)3727,3729 \\
F128N & \(23.01\pm 0.10\) & — & — & No strong lines \\
F140W & \(23.01\pm 0.10\) & — & — & Enhanced by multiple NUV lines \\
F153M & \(23.57\pm 0.10\) & — & — & No strong lines \\
F160W & \(22.57\pm 0.10\) & — & — & Enhanced mainly by H\(\beta\) and [O III]\(\lambda\lambda\)4959,5007 \\
F164N & \(22.55\pm 0.10\) & — & — & Strongly enhanced by H\(\beta\), not included in analysis \\
F167N & \(21.61\pm 0.10\) & — & — & Strongly enhanced by [O III]\(\lambda\)4959 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Photometry for lensed Image 9 and Image 10 of the LyC cluster.
\begin{table}
\begin{tabular}{c c} \hline \hline Emission line & Luminosity \(\mu\,L\,[10^{41}\,{\rm erg\,s^{-1}}]\) \\ \hline
[O II]\(\lambda\)3726 & \(36\pm 7\) \\
[O II]\(\lambda\)3729 & \(18\pm 11\) \\
[Ne III]\(\lambda\)3869 & \(47\pm 5\) \\
[Ne III]\(\lambda\)3967 & \(13\pm 4\) \\
H\(\beta\) & \(94\pm 32\) \\
[O III]\(\lambda\)4959 & \(295\pm 36\) \\
[O III]\(\lambda\)5007 & \(864\pm 86\) \\
[N II]\(\lambda\)6548 & \(2\pm 11\) \\
H\(\alpha\) & \(335\pm 108\) \\
[N II]\(\lambda\)6583 & \(0\pm 11\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Inferred isotropic luminosities of nebular emission lines detected from VLT/X-shooter for Image 10 of the LyC cluster. Magnified line luminosities are reported, without correcting for dust reddening. We have applied a wavelength independent PSF correction factor \(=1.2\) using a theoretical Moffat PSF model with \(\beta=4.765\).

We therefore apply a fiducial nitrogen enhancement by a factor of four on top of the Dopita et al. (2006) baseline. Later, when fitting emission lines, we will introduce abundance rescaling factors as free parameters relative to these baseline abundances for C, N, Si and Ne; we set the baseline abundances to prevent large errors in the gas cooling calculations in Cloudy. We compute isobaric models, which are more realistic than constant density models (Kewley et al., 2019). To this end, we compute a grid for \(6<\log P[\mathrm{K\,cm^{-3}}]<12\), where \(P\) is the sum of gas pressure and radiation pressure and is kept constant throughout the cloud. In all runs, the cloud geometry is that of a plane-parallel slab, truncated at a fixed neutral column density \(N_{\mathrm{HI}}=10^{22.5}\,\mathrm{cm^{-2}}\). This is a sufficient value to prevent the (dusty) clouds from being ejected out of the cluster's gravitational potential under radiation pressure (Krumholz et al., 2019). We compute a grid of the standard ionization parameter \(U\) by varying an artificial cloud distance to the stars. While our results suggest that external ISM dust reddening is minimal toward the LyC cluster, gas-phase silicon depletion combined with the high inferred gas density indicates that dust grains should have efficiently formed in the cloud interior. We therefore set a dust-to-gas ratio that is rescaled by \(Z/Z_{\odot}\) from the standard value of 1% in solar-metallicity ISM gas. Furthermore, we set a fixed velocity dispersion of \(3\,\mathrm{km\,s^{-1}}\) for gas micro-turbulence, which has unimportant effects on the nebular spectrum.

### Viewing Geometry

Observed continuum and line radiation depends on the observer's viewing angle. As illustrated in Figure 2, there are three generic types of viewing angle: the observer may see (1) light from exposed stellar photospheres and light emerging from the illuminated faces of the photoionized clouds; (2) light from exposed stellar photospheres and light emerging from the shaded faces of the clouds; or (3) star light transmitted through the clouds and nebular light emerging from the shaded faces of the clouds. Cloudy calculations conveniently provide four different spectral components, whose superposition can well approximate a viewing-angle-averaged observed spectrum: direct star light \(I_{\nu}\) (from BPASS templates), star light transmitted through clouds \(T_{\nu}\), continuum and line nebular radiation emerging from illuminated cloud faces \(R_{\nu}\), and that emerging from shaded cloud faces \(D_{\nu}\).
\(T_{\nu}\) would exhibit spectral features due to absorption and scattering inside the clouds, while \(R_{\nu}\) would include star light reflected backward by the clouds. Both \(R_{\nu}\) and \(D_{\nu}\) would include continuum radiation generated in the cloud interior (e.g. from radiative recombination). Optically thin emission lines would have equal fluxes in \(R_{\nu}\) and \(D_{\nu}\). For simplicity, we distinguish between direct and transmitted star light, and between illuminated and shaded cloud faces, but do not explicitly model the spectral dependence on a continuously varying viewing angle.

In the simplest model, we may consider a homogeneous population of clouds with fixed \(\log P\) and \(\log U\). The observed spectrum \(F_{\nu}\) is given by

\[F_{\nu}=\left(1-z\right)I_{\nu}+z\,T_{\nu}+x\left(y\,R_{\nu}+\left(1-y\right)D_{\nu}\right), \tag{1}\]

where \(0\leqslant x\leqslant 1\) is the geometric fraction of star light processed by the clouds and \(0\leqslant z\leqslant 1\) is the fraction of stellar photospheres obscured by clouds when viewed along the observer's line of sight. We have also introduced an asymmetry parameter, the fraction of illuminated cloud face observed, \(0\leqslant y\leqslant 1\), to quantify the probability of clouds being located on the near side versus the far side of the star cluster. If \(y\) is closer to zero, clouds are preferentially located on the near side and we are more likely to see their shaded faces. If \(y\) is closer to one, clouds are more likely located on the far side and we predominantly see their illuminated faces.

Emission line data favor a picture in which two distinct populations of nebular clouds with significantly different ionization degrees and densities are present. One population of high ionization degree \(\log U>-2\) and high pressure \(\log P\approx 9\) is needed to explain the observed [CIII]\(\lambda 1906\)/CIII]\(\lambda 1908\) and [SiIII]\(\lambda 1883\)/SiIII]\(\lambda 1892\) line ratios. However, these clouds are too dense (\(\log n_{\mathrm{e}}\approx 5\)) and too highly ionized to emit strong [OII]\(\lambda\lambda 3726,3729\), hinting at an additional population of lower \(U\) and lower pressure.

HST continuum images in F555W, F606W and F814W show a two-component structure in Image 10 (see Figure 9 for an example). The extended component is \(\sim 30\)-\(40\%\) as bright as the central component, and even in F390W and F410M the extended component is similarly bright. This high relative brightness cannot be explained by recombination continuum, and stellar UV light reflected by dust grains can account for only \(\sim 20\%\) even for a high dust albedo \(a=0.65\) and a low forward-scattering parameter \(g=0.35\). Therefore, the extended continuum source must be predominantly stars on the outskirts of the cluster. Sharon et al. (2022) derived a source-plane upper bound on the cluster size, \(r_{\mathrm{eff}}\lesssim 20\,\mathrm{pc}\). Vanzella et al. (2020) and Vanzella et al. (2022) reported marginal evidence from Image 10, seen in the rest-frame FUV continuum, that the cluster has an effective angular radius \(r_{\rm eff}=(0.06\pm 0.02)^{\prime\prime}\). Their results likely reflect the superposition of the central and extended stellar components.

The above considerations motivate us to consider a cluster model with two spatial components, one at the center and another extended. Each component contains stars and is associated with a population of photoionized clouds.
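As a concrete illustration of Eq. (1), a minimal sketch of the spectral superposition is given below; the arrays stand in for Cloudy/BPASS output spectra on a common frequency grid, and all names here are ours. The two-component model introduced next generalizes this mixing.

```python
import numpy as np

def observed_spectrum(I_nu, T_nu, R_nu, D_nu, x, y, z):
    """Eq. (1): mix direct, transmitted, and nebular components.

    x: fraction of star light processed by clouds     (0 <= x <= 1)
    y: fraction of illuminated cloud faces observed   (0 <= y <= 1)
    z: fraction of stellar photospheres obscured      (0 <= z <= 1)
    """
    return (1.0 - z) * I_nu + z * T_nu + x * (y * R_nu + (1.0 - y) * D_nu)

# Toy example with placeholder spectra on a four-point frequency grid:
I, T, R, D = np.ones(4), 0.3 * np.ones(4), 0.2 * np.ones(4), 0.1 * np.ones(4)
F = observed_spectrum(I, T, R, D, x=0.4, y=0.5, z=0.2)
```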
It is unclear whether the two stellar components are coeval, so our model allows two separate ages. One population of clouds closely surrounding the central component has an ionization parameter \(U_{1}\) and a high pressure \(P_{1}\), and processes a fraction \(x_{1}\) of the total star light with asymmetry parameter \(y_{1}\). A second population of clouds, at larger distances from the cluster center, has a different ionization parameter \(U_{2}\) and a lower pressure \(P_{2}\), and processes another fraction \(x_{2}\) of the total star light with asymmetry parameter \(y_{2}\). For this geometry, we assume that the low-pressure clouds are excited by ionizing radiation from both the central and the extended stellar components, while the high-pressure clouds are excited only by the central stellar component. Both populations are assumed to have the same metallicity, the same elemental abundances (except for nitrogen; see Section 5.5), and the same grain content. The observed spectrum is given by the linear combination

\[F_{\nu}= (1-z_{1}-z_{2})\,I_{\nu}+z_{1}\,T_{1,\nu}+z_{2}\,T_{2,\nu}\] \[+x_{1}\,\left(y_{1}\,R_{1,\nu}+(1-y_{1})\,D_{1,\nu}\right)\] \[+x_{2}\,\left(y_{2}\,R_{2,\nu}+(1-y_{2})\,D_{2,\nu}\right)\] \[+(1-z_{2})\,\hat{I}_{\nu}+z_{2}\,\hat{T}_{2,\nu}\] \[+x_{2}\,\left(y_{2}\,\hat{R}_{2,\nu}+(1-y_{2})\,\hat{D}_{2,\nu}\right). \tag{2}\]

Contributions without and with a hat are powered by the central and extended stellar components, respectively. Subscripts \({}_{1}\) and \({}_{2}\) indicate quantities associated with nebular reprocessing by the central and extended populations of clouds, respectively. The geometric parameters are subject to the physical constraints \(0\leqslant x_{1}+x_{2}\leqslant 1\) and \(0\leqslant z_{1}+z_{2}\leqslant 1\). In this picture, a fraction \((1-z_{1}-z_{2})\) of the star light (including LyC radiation) reaches the observer along the line of sight without encountering any significant opacity, while the fraction \((1-x_{1}-x_{2})\) represents the _isotropic_ escape fraction, i.e. the fraction of star light that escapes the cluster vicinity without being processed by optically thick gas.

### External Dust Reddening

Dust extinction by the Milky Way's ISM, from the UV through the NIR, is applied through the dust_extinction package, using the model of Fitzpatrick (1999). The model is parameterized by \(R({\rm V})=A({\rm V})/E({\rm B}-{\rm V})\), for which we choose \(R({\rm V})=3.1\), the mean value for MW extinction, and a normalization \(E({\rm B}-{\rm V})=0.08\) according to the Galactic dust maps of Schlegel et al. (1998). To account for possible dust reddening by the ISM of the host galaxy, we adopt the phenomenological reddening law of Reddy et al. (2015), which is empirically determined for \(z\sim 1.4\)-2.6 galaxies. This reddening law was used by Chisholm et al. (2019) to analyze the rest-frame UV stellar spectral features of the LyC cluster. We normalize host ISM reddening using \(E({\rm B}-{\rm V})\), which we treat as a free parameter of the model.

### Element Abundances

From the observed emission line strengths, we can measure gas-phase abundances of four elements: (1) C/O from C III]\(\lambda\lambda\)1906,1908; (2) N/O from N III]\(\lambda\lambda\)1750,1752, or constrained by upper limits on [N II]\(\lambda\lambda\)6548,6583; (3) Ne/O from [Ne III]\(\lambda\lambda\)3869,3967; (4) Si/O from Si III]\(\lambda\lambda\)1883,1892. As mentioned before, we set baseline abundance values for these elements in our Cloudy calculations (see Section 5.2).
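As an aside, the Milky Way foreground correction described above can be applied directly with the dust_extinction package. The sketch below uses its Fitzpatrick (1999) model; the filter pivot wavelengths shown are illustrative placeholders, not our actual filter set.

```python
import numpy as np
import astropy.units as u
from dust_extinction.parameter_averages import F99

mw_curve = F99(Rv=3.1)                              # mean MW extinction curve
pivots = np.array([0.555, 0.814, 1.25]) * u.micron  # illustrative pivots

# Fractional transmission for E(B-V) = 0.08 (Schlegel et al. 1998);
# dividing observed fluxes by this dereddens them for the MW foreground.
transmission = mw_curve.extinguish(pivots, Ebv=0.08)
```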
Since for a typical abundance pattern oxygen forbidden lines are the main coolants in the photoionized gas, we assume that rescaling the abundance of a specific element leads, at leading order, to the same rescaling of all lines associated with that element, while changes to cloud ionization and temperature can be neglected. In the fitting, we include one free rescaling factor each for C, Ne and Si, which applies to both cloud populations. For nitrogen, the low-pressure clouds are located further away from the cluster than the high-pressure ones, so they are not necessarily equally enriched. We therefore introduce separate N abundance rescaling factors for the two cloud populations. From the X-shooter data we derive upper limits for the optical forbidden lines [N II]\(\lambda\lambda\)6548,6583. These are suppressed if the photoionized gas has high density and high ionization degree. While the low-density, low-ionization clouds contribute negligibly to N III]\(\lambda\lambda\)1750,1752, their N enrichment can be constrained by the non-detection of [N II]\(\lambda\lambda\)6548,6583.

Figure 2: Effect of viewing angle on the observed spectrum. There are three generic viewing perspectives: (1) the observer sees unobscured stellar radiation \(I_{\nu}\), as well as radiation \(R_{\nu}\) emerging from the illuminated faces of photoionized clouds; (2) the observer sees unobscured stellar radiation \(I_{\nu}\), as well as radiation \(D_{\nu}\) emerging from the shaded faces of clouds; (3) the observer sees stellar radiation \(T_{\nu}\) transmitted through the clouds, as well as radiation \(D_{\nu}\) emerging from the shaded faces of the clouds.

### Spectral Fitting

Using the model spectra, we compute observed fluxes for a list of HST filters (Table 1) and a selected list of emission line ratios (Table 4). Assuming uncorrelated error bars, the log likelihood used for fitting is minus one-half of the total chi square combining all photometric and emission line observables.

We include a total of 12 available HST filters, but exclude F390W, F410M, and F164N. Fluxes in F390W and F410M are sensitive to Ly\(\alpha\) emission and damping wing attenuation, for which robust predictions are challenging due to the possibility of significant intervening H I columns along the line of sight. Neutral gas physically far from the cluster may contribute to this column, and the data provide too little information to constrain its details; the other filters should be insensitive to these effects. F164N, a narrow-band filter expected to be enhanced by H\(\beta\) emission, may suffer from flux calibration issues: its measured flux consistently falls below that expected from the spectroscopic measurements. While our fitting does show a disagreement between the model-predicted flux and the data at the 2\(\sigma\) level, we found that including F164N resulted in only minor changes to the inferred physical parameters. Regardless, we choose to remove the filter from the fit to be conservative.

Fluxes in several filters (F126N, F160W and F167N) are expected to be significantly enhanced by one or several strong emission lines, such as [OII]\(\lambda\lambda\)3726,3729, H\(\beta\), and [OIII]\(\lambda\lambda\)4959,5007. Through these strong-line contributions, the HST photometry significantly constrains nebular properties independently of the emission line data. H II gas glows in continuum light due to radiative recombination of hydrogen.
This nebular continuum is proportional to the amount of LyC radiation processed by the photoionized cloud. Young starburst systems with \(t_{\rm age}\lesssim 5\) Myr emit strong LyC radiation relative to their non-ionizing FUV radiation, so the nebular continuum is a sizable contribution to the observed continuum flux (Reines et al., 2010; Byler et al., 2017). The recombination continuum has spectral discontinuities at the Lyman, Balmer, Paschen, and higher-series limits. The Balmer discontinuity at rest-frame 3645 \(\AA\) is particularly relevant at the redshift of Sunburst because several HST filters probe opposite sides of the redshifted break and hence are sensitive to the size of this discontinuity, which provides information about \(t_{\rm age}\), \(Z\) and the electron temperature \(T_{\rm e}\). The medium-band filter F153M, redward of the break, happens to be unaffected by strong lines, and we indeed detect a flux deficit in that filter relative to neighboring filters.

Table 4 lists the 6 line ratios that we include in the analysis. To be conservative about element abundance uncertainties, we refrain from including line ratios between different metals, except where needed to measure the abundance of a particular metal. Compared to the MUSE IFU data, we are relatively less confident about the absolute flux calibration of the X-shooter slit spectroscopy data. Hence, we refrain from using line ratios between a MUSE detection and an X-shooter detection; line ratios based on X-shooter measurements always involve two lines with close wavelengths. In particular, the CIII]\(\lambda\lambda\)1906,1908 and SiIII]\(\lambda\lambda\)1883,1892 line ratios probe clouds of high ionization degree and are sensitive \(n_{\rm e}\) diagnostics in the high-density regime \(3\lesssim\log n_{\rm e}\lesssim 6\) (Keenan et al., 1992; Jaskot & Ravindranath, 2016). The [OII]\(\lambda\lambda\)3726,3729 line ratio probes \(n_{\rm e}\) in clouds of low ionization degree, and is sensitive in a lower density range, \(1\lesssim\log n_{\rm e}\lesssim 4\) (Osterbrock, 1989). Ratios between the O II and O III lines, [OII]\(\lambda\lambda\)3726,3729/[OIII]\(\lambda\lambda\)4959,5007 (Kewley & Dopita, 2002; Kobulnicky & Kewley, 2004; Dors et al., 2011) in the rest-frame optical and [OII]\(\lambda\)2471/[OIII]\(\lambda\lambda\)1660,1666 (Kewley et al., 2019) in the rest-frame UV, are sensitive to ionization degree and gas temperature, and simultaneously constrain the relative contributions of the low-pressure and high-pressure clouds to the nebular radiation. The commonly used ratio [O III]\(\lambda\lambda\)4959,5007/H\(\beta\) probes metallicity and ionization parameter.

Since we combine observables of Image 9 and Image 10, uncertainty in the magnification ratio between the two must be taken into account. We treat the magnification ratios of the central and the extended components as two free parameters with informative priors, as described in Section 3.3.

We fit our Cloudy models to the observed photometry (Figure 3) and the emission line fluxes and ratios (Figure 4) using PyMultinest (Feroz et al., 2009; Buchner et al., 2014), with 1,000 live points running for 10,000 steps.
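Schematically, the fit can be driven as in the sketch below. The prior transforms, data vectors, and the grid-interpolation function are placeholders (all names are ours), with the log likelihood set to minus one-half of the total chi square as described above.

```python
import numpy as np
import pymultinest

n_params = 22
data  = np.zeros(35)    # placeholder observed fluxes/ratios
sigma = np.ones(35)     # placeholder 1-sigma uncertainties

def predict_observables(theta):
    """Placeholder for interpolating the precomputed Cloudy model grid."""
    return np.zeros(35)

def prior(cube, ndim, nparams):
    # Map unit-cube samples onto physical ranges, e.g. 6 < log P < 12:
    cube[0] = 6.0 + 6.0 * cube[0]
    # ...analogous transforms for the remaining parameters...

def loglike(cube, ndim, nparams):
    model = predict_observables([cube[i] for i in range(nparams)])
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

pymultinest.run(loglike, prior, n_params,
                outputfiles_basename="chains/sunburst_",
                n_live_points=1000, max_iter=10000,
                resume=False, verbose=True)
```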
The two-population model consists of 22 parameters in total: host dust reddening, metallicity, and stellar age (for both stellar components); the pressure, ionization parameter, and three geometric parameters for each cloud population; the relative lensing magnification between the less magnified Image 9 (used for HST PSF photometry) and the more magnified Image 10 (used for emission line measurement), for the central and extended components separately; the relative normalization of the two stellar components; the overall normalization; and abundance rescalings for C, N (one for each cloud population), Ne and Si. The model is fit to 18 photometric measurements (Image 9 photometry in 12 HST filters, and Image 10 photometry for the central and extended components separately in 3 filters), 9 emission line fluxes, 6 emission line ratios, and 2 emission line non-detections; in sum, 35 observables, leaving 13 degrees of freedom. The best-fit model reproduces the data with \(\chi^{2}_{\nu}=1.85\).

### Star Cluster Properties

Properties of the star cluster are shown in Figure 5. Assuming a single burst, we derive a cluster age \(t_{\rm age}=2.4^{+1.6}_{-1.0}\) Myr from the central component, which agrees with the independent UV spectroscopy analysis of Chisholm et al. (2019) when the BPASS SED model is used, and is broadly consistent with the age required for very massive stars to power the broad He II\(\lambda 1640\) wind emission, as suggested by Mestric et al. (2023). The age constraint on the extended component is less tight, but the posterior peaks at 4 Myr. The data disfavor the extended component being older than 7 Myr, and it is possible that the extended component is coeval with the central component. While our model suggests that the extended component may contribute as much as \(\sim\) 25% of the ionizing flux, no extended source of escaping LyC radiation is seen in Image 10 in F275W (Mainali et al., 2022; Mestric et al., 2023) (see also Figure 9). The low-density channels that allow LyC photons to pass freely may connect only to the central part of the cluster.

We do not find any significant evidence for nonzero dust reddening caused by the host galaxy's ISM, with \(E(\rm B-V)<0.05\) at high confidence. The large-scale interstellar environment of the Sunburst galaxy toward the LyC cluster therefore does not appear dusty. This is in contrast to the result \(E(\rm B-V)=0.15\) of Chisholm et al. (2019), even though the same empirical reddening curve has been used; we are unable to explain the data with dust reddening as large as \(E(\rm B-V)=0.15\) even when model details are tweaked. There is a degeneracy between the nebular continuum, which renders the total FUV spectral slope redder (c.f. Figure 3), and external dust reddening. We speculate that neglecting the nebular continuum leads to an overestimation of \(E(\rm B-V)\).

For the IMF we choose (c.f. Section 5.1), we measure a stellar mass \(\log(\mu_{9}\,M_{\star,1}/M_{\odot})=8.8^{+0.2}_{-0.2}\) for the central component, and \(\log(\mu_{9}\,M_{\star,2}/M_{\odot})=8.3^{+0.2}_{-0.2}\) for the subdominant extended component. The precise mass is unknown due to the large uncertainty in the magnification factor of Image 9, \(\mu_{9}\): multiple independent lens modeling efforts (Vanzella et al., 2020; Pignataro et al., 2021; Diego et al., 2022; Sharon et al., 2022) have obtained magnification factors for the various lensed images of the cluster that differ from one another and are discrepant with the observed flux ratios.
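Because only the magnified mass \(\mu_{9}\,M_{\star,1}\) is constrained, the implied true mass scales inversely with the assumed magnification; a quick illustration of the spread quoted next:

```python
log_mu9_M = 8.8                      # log10(mu_9 * M_star,1 / M_sun)
for mu9 in (10.0, 30.0, 100.0):
    print(f"mu_9 = {mu9:5.0f}  ->  M_star,1 ~ {10**log_mu9_M / mu9:.1e} M_sun")
# mu_9 = 10-30 gives (2-6)e7 M_sun; even mu_9 = 100 leaves ~6e6 M_sun.
```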
A plausible range for the magnification of Image 9 might be \(\mu_{9}=10\)-30. The central component alone contains \(\sim 5\times 10^{6}\,M_{\odot}\) of stars even if Image 9 has \(\mu_{9}=100\), which is one to two orders of magnitude more massive than any exposed young super star cluster seen in the local Universe. This is in agreement with the inference of Vanzella et al. (2020, 2022) and supports the idea that such a system will likely evolve into a massive globular cluster.

We repeated our analysis with a top-heavy IMF with \(\xi(m)\propto m^{-2}\) for \(m>0.5\,M_{\odot}\). No significant changes to the model parameters were seen, except that the inferred cluster mass decreased by a factor of \(\sim 2\) (see Table 6). This is because at \(t_{\rm age}\lesssim 4\) Myr the cluster's ionizing output is completely dominated by the most massive stars. In particular, the central component of the young super star cluster has a huge output of LyC photons, \(\log(\mu_{9}\,Q_{1}(\mathrm{H}^{0}))=55.5^{+0.3}_{-0.2}\). Barring the uncertainty in \(\mu_{9}\), this result is insensitive to the IMF slope.

\begin{table}
\begin{tabular}{c c l}
\hline \hline Line ratio & Value \& uncertainty & Comment \\
\hline Si III]\(\lambda 1892\) / [Si III]\(\lambda 1883\) & \(2.42\pm 0.97\) & Probes density of high-pressure H II clouds \\
C III]\(\lambda 1908\) / [C III]\(\lambda 1906\) & \(2.32\pm 0.23\) & Probes density of high-pressure H II clouds \\
[O II]\(\lambda 2471\) / [O III]\(\lambda\lambda 1660\),1666 & \(0.23\pm 0.06\) & Probes ratio between the high-pressure and low-pressure clouds \\
[O II]\(\lambda 3729\) / [O II]\(\lambda 3726\) & \(0.50\pm 0.32\) & Probes density of low-pressure clouds \\
[O II]\(\lambda\lambda 3726\),3729 / [O III]\(\lambda\lambda 4959\),5007 & \(0.047\pm 0.012\) & Probes ratio between the high-pressure and low-pressure clouds \\
[O III]\(\lambda\lambda 4959\),5007 / H\(\beta\) & \(12.38\pm 4.40\) & Ionization and metallicity diagnostics \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Nebular emission line ratios included in the spectral analysis.

### Nebula Properties

Figure 6 shows the inferred parameters describing the photoionized clouds: cloud pressure \(P\), ionization parameter \(U\), and the ionizing flux incident on the cloud, \(\Phi(\mathrm{H}^{0})\). These intensive nebula parameters can be inferred independently of the uncertain lensing magnification factor. The model has two cloud populations with well-constrained but distinct pressures. The high-pressure clouds, which are the primary source of emission lines from high-ionization species, have \(\log P_{1}=9.6^{+0.3}_{-0.2}\) (pressure in units of K cm\({}^{-3}\)). Their ionization parameter is high, \(\log U_{1}=-1.25^{+0.4}_{-0.4}\). The H II gas pressure is constrained to be high because the C III]\(\lambda\lambda\)1906,1908 line ratio indicates \(\log n_{\mathrm{e}}\approx 5\). The intensity of ionizing photons striking the cloud surface (in units of cm\({}^{-2}\) s\({}^{-1}\)) has a high value, \(\log\Phi_{1}(\mathrm{H}^{0})=14.3^{+0.3}_{-0.3}\). A second population of low-pressure clouds accounts for the strong [O II]\(\lambda\lambda\)3726,3729 doublet. These have a lower pressure \(\log P_{2}=7.8^{+0.2}_{-0.3}\), a lower ionization parameter \(\log U_{2}=-2.5^{+0.2}_{-0.2}\), and are irradiated by a weaker ionizing flux \(\log\Phi_{2}(\mathrm{H}^{0})=11.5^{+0.3}_{-0.3}\).

Line profile analysis performed in Mainali et al.
(2022) revealed that the [O III]\(\lambda\)5007 emission from the LyC cluster consists of two components with distinct kinematics: a narrow component with FWHM\(=100\) km s\({}^{-1}\) and a broader component with FWHM\(=300\) km s\({}^{-1}\). Since our model predicts that strong UV metal lines such as C III]\(\lambda\)1908 are predominantly from the high-pressure nebula, and since those lines appear consistent with FWHM\(=100\) km s\({}^{-1}\) rather than FWHM\(=300\) km s\({}^{-1}\), we surmise that the narrow and broad line components correspond to the high- and low-pressure nebulae, respectively.

Figure 3: Best-fit SED (grey) from Cloudy modeling. Co-plotted are the 68% C.I. range of the model-predicted filter magnitudes (open boxes) and the observed magnitudes (circles with error bars). The two bluest filters, F390W and F410M (open circles and green boxes), are not used in the fit since their fluxes might have been significantly affected by Ly\(\alpha\) line formation and damping wing scattering by an intervening H I column; the fitted models predict fluxes higher than observed in these two filters. The narrow-band F164N filter is also not used, due to potential flux calibration issues. The observed magnitudes of the extended and central components of Image 10 are shown demagnified by the median predicted magnification ratio for each component (pink and orange circles with error bars), as is the 68% C.I. range of the model-predicted filter magnitudes demagnified in the same way (blue boxes). The best-fit central (magenta) and extended (green) continuum contributions are shown separately.

### Geometric Parameters

Our analysis simultaneously constrains the spatial geometry of the surrounding photoionized clouds. Information about the geometry comes mainly from the relative fluxes between the observed lines and continuum, the spectral difference between the illuminated and shaded cloud faces, and the amount of reddening of stellar light transmitted through any obscuring clouds intervening along the line of sight, relative to the incident star light. The results for the geometric parameters are shown in Figure 7. About \(x_{1}=(40^{+22}_{-17})\%\) of the cluster's ionizing output is processed by the high-pressure clouds, where the high-ionization emission lines are powered, and a further \(x_{2}=(20^{+17}_{-15})\%\) is processed by the low-pressure clouds. While the geometric parameters have broad posterior distributions, our fitting results imply that a substantial fraction \((1-x_{1}-x_{2})=(35^{+23}_{-21})\%\) of the ionizing output leaves the cluster vicinity in various directions without being processed by opaque clouds. Along the specific line of sight toward the observer, an even higher fraction \((1-z_{1}-z_{2})=(52^{+20}_{-18})\%\) of the ionizing radiation from the stars escapes the system. This picture is fully consistent with the previous study of Rivera-Thorsen et al. (2019), who suggested a line-of-sight LyC escape fraction of 40-90% if dust absorption within the host galaxy is negligible.

Since we assume that the photoionized clouds have a high H column density and contain dust grains, illuminated and shaded cloud faces are distinguishable: continuum and line radiation emerging from the shaded cloud faces is severely suppressed, especially at rest-frame UV wavelengths. In our results, both \(y_{1}\) and \(y_{2}\) have broad posteriors consistent with \(y_{1}=1/2\) and \(y_{2}=1/2\).
This is compatible with a picture in which the photoionized clouds of both populations are isotropically distributed around the star cluster. In our opinion, this is a simple, natural scenario: if the data preferred a situation where clouds were predominantly seen from the illuminated faces or from the shaded faces, we would have to either attribute that to chance or invoke additional physical explanations for such geometric asymmetry. There is insufficient information to determine whether the photoionized clouds are shattered into large numbers of cloudlets, in which case the line-of-sight escape fraction \((1-z_{1}-z_{2})\) may equal the isotropic escape fraction.

Figure 4: Top: 68% C.I. for model-predicted line fluxes (blue boxes) and observed line fluxes measured from MUSE/X-shooter (circles with error bars). The 68% C.I. contributions from the high-pressure (purple boxes) and low-pressure (green boxes) clouds are shown separately. Bottom: 68% C.I. for model-predicted line ratios (blue rectangles) and observed line ratios measured from MUSE/X-shooter (red markers).

### Element Abundances

Under the assumption that [O/H] \(=\log(Z/Z_{\odot})\) for a stellar metallicity \(Z\), and with our fiducial abundance pattern, we measure \(Z/Z_{\odot}=0.22\pm 0.03\). This is lower than the result of Chisholm et al. (2019), \(Z/Z_{\odot}=0.5\)-0.6, which was inferred from rest-frame UV spectral features of stellar winds. We consistently infer \(Z\approx(0.2\)-\(0.3)\,Z_{\odot}\) when modeling details are varied (c.f. Table 6).

Figure 5: Posterior distributions for star cluster parameters. 1D histograms representing the posterior distributions are marked with the mean and 68% confidence intervals (C.I.'s; dashed lines). 2D contours show the 50% and 90% levels for the density of points from the PyMultinest fit. Refer to the “Two-Comp” model in Table 6 for the 68% C.I.'s of the parameters.

Figure 6: Posterior distributions for the measured physical parameters of the two cloud populations. Contour levels and histogram confidence intervals follow Fig. 5. Parameter 68% C.I.'s are tabulated in Table 6.

Figure 7: Posterior distributions for the measured geometric parameters of the two cloud populations, including covering factors, obscuring factors, and viewing angles. Contour levels and histogram confidence intervals follow Fig. 5. Refer to the “Two-Comp” model in Table 6 for the 68% C.I.'s of the parameters.

The picture of dusty photoionized clouds is corroborated by the inferred gas-phase Si depletion by a factor of \(\sim\) 4-5 (Figure 8). If seeds are present in the first place, grain growth should be efficient at high densities (\(\log n_{\rm e}\) = 4-5) and cool gas temperatures of a few thousand K, even under strong external UV irradiation. For example, dust grains are observed to rapidly form in the colliding-wind shells of the Wolf-Rayet binary system WR140 (Han et al., 2022; Lau et al., 2022). The Mg II\(\lambda\lambda\) 2795,2803 doublet and the [Fe IV]\(\lambda\lambda\)2829,2835 doublet are transitions from ionic ground levels, and photoionization calculations show that strong emission in these lines should emerge from the illuminated cloud faces. Inspecting the X-shooter data, we nevertheless detect only weak Mg II\(\lambda\lambda\) 2795,2803 emission, and find no sign of [Fe IV]\(\lambda\lambda\)2829,2835 emission. A plausible explanation is that gas-phase Si, Mg and Fe are all depleted due to grain formation. This further supports the model of dusty photoionized clouds.
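The enhancement and depletion factors quoted in this section follow directly from differences of the logarithmic abundance ratios; a quick check, using the reference values adopted earlier and the measurements collected in Table 5 below:

```python
logNO_meas, logNO_ism   = -0.21, -1.4    # high-pressure clouds vs. typical ISM
logSiO_meas, logSiO_sun = -1.84, -1.15   # measured vs. solar Si/O

print(f"N enhancement: {10**(logNO_meas - logNO_ism):.0f}x")    # ~15 (12-19 with errors)
print(f"Si depletion:  {10**(logSiO_sun - logSiO_meas):.1f}x")  # ~4.9
```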
We have detected a strikingly elevated nitrogen abundance, \(\log(\rm N/O)=-0.21^{+0.10}_{-0.11}\), for the high-pressure clouds, a factor of \(\sim 12\)-19 higher than the typical ISM abundance \(\log(\rm N/O)\simeq-1.4\) at \(Z/Z_{\odot}\simeq 0.2\) (Molla et al., 2007). By contrast, no evidence of enrichment is seen for the low-pressure clouds, which have \(\log(\rm N/O)\lesssim-1.31\). This is evidence for localized self-enrichment by massive star ejecta (discussed in more detail in Section 7.4). No dramatic abundance anomalies are noticeable for carbon, \(\log(\rm C/O)=-0.51^{+0.05}_{-0.05}\), or for neon, \(\log(\rm Ne/O)=-0.81^{+0.05}_{-0.05}\). The unremarkable Ne/O ratio disfavors significant gas-phase O depletion. The abundance results are summarized in Figure 8 and Table 5.

Figure 8: Posterior distributions for the gas-phase abundances of C, N, Ne and Si (arbitrary units for the vertical axis). In the upper right panel, we observe a high elevation of the N abundance in the high-pressure clouds (purple) compared to the median ISM abundance at our inferred metallicity \(Z\approx 0.2\)–\(0.3\,Z_{\odot}\), which indicates enrichment by massive star ejecta. By contrast, the low-pressure, more distant clouds (green) cannot be as enriched and are consistent with being uncontaminated. We also observe significant Si depletion, which is evidence for grain formation. C and Ne appear to have normal abundances.

### Altering Model Assumptions

We test the robustness of our parameter inference by modifying some of the basic modeling assumptions, considering three alternative models. Parameter inference results for these modified models, together with our default model ("Two-Comp.") described in the previous sections, are tabulated in Table 6.

For the first alternative model ("Top-heavy"), we consider a top-heavy IMF in which high-mass stars are up-weighted by setting \(\xi(m)\propto m^{-2}\) for \(m>0.5\,M_{\odot}\), while retaining the same IMF slope for lower-mass stars, the same mass cutoffs, and instant star formation. This model has a comparable reduced chi square. Compared to the default model, the inferred parameters are consistent within \(1\sigma\), the only exception being the magnified cluster stellar masses of the central and extended components, which are a factor of two smaller. This is easily understandable because high-mass stars dominate all light observables at \(\lesssim 4\) Myr; the number of low-mass stars, which dominate the stellar mass, is necessarily estimated by extrapolation following the assumed IMF. The same mass-IMF degeneracy is expected for other forms of top-heavy IMF.

Even though an instant starburst can be a good approximation for a localized starburst, we explore the alternative scenario of continuous star formation at a constant rate as the second alternative model ("Cont. SF"). This model has a reduced chi square comparable to that of the default model, and nearly all inferred parameters change by less than \(1\sigma\). The notable exception is the central cluster age, which favors \(4.3\) Myr but with larger uncertainties (\(\pm 1\)-2 Myr). Similarly, the extended cluster age favors \(4.7\) Myr with large uncertainties. This suggests that the observable features sensitive to \(t_{\rm age}\) in the burst model are no longer constraining under continuous star formation. While our fitting does not show a significant preference for either an instant starburst or continuous star formation, Chisholm et al. (2019) found that constant star formation did not reproduce the observed C IV\(\lambda\)1550 and N V\(\lambda\)1240 P-Cygni features of massive star winds as well as an instant starburst does.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline Abundance & Value & Solar & Important lines \\
\hline \(\log(\rm C/O)\) & \(-0.51^{+0.05}_{-0.05}\) & \(-0.30\) & C III]\(\lambda\lambda\)1906,1908 \\
\({}^{\rm a}\)\(\log(\rm N/O)\) & \(-0.21^{+0.10}_{-0.11}\) & \(-0.76\) & N III]\(\lambda\lambda\)1750,1752 \\
\({}^{\rm b}\)\(\log(\rm N/O)\) & \(\lesssim-1.31\) & \(-0.76\) & [N II]\(\lambda\lambda\)6548,6583 \\
\(\log(\rm Ne/O)\) & \(-0.81^{+0.05}_{-0.05}\) & \(-0.69\) & [Ne III]\(\lambda\lambda\)3869,3967 \\
\(\log(\rm Si/O)\) & \(-1.84^{+0.07}_{-0.07}\) & \(-1.15\) & Si III]\(\lambda\lambda\)1883,1892 \\
\hline \hline
\end{tabular}
\({}^{\rm a}\) Nitrogen abundance for high-pressure clouds.
\({}^{\rm b}\) Nitrogen abundance for low-pressure clouds.
\end{table}
Table 5: Inferred element abundances of carbon, nitrogen, neon and silicon. Also refer to the “Two-Comp” model in Table 6.

Finally, we also test a simplified model without photometric constraints from Image 10 and without a second stellar population ("One-Comp."). This model retains only a single free parameter for the magnification ratio between Images 9 and 10, which is applied uniformly to both the high-pressure and low-pressure cloud emission in order to fit the emission line fluxes measured from Image 10. With the removal of 6 observational constraints from Image 10 photometry and 3 free parameters corresponding to the extended stellar component, this model has 10 degrees of freedom. We find that this model provides a similarly good \(\chi^{2}_{\nu}\) to the two-stellar-component model, and the inferred physical parameters are in agreement with it. The inferred cluster age for the single stellar component, \(t_{\rm age}=3.5^{+0.9}_{-0.9}\) Myr, is broadly in agreement with that of the central component in the two-stellar-component model. The single stellar component has an inferred magnified mass \(\log(\mu_{9}\,M_{\star})=8.8^{+0.2}_{-0.2}\) and a magnified ionizing photon rate \(\log(\mu_{9}Q({\rm H}^{0}))=55.4^{+0.2}_{-0.2}\), also in agreement with the central component of the two-stellar-component model. The high-pressure clouds see small variations in their ionization parameter and pressure, with \(\log U_{1}=-1.2^{+0.3}_{-0.4}\) and \(\log P_{1}=9.8^{+0.7}_{-0.4}\); however, these are well within \(1\sigma\) of the two-stellar-component model. The inferred uniform magnification ratio is \(\mu_{10}/\mu_{9}=3.40^{+0.34}_{-0.34}\), consistent at the \(2\sigma\) level with the total magnification ratio measured photometrically in the F555W, F606W, and F814W filters, as described in Section 3.3. The physical parameters of the low-pressure clouds, the geometric factors, and the elemental abundances all see negligible changes between the two models.

We conclude that our most important qualitative findings about the cluster (e.g. age, metallicity, external dust reddening) and about the dense photoionized clouds (e.g. high pressure and high ionization, nitrogen enhancement, silicon depletion) are robust against tweaking of detailed model assumptions.

## 7 Discussion

In this section, we discuss the physical implications of our inferred model for the cluster and the surrounding nebula.

### Nebula Size

The typical distance of the clouds from the cluster center can be computed from the ionizing flux striking the cloud surface \(\Phi\) and the total ionizing output \(Q({\rm H}^{0})\).
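Assuming the standard inverse-square dilution, \(R=\sqrt{Q({\rm H^{0}})/(4\pi\,\Phi({\rm H^{0}}))}\); since the magnified output scales as \(\mu_{9}\), the combination \(\sqrt{\mu_{9}}\,R\) is what is actually constrained. A short numerical check against the values quoted below:

```python
import numpy as np

def log_radius_cm(logQ_magnified, logPhi):
    """log10 of sqrt(mu_9)*R in cm from magnified Q and incident Phi."""
    return 0.5 * (logQ_magnified - logPhi - np.log10(4.0 * np.pi))

print(log_radius_cm(55.5, 14.3))        # ~20.0: inner, high-pressure nebula
# The outer clouds see the ionizing output of both stellar components:
logQ_tot = np.log10(10**55.5 + 10**54.7)
print(log_radius_cm(logQ_tot, 11.5))    # ~21.5, close to the quoted 21.4
```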
Up to a dependence on the uncertain lensing magnification factor \(\mu_{9}\), we find that the high-pressure clouds form an inner nebula very close to the cluster, at a distance \(R_{1}=(\mu_{9}/10)^{-1/2}\,(6\hbox{--}12)\,{\rm pc}\), while the low-pressure clouds form an outer nebula at a much larger distance, \(R_{2}=(\mu_{9}/10)^{-1/2}\,(130\hbox{--}320)\,{\rm pc}\). The angular diameter distance to \(z_{s}=2.369\) is \(d_{\rm A}=1.7\,{\rm Gpc}\), corresponding to an apparent proper distance of \(8.4\,{\rm kpc}\) per arcsecond. The high-pressure nebula has an angular half-width along the direction of arc elongation, for Image 9, of

\[\theta_{1,9}=\frac{\mu_{9}\,R_{1}}{\mu_{r}\,d_{\rm A}}\approx 0.006^{\prime\prime}\,\left(\frac{R_{1}\sqrt{\mu_{9}}}{10^{20.0}\,{\rm cm}}\right)\,\left(\frac{\mu_{9}}{10}\right)^{1/2}\,\left(\frac{\mu_{r}}{2}\right)^{-1}, \tag{3}\]

where \(\mu_{r}\) is the radial magnification. The angular half-width of the nebula comprised of the low-pressure clouds at larger distances is

\[\theta_{2,9}=\frac{\mu_{9}\,R_{2}}{\mu_{r}\,d_{\rm A}}\approx 0.15^{\prime\prime}\,\left(\frac{R_{2}\sqrt{\mu_{9}}}{10^{21.4}\,{\rm cm}}\right)\,\left(\frac{\mu_{9}}{10}\right)^{1/2}\,\left(\frac{\mu_{r}}{2}\right)^{-1}. \tag{4}\]

The fiducial values \(\log(\sqrt{\mu_{9}}\,R_{1})=20.0\) and \(\log(\sqrt{\mu_{9}}\,R_{2})=21.4\) that we use are the median values of the posterior distributions (c.f. Figure 10). The absolute magnification factors of the various lensed images are uncertain, as existing lens models reproduce image locations well but not magnification ratios (Sharon et al., 2022). There is also uncertainty in \(\mu_{r}\), whose value should vary moderately between lensed images. While Vanzella et al. (2022) predicted \(\mu_{r}=1.2\hbox{--}1.3\), Diego et al. (2022) presented two models with \(\mu_{r}=1.4\) and \(\mu_{r}=2.1\), respectively. The source-plane reconstruction results of Sharon et al. (2022) imply \(\mu_{r}=1.6\hbox{--}2\). We use a high fiducial value, \(\mu_{r}=2\).

If the low-pressure nebula has a largely isotropic distribution around the cluster, the half-light full-width of the elongated image, \(\theta_{2,9}=0.15^{\prime\prime}\), is smaller than the FWHM \(\approx 0.27^{\prime\prime}\) of the PSF we use in the F126N filter, from which we extract a flux enhanced by [O II]\(\lambda\lambda\)3726,3729. The angular half-light full-width for Image 10 is about a factor of \(\approx 4\) larger, reaching \(\theta_{2,10}\approx 0.60^{\prime\prime}\), comparable to the seeing-limited FWHM \(\sim 0.5\hbox{--}0.6^{\prime\prime}\) of the X-shooter spectroscopy data.

Evidence for the spatially larger, low-pressure nebula is visible in Image 10 in a few filters enhanced by emission lines. The best example is the narrow filter F167N, whose flux is strongly boosted by [O III]\(\lambda\)4959. The posterior distribution of this line flux shows significant contributions from both the inner and outer populations of clouds. In F167N, a compact component appears superimposed on a more extended component slightly offset to one side along the arc direction. Given that our spectral model predicts a much smaller stellar and nebular continuum than [O III]\(\lambda\)4959 in F167N (c.f. Figure 4), the compact and extended components probably correspond to the [O III]\(\lambda\)4959 emission from the high-pressure and the low-pressure nebulae, respectively.

\begin{table}
\begin{tabular}{c|l|c c c c}
\hline \hline Parameter & Meaning & Two-Comp. & Top-heavy & One-Comp. & Cont. SF \\
\hline \(\chi^{2}_{\nu}\) & Reduced chi square & 1.85 & 1.82 & 1.89 & 1.96 \\
\(\chi^{2}\) & Chi square & 24.08 & 23.68 & 18.92 & 25.51 \\
\hline \(E(\mathrm{B-V})\) & Host ISM dust reddening & \(0.02^{+0.03}_{-0.01}\) & \(0.02^{+0.02}_{-0.01}\) & \(0.01^{+0.01}_{-0.01}\) & \(0.04^{+0.02}_{-0.02}\) \\
\(\log Z\) & Metallicity & \(-2.33^{+0.05}_{-0.06}\) & \(-2.31^{+0.05}_{-0.06}\) & \(-2.20^{+0.06}_{-0.06}\) & \(-2.31^{+0.06}_{-0.07}\) \\
\(t_{\mathrm{age},1}\) & Cluster age of central stars [Myr] & \(2.43^{+1.60}_{-0.99}\) & \(3.33^{+1.24}_{-1.77}\) & \(3.47^{+0.49}_{-0.97}\) & \(4.33^{+1.68}_{-1.85}\) \\
\(t_{\mathrm{age},2}\) & Cluster age of outskirt stars [Myr] & \(4.18^{+1.62}_{-1.75}\) & \(4.18^{+1.70}_{-1.69}\) & — & \(4.73^{+1.48}_{-1.70}\) \\
\hline \(\log U_{1}\) & Ionization parameter of high-pressure clouds & \(-1.25^{+0.44}_{-0.45}\) & \(-1.35^{+0.48}_{-0.46}\) & \(-1.19^{+0.47}_{-0.51}\) & \(-1.24^{+0.45}_{-0.53}\) \\
\(\log U_{2}\) & Ionization parameter of low-pressure clouds & \(-2.52^{+0.22}_{-0.21}\) & \(-2.50^{+0.22}_{-0.22}\) & \(-2.41^{+0.21}_{-0.28}\) & \(-2.51^{+0.23}_{-0.22}\) \\
\(\log P_{1}\) & Total pressure of high-pressure clouds [\(\mathrm{K\,cm^{-3}}\)] & \(9.63^{+0.28}_{-0.24}\) & \(9.62^{+0.27}_{-0.25}\) & \(9.79^{+0.34}_{-0.40}\) & \(9.69^{+0.29}_{-0.29}\) \\
\(\log P_{2}\) & Total pressure of low-pressure clouds [\(\mathrm{K\,cm^{-3}}\)] & \(7.81^{+0.22}_{-0.27}\) & \(7.78^{+0.25}_{-0.29}\) & \(7.86^{+0.23}_{-0.27}\) & \(7.84^{+0.23}_{-0.27}\) \\
\hline \(\log(\mu_{9}\,M_{*,1})\) & (Magnified) stellar mass of central stars [\(\mathrm{M_{\odot}}\)] & \(8.77^{+0.21}_{-0.19}\) & \(8.41^{+0.25}_{-0.26}\) & \(8.85^{+0.20}_{-0.21}\) & — \\
\(\log(\mu_{9}\,M_{*,2})\) & (Magnified) stellar mass of extended stars [\(\mathrm{M_{\odot}}\)] & \(8.27^{+0.20}_{-0.19}\) & \(7.94^{+0.26}_{-0.34}\) & — & — \\
\(\log(\mu_{9}\,\mathrm{SFR_{1}})\) & (Magnified) SFR of central stars [\(\mathrm{M_{\odot}}\)/yr] & — & — & — & \(2.13^{+0.30}_{-0.19}\) \\
\(\log(\mu_{9}\,\mathrm{SFR_{2}})\) & (Magnified) SFR of extended stars [\(\mathrm{M_{\odot}}\)/yr] & — & — & — & \(1.49^{+0.19}_{-0.13}\) \\
\(\log(\mu_{9}\,Q_{1}(\mathrm{H}^{0}))\) & (Magnified) ionizing output of central stars [\(\mathrm{s^{-1}}\)] & \(55.46^{+0.28}_{-0.23}\) & \(55.39^{+0.29}_{-0.21}\) & \(55.38^{+0.18}_{-0.15}\) & \(55.59^{+0.21}_{-0.16}\) \\
\(\log(\mu_{9}\,Q_{2}(\mathrm{H}^{0}))\) & (Magnified) ionizing output of outskirt stars [\(\mathrm{s^{-1}}\)] & \(54.66^{+0.22}_{-0.23}\) & \(54.67^{+0.19}_{-0.21}\) & — & \(54.98^{+0.15}_{-0.13}\) \\
\(\log\Phi_{1}(\mathrm{H^{0}})\) & Ionizing flux incident on high-pressure clouds [\(\mathrm{s^{-1}\,cm^{-2}}\)] & \(14.34^{+0.28}_{-0.28}\) & \(14.29^{+0.28}_{-0.28}\) & \(14.50^{+0.68}_{-0.39}\) & \(14.41^{+0.32}_{-0.30}\) \\
\(\log\Phi_{2}(\mathrm{H^{0}})\) & Ionizing flux incident on low-pressure clouds [\(\mathrm{s^{-1}\,cm^{-2}}\)] & \(11.54^{+0.28}_{-0.30}\) & \(11.53^{+0.31}_{-0.31}\) & \(11.69^{+0.32}_{-0.39}\) & \(11.58^{+0.31}_{-0.30}\) \\
\(\log(R_{1}\,\mu_{9}^{1/2})\) & Distance of high-pressure nebula times \(\sqrt{\mu_{9}}\) [cm] & \(20.02^{+0.17}_{-0.17}\) & \(20.02^{+0.18}_{-0.17}\) & \(19.89^{+0.21}_{-0.35}\) & \(20.06^{+0.19}_{-0.19}\) \\
\(\log(R_{2}\,\mu_{9}^{1/2})\) & Distance of low-pressure nebula times \(\sqrt{\mu_{9}}\) [cm] & \(21.43^{+0.20}_{-0.20}\) & \(21.40^{+0.22}_{-0.20}\) & \(21.31^{+0.22}_{-0.21}\) & \(21.94^{+0.19}_{-0.20}\) \\
\hline \(x_{1}\) & Fraction of radiation processed by high-pressure clouds & \(0.40^{+0.22}_{-0.17}\) & \(0.40^{+0.21}_{-0.16}\) & \(0.32^{+0.10}_{-0.14}\) & \(0.35^{+0.22}_{-0.15}\) \\
\(y_{1}\) & Asymmetry of high-pressure cloud distribution & \(0.52^{+0.28}_{-0.23}\) & \(0.53^{+0.26}_{-0.22}\) & \(0.56^{+0.26}_{-0.23}\) & \(0.49^{+0.28}_{-0.22}\) \\
\(x_{2}\) & Fraction of radiation processed by low-pressure clouds & \(0.20^{+0.17}_{-0.10}\) & \(0.19^{+0.16}_{-0.09}\) & \(0.33^{+0.21}_{-0.15}\) & \(0.16^{+0.15}_{-0.08}\) \\
\(y_{2}\) & Asymmetry of low-pressure cloud distribution & \(0.40^{+0.31}_{-0.20}\) & \(0.43^{+0.31}_{-0.21}\) & \(0.58^{+0.26}_{-0.25}\) & \(0.43^{+0.29}_{-0.21}\) \\
\(z_{1}\) & Line-of-sight obscuration fraction by high-pressure clouds & \(0.22^{+0.20}_{-0.15}\) & \(0.21^{+0.20}_{-0.14}\) & \(0.21^{+0.19}_{-0.15}\) & \(0.19^{+0.19}_{-0.14}\) \\
\(z_{2}\) & Line-of-sight obscuration fraction by low-pressure clouds & \(0.21^{+0.19}_{-0.14}\) & \(0.20^{+0.19}_{-0.14}\) & \(0.21^{+0.19}_{-0.15}\) & \(0.19^{+0.20}_{-0.13}\) \\
\(1-x_{1}-x_{2}\) & Isotropic escape fraction & \(0.35^{+0.23}_{-0.21}\) & & & \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Inferred model parameters (68% C.I.) for the default two-component model ("Two-Comp.") and the alternative modeling assumptions described in the text.

If we focus on the central compact stellar component and assume that it is enclosed within \(R_{1}\), the mean stellar surface density is

\[\Sigma_{\star}=\frac{M_{\star,1}}{\pi\,R_{1}^{2}}=1.9\times 10^{5}\,\frac{M_{\odot}}{\mathrm{pc}^{2}}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)\,\left(\frac{R_{1}\,\sqrt{\mu_{9}}}{10^{20.0}\,\mathrm{cm}}\right)^{-2}, \tag{5}\]

independent of the uncertain magnification \(\mu_{9}\). This is on the order of the maximum surface density \(\Sigma_{\star}\simeq 10^{5}\,M_{\odot}\,\mathrm{pc}^{-2}\) observed in stellar systems in the local Universe (Hopkins et al., 2010; Krumholz et al., 2019), though a top-heavy IMF can lower this value. The line-of-sight cloud velocity corresponding to a radius \(R_{1}\) is

\[v(R_{1})= \left(\frac{G\,M_{\star,1}}{3\,R_{1}}\right)^{\frac{1}{2}}\approx 90\,\mathrm{km}\,\mathrm{s}^{-1}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)^{1/2}\] \[\times\left(\frac{R_{1}\,\sqrt{\mu_{9}}}{10^{20.0}\,\mathrm{cm}}\right)^{-1}\,\left(\frac{\mu_{9}}{10}\right)^{-\frac{1}{4}}, \tag{6}\]

where we exclude the extended stellar component, as its size extends significantly beyond \(R_{1}\). This is larger than the observed \(40\,\mathrm{km}\,\mathrm{s}^{-1}\) quoted by Vanzella et al. (2022). Adopting a more top-heavy IMF would lead to a smaller \(\mu_{9}\,M_{\star,1}\), and hence a smaller \(v(R_{1})\), which may mitigate the discrepancy.

### Nebula Mass and Column Density

Depending on the column density, the high-pressure clouds amount to a mass

\[M_{c}\simeq 1.4\times 4\pi\,R_{1}^{2}\,N_{\mathrm{H}}\,m_{p}\,x_{1}\] \[= 6\times 10^{5}\,M_{\odot}\,\left(\frac{N_{\mathrm{H}}}{10^{23}\,\mathrm{cm}^{-2}}\right)\,\left(\frac{\sqrt{\mu_{9}}R_{1}}{10^{20.0}\,\mathrm{cm}}\right)^{2}\] \[\times\left(\frac{\mu_{9}}{10}\right)^{-1}\,\left(\frac{x_{1}}{0.4}\right). \tag{7}\]

For comparison, the cumulative stellar mass loss up to \(t_{\mathrm{age}}=(3\text{--}4)\,\mathrm{Myr}\) from the central stellar component is

\[\Delta M_{\star}=(1\text{--}2)\times 10^{6}\,M_{\odot}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)\,\left(\frac{\mu_{9}}{10}\right)^{-1}. \tag{8}\]

This is 2-4% of the initial stellar mass assuming the standard IMF.
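The fiducial evaluations of Eqs. (5)-(7) are easily reproduced; a sketch in cgs units (constants rounded, variable names ours):

```python
import numpy as np

G, M_SUN, M_P, PC = 6.674e-8, 1.989e33, 1.673e-24, 3.086e18   # cgs constants
mu9   = 10.0
Mstar = 10**8.8 / mu9 * M_SUN            # central stellar mass [g]
R1    = 10**20.0 / np.sqrt(mu9)          # inner nebula radius [cm]

Sigma   = (Mstar / M_SUN) / (np.pi * (R1 / PC)**2)             # Eq. (5): ~1.9e5 M_sun/pc^2
v_R1    = np.sqrt(G * Mstar / (3.0 * R1)) / 1e5                # Eq. (6): ~90 km/s
M_cloud = 1.4 * 4 * np.pi * R1**2 * 1e23 * M_P * 0.4 / M_SUN   # Eq. (7): ~6e5 M_sun
```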
We attempted to constrain the H I column \(N_{\mathrm{HI}}\) from the data by fixing the cluster and nebula intensive parameters to their best-fit values while allowing the extensive and geometric parameters to vary freely. Additionally, we varied \(\log N_{\mathrm{HI}}\,[\mathrm{cm}^{-2}]\) from 21 to 24 in the Cloudy calculations. However, we were unable to obtain any significant constraint on \(N_{\mathrm{HI}}\), mainly because no emission lines that specifically probe the cool H I gas are available, and because dust attenuation through the H I column is strongly degenerate with the extensive and geometric parameters. UV absorption lines are not useful either, because the dusty clouds severely attenuate any UV light emerging from the shaded faces.

Nevertheless, dynamical considerations constrain \(N_{\mathrm{HI}}\) for the high-pressure clouds. The H II gas has a thickness \(\sim 0.02\,\mathrm{pc}/(n_{\mathrm{H}}/10^{5}\,\mathrm{cm}^{-3})\) and a column \(N_{\mathrm{HII}}\sim 6\times 10^{21}\,\mathrm{cm}^{-2}/(n_{\mathrm{H}}/10^{5}\,\mathrm{cm}^{-3})\). If the H II gas is density bounded, \(n_{\mathrm{H}}\sim 10^{5}\,\mathrm{cm}^{-3}\) leads to sufficient radiation-pressure coupling with LyC photons to rapidly eject the clouds out of the cluster potential, on a timescale short compared to the free-fall timescale

\[t_{\mathrm{ff}} =\frac{\pi}{2}\,\left(\frac{R_{1}^{3}}{G\,M_{\star,1}}\right)^{\frac{1}{2}}\] \[\approx 10^{5}\,\mathrm{yr}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)^{-\frac{1}{2}}\,\left(\frac{R_{1}\,\sqrt{\mu_{9}}}{10^{20.0}\,\mathrm{cm}}\right)^{\frac{3}{2}}\,\left(\frac{\mu_{9}}{10}\right)^{-\frac{1}{4}}. \tag{9}\]

If instead the H II gas is ionization bounded, a minimum column \(\log N_{\mathrm{HI}}>22.5\) is required for the cluster gravity to counteract radiation pressure and retain the cloud (Fall et al., 2010; Murray et al., 2010; Krumholz et al., 2019). For even larger \(N_{\rm HI}\), the bulk of the cloud would fall back onto the cluster, but no faster than \(t_{\rm ff}\). For \(23<\log N_{\rm HI}<25\), dust-reprocessed IR radiation trapped in the H I gas may drive a slow outflow (Menon et al., 2022), albeit the outflow velocity may be insignificant given the large escape velocity \(v_{\rm esc}\sim 100\,{\rm km\,s^{-1}}\) involved here. Seeing short-lived clouds with a lifetime \(<10^{5}\,{\rm yr}\) would appear unnatural given \(t_{\rm age}\lesssim 4\,{\rm Myr}\), unless these clouds are continuously replenished.

Figure 9: Spatially resolved modeling of Image 10. In the top panels, the continuum image in F814W shows that the stars of the LyC cluster can be modeled as the sum of a central unresolved component and an extended component modeled as a 1D Gaussian profile (due to lensing shear). The 1D Gaussian is centered at an angular separation of \(0.015\arcsec\pm 0.001\arcsec\) from the central component, with a Gaussian width \(\sigma=0.15\arcsec\pm 0.01\arcsec\). However, in the F275W filter, which probes escaping LyC radiation, the extended component is not visible. This implies that only LyC photons emitted from the central compact stellar component manage to escape.

The dusty cloud interior should cool down to a low temperature typical of molecular clouds, with an increased density of \(\sim 10^{6}\)-\(10^{7}\,{\rm cm^{-3}}\) as a result of pressure equilibrium. The clouds then have a typical thickness

\[l=0.03\,{\rm pc}\,\left(\frac{N_{\rm H}}{10^{23}\,{\rm cm^{-2}}}\right)\,\left(\frac{n_{\rm H}}{10^{6}\,{\rm cm^{-3}}}\right)^{-1}. \tag{10}\]
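The H II-layer thickness and column quoted above follow from ionization balance, \(d\sim\Phi/(n^{2}\,\alpha_{\rm B})\), and the free-fall time from Eq. (9); an order-of-magnitude check (the case-B recombination coefficient is an assumed representative value):

```python
import numpy as np

Phi, n_H, alpha_B = 10**14.3, 1e5, 2.6e-13      # cm^-2 s^-1, cm^-3, cm^3 s^-1
d_HII = Phi / (n_H**2 * alpha_B)                # ~8e16 cm ~ 0.025 pc
N_HII = n_H * d_HII                             # ~8e21 cm^-2, same order as ~6e21

G, M_SUN = 6.674e-8, 1.989e33
M, R1 = 10**8.8 / 10.0 * M_SUN, 10**20.0 / np.sqrt(10.0)
t_ff = (np.pi / 2.0) * np.sqrt(R1**3 / (G * M)) / 3.156e7    # Eq. (9): ~1e5 yr
```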
Since the dusty interior can cool down to very low temperatures, the pressure support in the cloud interior is likely dominated by turbulence (McKee and Tan, 2003). In this case, Eq. (10) holds whether the interior is atomic or molecular. Clouds are out of virial equilibrium and subject to collapse for pressures \(P\lesssim 3\pi\,G\,\Sigma^{2}/20\) (Krumholz et al., 2019). Correspondingly, if the high-pressure clouds have a column \(N_{\rm H}\gtrsim 0.9\times 10^{24}\,{\rm cm^{-2}}\,(P_{1}/10^{9}\,{\rm K\,cm^{-3}})^{1/2}\) and survive long enough near the star cluster, they can form stars internally. At this value of \(N_{\rm H}\), the total cloud mass may be too large to come entirely from cluster mass loss (c.f. Eq. (8)), and the clouds would have to be a mixture of stellar ejecta and leftover natal gas.

A delicate balance between outward radiation pressure and inward gravity may be unrealistic. If a cloud migrates to a different radius from the cluster center, the characteristic ambient pressure will change, which will in turn result in a pressure change in the cloud's neutral interior. Assume that the ambient pressure scales with cluster-centric radius \(r\) as \(P\propto r^{-\alpha}\) with \(\alpha>0\); if this pressure scale is set by radiation pressure, for instance, we may have \(\alpha=2\). If the dust-shielded, cold cloud interior is supported either by thermal pressure at a fixed temperature or by turbulence at roughly a fixed velocity dispersion, the particle number density will scale as \(n\propto r^{-\alpha}\), and the mass column density will scale as \(\Sigma\propto r^{-2\,\alpha/3}\) under the assumption that the cloud geometry maintains an order-unity aspect ratio. This means that \(P/\Sigma^{2}\propto r^{\alpha/3}\). If gravity overcomes radiation pressure and pulls a cloud to a smaller \(r\), it further pressurizes, and the threshold for Jeans instability is easier to reach. Conversely, if radiation pressure overcomes gravity and pushes the cloud outward, the cloud will expand and will never form stars. If the observed high-pressure clouds can survive other possible mechanisms of shattering or destruction and sink toward the cluster center under their own weight, they may form stars internally. When the star cluster is mass segregated, the most intensely radiating stars cluster in the core; in this situation, radiation pressure increases more steeply than gravity at distances smaller than the cluster's outer edge. We note that this may allow an intermediate range of distances where the balance between radiation pressure and gravity could be stable for clouds of suitable column density. For a realistic cloud geometry, radiation likely manages to expel peripheral parts of the clouds along low column density sight-lines even if the cloud bulk is retained by gravity (Yeh and Matzner, 2012; Thompson and Krumholz, 2016; Raskutti et al., 2017). Without replenishment, the clouds are not expected to survive for many millions of years. It would be interesting to observationally determine \(N_{\rm H}\) in order to better understand the dynamical origin and fate of the pressurized dusty clouds.

### Nebula Pressure

For the high-pressure clouds, the ionizing flux \(\log\Phi_{1}({\rm H^{0}})\approx 14.3^{+0.3}_{-0.2}\) imparts a radiation pressure \(\log P_{1,{\rm rad}}=9.0\)-9.5. Further including dust absorption of UV light increases the total radiation pressure by about threefold, to \(\log P_{1,{\rm rad}}=9.5\)-10.0.
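The quoted radiation pressure follows from the inferred ionizing flux as \(P_{\rm rad}\sim\Phi\,\langle h\nu\rangle/(c\,k_{\rm B})\) in K cm\({}^{-3}\); the mean ionizing photon energy \(\langle h\nu\rangle\sim 20\) eV used below is an assumed, representative value for a young population:

```python
import numpy as np

EV, C, K_B = 1.602e-12, 2.998e10, 1.381e-16   # erg/eV, cm/s, erg/K
P_rad = 10**14.3 * 20.0 * EV / (C * K_B)      # in K cm^-3
print(np.log10(P_rad))   # ~9.2, within the quoted log P_1,rad = 9.0-9.5
```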
Compression by radiation appears to be a viable dynamical origin for the high-pressure clouds within our inference uncertainty. For our best-fit incident SED, external radiation pressure equals the H II gas pressure if \(\log U_{1}=-1.4\). From the C III]\(\lambda\lambda\)1906,1908 line ratio, the H II density \(\log n_{\rm e}\approx 5\) (Kewley et al., 2019) at \(T_{\rm e}=15000\,{\rm K}\) corresponds to a gas pressure \(\log P_{1,{\rm gas}}\approx 9.5\). The high-pressure clouds are therefore in the regime where external radiation pressure is comparable to, or slightly higher than, the H II gas pressure. For the low-pressure clouds, on the other hand, the ionizing flux \(\log\Phi_{2}({\rm H^{0}})\approx 11.5^{+0.3}_{-0.3}\) converts into a radiation pressure \(\log P_{2,{\rm rad}}\approx 6.6\)-7.2, which is subdominant to the inferred total pressure \(\log P_{2}\). Radiation pressure therefore appears dynamically unimportant for the low-pressure clouds.

Another possible source of confining pressure is kinetic feedback from stellar winds and SN explosions (Koo and McKee, 1992a,b; Rogers and Pittard, 2013; Calura et al., 2015; Wareing et al., 2017), and other means of cluster mass loss (de Mink et al., 2009). Taking \(Z=0.2\,Z_{\odot}\) and \(t_{\rm age}=2\)-4 Myr, the BPASS (v2.2) model predicts a total mass loss rate

\[\dot{M}_{\star,1}\approx 1\,M_{\odot}\,\mathrm{yr}^{-1}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)\,\left(\frac{\mu_{9}}{10}\right)^{-1}, \tag{11}\]

and a total mechanical luminosity dominated by the first SN explosions (onset at \(t_{\mathrm{age}}\approx 3\,\mathrm{Myr}\))

\[L_{\mathrm{mech}}\approx 6\times 10^{41}\,\mathrm{erg}\,\mathrm{s}^{-1}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)\,\left(\frac{\mu_{9}}{10}\right)^{-1}. \tag{12}\]

If the first SNe have not yet exploded, pure wind contributions to \(L_{\mathrm{mech}}\) would be a factor of \(\sim 5\) smaller. If this mass loss thoroughly thermalizes to drive a supersonic, non-radiative cluster wind (Chevalier and Clegg, 1985; Canto et al., 2000; Wunsch et al., 2011; Silich et al., 2011), the wind reaches a velocity \(v_{\mathrm{w}}=(2\,L_{\mathrm{mech}}/\dot{M}_{\star,1})^{1/2}\approx 1400\,\mathrm{km}\,\mathrm{s}^{-1}\), with the uncertainties on \(\mu_{9}\) and \(M_{\star,1}\) canceling out. If the cluster wind is fully blown well within radius \(R_{1}\), it should exert a ram pressure on the high-pressure clouds:

\[P_{\mathrm{ram}} =\frac{\dot{M}_{\star,1}\,v_{\mathrm{w}}}{4\pi R_{1}^{2}}=3.8\times 10^{9}\,\mathrm{K}\,\mathrm{cm}^{-3}\,\left(\frac{\mu_{9}\,M_{\star,1}}{10^{8.8}\,M_{\odot}}\right)\] \[\times\left(\frac{R_{1}\,\sqrt{\mu_{9}}}{10^{20.0}\,\mathrm{cm}}\right)^{-2}\,\left(\frac{v_{\mathrm{w}}}{1400\,\mathrm{km}\,\mathrm{s}^{-1}}\right). \tag{13}\]

This is comparable to the inferred total pressure \(P_{1}\) and to the expected radiation pressure. Theoretical studies have suggested that for a very compact, massive star cluster the cluster wind may be subject to efficient or even catastrophic radiative cooling (Silich et al., 2004; Wunsch et al., 2011; Gray et al., 2019; Danehkar et al., 2021), which can further reduce kinetic feedback. It is intriguing to investigate whether the cluster ejecta rapidly cool and slow down before reaching radius \(R_{1}\).

### Nitrogen Enrichment

The N III] multiplet near 1750 \(\AA\) (Feldman and Doschek, 1979) is not commonly reported as a strong UV line from H II nebulae, as N is significantly less abundant than O and C in the ISM.
Among a sample of 45 analogs of high-\(z\) starburst galaxies at \(0.002<z<0.182\) from the CLASSY survey (Berg et al., 2022), there is only one strong detection of this multiplet, from the blue compact dwarf galaxy Mrk 996, and a lower significance detection at S/N \(\sim 3\) from another system (Mingozzi et al., 2022). The N III] detection in Mrk 996 is toward the nuclear region and indicates a hugely enhanced N abundance, log(N/O) = 0.0, possibly related to ejecta from WN stars. However, a large velocity dispersion \(\sim 450\,\mathrm{km}\,\mathrm{s}^{-1}\) is associated with the N III] emitter (Thuan et al., 1996), and it is unclear whether the example in Mrk 996 shares a similar origin in nebulae around a super star cluster, as in Sunburst. On the other hand, a three-fold N enrichment, log(N/O) = \(-0.85\), localized to a few super star clusters is reported in NGC 5253 (Kobulnicky et al., 1997; Lopez-Sanchez et al., 2007).

The detection of N III]\(\lambda\lambda\)1750,1752 in this paper, with log(N/O) = \(-0.21^{+0.10}_{-0.11}\), is localized to the LyC cluster vicinity but unseen in other parts of the arc. Mestric et al. (2023) mentioned the detection of N III]\(\lambda\)1750 from the LyC cluster without further comment. The fact that the \(\lambda\lambda\)1750,1752 lines are the strongest among the five transitions of the N III] multiplet, and that the observed line ratio \(\lambda 1750/\lambda 1752\approx 2\), are both consistent with the inferred density of the high-pressure clouds. This indicates an N abundance increased by \(\sim 12\)-19 fold relative to the median interstellar N abundance, log(N/O) \(\approx-1.4\) at \(Z\approx 0.2-0.3\,Z_{\odot}\) (Molla et al., 2007; Berg et al., 2019), in the very vicinity of the star cluster, a scenario previously theorized by Izotov et al. (2006). This level of N enrichment is higher than what is found near several super star clusters in NGC 5253 (Kobulnicky and Kewley, 2004), and the N/O ratio is higher than what is observed in low-metallicity extragalactic H II regions (see Fig. 11 of Izotov et al. 2006). In contrast to the \(z\approx 0\) H II regions, the nitrogen enhancement we find in the nebular gas is comparable to that seen in post-turnoff dwarf stars of the Milky Way's globular clusters, such as NGC 6752 (Carretta et al., 2005).

Figure 10: Posterior distributions for the measured ionizing photon rate, ionizing flux, and the typical distance from the cluster center for each cloud population. Contour levels and histogram confidence intervals follow Fig. 5. Refer to the “Two-Comp” model in Table 6 for the 68% C.I.'s of the parameters.

Shortly after the completion of this work, four preprints appeared that examine remarkably strong nitrogen emission lines seen in GN-z11, a \(z\approx 11\) galaxy confirmed through JADES NIRSpec IFU spectroscopy (Bunker et al., 2023; Cameron et al., 2023; Senchyna et al., 2023; Charbonnel et al., 2023). Using Cloudy photoionization calculations, Senchyna et al. (2023) showcase a similarly high density, \(n_{e}\gtrsim 10^{5}\,\mathrm{cm^{-3}}\), for the photoionized gas, and a high nebular N abundance, log(N/O) \(\sim-0.25\), at a metallicity of \(Z\approx 0.1Z_{\odot}\). This object may be a high-redshift, high-mass (stellar mass \(\sim 10^{8}\)-\(10^{9}\,M_{\odot}\); Bunker et al. 2023) version of the enrichment processes seen in Sunburst.
Figure 11 shows the inferred LyC cluster nitrogen abundance compared to GN-z11, as well as to individual chemically anomalous stars in a local globular cluster, local star forming galaxies, and local HII regions.

Figure 11: Nitrogen abundances of the LyC cluster compared to post-turnoff dwarf stars of local globular clusters (NGC 6752; Carretta et al., 2005), star forming regions of NGC 5253 (López-Sánchez et al., 2007), local star forming galaxies (Berg et al., 2022; Stephenson et al., 2023, CLASSY), and local HII regions in SDSS galaxies (Pilyugin et al., 2012). Our photoionization calculation infers a nitrogen abundance comparable to the highest seen in local HII regions and star forming galaxies. Our measured abundance pattern falls on a diagonal line (running from upper left to lower right) traced by abundances measured from chemically anomalous dwarf stars in the Milky-Way globular cluster NGC 6752, which indicates varying degrees of mixing between N-enriched gas and chemically normal gas. Finally, we also compare to GN-z11, a \(z\sim 11\) galaxy hosting candidate globular cluster progenitors (Bunker et al., 2023; Cameron et al., 2023; Charbonnel et al., 2023; Senchyna et al., 2023).

We surmise that heavy N enrichment might be typical in the vicinity of very young super star clusters, and is indicative of either pollution of natal gas by WN stars or LBVs, or condensation solely from the wind material of those stars, or even ejecta from interacting binary massive stars (de Mink et al., 2009), which happens as early as \(\sim 3\)-\(4\,\mathrm{Myr}\) after the onset of star formation. The CNO cycle \({}^{14}\mathrm{N}(\mathrm{p},\gamma)^{15}\mathrm{O}\) bottleneck causes efficient conversion of C and O to N, such that ejecta mixed with H-burning-processed material can be significantly N enhanced but is expected to have a conserved sum of C, N and O. Given the solar C:N:O ratio of 4:1:7, complete conversion of C and O into N can cause up to a 12-fold increase in the absolute abundance of N in ejected material. Since the observed photoionized gas is not extremely deprived of C and O, the CNO-processed material must mix with other chemically normal gas. The observed high N/O \(\sim 0.5\)-\(0.8\) indicates that the CNO-processed material accounts for an order-unity mass fraction of the dense gas (also see Figure 1 of Charbonnel et al. (2023)). Since we have estimated that the mass of the dense, N-enriched gas is substantial compared to the total massive star mass loss (Eq. (7)) based on the arguments of column density and efficient turbulence mixing (see the remainder of this sub-section), we suggest that the high N abundance is evidence for radiation-hydrodynamic processes that have caused massive star ejecta to efficiently condense and be retained in the cluster potential. A scenario in which the high-pressure clouds condensed solely out of cooled SN ejecta is not favored, since the high-pressure clouds do not appear rich in oxygen and carbon. The inferred cluster age constraint \(t_{\mathrm{age}}\lesssim 4\,\mathrm{Myr}\) makes it unclear whether the first SNe have exploded, and if those have exploded, whether the SN ejecta remains hot and quickly leaks out of the cluster vicinity via low density channels (Lopez et al., 2011; Rogers and Pittard, 2013, 2014). At radius \(R_{1}\), the escape velocity out of the cluster's potential \(v_{\mathrm{esc}}(R_{1})\sim 220\,\mathrm{km\,s^{-1}}\) (a factor of \(\sqrt{6}\) larger than Eq. (6)) is slow compared to that of the typical O-star/WR-star wind or a fully blown, hot cluster superwind. This favors mass-loss processes with relatively slow outflow velocities, such as dense LBV winds or non-conservative binary mass transfer. In our best-fit model, the high-pressure clouds have a thickness given by Eq. (10), so that turbulent motion \(\sigma\approx(P_{1}/n_{\mathrm{H}}\,m_{\mathrm{p}})^{1/2}\approx 4\,\mathrm{km\,s^{-1}}\,(n_{\mathrm{H}}/10^{6}\,\mathrm{cm^{-3}})^{-1/2}\) should quickly mix N throughout the cloud on a timescale \[t_{\rm mix}\approx\frac{l}{\sigma}=8000\,{\rm yr}\,\left(\frac{N_{\rm H}}{10^{23}\,{\rm cm}^{-2}}\right)\,\left(\frac{n_{\rm H}}{10^{6}\,{\rm cm}^{-3}}\right)^{-\frac{1}{2}}. \tag{14}\]
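A minimal numeric check of Eq. (14) (ours): with cloud thickness \(l=N_{\rm H}/n_{\rm H}\) and the turbulent speed \(\sigma\approx 4\,{\rm km\,s^{-1}}\) quoted above, the fiducial mixing time is indeed of order \(10^{4}\) yr.

```python
# Mixing-time estimate of Eq. (14): t_mix = l / sigma with l = N_H / n_H.
YR = 3.156e7    # s
N_H = 1e23      # cm^-2, fiducial column density
n_H = 1e6       # cm^-3, fiducial density
sigma = 4e5     # cm/s; sigma ~ 4 km/s as quoted for n_H = 1e6 cm^-3

l = N_H / n_H                               # cloud thickness, cm
print(f"t_mix = {l / sigma / YR:.0f} yr")   # ~8000 yr, matching Eq. (14)
```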
Given the high log(N/O) and the estimated total mass for the high-pressure clouds Eq. (7), if the clouds are gravitationally retained, we estimate a total nitrogen mass under the assumption of thorough mixing \[M_{N}\approx 500\,M_{\odot}\,\left(\frac{N_{\rm H}}{10^{23}\,{\rm cm}^{-2}}\right)\,\left(\frac{\mu_{9}^{1/2}R_{1}}{10^{20.0}\,{\rm cm}}\right)^{2}\] \[\times\left(\frac{\mu_{9}}{10}\right)^{-1}\,\left(\frac{x_{1}}{0.4}\right)\,\left(\frac{{\rm N/O}}{0.6}\right)\,\left(\frac{Z}{0.2\,Z_{\odot}}\right). \tag{15}\] By \(3\,{\rm Myr}\), about \(8400\,(\mu_{9}\,M_{\star,1}/10^{8.8}\,M_{\odot})\,(\mu_{9}/10)^{-1}\) stars more massive than \(140\,M_{\odot}\) should have evolved to the end of their lives (Bressan et al., 2012). By \(4\,{\rm Myr}\), the minimum mass becomes \(60\,M_{\odot}\) and the number becomes \(3.6\times 10^{4}\,(\mu_{9}\,M_{\star,1}/10^{8.8}\,M_{\odot})\,(\mu_{9}/10)^{-1}\). For an average nitrogen yield of (0.01-0.1) \(M_{\odot}\) per such star (Prantzos et al., 2018), the maximum nitrogen pollution should be about \(\sim(84\)-\(3600)\,M_{\odot}\,(\mu_{9}\,M_{\star}/10^{8.8}\,M_{\odot})\,(\mu_{9}/10)^{-1}\), an estimate that is insensitive to IMF assumptions. The high-pressure clouds therefore appear to have retained an order-unity fraction of the ejected nitrogen. As we have argued before, an order-unity fraction of the cloud gas must have originated from massive star ejecta to account for the observed enhancement. Given the expected nitrogen budget, Eq. (15) suggests that \(\log N_{\rm H}\gtrsim 24\) is unlikely unless the clouds are not well mixed and are only partially polluted with nitrogen on the illuminated side, which includes the H II zone. The column density of the high-pressure clouds may barely reach the threshold of secondary star formation without violating the nitrogen budget constraint, unless they pressurize further.

## 8 Conclusion

High-SNR archival data, owing to large gravitational magnification, have enabled us to closely study a newborn, LyC-leaking super star cluster in the Sunburst galaxy at Cosmic Noon. We have used the BPASS stellar population SED templates that account for radiation from binary stars. Using these templates as the input, we have computed the observed spectrum from the system by performing self-consistent photoionization calculations using Cloudy, taking into account the dependence on nebula pressure, incident radiation strength, and nebula geometry. We have constructed a grid of spectral models from these calculations and have applied them to inferring physical parameters describing the super star cluster (summarized in Table 6).
We have demonstrated that a detailed, physically motivated photoionization model can be constructed for a super star cluster surrounded by photoionized clouds, and that a joint photometric and spectroscopic analysis enables comprehensive extraction of key physical parameters despite the tremendous cosmic distance to the source. We have determined that the LyC cluster is very massive (\(M_{\star}\sim{\rm few}\times 10^{7}\,M_{\odot}\)), \(\lesssim 4\,{\rm Myr}\) old, and has a sub-solar metallicity \(Z\approx 0.2\,Z_{\odot}\). Little dust was detected from the large-scale ISM of the host galaxy. Assuming the standard IMF, the cluster's surface stellar mass density is on the order of the maximum observed in massive stellar systems in the local Universe, \(\Sigma\sim 10^{5}\,M_{\odot}\,{\rm pc}^{-2}\) (Hopkins et al., 2010). We have shown that the cluster stars irradiate highly-pressurized (\(P\sim 10^{9}\,{\rm K}\,{\rm cm}^{-3}\)), probably density-bounded photoionized clouds at \(\lesssim 10\) pc. The clouds show normal C/O and Ne/O abundance ratios but appear highly enriched with nitrogen, having \(\log({\rm N/O})\approx-0.21\). The clouds also exhibit depleted gas-phase Si and likely have a self-shielded, dusty interior. To avoid rapid ejection from the system by radiation pressure, these clouds must have large enough column densities and hence amount to at least a few \(\times 10^{5}\,M_{\odot}\) of gas, and \(\sim 500\,M_{\odot}\) of nitrogen if chemically thoroughly mixed. This requires efficiently retaining nitrogen from condensed massive star ejecta, as by \(4\) Myr stars heavier than \(60\,M_{\odot}\) can yield up to \(3600\,M_{\odot}\) of nitrogen in optimistic estimates. Detection of the optical [O II] doublet, which requires a low-ionization condition, implies that a less dense nebula extends to tens of pc or beyond, and might have already been resolved in a few emission-line dominated HST filters. We have found that nitrogen is probably not as significantly enriched in this low-density component. While in our modeling we have introduced two cloud populations with disparate properties, it might as well be that there is a continuous distribution of photo-ionized clouds, with decreasing pressure, density and ionization parameter as the cluster-centric distance increases. Through modeling the nebula emission relative to direct star light, we have inferred that (10-50)% of the ionizing radiation escapes the cluster's vicinity along unblocked sight lines, and that this fraction is as high as (30-70)% along our sight line, consistent with previous works (Rivera-Thorsen et al., 2019). The origin of the high-pressure clouds probably connects to the radiation-hydrodynamic evolution of massive star winds, SNe, or binary ejecta from massive stars in the cluster potential. Mass loss mechanisms that predominantly produce slowly moving ejecta (\(\lesssim 300\,{\rm km}\,{\rm s}^{-1}\)) are favored. The inferred element abundance is more consistent with pollution by nitrogen-enriched H-burning-processed material than oxygen-rich He-burning-processed material. Since observational selection by strong lensing is likely independent of these source properties, we suggest that this phenomenon may be commonplace in the vicinity of newborn super star clusters. We have argued that these clouds likely have a large column density \(N_{\rm H}\gtrsim 10^{22.5}\,{\rm cm}^{-2}\).
They might allow star formation in their interior if \(N_{\rm H}\gtrsim 10^{24}\,{\rm cm}^{-2}\), or if they sink further toward the cluster in the future. It is intriguing to ponder whether the nitrogen-enriched gas connects to the puzzle of multiple stellar populations in GCs, as second-generation GC stars often exhibit elevated N abundance (Bastian & Lardo, 2018). There are future observational prospects for the Sunburst galaxy to determine the physical origin of the dense photoionized clouds. Upcoming JWST imaging and spatially resolved spectroscopy (GO #2555; PI: T. E. Rivera-Thorsen) in the near-IR will complement existing HST data and will put our nebula model to further test. An interesting possibility would be to determine the column density of the dusty H I component, or even to detect a molecular component with JWST/MIRI or ALMA. Moreover, deep Chandra X-ray observations might constrain cluster wind/ejecta radiative cooling and determine whether radiation feedback is more important than kinetic feedback in the system.

## Acknowledgments

The authors would like to thank Xiao Fang, Brenda Frye, Alex Ji and Wenbin Lu for helpful discussions. MP was funded through the NSF Graduate Research Fellowship grant No. DGE 1752814, and acknowledges the support of System76 for providing computer equipment. LD acknowledges research grant support from the Alfred P. Sloan Foundation (Award Number FG-2021-16495). The research of BT is funded by the Gordon and Betty Moore Foundation through Grant GBMF5076, and by NASA ATP-80NSSC18K0560 and ATP-80NSSC22K0725. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. The research of CFM is supported in part by NASA ATP grant 80NSSC20K0530.
2307.14678
Exposing Hypersensitivity in Quantum Chaotic Dynamics
We demonstrate that the unitary dynamics of a multi-qubit system can display hypersensitivity to initial state perturbation. This contradicts the common belief that the classical approach based on the exponential divergence of initially neighboring trajectories cannot be applied to identify chaos in quantum systems. To observe hypersensitivity we use a quantum state metric, introduced by Girolami and Anza in [Phys. Rev. Lett. 126 (2021) 170502], which can be interpreted as a quantum Hamming distance. As an example of a quantum system, we take the multi-qubit implementation of the quantum kicked top, a paradigmatic system known to exhibit quantum chaotic behavior. Our findings confirm that the observed hypersensitivity corresponds to commonly used signatures of quantum chaos. Furthermore, we demonstrate that the proposed metric can detect quantum chaos in the same regime and under analogous initial conditions as in the corresponding classical case.
Andrzej Grudka, Paweł Kurzyński, Adam S. Sajna, Jan Wójcik, Antoni Wójcik
2023-07-27T08:07:40Z
http://arxiv.org/abs/2307.14678v1
# Exposing Hypersensitivity in Quantum Chaotic Dynamics ###### Abstract We demonstrate that the unitary dynamics of a multi-qubit system can display hypersensitivity to initial state perturbation. This contradicts the common belief that the classical approach based on the exponential divergence of initially neighboring trajectories cannot be applied to identify chaos in quantum systems. To observe hypersensitivity we use a quantum state metric, introduced by Girolami and Anza in [Phys. Rev. Lett. 126 (2021) 170502], which can be interpreted as a quantum Hamming distance. As an example of a quantum system, we take the multi-qubit implementation of the quantum kicked top, a paradigmatic system known to exhibit quantum chaotic behavior. Our findings confirm that the observed hypersensitivity corresponds to commonly used signatures of quantum chaos. Furthermore, we demonstrate that the proposed metric can detect quantum chaos in the same regime and under analogous initial conditions as in the corresponding classical case. _Introduction._ Hypersensitivity to small perturbations is a defining characteristic of classical chaos [1]. On the other hand, it is usually assumed that there is no state sensitivity in the quantum realm [2; 3; 4]. To deal with this problem, different methods to detect chaotic behavior in the quantum domain were proposed. Instead of state sensitivity, one can observe sensitivity to Hamiltonian perturbation [4], which can be measured, e.g., by the Loschmidt echo [5; 6; 7]. A concept related to the Loschmidt echo is the out-of-time-order correlator (OTOC) [8; 9; 10; 11]. Also, commonly used methods to detect quantum chaos consist of examining the statistics of energy levels and eigenstates [12; 13; 14; 15]. Finally, it is also worth mentioning entanglement dynamics as yet another powerful tool to investigate this problem [16; 17; 18; 19]. In this Letter, contrary to common belief, we show that it is still possible to define quantum chaos with the primordial concept of a quickly growing distance between the original and the perturbed state. This possibility is based on two observations. First, quantum states are not represented as points in phase space, but as vectors in Hilbert space. Second, a scalar product (overlap) between quantum states is invariant under unitary dynamics and, consequently, no sensitivity to perturbation can be detected through any metric based on it. However, one can find a different metric between quantum states, which is not invariant under unitary dynamics. Here we use a slight modification of the metric proposed by Girolami and Anza [20], which can be called a quantum Hamming distance (QHD). We apply QHD to the multi-qubit implementation of the quantum kicked top. We choose it because QHD is extremely well suited to systems living in a Hilbert space possessing a natural tensor product structure. In addition, the kicked top is a very well-known system [2; 16; 17; 18; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] and our goal is not so much to study its behavior as to prove that divergence of states in Hilbert space, as measured by QHD, can provide a reliable description of chaotic dynamics. _Quantum Hamming Distances._ Let us emphasize that the need for a non-overlap-based metric is not limited to the field of quantum chaos. An important concern regarding the overlap metric is its inability to detect differences between states that are physically significant.
Let us consider as an example three states of \(n\) qubits \[|\psi_{1}\rangle = |000\ldots 00\rangle, \tag{1}\] \[|\psi_{2}\rangle = |000\ldots 01\rangle, \tag{2}\] \[|\psi_{3}\rangle = |111\ldots 11\rangle. \tag{3}\] Due to their mutual orthogonality, the overlap metric yields \(dist(\psi_{1},\psi_{2})=dist(\psi_{1},\psi_{3})=dist(\psi_{2},\psi_{3})\). However, the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) bear much more physical resemblance than \(|\psi_{1}\rangle\) and \(|\psi_{3}\rangle\), or \(|\psi_{2}\rangle\) and \(|\psi_{3}\rangle\). In particular, an initial microscopic perturbation \[|\psi_{1}\rangle\rightarrow|\delta_{1}\rangle=\sqrt{1-\delta}|\psi_{1}\rangle+\sqrt{\delta}|\psi_{2}\rangle \tag{4}\] can unitarily evolve into a macroscopically different state \[|\delta_{2}\rangle=\sqrt{1-\delta}|\psi_{1}\rangle+\sqrt{\delta}|\psi_{3}\rangle. \tag{5}\] The overlap metric fails to capture the aforementioned micro-macro differences, which can be crucial in quantum chaotic systems. Put simply, the overlap metric quantifies how well two quantum states can be distinguished, and in the case of states (1-3), they are perfectly distinguishable. However, many important properties of these states extend far beyond mere distinguishability. Let us consider quantum state metrics that can be interpreted as QHDs. The fundamental idea behind QHD (given by Girolami and Anza [20]) involves a partition, denoted as \(P\), which divides the system into parts labeled as \(a=1,2,...,a_{max}\) (\(1\leq a_{max}\leq n\)). Each part contains \(k_{a}\) elements, hence \(\sum_{a=1}^{a_{max}}k_{a}=n\). Let \(\rho\) and \(\sigma\) represent two states of an \(n\)-partite system. The QHD between \(\rho\) and \(\sigma\) is defined [20] as \[D(\rho,\sigma) = \max_{P}\delta_{P}(\rho,\sigma), \tag{6}\] \[\delta_{P}(\rho,\sigma) = \sum_{a}\frac{1}{k_{a}}d(\rho_{a},\sigma_{a}), \tag{7}\] where \(\rho_{a}\) and \(\sigma_{a}\) are the states of the subsystems with respect to a given partition \(P\). In the above, \(d(\cdot,\cdot)\) stands for any metric. The definition of the QHD, specifically Eq. (7), relies on the property that a sum of metrics is also a metric. It can be observed that if \(d(\rho,\sigma)\) and \(\tilde{d}(\tilde{\rho},\tilde{\sigma})\) are two metrics, then \(D(\rho\otimes\tilde{\rho},\sigma\otimes\tilde{\sigma})=d(\rho,\sigma)+\tilde{d}(\tilde{\rho},\tilde{\sigma})\) is also a metric. All of the metric properties of \(D(\rho\otimes\tilde{\rho},\sigma\otimes\tilde{\sigma})\) can be straightforwardly derived from the metric properties of \(d(\rho,\sigma)\) and \(\tilde{d}(\tilde{\rho},\tilde{\sigma})\). For instance, the triangle inequality corresponding to \(D(\rho\otimes\tilde{\rho},\sigma\otimes\tilde{\sigma})\) is simply the sum of the two original triangle inequalities. This generalizes easily to the sum of an arbitrary number of metrics. Note that \(\delta_{P}(\rho,\sigma)\) is not a metric. This is because for some partition \(P\) two different states \(\rho\neq\sigma\) may give rise to \(\rho_{a}=\sigma_{a}\) for all \(a\), which implies \(\delta_{P}(\rho,\sigma)=0\). But a metric should be equal to zero iff \(\rho=\sigma\). This is the reason why maximization is used in the definition (6). In the original version, Girolami and Anza [20] use the Bures length (also known as the Bures angle) [37] \[d(\rho,\sigma)=\cos^{-1}\Big{(}\text{Tr}\sqrt{\sigma^{1/2}\rho\ \sigma^{1/2}}\Big{)}. \tag{8}\]
However, in the subsequent sections of this work, we employ the metric based on the trace distance [37; 38] \[d(\rho,\sigma)=\frac{1}{2}\text{Tr}|\rho-\sigma|. \tag{9}\] The choice of using the trace distance is motivated by its numerical tractability, as it is generally easier to evaluate computationally (especially when two states are almost identical) compared to the Bures length. With this choice, one obtains for our exemplary states (1-3) \[D(\psi_{1},\psi_{2})=1, \tag{10}\] \[D(\psi_{1},\psi_{3})=n \tag{11}\] and \[D(\psi_{2},\psi_{3})=n-1. \tag{12}\] This is why we call this measure a QHD. Additionally, to streamline our analysis, we note that for any partition \(P\), the QHD is lower bounded by \(\delta_{P}(\rho,\sigma)\): \[\delta_{P}(\rho,\sigma)\leq D(\rho,\sigma). \tag{13}\] Therefore, if \(\delta_{P}(\rho,\sigma)\) is hypersensitive to perturbation, then so is \(D(\rho,\sigma)\). Recently, one of us demonstrated in [39] that the QHD can detect state-sensitivity in specific quantum dynamics involving three-body interactions. However, to establish the effectiveness of such metrics in capturing quantum chaotic features, it is crucial to demonstrate their ability to detect state-sensitivity in more realistic systems, particularly those already known to exhibit quantum chaos based on other criteria. In this study, we investigate the dynamics of a quantum kicked top system consisting of \(n\) qubits [16; 29], with an interaction energy parameterized by \(\alpha\). We examine how a small initial perturbation evolves over time in this system. It is well established that as the value of \(\alpha\) increases, the system undergoes a quantum order-to-chaos transition [2; 32]. To quantify the effect of perturbations, we use the QHD. Our analysis reveals that the QHD effectively detects hypersensitivity to perturbations for \(\alpha>3\). This result establishes a meaningful connection between classical and quantum chaotic behaviors, shedding light on the intuitive links between the two. _Kicked top._ The classical kicked top model describes the dynamics of an angular momentum vector \(\mathbf{J}\), which is governed by the Hamiltonian \(H(t)=H_{0}+H_{1}\sum_{n=-\infty}^{+\infty}T\delta(t-nT)\). Here, \(H_{0}=\beta J_{y}\) represents the natural dynamics, and \(H_{1}=\frac{\alpha}{2J}J_{z}^{2}\) represents the "kicks" that occur periodically with a period of \(T\). It is well-established that the model undergoes a transition from order to chaos as the parameter \(\alpha\) increases. This transition generally takes place in the regime \(1<\alpha<6\). Here, we examine the quantum version of the kicked top model implemented on spin \(s=n/2\), which is equivalent to a system of \(n\) interacting qubits. The dynamics is discrete, and a single step of the evolution can be described by the following equation \[|\psi_{t+1}\rangle=U_{1}U_{2}|\psi_{t}\rangle, \tag{14}\] where \[U_{1}=e^{i\frac{\alpha}{4n}\sum_{i,j=1}^{n}\sigma_{z}^{(i)}\sigma_{z}^{(j)}}, \tag{15}\] and \[U_{2}=e^{i\frac{\beta}{2}\sum_{i=1}^{n}\sigma_{y}^{(i)}}. \tag{16}\] In the above equation, \(\sigma_{j}^{(i)}\) represents the Pauli-\(j\) matrix (where \(j=x,y,z\)) acting on the \(i\)-th qubit. In physical terms, this model describes a system of spin-\(1/2\) particles in a magnetic field with a magnitude proportional to \(\beta\). The spins naturally precess, but their motion is periodically interrupted by pairwise interactions. The energy of interaction is proportional to \(\alpha\).
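For concreteness, here is a minimal sketch (ours, not the authors' code) of one Floquet step \(U_{1}U_{2}\) built by brute force in the full \(2^{n}\)-dimensional space for a small \(n\); it assumes the exponent \(\alpha/4n\) in Eq. (15), which follows from \(H_{1}=\frac{\alpha}{2J}J_{z}^{2}\) with \(J=n/2\). The paper itself works in the \((n+1)\)-dimensional symmetric subspace, which is essential for large \(n\).

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# One Floquet step of the n-qubit kicked top, Eqs. (14)-(16),
# in the full 2^n-dimensional space (feasible only for small n).

def embed(op, i, n):
    """Single-qubit operator `op` acting on qubit i of n qubits."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[i] = op
    return reduce(np.kron, ops)

sz = np.diag([1.0 + 0j, -1.0])
sy = np.array([[0, -1j], [1j, 0]])

def kicked_top_step(n, alpha, beta=np.pi / 2):
    zz = sum(embed(sz, i, n) @ embed(sz, j, n)
             for i in range(n) for j in range(n))
    y_sum = sum(embed(sy, i, n) for i in range(n))
    U1 = expm(1j * alpha / (4 * n) * zz)   # Eq. (15)
    U2 = expm(1j * beta / 2 * y_sum)       # Eq. (16)
    return U1 @ U2

def coherent_state(n, theta, phi):
    """Spin coherent state |chi>^{otimes n} used as the initial state."""
    chi = np.array([np.cos(theta), np.exp(1j * phi) * np.sin(theta)])
    return reduce(np.kron, [chi] * n)

n = 6
U = kicked_top_step(n, alpha=6.0)
psi = coherent_state(n, theta=0.7, phi=1.3)
for _ in range(20):
    psi = U @ psi   # Eq. (14)
```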
To simplify the equations of motion, it is common to choose \(\beta=\pi/2\). However, even with this choice, the system can exhibit intricate behaviors. Notably, it has been demonstrated in [21] that a quantum analog of the order-to-chaos transition can be observed within the same range of \(\alpha\). _Methods._ We conduct a numerical investigation into the evolution of the aforementioned system for \(\beta=\pi/2\), exploring various choices of \(\alpha\). In each simulation run, we initialize the system in the symmetric pure product state \(\rho_{0}=|\psi_{0}\rangle\langle\psi_{0}|\), where \(|\psi_{0}\rangle=|\chi\rangle^{\otimes n}\), with \(|\chi\rangle=\cos\theta|0\rangle+e^{i\phi}\sin\theta|1\rangle\). The parameters \(\theta\) and \(\phi\) are randomly chosen. These symmetric qubit states, known as coherent spin states [40], are considered the most classical spin states as they minimize the uncertainty of the spin-\(n/2\) operators \(S_{j}=\frac{1}{2}\sum_{i=1}^{n}\sigma_{j}^{(i)}\). The coherent spin states are eigenstates of the spin operator \(S_{\mathbf{k}}|\psi_{0}\rangle=\frac{n}{2}|\psi_{0}\rangle\), where \(\mathbf{k}=(\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)\) represents the axis onto which the spin-\(n/2\) is projected, \(S_{\mathbf{k}}=\mathbf{k}\cdot\mathbf{S}\) and \(\mathbf{S}=(S_{x},S_{y},S_{z})\). Furthermore, we examine the evolution of the perturbed (pure) state \(\rho_{0}^{\prime}=|\psi_{0}^{\prime}\rangle\langle\psi_{0}^{\prime}|\), where \(|\psi_{0}^{\prime}\rangle=\left|\chi^{\prime}\right\rangle^{\otimes n}\). \(|\chi^{\prime}\rangle\) is given by \(|\chi^{\prime}\rangle=R_{\varphi}|\chi\rangle\), where \(R_{\varphi}=e^{i\frac{\varphi}{2}\sigma_{\mathbf{m}}}\) denotes a single-qubit rotation about a randomly chosen \(\mathbf{m}\)-axis, \(\sigma_{\mathbf{m}}=\mathbf{m}\cdot\mathbf{s}\) and \(\mathbf{s}=(\sigma_{x},\sigma_{y},\sigma_{z})\). The rotation angle is denoted by \(\varphi\) (\(\varphi\ll 1\)). Finally, we evaluate the distance \(D(\rho_{t},\rho_{t}^{\prime})\) and analyze its dependence on the parameters \(t\), \(\varphi\), and \(\alpha\). It is important to note that both the initial state \(\rho_{0}\) and the perturbed state \(\rho_{0}^{\prime}\) are symmetric, meaning they remain unchanged under the permutation of spins. This symmetry is preserved throughout the evolution due to the symmetric nature of the evolution operators \(U_{1}\) and \(U_{2}\). Consequently, the states \(\rho_{t}\) and \(\rho_{t}^{\prime}\) also maintain their symmetry. As a result, the dynamics of the \(n\)-qubit system occur within a symmetric subspace of dimension \(n+1\). This characteristic significantly simplifies the complexity of numerical simulations and facilitates the analysis of the obtained data. We found that in our numerical simulations, the optimal partition is a partition of the system into single qubits, i.e., \(a_{max}=n\), \(k_{a}=1\) for each \(a\). Because the system is symmetric, we get \[D(\rho_{t},\rho_{t}^{\prime})=\frac{n}{2}\mathrm{Tr}|\tilde{\rho}_{t}-\tilde{\rho}_{t}^{\prime}|, \tag{17}\] where \(\tilde{\rho}_{t}\) is the state of a single-qubit subsystem of \(\rho_{t}\) and \(\tilde{\rho}_{t}^{\prime}\) is the state of a single-qubit subsystem of \(\rho_{t}^{\prime}\). The general method of how to evaluate states of subsystems is given in the Supplemental Material. Finally, the value of \(D(\rho_{t},\rho_{t}^{\prime})\) differs from one simulation to the other because it depends on the initial state, which is chosen randomly.
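A minimal sketch (ours) of how Eq. (17) can be evaluated for pure symmetric states: trace out all but one qubit and take the trace distance of the two single-qubit states. The product states \(|0\ldots 0\rangle\) and \(|1\ldots 1\rangle\) recover \(D=n\), cf. Eq. (11).

```python
import numpy as np

# Evaluate Eq. (17), D = (n/2) Tr|rho1 - rho1'|, for pure symmetric states.

def trace_distance(a, b):
    """(1/2) Tr|a - b| for Hermitian matrices, Eq. (9)."""
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

def single_qubit_state(psi, n):
    """Reduced density matrix of qubit 0 of a pure n-qubit state psi."""
    m = psi.reshape(2, 2 ** (n - 1))
    return m @ m.conj().T

def qhd_symmetric(psi, psi_p, n):
    """D(rho_t, rho_t') of Eq. (17); valid for permutation-symmetric states."""
    return n * trace_distance(single_qubit_state(psi, n),
                              single_qubit_state(psi_p, n))

# sanity check against Eq. (11): |00...0> vs |11...1> gives D = n
n = 5
e0 = np.zeros(2 ** n); e0[0] = 1.0
e1 = np.zeros(2 ** n); e1[-1] = 1.0
assert np.isclose(qhd_symmetric(e0, e1, n), n)
```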
Because \(D(\rho_{t},\rho_{t}^{\prime})\) varies from one run to another, we introduce the average distance \[\mathcal{D}_{t}=\langle D(\rho_{t},\rho_{t}^{\prime})\rangle_{\rho_{0}}, \tag{18}\] in which we average over one hundred numerical runs. Therefore, \(\mathcal{D}_{t}\) reflects a property of the system, not a property of a particular state. Note also that for \(\varphi\ll 1\) the initial distance \[D(\rho_{0},\rho_{0}^{\prime})=\frac{n\,\varphi}{2} \tag{19}\] is independent of the initial state (see Supplemental Material). _Results._ Our most important finding is that \(\mathcal{D}_{t}\) can be used as a witness of quantum chaos. In particular, it grows rapidly for the values of \(\alpha\) corresponding to the chaotic regime and slowly for the values corresponding to the regular regime. An example of such behavior is presented in Fig. 1 (left), where we consider a system of \(n=1000\) qubits and the perturbation angle \(\varphi=0.01\). We compare two cases, \(\alpha=6\) (red plot) and \(\alpha=1\) (blue plot). Interestingly, in the chaotic regime \(\mathcal{D}_{t}\) grows fast and then decreases fast, resulting in a peak. The reason for this behavior is that \(\mathcal{D}_{t}\) is an averaged comparison between single-qubit sub-states \(\tilde{\rho}_{t}\) and \(\tilde{\rho}_{t}^{\prime}\). They initially diverge (\(\mathcal{D}_{t}\) grows), but due to the entangling nature of the kicked top dynamics they get more entangled with the rest of the system and therefore become more mixed. The more mixed they become, the more similar they get (\(\mathcal{D}_{t}\) decreases). The amount of entanglement between the qubit and the rest of the system can be measured by the linear entropy \(S_{t}=1-\mathrm{Tr}(\tilde{\rho}_{t}^{2})\). An example of how \(S_{t}\) changes in time is presented in Fig. 1 (right). It is clear that the peak in \(\mathcal{D}_{t}\) matches the growth of \(S_{t}\). Note that the fast growth of entanglement within a multipartite system is usually considered to be a signature of quantum chaos [16; 17; 18; 25; 29; 30; 33; 41; 42] (for a detailed discussion see [34]). The entanglement time is analogous to the Ehrenfest time \(t_{E}\), the time after which the correspondence principle is no longer valid. Intuitively, this happens when the uncertainties are of the order of the size of the system, which is exactly what is manifested by entangled states. Because of the above, the position of the peak of \(\mathcal{D}_{t}\) can be considered as the Ehrenfest time of the system.

Figure 1: Left: the plot of \(\mathcal{D}_{t}\) for \(n=1000\) and \(\varphi=0.01\). The points were joined for better visibility. The red (solid) plot corresponds to \(\alpha=6\) and the blue (dotted) one to \(\alpha=1\). Right: the corresponding growth of the single-qubit entropy.

In Fig. 2 we show how \(\mathcal{D}_{t}\) depends on the number of qubits \(n\). It is expected [43; 18; 44] that the Ehrenfest time scales as \(t_{E}\sim h_{eff}^{-1/2}\) for regular and as \(t_{E}\sim\ln(h_{eff}^{-1})\) for chaotic dynamics, where \(h_{eff}\) is the effective Planck constant. For systems confined to the symmetric subspace the effective Planck constant is [45; 18] inversely proportional to the dimension of this subspace. Thus we expect \(t_{E}\sim n^{1/2}\) for regular and \(t_{E}\sim\ln(n)\) for chaotic dynamics. The position of the peaks in Fig. 2 roughly fulfills this expectation. This confirms that our quantum chaos witness based on \(\mathcal{D}_{t}\) is in accordance with commonly used ones.
Another interesting problem is to investigate whether our quantum chaos witness works in the range for which both chaotic and regular behaviors are known to coexist (\(1<\alpha<6\)). In this case, the chaotic properties of the system depend on the initial state and we cannot use the averaged witness \(\mathcal{D}_{t}\) anymore. Instead, we return to \(D(\rho_{t},\rho_{t}^{\prime})\). Moreover, we relate the observed quantum behavior to the behavior of classical trajectories with analogous initial conditions. The method for evaluating classical trajectories is given in the Supplemental Material. In Fig. 3 we show two examples for \(\alpha=2.3\). In the first one (blue, dotted) the initial conditions \(\theta\) and \(\phi\) correspond to a classical regular trajectory (see Fig. 3 right). It is clear that in this case \(D(\rho_{t},\rho_{t}^{\prime})\) does not change much (see Fig. 3 left). The second example (red, solid) corresponds to a classical chaotic trajectory and it is clearly visible that in this case \(D(\rho_{t},\rho_{t}^{\prime})\) grows. This observation confirms that our approach could also be used to witness quantum chaos in the transition region. _Conclusions._ We showed that, contrary to common belief, quantum and classical chaos can be treated on an equal footing, provided one chooses a proper metric in the system's state space. In the case of classical systems one can detect chaos using the Euclidean metric in phase space. Here we demonstrated that in the case of quantum systems one can detect chaos using a quantum Hamming distance [20] (QHD) in Hilbert space. We analysed quantum kicked top dynamics of \(n\) qubits and found that the QHD between two quantum states can rapidly grow. This finding confirms that, just like classical chaotic dynamics, quantum chaotic dynamics is hypersensitive to perturbations. Our method of detecting quantum chaos is in accordance with previously developed methods. For example, we found that the growth of QHD matches the growth of entanglement [34]. Moreover, we argued that the time after which QHD reaches its maximum corresponds to the Ehrenfest time \(t_{E}\) [43; 18; 44], which scales as \(n^{1/2}\) and \(\log(n)\) in the regular and chaotic regimes, respectively. Finally, we showed that QHD can be used to distinguish between regular and chaotic dynamics in situations where the system exhibits both behaviors for different initial conditions. In this work we initiate a research program of studying quantum chaos with the help of QHD. The next step is to investigate other discrete-time systems, continuous-time systems and continuous-variable systems. _Acknowledgements._ We are grateful to D. Girolami, F. Anza and M. Lewenstein for helpful discussions. This research is supported by the Polish National Science Centre (NCN) under the Maestro Grant no. DEC-2019/34/A/ST2/00081. J.W. acknowledges support from IDUB BestStudentGRANT (NO. 010/39/UAM/0010). Part of the numerical studies in this work have been carried out using resources provided by the Wroclaw Centre for Networking and Supercomputing (wcss.pl), Grant No. 551 (A.S.S.).
2301.01845
Sign-changing bubble tower solutions for sinh-Poisson type equations on pierced domains
For asymmetric sinh-Poisson type problems with Dirichlet boundary condition arising as a mean field equation of equilibrium turbulence vortices with variable intensities of interest in hydrodynamic turbulence, we address the existence of sign-changing bubble tower solutions on a pierced domain $\Omega_\epsilon:=\Omega\setminus \displaystyle \overline{B(\xi,\epsilon)}$, where $\Omega$ is a smooth bounded domain in $\mathbb{R}^2$ and $B(\xi,\epsilon)$ is a ball centered at $\xi\in \Omega$ with radius $\epsilon>0$. Precisely, given a small parameter $\rho>0$ and any integer $m\ge 2$, there exists a radius $\epsilon=\epsilon(\rho)>0$ small enough such that each sinh-Poisson type equation, either in Liouville form or mean field form, has a solution $u_\rho$ with an asymptotic profile as a sign-changing tower of $m$ singular Liouville bubbles centered at the same $\xi$ and with $\epsilon(\rho)\to 0^+$ as $\rho$ approaches zero.
Pablo Figueroa
2023-01-04T23:05:55Z
http://arxiv.org/abs/2301.01845v2
# Sign-changing bubble tower solutions for sinh-Poisson type equations on pierced domains ###### Abstract. For asymmetric sinh-Poisson type problems with Dirichlet boundary condition arising as a mean field equation of equilibrium turbulence vortices with variable intensities of interest in hydrodynamic turbulence, we address the existence of sign-changing bubble tower solutions on a pierced domain \(\Omega_{\epsilon}:=\Omega\setminus\overline{B(\xi,\epsilon)}\), where \(\Omega\) is a smooth bounded domain in \(\mathbb{R}^{2}\) and \(B(\xi,\epsilon)\) is a ball centered at \(\xi\in\Omega\) with radius \(\epsilon>0\). Precisely, given a small parameter \(\rho>0\) and any integer \(m\geq 2\), there exists a radius \(\epsilon=\epsilon(\rho)>0\) small enough such that each sinh-Poisson type equation, either in Liouville form or mean field form, has a solution \(u_{\rho}\) with an asymptotic profile as a sign-changing tower of \(m\) singular Liouville bubbles centered at the same \(\xi\) and with \(\epsilon(\rho)\to 0^{+}\) as \(\rho\) approaches zero. Key words and phrases:sinh-Poisson type equation, pierced domain, tower of bubbles 2020 Mathematics Subject Classification: 35B44, 35J25, 35J60

## 1. Introduction

Let \(\Omega\) be a smooth bounded domain in \(\mathrm{I\!R}^{2}\). Given \(\epsilon>0\) and \(\xi\in\Omega\), define \(\Omega_{\epsilon}:=\Omega\setminus\overline{B(\xi,\epsilon)}\), a pierced domain, where \(B(\xi,\epsilon)\) is a ball centered at \(\xi\) with radius \(\epsilon\). Inspired by results in [1, 13, 17, 20, 30], we are interested in constructing sign-changing _bubble tower_ solutions to sinh-Poisson type equations, either in Liouville form or mean field form, with variable intensities and Dirichlet boundary conditions on a pierced domain \(\Omega_{\epsilon}\). Precisely, on one hand, we consider the following problem in Liouville form \[\left\{\begin{array}{rl}-\Delta u=\rho(V_{0}(x)e^{u}-\nu V_{1}(x)e^{-\tau u})&\text{in }\Omega_{\epsilon}\\ u=0&\text{on }\partial\Omega_{\epsilon}\end{array}\right., \tag{1.1}\] and, on the other hand, we also study the problem in mean field form \[\left\{\begin{array}{rl}-\Delta u=\dfrac{\lambda_{0}V_{0}(x)e^{u}}{\int_{\Omega_{\epsilon}}V_{0}e^{u}}-\dfrac{\lambda_{1}\tau V_{1}(x)e^{-\tau u}}{\int_{\Omega_{\epsilon}}V_{1}e^{-\tau u}}&\text{in }\Omega_{\epsilon}\\ u=0&\text{on }\partial\Omega_{\epsilon}\end{array}\right., \tag{1.2}\] where \(\rho>0\) is small, \(\lambda_{0},\lambda_{1}>0\), \(V_{0},V_{1}>0\) are smooth potentials in \(\Omega\), \(\tau>0\), \(\epsilon>0\) is a small number and \(\nu\geq 0\). Our aim is to construct for each problem a family of solutions \(u_{\rho}\) for a suitable choice of \(\epsilon=\epsilon(\rho)\), with an asymptotic profile as a sum of positive and negative singular Liouville bubbles centered at the same point \(\xi\) as \(\rho\to 0\), along the lines of [20, 30]. These equations and related ones have attracted a lot of attention in recent years due to their relevance in the statistical mechanics description of 2D-turbulence, as initiated by Onsager [28].
Precisely, in this context Caglioti, Lions, Marchioro, Pulvirenti [6] and Sawada, Suzuki [35] derived the following equation: \[-\Delta u=\lambda\int\limits_{[-1,1]}\dfrac{\alpha e^{\alpha u}}{\int_{\Omega}e^{\alpha u}dx}d\mathcal{P}(\alpha)\quad\text{in }\Omega,\quad\ u=0\ \text{ on }\partial\Omega, \tag{1.3}\] where \(\Omega\) is a bounded domain in \(\mathbb{R}^{2}\), \(u\) is the stream function of the flow, \(\lambda>0\) is a constant related to the inverse temperature and \(\mathcal{P}\) is a Borel probability measure in \([-1,1]\) describing the point-vortex intensities distribution. We observe that (1.3) is obtained under a _deterministic_ assumption on the distribution of the vortex circulations. Equation (1.3) includes several well-known problems depending on a suitable choice of \(\mathcal{P}\). For instance, if \(\mathcal{P}=\delta_{1}\) is concentrated at \(1\), then (1.3) corresponds to the classical mean field equation \[-\Delta u=\lambda\frac{e^{u}}{\int\limits_{\Omega}e^{u}dx}\ \ \ \text{in }\Omega,\ \ \ \ u=0\ \ \text{on }\partial\Omega. \tag{1.4}\] Since there are plenty of results in the literature concerning (1.4), let us just quote [2, 7, 8, 9, 12, 26]. When \(\mathcal{P}=\sigma\delta_{1}+(1-\sigma)\delta_{-\tau}\) with \(\tau\in[-1,1]\) and \(\sigma\in[0,1]\), equation (1.3) becomes \[-\Delta u=\lambda\bigg{(}\sigma\frac{e^{u}}{\int\limits_{\Omega}e^{u}dx}-(1-\sigma)\tau\frac{e^{-\tau u}}{\int\limits_{\Omega}e^{-\tau u}dx}\bigg{)}\ \ \ \text{in }\Omega,\ \ \ \ u=0\ \ \text{on }\ \partial\Omega. \tag{1.5}\] Setting \(\lambda_{0}=\lambda\sigma\), \(\lambda_{1}=\lambda(1-\sigma)\) and \(V_{0}=V_{1}=1\), problem (1.5) can be rewritten as (1.2) replacing \(\Omega_{\epsilon}\) by \(\Omega\). If \(\tau=1\) and \(V_{0}=V_{1}\equiv 1\), problem (1.2) reduces to the mean field equation of equilibrium turbulence, see [5, 21, 24, 27, 31], or its related sinh-Poisson version, see [3, 4, 20, 23, 25], which have received considerable interest in recent years. Recently, sign-changing solutions have been constructed in \(\Omega\) for the sinh-Poisson equation with Robin boundary condition in [19]. Concerning results for \(\tau>0\), Pistoia and Ricciardi built in [29] sequences of blowing-up solutions to (1.2) (in \(\Omega\) instead of \(\Omega_{\epsilon}\)) when \(\lambda_{0},\lambda_{1}\tau^{2}\) are close to \(8\pi\). Ricciardi and Takahashi in [32] provided a complete blow-up picture for solution sequences of (1.2) and subsequently in [33] Ricciardi et al. constructed min-max solutions when \(\lambda_{0}\to 8\pi^{+}\) and \(\lambda_{1}\to 0\) on a multiply connected domain (in this case the nonlinearity \(e^{-\tau u}\) may be treated as a lower-order term with respect to the main term \(e^{u}\)). A blow-up analysis and some existence results are obtained when \(\tau>0\) on a compact Riemann surface in [22, 34]. Bubbling solutions on a compact Riemann surface have recently been constructed in [18]. On the other hand, on pierced domains, Ahmedou and Pistoia in [1] proved that on \(\Omega_{\epsilon}\) there exists a solution to the classical mean field equation (1.4) which blows up at \(\xi\) as \(\epsilon\to 0\) for any \(\lambda>8\pi\) (extra symmetry conditions are required when \(\lambda\in 8\pi\mathbb{N}\)).
In [13] the authors constructed a family of solutions to the mean field equation with variable intensities (1.2) blowing up positively and negatively at \(\xi_{1},\dots,\xi_{m_{1}}\) and \(\xi_{m_{1}+1},\dots,\xi_{m}\), respectively, as \(\epsilon_{1},\dots,\epsilon_{m}\to 0\) on a pierced domain with several holes (\(\Omega_{\epsilon}\) is replaced by \(\Omega\setminus\cup_{i=1}^{m}\overline{B(\xi_{i},\epsilon_{i})}\)), in the super-critical regime \(\lambda_{0}>8\pi m_{1}\) and \(\lambda_{1}\tau^{2}>8\pi(m-m_{1})\) with \(m_{1}\in\{0,1,\dots,m\}\). Recently, in the same spirit as [13], the author in [17] addressed the sinh-Poisson type equation with variable intensities (1.1) on a pierced domain with several holes \(\Omega\setminus\cup_{i=1}^{m}\overline{B(\xi_{i},\epsilon_{i})}\). Equation (1.1) is related, but not equivalent, to problem (1.2) by using the change \[\rho=\frac{\lambda_{0}}{\int_{\Omega_{\epsilon}}V_{0}e^{u}}\ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \rho\nu=\frac{\lambda_{1}\tau}{\int_{\Omega_{\epsilon}}V_{1}e^{-\tau u}}.\] To the best of our knowledge, there are by now just a few results concerning non-simple blow-up for sinh-Poisson type problems. Precisely, sign-changing solutions with non-simple blow-up have been built in [15] for the Neumann sinh-Poisson equation. Grossi and Pistoia built in [20] sign-changing bubble tower solutions for the sinh-Poisson version (\(\tau=1\)) in a symmetric domain \(\Omega\) with respect to a fixed point \(\xi\in\Omega\). After that, Pistoia and Ricciardi in [30] extended this construction to a sinh-Poisson type equation with asymmetric exponents (\(\tau\neq 1\)) under a symmetry assumption on \(\Omega\) depending on whether \(\tau\in\mathbb{Q}\) or \(\tau\notin\mathbb{Q}\). In both situations [20, 30] the number of bubbles can be arbitrarily large. A matter of interest to us is whether there exist sign-changing bubble tower solutions to (1.1) for small values of \(\rho\), or to (1.2) for some values of the parameters \(\lambda_{0},\lambda_{1},\tau>0\), on a pierced domain \(\Omega_{\epsilon}\), with bubbles centered at \(\xi\). Our first result in this direction without symmetry assumptions reads as follows. **Theorem 1.1**.: _Let \(m\geq 2\) be a positive integer. There exists \(\rho_{0}>0\) such that for all \(0<\rho<\rho_{0}\) there is \(\epsilon=\epsilon(\rho)\) small enough such that problem (1.1) has a sign-changing solution \(u_{\rho}\) in \(\Omega_{\epsilon}\) blowing up at \(\xi\) in the sense that_ \[u_{\rho}=(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)G(\cdot,\xi)+o(1) \tag{1.6}\] _locally uniformly in \(\bar{\Omega}\setminus\{\xi\}\) as \(\rho\to 0^{+}\)._ Here, for simplicity, we denote \[\nu(i)=\frac{1+(-1)^{i}}{2}=\begin{cases}0&\text{ if $i$ is odd}\\ 1&\text{ if $i$ is even}\end{cases}\qquad\text{ and }\qquad\sigma(i)=\frac{1-(-1)^{i}}{2}=\begin{cases}1&\text{ if $i$ is odd}\\ 0&\text{ if $i$ is even}\end{cases}, \tag{1.7}\] \(\alpha_{m}\) is given by (2.4) with \(i=m\) and \(G(x,y)=-\frac{1}{2\pi}\log|x-y|+H(x,y)\) is the Green's function of \(-\Delta\) in \(\Omega\), where the regular part \(H\) is a harmonic function in \(\Omega\) so that \(H(x,y)=\frac{1}{2\pi}\log|x-y|\) on \(\partial\Omega\). Nevertheless, the latter result may not tell us whether (1.2) has sign-changing bubble tower solutions for some values of the parameters \(\lambda_{0},\lambda_{1},\tau>0\). Therefore, we apply the same procedure directly to problem (1.2).
Conversely, Theorem 1.2 below may not tell us whether (1.1) has sign-changing bubble tower solutions for _all_ small \(\rho>0\). Assume that \(\lambda_{0}\) and \(\lambda_{1}\) decompose, for some \(\alpha_{1}>2\) (see (2.3)), as either \[\lambda_{0}=2\pi m\left[\alpha_{1}+(m-2)\Big{(}1+\frac{1}{\tau}\Big{)}\right],\quad\lambda_{1}\tau^{2}=2\pi m\left[\alpha_{1}\tau+m(1+\tau)\right],\quad\text{if $m$ even} \tag{1.8}\] or \[\lambda_{0}=2\pi(m+1)\left[\alpha_{1}+(m-1)\Big{(}1+\frac{1}{\tau}\Big{)}\right],\ \ \lambda_{1}\tau^{2}=2\pi(m-1)\left[\alpha_{1}\tau+(m-1)(1+\tau)\right],\ \ \text{if $m$ odd}. \tag{1.9}\] In particular, we choose \(\alpha_{1}>2\), and \(\alpha_{1}\notin 2\mathds{N}\) if \(\tau=1\). Our second result (without symmetry assumptions) is the following. **Theorem 1.2**.: _If either (1.8) or (1.9) holds for a positive integer \(m\geq 2\), then there exists a radius \(\epsilon>0\) small enough such that problem (1.2) has a sign-changing solution \(u_{\epsilon}\) in \(\Omega_{\epsilon}\) blowing up at \(\xi\) in the sense of (1.6) locally uniformly in \(\bar{\Omega}\setminus\{\xi\}\) as \(\epsilon\to 0^{+}\)._ Our solutions correspond to a superposition of highly concentrated vortex configurations of alternating orientation around the hole \(B(\xi,\epsilon)\), and they extend some known results [20, 30] for symmetric domains. We also point out that a delicate point in the paper concerns the linear theory developed in Section 5. A somewhat more involved analysis is necessary in comparison with the linear theories developed in previous works [1, 11, 13, 14, 16, 20, 30]. Without loss of generality, we shall assume in the rest of the paper that \(\xi=0\in\Omega\), so that \(\Omega_{\epsilon}=\Omega\setminus B(0,\epsilon)\) where \(\epsilon>0\) is small, and \(\nu=1\), since we can replace \(\nu V_{1}\) by \(V_{1}\). However, we need the presence of \(\nu\) when we compare (1.1) with equation (1.2). Finally, we point out some comments about the proofs of the theorems. Following the ideas presented in [20, 30] about bubble tower solutions for sinh-Poisson type equations and in [1, 13, 16] about the construction of solutions on pierced domains, we find a solution \(u_{\rho}\) using a perturbative approach; precisely, we look for a solution of (1.1) as \[u_{\rho}=U+\phi, \tag{1.10}\] where \(U\) is a suitable ansatz built using the projection operator \(P_{\epsilon}\) onto \(H^{1}_{0}(\Omega_{\epsilon})\) (see (2.1)) and \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) is a small remainder term. Letting \(w_{\delta,\alpha}(x)=\log\frac{2\alpha^{2}\delta^{\alpha}}{(\delta^{\alpha}+|x|^{\alpha})^{2}}\) be a solution to the singular Liouville equation \[\Delta u+|x|^{\alpha-2}e^{u}=0\quad\text{ in }\mathds{R}^{2},\qquad\int_{\mathds{R}^{2}}|x|^{\alpha-2}e^{u}<+\infty,\] the ansatz \(U\) is defined as follows \[U(x)=\sum_{i\text{ odd}}P_{\epsilon}w_{i}(x)-\frac{1}{\tau}\sum_{i\text{ even}}P_{\epsilon}w_{i}(x)=\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}P_{\epsilon}w_{i}(x), \tag{1.11}\] where \(w_{j}:=w_{\delta_{j},\alpha_{j}}\) for \(j=1,\dots,m\). A careful choice of the parameters \(\delta_{j}\)'s, \(\alpha_{i}\)'s and the radius \(\epsilon\), depending on \(\rho>0\), is made in Section 2 (see (2.2), (2.3)-(2.4) and (2.21)) in order to make \(U\) a good approximate solution. Indeed, the error term \(R\) for (1.1), given by \[R=\Delta U+\rho(V_{0}(x)e^{U}-V_{1}(x)e^{-\tau U}) \tag{1.12}\] is small in the \(L^{p}\)-norm for \(p>1\) close to \(1\) (see Lemma 3.1).
A linearization procedure around \(U\) leads us to reformulate (1.1) in terms of a nonlinear problem for \(\phi\) (see equation (3.13)). We will prove the existence of such a solution \(\phi\) to (3.13) by using a fixed point argument, thanks to some estimates in subsection 3.2 (see (3.20) and (3.21)). The corresponding solution \(u_{\rho}\) in (1.10) behaves as a sign-changing tower of \(m\) singular Liouville bubbles thanks to the asymptotic properties of its main order term \(U\) (see (2.23) in Lemma 2.6). In Section 5 we will prove the invertibility of the linear operator naturally associated to the problem (see (3.14)), as stated in Proposition 3.1. To conclude Theorem 1.2, the same procedure with the same ansatz is performed on equation (1.2), assuming (1.8)-(1.9) and \(\epsilon=\epsilon(\rho)\), where the error term is given by \[\mathcal{R}=\Delta U+\lambda_{0}\frac{V_{0}(x)e^{U}}{\int_{\Omega_{\epsilon}}V_{0}e^{U}}-\lambda_{1}\tau\frac{V_{1}(x)e^{-\tau U}}{\int_{\Omega_{\epsilon}}V_{1}e^{-\tau U}}. \tag{1.13}\]

## 2. Approximation of the solution

In this section we shall make a choice of the parameters \(\alpha_{i}\)'s, \(\delta_{j}\)'s and \(\epsilon=\epsilon(\rho)\) in order to make \(U\) a good approximation. Introduce the function \(P_{\epsilon}w\) as the unique solution of \[\left\{\begin{array}{ll}\Delta P_{\epsilon}w=\Delta w&\mbox{in }\Omega_{\epsilon}\\ P_{\epsilon}w=0,&\mbox{on }\partial\Omega_{\epsilon}.\end{array}\right. \tag{2.1}\] For simplicity, we will denote \(h_{0}=H(0,0)\). We have the following asymptotic expansion of \(P_{\epsilon}w_{\delta,\alpha}\) as \(\delta\to 0\), as shown in [1, Lemma 2.1] (see also [13, Lemma 2.1]): **Lemma 2.1**.: _The function \(P_{\epsilon}w_{\delta,\alpha}\) satisfies_ \[P_{\epsilon}w_{\delta,\alpha}(x)=w_{\delta,\alpha}(x)-\log(2\alpha^{2}\delta^{\alpha})+4\pi\alpha H(x,0)-\gamma^{\alpha}_{\delta,\epsilon}G(x,0)+O\Big{(}\delta^{\alpha}+\Big{(}\frac{\epsilon}{\delta}\Big{)}^{\alpha}+\Big{[}1+\Big{|}\frac{\log\delta}{\log\epsilon}\Big{|}\Big{]}\epsilon\Big{)},\] _uniformly in \(\Omega_{\epsilon}\), where \(\gamma^{\alpha}_{\delta,\epsilon}\) is given by \(\gamma^{\alpha}_{\delta,\epsilon}=\frac{-2\alpha\log\delta+4\pi\alpha h_{0}}{-\frac{1}{2\pi}\log\epsilon+h_{0}}.\) In particular, there holds_ \[P_{\epsilon}w_{\delta,\alpha}(x)=[4\pi\alpha-\gamma^{\alpha}_{\delta,\epsilon}]G(x,0)+O\Big{(}\delta^{\alpha}+\Big{(}\frac{\epsilon}{\delta}\Big{)}^{\alpha}+\Big{[}1+\Big{|}\frac{\log\delta}{\log\epsilon}\Big{|}\Big{]}\epsilon\Big{)}\] _as \(\epsilon\to 0\) locally uniformly in \(\Omega\setminus\{0\}\)._ Given \(m\in\mathds{N}\) with \(m\geq 2\), consider \(\delta_{j}>0\) and \(\alpha_{j}>2\), \(j=1,\ldots,m\), so that our approximating solution \(U\) is defined by (1.11), parametrized by the \(\delta_{j}\)'s and \(\alpha_{i}\)'s with \(\delta_{j}=\delta_{j}(\rho,\alpha_{1})\) (it also depends on \(\tau\), \(h_{0}\), \(V_{0}(0)\) and \(V_{1}(0)\)), where \(\nu(i)\) is defined in (1.7) and \(P_{\epsilon}\) is the projection operator defined by (2.1) for a suitable choice of \(\epsilon\). In order to have a good approximation, for any \(i=1,\ldots,m\) we will assume that \[\delta_{i}^{\alpha_{i}}=d_{i}\rho^{\beta_{i}}, \tag{2.2}\] for small \(\rho>0\), where the \(\alpha_{i}\)'s, \(\beta_{i}\)'s, and \(d_{i}\)'s will be specified below.
We choose \[\alpha_{1}>2,\quad\mbox{with}\quad\alpha_{1}\in\begin{cases}\bigcap_{k=0}^{(m-2)/2}\Big{(}2\mathds{N}-\frac{4k}{\tau}\Big{)}^{c}\bigcap\Big{[}\bigcap_{k=1}^{m/2}\Big{(}\frac{2}{\tau}\mathds{N}-4k+2\Big{)}^{c}\Big{]}&\mbox{if $m$ is even}\\ \bigcap_{k=0}^{(m-1)/2}\Big{(}2\mathds{N}-\frac{4k}{\tau}\Big{)}^{c}\bigcap\Big{[}\bigcap_{k=1}^{(m-1)/2}\Big{(}\frac{2}{\tau}\mathds{N}-4k+2\Big{)}^{c}\Big{]}&\mbox{if $m$ is odd}\end{cases}, \tag{2.3}\] \[\mbox{and}\ \ \mbox{for}\ \ i\geq 2\qquad\alpha_{i}=\begin{cases}\alpha_{1}+2(i-1)+\frac{2(i-1)}{\tau}&\mbox{if $i$ is odd}\\ \alpha_{1}\tau+2(i-1)\tau+2(i-1)&\mbox{if $i$ is even}\end{cases}. \tag{2.4}\] Note that \[\alpha_{i}=\left(\alpha_{1}+2i-2+\frac{2i-2}{\tau}\right)\tau^{\nu(i)},\qquad\mbox{for $i\geq 2$} \tag{2.5}\] and \(\alpha_{i}>0\) for all \(i=1,\ldots,m\). Furthermore, several identities and properties of the \(\alpha_{i}\)'s, \(\beta_{i}\)'s, \(d_{i}\)'s and \(\epsilon\) will be proven in order to have a good approximation. From the definition (2.4), it is readily checked that \(\{\alpha_{2k+1}\}_{k}\) and \(\{\alpha_{2k}\}_{k}\) are increasing in \(k\). Since \(\alpha_{1}>2\) and \(\alpha_{2}=\alpha_{1}\tau+2\tau+2>2\), it follows that \(\alpha_{i}>2\) for all \(i=1,\dots,m\). Notice that all these sets are countable: \(2\mathds{N}-\frac{4k}{\tau}\) for \(k=0,\dots,\frac{m-2}{2}\) and \(\frac{2}{\tau}\mathds{N}-4k+2\) for \(k=1,\dots,\frac{m}{2}\) with \(m\) even; \(2\mathds{N}-\frac{4k}{\tau}\) for \(k=0,\dots,\frac{m-1}{2}\) and \(\frac{2}{\tau}\mathds{N}-4k+2\) for \(k=1,\dots,\frac{m-1}{2}\) with \(m\) odd. Hence, their union is also a countable set and the complement of this union is dense in \(\mathds{R}\). Therefore, there exists \(\alpha_{1}\in\mathds{R}\) satisfying (2.3). **Lemma 2.2**.: _If the \(\alpha_{i}\)'s are given by (2.4) then_ \[\alpha_{j+1}=(\alpha_{j}+2)\tau^{(-1)^{j+1}}+2=\begin{cases}(\alpha_{j}+2)\tau+2&\text{if $j$ is odd}\\ \dfrac{\alpha_{j}+2}{\tau}+2&\text{if $j$ is even}\end{cases} \tag{2.6}\] _and_ \[\sum_{i=1}^{j}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\alpha_{i}=\begin{cases}-j-\frac{j}{\tau}&\text{if $j$ is even}\\ \alpha_{1}+j-1+\frac{j-1}{\tau}&\text{if $j$ is odd}\end{cases} \tag{2.7}\] _holds for all \(j\geq 1\). If \(\alpha_{1}\) satisfies (2.3) then \(\alpha_{i}\notin 2\mathds{N}\) for all \(i=1,\dots,m\)._ **Proof:** Assume first that \(i\) is odd. Then \(i+1\) is even, and from (2.4) for \(i\) odd, \(\alpha_{i}=\alpha_{1}+2i-2+\frac{2i-2}{\tau}\), so direct computations lead us to obtain that \[(\alpha_{i}+2)\tau+2=\left(\alpha_{1}+2i+\frac{2i-2}{\tau}\right)\tau+2=(\alpha_{1}+2i)\tau+2i.\] On the other hand, from (2.4) for \(i+1\) even, we find that \(\alpha_{i+1}=\alpha_{1}\tau+2i\tau+2i\), so that \(\alpha_{i+1}=(\alpha_{i}+2)\tau+2\). Direct computations in the case \(i\) even allow us to conclude (2.6). From the choice of the \(\alpha_{i}\)'s and (2.5), it follows that \[\sum_{i=1}^{j}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\alpha_{i} =\sum_{i=1}^{j}(-1)^{i+1}\bigg{[}\alpha_{1}+2i-2+\frac{2i-2}{\tau}\bigg{]}\] \[=\alpha_{1}\sum_{i=1}^{j}(-1)^{i+1}+2\sum_{i=1}^{j}(-1)^{i+1}(i-1)+\frac{2}{\tau}\sum_{i=1}^{j}(-1)^{i+1}(i-1)\] and we conclude (2.7), in view of \[\sum_{i=1}^{j}(-1)^{i+1}(i-1)=\sum_{i=1}^{j}(-1)^{i+1}i-\sum_{i=1}^{j}(-1)^{i+1}=\begin{cases}-\frac{j}{2}&\text{if $j$ is even}\\ \frac{j+1}{2}-1&\text{if $j$ is odd}\end{cases}.\] Finally, assume that \(m\) is even.
If \(\alpha_{1}\) satisfies (2.3) then we have that \(\alpha_{1}\in\left(2\mathds{N}-\frac{4k}{\tau}\right)^{c}\), \(k=0,1,\dots,\frac{m-2}{2}\), and \(\alpha_{1}\in\left(\frac{2}{\tau}\mathds{N}-4k+2\right)^{c}\), \(k=1,2,\dots,\frac{m}{2}\). That is to say, \(\alpha_{1}+\frac{4k}{\tau}\notin 2\mathds{N}\), \(k=0,1,\dots,\frac{m-2}{2}\), and \(\tau(\alpha_{1}+4k-2)\notin 2\mathds{N}\), \(k=1,\dots,\frac{m}{2}\). Therefore, \(\alpha_{2k+1}=\alpha_{1}+4k+\frac{4k}{\tau}\notin 2\mathds{N}\), \(k=0,\dots,\frac{m-2}{2}\), and \(\alpha_{2k}=\alpha_{1}\tau+(4k-2)\tau+4k-2\notin 2\mathds{N}\), \(k=1,\dots,\frac{m}{2}\). A similar argument leads us to conclude that \(\alpha_{i}\notin 2\mathds{N}\) for all \(i=1,\dots,m\) if \(m\) is odd. \(\square\) In particular, we have that \(\alpha_{1}>2\), \(\alpha_{2}=(\alpha_{1}+2)\tau+2\), \(\alpha_{3}=\frac{\alpha_{2}+2}{\tau}+2\), \(\alpha_{4}=(\alpha_{3}+2)\tau+2\), \(\alpha_{5}=\frac{\alpha_{4}+2}{\tau}+2,\ \dots\) Furthermore, we have that \[4\pi\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\alpha_{i}-2\pi(\alpha_{1}-2)=(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2). \tag{2.8}\] Now, we define the \(\beta_{i}\)'s as follows: if \(m\) is even, then \[\beta_{l}=\tau^{\nu(l)}\bigg{[}m-l+\frac{m-l+1}{\tau}\bigg{]}=\begin{cases}(m-l)\tau+m-l+1,&\text{if $l$ is even}\\ m-l+\frac{m-l+1}{\tau}&\text{if $l$ is odd}\end{cases}, \tag{2.9}\] for \(l=1,2,\ldots,m\). If \(m\) is odd, then \[\beta_{l}=\tau^{\nu(l)}\left[m-l+1+\frac{m-l}{\tau}\right]=\begin{cases}(m-l+1)\tau+m-l,&\text{if $l$ is even}\\ \\ m-l+1+\frac{m-l}{\tau}&\text{if $l$ is odd}\end{cases} \tag{2.10}\] for \(l=1,2,\ldots,m\). In any case, whether \(m\) is odd or even, it is readily checked that \(\beta_{m}=1\) and \(\beta_{i}>0\) for all \(i=1,\ldots,m\). Note that we can rewrite \[\beta_{l}=\tau^{\nu(l)}\left[\frac{m-l+1}{\tau^{\nu(m)}}+\frac{m-l}{\tau^{\nu(m+1)}}\right]. \tag{2.11}\] Hence, we shall prove the following useful properties from the definitions (2.9) and (2.10). Note that \(1-\sigma(i)=\nu(i)=\sigma(i+1)\) and \(\sigma(i)=1-\nu(i)=\nu(i+1).\) **Lemma 2.3**.: _If the \(\beta_{i}\)'s are given by either (2.9) or (2.10) then for any \(l=2,\ldots,m\)_ \[\beta_{l}=\begin{cases}\tau\beta_{l-1}-\tau-1,&\text{ if $l$ is even}\\ \\ \frac{\beta_{l-1}}{\tau}-1-\frac{1}{\tau}&\text{ if $l$ is odd}\end{cases} \tag{2.12}\] _and for any \(l=1,\ldots,m-1\)_ \[\frac{\beta_{l}-1}{2\tau^{\nu(l)}}=\sum_{j=l+1}^{m}\frac{(-1)^{j+1}}{\tau^{\nu(j)}}\beta_{j}. \tag{2.13}\] _Furthermore, it holds that \(\frac{\beta_{l}}{\alpha_{l}}<\frac{\beta_{l-1}}{\alpha_{l-1}}\) for any \(l=2,\ldots,m\), so that \(\frac{\delta_{i}}{\delta_{j}}\to 0\) as \(\rho\to 0\) for any \(i<j.\)_ **Proof:** First, assume that \(m\) is even, so the \(\beta_{l}\)'s are given by (2.9). If \(l\) is even, then \(l-1\) is odd, so that \[\beta_{l-1}=m-l+1+\frac{m-l+2}{\tau}\qquad\text{and}\] \[\tau\beta_{l-1}-\tau-1=(m-l)\tau+\tau+m-l+2-\tau-1=\beta_{l}.\] Next, assume that \(l\) is odd (still with \(m\) even). Similarly as above, it follows that \(\frac{\beta_{l-1}}{\tau}-1-\frac{1}{\tau}=\beta_{l}.\) Arguing in the same way for the \(\beta_{l}\)'s given by (2.10) when \(m\) is odd, we conclude (2.12). Now, for any \(m\), we shall prove that \[\beta_{l}=\begin{cases}1+2\sum_{i=1}^{m-l}(-1)^{i+1}\tau^{\sigma(i)}\beta_{l+i},&\text{if $l$ is even}\\ \\ 1+2\sum_{i=1}^{m-l}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}\beta_{l+i}&\text{if $l$ is odd}\end{cases}. \tag{2.14}\] Assume that \(m\) is even.
If \(l\) is even then \(\nu(i+l)=1-\sigma(i)\) and from (2.11) we get that \[\sum_{i=1}^{m-l}(-1)^{i+1}\tau^{\sigma(i)}\beta_{l+i}=\sum_{i=1}^{m-l}(-1)^{i +1}\tau\left[\frac{\beta_{l}}{\tau}-\Big{(}1+\frac{1}{\tau}\Big{)}i\right]= \beta_{l}\sum_{i=1}^{m-l}(-1)^{i+1}-(\tau+1)\sum_{i=1}^{m-l}(-1)^{i+1}i\] and (2.14) follows, in view of \(m-l\) is even, \(\sum_{i=1}^{m-l}(-1)^{i+1}=0\) and \(\sum_{i=1}^{m-l}(-1)^{i+1}i=-\frac{m-l}{2}.\) Next, assume that \(l\) is odd (still \(m\) is even) so that \(\nu(i+l)=\sigma(i)\) and similarly as above \[\sum_{i=1}^{m-l}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}\beta_{l+i}=\sum_{i=1}^{m-l }(-1)^{i+1}\left[\beta_{l}-\Big{(}1+\frac{1}{\tau}\Big{)}i\right]=\beta_{l} \sum_{i=1}^{m-l}(-1)^{i+1}-\Big{(}1+\frac{1}{\tau}\Big{)}\sum_{i=1}^{m-l}(-1)^ {i+1}i=\frac{\beta_{l}-1}{2},\] in view of \(m-l\) is odd, \(\sum_{i=1}^{m-l}(-1)^{i+1}=1\) and \(\sum_{i=1}^{m-l}(-1)^{i+1}i=\frac{m-l+1}{2}.\) Arguing in the same way for \(\beta_{l}\)'s given by (2.10) when \(m\) is odd we conclude (2.14). Now, from (2.14) taking \(j=l+i\) we have that if \(l\) is even then \[\beta_{l}=1+2\sum_{i=1}^{m-l}(-1)^{i+1}\tau^{\sigma(i)}\beta_{l+i}=1+2\sum_{j=l+1 }^{m}(-1)^{j+1}\frac{\tau}{\tau^{\nu(j)}}\beta_{i}\] in view of \(\sigma(j-l)=\sigma(j)=1-\nu(j)\), and if \(l\) is odd then \[\beta_{l}=1+2\sum_{i=1}^{m-l}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}\beta_{l+i}=1+ 2\sum_{j=l+1}^{m}\frac{(-1)^{j}}{\tau^{\nu(j)}}\beta_{i},\] in view of \(\sigma(j-l)=\nu(j)\). Thus, we deduce (2.13). Now, we know that \(0<\alpha_{l-1}+\alpha_{l-1}\tau+2\beta_{l-1}+2\beta_{l-1}\tau\) for any \(l\). Hence, for \(l\) even we have that \(\beta_{l}=\tau\beta_{l-1}-\tau-1\) and \(\alpha_{l}=(\alpha_{l-1}+2)\tau+2\) by using (2.12) and (2.6) respectively, so that \[\alpha_{l-1}[\tau\beta_{l-1}-\tau-1]<[(\alpha_{l-1}+2)\tau+2]\beta_{l-1}\iff \alpha_{l-1}\beta_{l}<\alpha_{l}\beta_{l-1}\] By using (2.6) and (2.12), for \(l\) odd we have that \(\beta_{l}=\frac{\beta_{l-1}}{\tau}-\frac{1}{\tau}-1\) and \(\alpha_{l}=\frac{\alpha_{l-1}+2}{\tau}+2\), and it is readily checked that \(\alpha_{l-1}\beta_{l}<\alpha_{l}\beta_{l-1}\). Thus, we deduce that \(\frac{\delta_{i}}{\delta_{j}}\to 0\) as \(\rho\to 0\) for any \(i<j\), in view of \[\frac{\delta_{i}}{\delta_{j}}=\frac{d_{i}^{1/\alpha_{i}}\rho^{\beta_{i}/\alpha _{i}}}{d_{j}^{1/\alpha_{j}}\rho^{\beta_{j}/\alpha_{j}}}=\frac{d_{i}^{1/\alpha_ {i}}}{d_{j}^{1/\alpha_{j}}}\rho^{\frac{\beta_{i}}{\alpha_{i}}\frac{\beta_{j}}{ \alpha_{j}}}\qquad\text{ and }\qquad\frac{\beta_{j}}{\alpha_{j}}<\frac{\beta_{i}}{ \alpha_{i}}.\] This finishes the proof. Now, we define \(d_{i}\)'s by \[\log d_{m}=a_{m}\quad\text{and}\quad\log d_{l}=a_{l}+2\tau^{\nu(l)}\sum_{i=l+ 1}^{m}\frac{a_{i}}{\tau^{\nu(i)}}=\begin{cases}a_{l}+2\sum_{i=l+1}^{m}a_{i} \tau^{\sigma(i)},&\text{if $l$ is even}\\ a_{l}+2\sum_{i=l+1}^{m}\frac{a_{i}}{\tau^{\nu(i)}}&\text{if $l$ is odd}\end{cases} \tag{2.15}\] for \(l=1,2,\ldots,m-1\), where \[a_{l}=\log\Big{(}\frac{\tau^{\nu(l)}V_{\nu(l)}(0)}{2\alpha_{l}^{2}}\Big{)}+ \frac{(-1)^{l}}{\tau^{\sigma(l)}}2\pi(\alpha_{m}+2)h_{0}=\begin{cases}\log \Big{(}\frac{\tau V_{1}(0)}{2\alpha_{l}^{2}}\Big{)}+2\pi(\alpha_{m}+2)h_{0},& \text{if $l$ is even}\\ \log\Big{(}\frac{V_{0}(0)}{2\alpha_{l}^{2}}\Big{)}-\frac{2\pi}{\tau}(\alpha_{ m}+2)h_{0}&\text{if $l$ is odd}\end{cases}. 
\tag{2.16}\] when \(m\) even, while when \(m\) is odd \[a_{l} =\log\Big{(}\frac{\tau^{\nu(l)}V_{\nu(l)}(0)}{2\alpha_{l}^{2}} \Big{)}+(-1)^{l+1}\tau^{\nu(l)}2\pi(\alpha_{m}+2)h_{0} \tag{2.17}\] \[=\begin{cases}\log\Big{(}\frac{\tau V_{1}(0)}{2\alpha_{l}^{2}} \Big{)}-2\pi(\alpha_{m}+2)\tau h_{0},&\text{if $l$ is even}\\ \log\Big{(}\frac{V_{0}(0)}{2\alpha_{l}^{2}}\Big{)}+2\pi(\alpha_{m}+2)h_{0}& \text{if $l$ is odd}\end{cases}.\] **Lemma 2.4**.: _If \(d_{i}\)'s are given by (2.15) then for any \(l=1,\ldots,m-1\)_ \[\log d_{l}=\begin{cases}a_{l}+2\sum_{i=1}^{m-l}(-1)^{i+1}\tau^{\sigma(i)}\log d _{l+i},&\text{if $l$ is even}\\ a_{l}+2\sum_{i=1}^{m-l}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}\log d_{l+i}&\text{ if $l$ is odd}\end{cases} \tag{2.18}\] _and consequently,_ \[\frac{\log d_{l}-a_{l}}{2\tau^{\nu(l)}}=\sum_{j=l+1}^{m}\frac{(-1)^{j+1}}{\tau^{ \nu(j)}}\log d_{j}. \tag{2.19}\] **Proof:** First, assume that \(m\) is even. So, \(a_{l}\)'s are given by (2.16). If \(l\) is even, then we have that \(\nu(i+l)=1-\sigma(i)\), \(\frac{\tau}{\tau^{\nu(j)}}=\tau^{\sigma(j)}\) and \(m-l\) is even, so that, \(\sigma(m-l)=0\) and \[\sum_{i=1}^{m-l}(-1)^{i+1}\tau^{\sigma(i)}\log d_{l+i}=\sum_{i=1} ^{m-l-1}(-1)^{i+1}\tau^{\sigma(i)}\bigg{[}a_{l+i}+2\tau^{\nu(l+i)}\sum_{j=l+i+1 }^{m}\frac{a_{j}}{\tau^{\nu(j)}}\bigg{]}-a_{m}\] \[=\sum_{i=1}^{m-l-1}(-1)^{i+1}\tau^{\sigma(i)}a_{l+i}+2\tau\sum_{i =1}^{m-l-1}\sum_{j=l+i+1}^{m}(-1)^{i+1}\frac{a_{j}}{\tau^{\nu(j)}}-a_{m}\] \[=\sum_{j=l+1}^{m-1}(-1)^{j+1}\tau^{\sigma(j)}a_{j}+\sum_{j=l+2}^{ m}[1+(-1)^{j}]a_{j}\tau^{\sigma(j)}-a_{m}=\sum_{j=l+1}^{m}a_{j}\tau^{\sigma(j)}\] and (2.18) follows. Similarly as above, if \(l\) is odd, then we have that \(\nu(i+l)=\sigma(i)\), \(\frac{\tau}{\tau^{\nu(j)}}=\tau^{\sigma(j)}\) and \(m-l+1\) is even, so that, \(\sigma(m-l+1)=0\) and \[\sum_{i=1}^{m-l}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}\log d_{l+i}= \sum_{i=1}^{m-l-1}\frac{(-1)^{i+1}}{\tau^{\sigma(i)}}a_{l+i}+2\sum_{i=1}^{m-l- 1}\sum_{j=l+i+1}^{m}(-1)^{i+1}\frac{a_{j}}{\tau^{\nu(j)}}+\frac{a_{m}}{\tau}\] \[=\sum_{j=l+1}^{m-1}\frac{(-1)^{j}}{\tau^{\nu(j)}}a_{j}+\sum_{j=l+ 2}^{m}[1+(-1)^{j+1}]\frac{a_{j}}{\tau^{\nu(j)}}+\frac{a_{m}}{\tau}=\sum_{j=l+1 }^{m}\frac{a_{j}}{\tau^{\nu(j)}}\] and we conclude (2.18). Similar arguments work out in case \(m\) is odd. We deduce (2.19) by using the change \(j=l+i\). This conclude the proof. Now, \(\epsilon=\epsilon(\rho)\) is chosen so that \[\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\gamma_{i}=2\pi(\alpha_{1}-2), \qquad\text{where}\quad\gamma_{j}=\gamma_{\delta_{j},\epsilon}^{\alpha_{j}}, \;\;j=1,\ldots,m \tag{2.20}\] and \(\gamma_{\delta,\epsilon}^{\alpha}\) is given in Lemma 2.1. Precisely, \(\epsilon=\epsilon(\rho)\) is given by \[\epsilon^{\alpha_{1}-2}=\begin{cases}\frac{[V_{0}(0)]^{m}[\tau V_{1}(0)]^{ \frac{m}{\tau}}}{\prod_{i=1}^{m}\alpha_{i}^{4/\tau^{\nu(i)}}}e^{\frac{2\pi}{ \tau}(\alpha_{m}+2)h_{0}}\Big{(}\frac{\rho}{2}\Big{)}^{m+\frac{m}{\tau}}&\text {if $m$ is even}\\ \frac{[V_{0}(0)]^{m+1}[\tau V_{1}(0)]^{\frac{m-1}{\tau}}}{\prod_{i=1}^{m} \alpha_{i}^{4/\tau^{\nu(i)}}}e^{2\pi(\alpha_{m}+2)h_{0}}\Big{(}\frac{\rho}{2} \Big{)}^{m+1+\frac{m-1}{\tau}}&\text{if $m$ is odd}\end{cases} \tag{2.21}\] It is readily checked that \(\epsilon(\rho)\to 0^{+}\) as \(\rho\to 0^{+}\). 
Moreover, \(\epsilon^{\alpha_{1}-2}\sim\rho^{\beta_{1}+1}\), in view of \[\beta_{1}=\begin{cases}m-1+\frac{m}{\tau}&\text{if $m$ is even}\\ m+\frac{m-1}{\tau}&\text{if $m$ is odd}\end{cases} \tag{2.22}\] **Lemma 2.5**.: _If \(\epsilon\) is given by (2.21) then it holds (2.20) and \(\frac{\epsilon}{\delta_{1}}\to 0\) as \(\rho\to 0\)._ **Proof:** First, assume that \(m\) is even so that, from (2.21) it follows that \[(\alpha_{1}-2)\log\epsilon=m\log V_{0}(0)+\frac{m}{\tau}\log[\tau V_{1}(0)]+ \frac{2\pi}{\tau}(\alpha_{m}+2)h_{0}-4\sum_{i=1}^{m}\frac{\log\alpha_{i}}{\tau^ {\nu(i)}}+\Big{(}m+\frac{m}{\tau}\Big{)}\log\Big{(}\frac{\rho}{2}\Big{)}.\] On the other hand, from the definition of \(\delta_{i}\) and \(\gamma_{i}\) it follows that \[\Big{[}-\frac{1}{2\pi}\log\epsilon+h_{0}\Big{]}\gamma_{i}=-2\log d_{i}-2\beta _{i}\log\rho+4\pi\alpha_{i}h_{0}\] so that, \[\Big{[}-\frac{1}{2\pi}\log\epsilon+h_{0}\Big{]}\sum_{i=1}^{m}\frac{(-1)^{i+1}}{ \tau^{\nu(i)}}\gamma_{i}=-2\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\log d _{i}-2\log\rho\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\beta_{i}+4\pi h_{0 }\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\alpha_{i}.\] Hence, we compute the sums involved as follows. From Lemma 2.4 we have that for any \(m\), either odd or even, \[\log d_{1}=a_{1}+2\sum_{j=2}^{m}\frac{(-1)^{j}}{\tau^{\nu(j)}}\log d_{j}.\] Then by using (2.15) and \(m=2k\) for some \(k\in\mathds{N}\) we get that \[\sum_{j=1}^{m}\frac{(-1)^{j}}{\tau^{\nu(j)}}\log d_{j}=\frac{a_{1 }+\log d_{1}}{2}=\sum_{i=1}^{m}\frac{a_{i}}{\tau^{\nu(i)}}=\sum_{i=1}^{k} \frac{1}{\tau}\left[\log[\tau V_{1}(0)]+2\pi(\alpha_{m}+2)h_{0}-\log(2\alpha_{ 2i}^{2})\right]\] \[\quad+\sum_{i=1}^{k}\left[\log V_{0}(0)-\frac{2\pi}{\tau}(\alpha_ {m}+2)h_{0}-\log(2\alpha_{2i-1}^{2})\right]\] \[=\frac{m}{2}\log V_{0}(0)+\frac{m}{2\tau}\log[\tau V_{1}(0)]- \Big{(}m+\frac{m}{\tau}\Big{)}\frac{\log 2}{2}-2\sum_{i=1}^{m}\frac{\log\alpha_{i}}{ \tau^{\nu(i)}}.\] Also, from the choice of \(\beta_{i}\)'s in (2.9), it is readily checked that \[\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\beta_{i}=\sum_{i=1}^{m}(-1)^{i +1}\left[m+\frac{m+1}{\tau}-\Big{(}1+\frac{1}{\tau}\Big{)}i\right]=\frac{m}{2 }+\frac{m}{2\tau}.\] Therefore, by using (2.7) we obtain that \[\Big{[}-\frac{1}{2\pi}\log\epsilon+h_{0}\Big{]}\sum_{i=1}^{m} \frac{(-1)^{i+1}}{\tau^{\nu(i)}}\gamma_{i}=-m\log V_{0}(0)-\frac{m}{\tau}\log [\tau V_{1}(0)]+\Big{(}m+\frac{m}{\tau}\Big{)}\log 2+4\sum_{i=1}^{m}\frac{\log \alpha_{i}}{\tau^{\nu(i)}}\] \[\quad-\Big{(}m+\frac{m}{\tau}\Big{)}\log\rho+4\pi h_{0}\Big{(}-m -\frac{m}{\tau}\Big{)}=-(\alpha_{1}-2)\log\epsilon+\frac{2\pi}{\tau}(\alpha_ {m}+2)h_{0}-4\pi h_{0}\Big{(}m+\frac{m}{\tau}\Big{)}\] \[=-(\alpha_{1}-2)\log\epsilon+2\pi(\alpha_{1}-2)h_{0},\] in view of \(\frac{\alpha_{m}+2}{\tau}-2m-\frac{2m}{\tau}=\alpha_{1}-2\) and (2.20) follows. Similarly, (2.20) is proven if \(m\) is odd. Finally, taking into account (2.22), (2.21) and \(\delta_{1}^{\alpha_{1}}=d_{1}\rho^{\beta_{1}}\), we deduce that \(\frac{\epsilon}{\delta_{1}}\to 0\iff\frac{\beta_{1}+1}{\alpha_{1}-2}>\frac{ \beta_{1}}{\alpha_{1}}\), in view of \(\epsilon^{\alpha_{1}-2}\sim\rho^{\beta_{1}+1}\). Indeed, it is readily checked that \(\frac{\beta_{1}+1}{\alpha_{1}-2}>\frac{\beta_{1}+1}{\alpha_{1}}>\frac{\beta_ {1}}{\alpha_{1}}.\) This completes the proof. 
Let us stress that the behavior of \(\epsilon=\epsilon(\rho)\) and \(\delta_{j}=\delta_{j}(\rho)\)'s with respect to \(\rho\) is given by \[\frac{\epsilon}{\delta_{j}},\ \ \frac{\delta_{i}}{\delta_{j}}\to 0\ \ \ \text{as}\ \ \ \ \rho\to 0\ \ \text{for}\ \ i<j\ \ \text{and}\ \ \ \epsilon\to 0\ \ \text{as}\ \ \ \ \rho\to 0.\] Assume that \(\delta_{i}\)'s are given by (2.2) with \(\alpha_{i}\)'s, \(\beta_{i}\)'s and \(d_{i}\)'s defined in (2.4), (2.9)-(2.10) and (2.15), respectively, and \(\epsilon\) is given by (2.21). Notice that \(\frac{\log\delta_{i}}{\log\epsilon}=O(1)\) as \(\rho\to 0\), for all \(j=1,\ldots,m\). Now, define the _shrinking annuli_ \[A_{i}=\{x\in\Omega\ |\ \sqrt{\delta_{i-1}\delta_{i}}<|x|\leq\sqrt{\delta_{i} \delta_{i+1}}\},\qquad j=1,\ldots,m,\] where for simplicity we denote \(\delta_{0}=\frac{\epsilon^{2}}{\delta_{1}}\) and \(\delta_{m+1}=\frac{M_{0}^{2}}{\delta_{m}}\) with \(M_{0}=\sup\{|x|:x\in\Omega\}\), so that, \(\Omega_{\epsilon}=\cup_{j=1}^{m}A_{j}\), \(\cap_{j=1}^{m}A_{j}=\emptyset\) and \(\frac{A_{j}}{\delta_{j}}\) approaches to \(\mathrm{I\!R}^{2}\) as \(\rho\to 0\) for each \(j=1,\ldots,m\). **Lemma 2.6**.: _There exist \(\eta>0\) such that the following expansions hold_ \[\begin{split} U(\delta_{j}y)&=\frac{(-1)^{j+1}}{\tau^{ \nu(j)}}\left[-2\log\delta_{j}-a_{j}-\log\rho+\log\frac{|y|^{\alpha_{j}-2}}{(1 +|y|^{\alpha_{j}})^{2}}\right]\\ &\quad+(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta _{j}y,0)+O\left(\rho^{\eta}\right)\end{split} \tag{2.23}\] _for any \(j=1,\ldots,m\), uniformly for \(\delta_{j}y\in A_{j}\), \(j=1,\ldots,m\)._ **Proof:** From the expansions of \(P_{\epsilon}w_{j}\), \(j=1,\ldots,m\) and the definition of \(\gamma_{j}\) in Lemma 2.1, (2.8) and (2.20) we obtain that \[\begin{split} U(x)&=\sum_{i=1}^{m}\frac{(-1)^{i+1}} {\tau^{\nu(i)}}\left[w_{i}-\log(2\alpha_{i}^{2}\delta_{i}^{\alpha_{i}}) \right]+\frac{1}{2\pi}\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\gamma_{i }\log|x|\\ &\quad+\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\left[4\pi \alpha_{i}-\gamma_{i}\right]H(x,0)+O\left(\sum_{j=1}^{m}\left[\delta_{j}^{ \alpha_{j}}+\left(\frac{\epsilon}{\delta_{j}}\right)^{\alpha_{j}}\right]+ \epsilon\right)\\ &=\sum_{i=1}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\log\frac{1}{( \delta_{i}^{\alpha_{i}}+|x|^{\alpha_{i}})^{2}}+(\alpha_{1}-2)\log|x|+(-1)^{m+ 1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(x,0)\\ &\quad+O\left(\rho^{\eta}\right),\end{split} \tag{2.24}\] where we choose \[0<\eta\leq\min\bigg{\{}1,\frac{\beta_{1}+1}{\alpha_{1}-2},\min\Big{\{}\alpha_{ j}\Big{(}\frac{\beta_{1}+1}{\alpha_{1}-2}-\frac{\beta_{j}}{\alpha_{j}}\Big{)} \ :\ j=1,\ldots,m\Big{\}}\bigg{\}}, \tag{2.25}\] in view \(\beta_{j}\) are decreasing so that \(\delta_{j}^{\alpha_{j}}=O(\rho)\) for all \(j=1,\ldots,m\), \(\big{(}\frac{\epsilon}{\delta_{j}}\big{)}^{\alpha_{j}}=O\big{(}\rho^{\alpha_{ j}(\frac{\beta_{1}+1}{\alpha_{1}-2}-\frac{\beta_{j}}{\alpha_{j}})}\big{)}\) and \(\epsilon=O\big{(}\rho^{\frac{\beta_{1}+1}{\alpha_{1}-2}}\big{)}\). 
Now, from the choice of \(\delta_{i}\)'s in (2.2), (2.13) and (2.19) we have that for \(j\) odd \[\begin{split} 2\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{(\nu(i)}} \alpha_{i}\log\delta_{i}&=2\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{ (\nu(i)}}\log d_{i}+2\log\rho\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{(\nu(i)}} \beta_{i}\\ &=2\frac{\log d_{j}-a_{j}}{2}+2\frac{\beta_{j}-1}{2}\log\rho= \alpha_{j}\log\delta_{j}-a_{j}-\log\rho\end{split}\] and similarly, for \(j\) even \(2\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{(\nu(i)}}\alpha_{i}\log\delta_{i}=- \frac{1}{\tau}\big{[}\alpha_{j}\log\delta_{j}-a_{j}-\log\rho\big{]}.\) Thus, we rewrite as \[2\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{(\nu(i)}}\alpha_{i}\log\delta_{i}=\frac{ (-1)^{j+1}}{\tau^{\nu(j)}}\big{[}\alpha_{j}\log\delta_{j}-a_{j}-\log\rho\big{]}. \tag{2.26}\] On the other hand, for any \(y\in\frac{A_{1}}{\delta_{1}}\) it holds that \[\begin{split} U(\delta_{1}y)=&-2\alpha_{1}\log\delta_ {1}+\log\frac{1}{(1+|y|^{\alpha_{1}})^{2}}+(\alpha_{1}-2)\log(\delta_{1}|y|)\\ &+\sum_{i=2}^{m}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\log\frac{1}{( \delta_{i}^{\alpha_{i}}+\delta_{1}^{\alpha_{i}}|y|^{\alpha_{i}})^{2}}+(-1)^{m+ 1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta_{1}y,0)+O\left(\rho^{\eta} \right)\\ =&-(\alpha_{1}+2)\log\delta_{1}+\sum_{i=2}^{m}(-1)^{ i}\frac{2\alpha_{i}}{\tau^{\nu(i)}}\log\delta_{i}+\log\frac{|y|^{\alpha_{1}-2}}{(1+|y|^{ \alpha_{1}})^{2}}\\ &+\sum_{i=2}^{m}\frac{(-1)^{i}}{\tau^{\nu(i)}}2\log\left(1+ \left(\frac{\delta_{1}|y|}{\delta_{i}}\right)^{\alpha_{i}}\right)+(-1)^{m+1} \frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta_{1}y,0)+O\left(\rho^{\eta} \right)\\ =&-2\log\delta_{1}-a_{1}-\log\rho+\log\frac{|y|^{ \alpha_{1}-2}}{(1+|y|^{\alpha_{1}})^{2}}+(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}} (\alpha_{m}+2)H(\delta_{1}y,0)\\ &+O\left(\sum_{j=2}^{m}\left(\frac{\delta_{1}}{\delta_{j}}\right) ^{\frac{\alpha_{j}}{2}}+\rho^{\eta}\right),\end{split} \tag{2.27}\] in view of (2.26) and \[\log\left(1+\left(\frac{\delta_{1}|y|}{\delta_{i}}\right)^{\alpha_{i}}\right) =O\left(\left(\frac{\delta_{1}|y|}{\delta_{i}}\right)^{\alpha_{i}}\right)=O \left(\left(\frac{\delta_{1}}{\delta_{i}}\right)^{\frac{\alpha_{i}}{2}} \right),\quad i=2,\ldots,m.\] Choosing \(\eta>0\) satisfying (2.25) and \[0<\eta\leq\frac{1}{2}\min\left\{\alpha_{i}\Big{(}\frac{\beta_{1}}{\alpha_{1}} -\frac{\beta_{i}}{\alpha_{i}}\Big{)}\ :\ \ 1<i\right\} \tag{2.28}\] we get that \(\left(\frac{\delta_{1}}{\delta_{i}}\right)^{\frac{\alpha_{i}}{2}}=O\left(\rho ^{\eta}\right)\), \(i=2,\ldots,m.\) Similarly, for \(y\in\frac{A_{j}}{\delta_{j}}\), \(1<j<m\) we find that \[\begin{split}& U(\delta_{j}y)=\sum_{i=1}^{j-1}\frac{(-1)^{i+1}}{ \tau^{\nu(i)}}\bigg{[}-2\alpha_{i}\log(\delta_{j}|y|)-2\log\left(1+\Big{(} \frac{\delta_{i}}{\delta_{j}|y|}\Big{)}^{\alpha_{i}}\right)\bigg{]}+(\alpha_{ 1}-2)\log(\delta_{j}|y|)\\ &+\frac{(-1)^{j+1}}{\tau^{\nu(j)}}\bigg{[}-2\alpha_{j}\log\delta_ {j}+\log\frac{1}{(1+|y|^{\alpha_{j}})^{2}}\bigg{]}+\sum_{i=j+1}^{m}\frac{(-1) ^{i}}{\tau^{\nu(i)}}2\alpha_{i}\log\delta_{i}\\ &+\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{\nu(i)}}2\log\left(1+ \Big{(}\frac{\delta_{j}|y|}{\delta_{i}}\Big{)}^{\alpha_{i}}\right)+(-1)^{m+1} \frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta_{j}y,0)+O\left(\rho^{\eta} \right)\\ &=-2\log\delta_{j}\sum_{i=1}^{j}\frac{(-1)^{i+1}}{\tau^{\nu(i)}} \alpha_{i}+2\sum_{i=j+1}^{m}\frac{(-1)^{i}}{\tau^{\nu(i)}}\alpha_{i}\log\delta _{i}+(\alpha_{1}-2)\log\delta_{j}+(\alpha_{1}-2)\log|y|\\ &-2\log|y|\sum_{i=1}^{j-1}\frac{(-1)^{i+1}}{\tau^{\nu(i)}}\alpha_ 
{i}+\frac{(-1)^{j+1}}{\tau^{\nu(j)}}\log\frac{1}{(1+|y|^{\alpha_{j}})^{2}}\\ &+(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta_{j} y,0)+O\bigg{(}\sum_{i=1}^{j-1}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{ \frac{\alpha_{i}}{2}}+\sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}} \Big{)}^{\frac{\alpha_{i}}{2}}+\rho^{\eta}\bigg{)},\end{split} \tag{2.29}\] in view of \[\begin{split}&\log\left(1+\Big{(}\frac{\delta_{i}}{\delta_{j}|y|} \Big{)}^{\alpha_{i}}\right)=O\left(\Big{(}\frac{\delta_{i}}{\delta_{j}|y|} \Big{)}^{\alpha_{i}}\right)=O\left(\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)} ^{\frac{\alpha_{i}}{2}}\right),\quad i<j,\\ &\log\left(1+\Big{(}\frac{\delta_{j}|y|}{\delta_{i}}\Big{)}^{ \alpha_{i}}\right)=O\left(\Big{(}\frac{\delta_{j}|y|}{\delta_{i}}\Big{)}^{ \alpha_{i}}\right)=O\left(\Big{(}\frac{\delta_{j}}{\delta_{j}}\Big{)}^{\frac{ \alpha_{i}}{2}}\right),\quad j<i,\end{split}\] and by using again (2.20). Now, we choose \(\eta\) satisfying (2.25), (2.28) and smaller, if necessary, so that \[0<\eta\leq\frac{1}{2}\min\Big{\{}\alpha_{i}\Big{(}\frac{\beta_{i}}{\alpha_{i}}- \frac{\beta_{j}}{\alpha_{j}}\Big{)}\ :\ \ i<j\Big{\}},\ \ j=2,\ldots,m\] and \[0<\eta\leq\frac{1}{2}\min\Big{\{}\alpha_{i}\Big{(}\frac{\beta_{j}}{\alpha_{j}} -\frac{\beta_{i}}{\alpha_{i}}\ :\ \ j<i\Big{\}},\ \ j=1,\ldots,m-1\] thus, \(\big{(}\frac{\delta_{i}}{\delta_{j}}\big{)}^{\frac{\alpha_{i}}{2}}=O\left( \rho^{\eta}\right)\), \(i<j\) and \(\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\frac{\alpha_{i}}{2}}=O\left( \rho^{\eta}\right)\), \(j<i\). Hence, by using (2.7) and (2.26) for \(j\) even we get that \[U(\delta_{j}y)=-2\log\delta_{j}\Big{[}-j-\frac{j}{\tau}\Big{]}- \frac{1}{\tau}\left[\alpha_{j}\log\delta_{j}-a_{j}-\log\rho\right]+(\alpha_ {1}-2)\log\delta_{j}+(\alpha_{1}-2)\log|y|\] \[\ -2\log|y|\left[\alpha_{1}+j-2+\frac{j-2}{\tau}\right]-\frac{1} {\tau}\log\frac{1}{(1+|y|^{\alpha_{j}})^{2}}+(-1)^{m+1}\frac{2\pi}{\tau^{\nu (m)}}(\alpha_{m}+2)H(\delta_{j}y,0)+O\left(\rho^{\eta}\right)\] and we conclude (2.23). Similarly, for \(j\) odd we get that \[U(\delta_{j}y)=-2\log\delta_{j}\Big{[}\alpha_{1}+j-1+\frac{j-1}{ \tau}\Big{]}+\alpha_{j}\log\delta_{j}-a_{j}-\log\rho+(\alpha_{1}-2)\log\delta_ {j}+(\alpha_{1}-2)\log|y|\] \[\ -2\log|y|\left[-j+1-\frac{j-1}{\tau}\right]+\log\frac{1}{(1+|y| ^{\alpha_{j}})^{2}}+(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H( \delta_{j}y,0)+O(\rho^{\eta})\] and we obtain (2.23). Also, we have for any \(y\in\frac{A_{m}}{\delta_{m}}\) \[U(\delta_{m}y)=\sum_{i=1}^{m-1}\frac{(-1)^{i+1}}{\tau^{\nu(i)}} \left[-2\alpha_{i}\log(\delta_{m}|y|)-2\log\left(1+\Big{(}\frac{\delta_{i}}{ \delta_{m}|y|}\Big{)}^{\alpha_{i}}\right)\right]+(\alpha_{1}-2)\log(\delta_{m} |y|) \tag{2.30}\] \[+\frac{(-1)^{m+1}}{\tau^{\nu(m)}}\left[-2\alpha_{m}\log\delta_{m} +\log\frac{1}{(1+|y|^{\alpha_{m}})^{2}}\right]+(-1)^{m+1}\frac{2\pi}{\tau^{ \nu(m)}}(\alpha_{m}+2)H(\delta_{j}y,0)+O(\rho^{\eta})\] \[=\frac{(-1)^{m+1}}{\tau^{\nu(m)}}\left[-2\log\delta_{m}-a_{m}- \log\rho+\log\frac{|y|^{\alpha_{m}-2}}{(1+|y|^{\alpha_{m}})^{2}}+2\pi(\alpha_ {m}+2)H(\delta_{m}y,0)\right]+O(\rho^{\eta}).\] This completes the proof. ## 3. 
Problem in Liouville form (1.1) ### Error estimate In order to evaluate how well the approximation \(U\) satisfies the equation (1.1) (and get the invertibility of the linearized operator), we will use the norms \[\|h\|_{p}=\left(\int_{\Omega_{\epsilon}}|h(x)|^{p}\,dx\right)^{1/p}\qquad\text {and}\qquad\|h\|=\left(\int_{\Omega_{\epsilon}}|\nabla h(x)|^{2}\,dx\right)^{1 /2},\] the usual norms in the Banach spaces \(L^{p}(\Omega_{\epsilon})\) and \(H^{1}_{0}(\Omega_{\epsilon})\), respectively. Assume that \(\delta_{i}\)'s are given by (2.2) with \(\alpha_{i}\)'s, \(\beta_{i}\)'s and \(d_{i}\)'s defined in (2.4), (2.9)-(2.10) and (2.15), respectively, and \(\epsilon\) is given by (2.21). Let us evaluate the approximation rate of \(U\) in \(\|\cdot\|_{p}\), encoded in (1.12) **Lemma 3.1**.: _There exist \(\rho_{0}>0\), a constant \(C>0\) and \(1<p_{0}<2\), such that for any \(\rho\in(0,\rho_{0})\) and \(p\in(1,p_{0})\) it holds_ \[\|R\|_{p}\leq C\rho^{\eta_{p}} \tag{3.1}\] _for some \(\eta_{p}>0\)._ **Proof:** First, note that \(\Delta U=\sum_{i=1}^{m}\frac{(-1)^{i}}{\tau^{\nu(i)}}|x|^{\alpha_{i}-2}e^{w_{i}}\) so that, for any \(1<j<m\) and \(y\in\frac{A_{j}}{\delta_{j}}\) we have that \[\Delta U(\delta_{j}y)=\frac{(-1)^{j}}{\tau^{\nu(j)}}\frac{2\alpha_{j}^{2}|y|^{ \alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{\alpha_{j}})^{2}}+O\bigg{(}\sum_{i=1}^{j-1 }\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^{2} |y|^{\alpha_{i}+2}}+\sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)} ^{\alpha_{i}}\frac{|y|^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)}. \tag{3.2}\] Similarly, we obtain that \[\Delta U(\delta_{1}y)=-\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}(1+| y|^{\alpha_{j}})^{2}}+O\bigg{(}\sum_{i=2}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}} \Big{)}^{\alpha_{i}}\frac{|y|^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)},\qquad y \in\frac{A_{1}}{\delta_{1}} \tag{3.3}\] and \[\Delta U(\delta_{m}y)=\frac{(-1)^{m}}{\tau^{\nu(m)}}\frac{2\alpha_{j}^{2}|y|^{ \alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{\alpha_{j}})^{2}}+O\bigg{(}\sum_{i=1}^{m- 1}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^ {2}|y|^{\alpha_{i}+2}}\bigg{)},\qquad y\in\frac{A_{m}}{\delta_{m}} \tag{3.4}\] By using (2.16) and (2.23) (from Lemma 2.6) for \(y\in\frac{A_{j}}{\delta_{j}}\) and any \(j\) odd we have that \[\begin{split}\rho V_{0}(\delta_{j}y)e^{U(\delta_{j}y)}& =\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{ \alpha_{j}})^{2}}\frac{V_{0}(\delta_{j}y)}{V_{0}(0)}\exp\bigg{(}(-1)^{m+1} \frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)[H(\delta_{j}y,0)-h_{0}]+O(\rho^{\eta} )\bigg{)}\\ &=\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{ \alpha_{j}})^{2}}\left[1+O(\delta_{j}|y|+\rho^{\eta})\right]\end{split} \tag{3.5}\] and \[\begin{split}&\rho V_{1}(\delta_{j}y)e^{-\tau U(\delta_{j}y)}= \rho V_{1}(\delta_{j}y)\exp\bigg{(}-\tau\Big{[}-2\log\delta_{j}-a_{j}-\log \rho+\log\frac{|y|^{\alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{\alpha_{j}})^{2}}\\ &+(-1)^{m+1}\frac{2\pi}{\tau^{\nu(m)}}(\alpha_{m}+2)H(\delta_{j}y,0)\Big{]}+O(\rho^{\eta})\bigg{)}=O\bigg{(}\rho^{1+\tau}\delta_{j}^{2\tau} \frac{(1+|y|^{\alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}\bigg{)},\end{split} \tag{3.6}\] Similarly, by using (2.17) and \(1-\nu(m)=\sigma(m)\) for \(y\in\frac{A_{j}}{\delta_{j}}\) and \(j\) even \[\begin{split}\rho V_{1}(\delta_{j}y)e^{-\tau U(\delta_{j}y)}& =\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}\tau(1+|y| ^{\alpha_{j}})^{2}}\frac{V_{1}(\delta_{j}y)}{V_{1}(0)}\exp\bigg{(}(-1)^{m}2 
\pi\tau^{\sigma(m)}(\alpha_{m}+2)[H(\delta_{j}y,0)-h_{0}]+O(\rho^{\eta})\bigg{)} \\ &=\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}\tau(1+|y| ^{\alpha_{j}})^{2}}\left[1+O(\delta_{j}|y|+\rho^{\eta})\right]\end{split} \tag{3.7}\] and \[\rho V_{0}(\delta_{j}y)e^{U(\delta_{j}y)}=O\bigg{(}\rho^{1+1/\tau}\delta_{j}^{ 2/\tau}\frac{(1+|y|^{\alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}\bigg{)}. \tag{3.8}\] Hence, by using (3.3), (3.5) and (3.6) for \(\delta_{1}y\in A_{1}\) we get that \[R(\delta_{1}y)=\frac{1}{\delta_{1}^{2}}\frac{2\alpha_{1}^{2}|y|^{\alpha_{1}-2 }}{(1+|y|^{\alpha_{1}})^{2}}O(\delta_{1}|y|+\rho^{\eta})+O\bigg{(}\rho^{1+\tau} \delta_{1}^{2\tau}\frac{(1+|y|^{\alpha_{1}})^{2\tau}}{|y|^{(\alpha_{1}-2)\tau }}+\sum_{i=2}^{m}\Big{(}\frac{\delta_{1}}{\delta_{i}}\Big{)}^{\alpha_{i}}\frac{ |y|^{\alpha_{i}-2}}{\delta_{1}^{2}}\bigg{)}, \tag{3.9}\] by using (3.2), (3.5) and (3.6) for \(1<j<m\), \(j\) odd and \(\delta_{j}y\in A_{j}\) \[\begin{split} R(\delta_{j}y)&=\frac{1}{\delta_{j}^{2 }}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}O(\delta_{ j}|y|+\rho^{\eta})\\ &\quad+O\bigg{(}\rho^{1+\tau}\delta_{j}^{2\tau}\frac{(1+|y|^{ \alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}+\sum_{i=1}^{j-1}\Big{(}\frac{ \delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^{2}|y|^{ \alpha_{i}+2}}+\sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{ \alpha_{i}}\frac{|y|^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)},\end{split} \tag{3.10}\] by using (3.2), (3.7) and (3.8) for \(1<j<m\), \(j\) even and \(\delta_{j}y\in A_{j}\) \[\begin{split} R(\delta_{j}y)&=\frac{1}{\delta_{j}^{2 }}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}O(\delta_{j}| y|+\rho^{\eta})\\ &\quad+O\bigg{(}\rho^{1+1/\tau}\delta_{j}^{2/\tau}\frac{(1+|y|^{ \alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}+\sum_{i=1}^{j-1}\Big{(}\frac{ \delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^{2}|y|^{\alpha_{ i}+2}}+\sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{i}} \frac{|y|^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)}\end{split} \tag{3.11}\] and by (3.4) for \(\delta_{m}y\in A_{m}\) \[R(\delta_{m}y)= \,\frac{1}{\delta_{m}^{2}\tau^{\nu(m)}}\frac{2\alpha_{m}^{2}|y|^{ \alpha_{m}-2}}{(1+|y|^{\alpha_{m}})^{2}}O(\delta_{m}|y|+\rho^{\eta}) \tag{3.12}\] \[+O\bigg{(}\rho^{1+\tau^{(-1)m+1}}\delta_{m}^{2\tau^{(-1)m+1}} \frac{(1+|y|^{\alpha_{m}})^{2\tau^{(-1)m+1}}}{|y|^{(\alpha_{m}-2)\tau^{(-1)m+1 }}}+\sum_{i=1}^{m-1}\Big{(}\frac{\delta_{i}}{\delta_{m}}\Big{)}^{\alpha_{i}} \frac{1}{\delta_{m}^{2}|y|^{\alpha_{i}+2}}\bigg{)}\] By (3.9)-(3.12), we finally get that there exist \(\rho_{0}>0\) small enough and \(p_{0}>1\) close to \(1\), so that for all \(0<\rho\leq\rho_{0}\) and \(1<p\leq p_{0}\) \[\int_{\Omega_{\epsilon}}|R(x)|^{p}\,dx=\sum_{j=1}^{m}\int_{A_{j}}|R(x)|^{p}\, dx=\sum_{j=1}^{m}\delta_{j}^{2}\int_{A_{j}}|R(\delta_{j}y)|^{p}\,dy=O\left( \rho^{p\eta_{p}}\right),\] for some \(\eta_{p}>0\), see Appendix, Lemma A.1 for more details. This completes the proof. ### The nonlinear problem and proof of Theorem 1.1 In this subsection, we will look for a solution \(u\) of (1.1) in the form \(u=U+\phi\), for some small remainder term \(\phi\). In terms of \(\phi\), the problem (1.1) is equivalent to find \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) so that \[\left\{\begin{array}{ll}L(\phi)=-[R+\Lambda(\phi)+N(\phi)],&\mbox{ in }\Omega_{\epsilon},\\ \phi=0,&\mbox{ on }\partial\Omega_{\epsilon}.\end{array}\right. 
\tag{3.13}\] where the linear operators \(L\) and \(\Lambda\) are defined as \[L(\phi)=\Delta\phi+K(x)\phi,\quad K(x)=\sum_{i=1}^{m}|x|^{\alpha_{i}-2}e^{w_{i}} \tag{3.14}\] and \[\Lambda(\phi)=\rho[V_{0}(x)e^{U}+\tau V_{1}(x)e^{-\tau U}]\phi-K\phi \tag{3.15}\] The nonlinear part \(N\) is given by \[N(\phi)=\rho V_{0}(x)e^{U}\big{(}e^{\phi}-\phi-1\big{)}-\rho V_{1}(x)e^{-\tau U }\big{(}e^{-\tau\phi}+\tau\phi-1\big{)}. \tag{3.16}\] **Proposition 3.1**.: _For any \(p>1\) there exists \(\rho_{0}>0\) so that for all \(0<\rho\leq\rho_{0}\), \(h\in L^{p}(\Omega_{\epsilon})\) there is a unique solution \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) of_ \[\left\{\begin{array}{ll}L(\phi)=h&\mbox{ in }\Omega_{\epsilon}\\ \phi=0&\mbox{ on }\partial\Omega_{\epsilon}.\end{array}\right. \tag{3.17}\] _Moreover, there is a constant \(C>0\) independent of \(\rho\) such that_ \[\|\phi\|\leq C|\log\rho\|h\|_{p}. \tag{3.18}\] The latter proposition implies that the unique solution \(\phi=T(h)\) of (3.17) defines a continuous linear map from \(L^{p}(\Omega_{\epsilon})\) into \(H^{1}_{0}(\Omega_{\epsilon})\), with norm bounded by \(C|\log\rho|\). We are now in position to study the nonlinear problem. **Proposition 3.2**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\), the problem_ \[\left\{\begin{array}{ll}L(\phi)=-[R+\Lambda(\phi)+N(\phi)],&\mbox{ in }\Omega_{\epsilon}\\ \phi=0,&\mbox{ on }\partial\Omega_{\epsilon}\end{array}\right. \tag{3.19}\] _admits a unique solution \(\phi(\rho)\in H^{1}_{0}(\Omega_{\epsilon})\), where \(N\), \(\Lambda\) and \(R\) are given by (3.16), (3.15) and (1.12), respectively. Moreover, there is a constant \(C>0\) such that for some \(\eta_{p}>0\)_ \[\|\phi\|_{\infty}\leq C\rho^{\eta_{p}}|\log\rho|\] Here, \(\eta_{p}\) is the same as in (3.1). We shall use the following estimates. **Lemma 3.2**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\) it holds_ \[\|\Lambda(\phi)\|_{p}\leq C\rho^{\eta^{\prime}_{p}}\|\phi\|, \tag{3.20}\] _for all \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) with \(\|\phi\|\leq M\rho^{\eta_{p}}\|\log\rho|\), for some \(\eta^{\prime}_{p}>0\)._ **Proof:** Arguing in the same way as in [13, Lemma 3.3], for simplicity, denote \(W=\rho[V_{0}e^{U}+\tau V_{1}e^{-\tau U}]\), so that the linear operator \(\Lambda\) is re-written as \(\Lambda(\phi)=(W-K)\phi\). By using (3.5)-(3.6) and (3.7)-(3.8), we find that for \(i\) odd \[\delta_{i}^{2}W(\delta_{i}y)=\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|_{ i}^{\alpha})^{2}}\left[1+O(|\delta_{i}y|+\rho^{\eta})\right]+O\Big{(}\rho^{1+ \tau}\delta_{i}^{2+2\tau}\frac{(1+|y|^{\alpha_{i}})^{2\tau}}{|y|^{(\alpha_{i}- 2)\tau}}\Big{)}\] uniformly for \(\delta_{i}y\in A_{i}\) and for \(i\) even \[\delta_{i}^{2}W(\delta_{i}y)=\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|_{ i}^{\alpha})^{2}}\left[1+O(|\delta_{i}y|+\rho^{\eta})\right]+O\Big{(}\rho^{1+1 /\tau}\delta_{i}^{2+2/\tau}\frac{(1+|y|^{\alpha_{i}})^{2/\tau}}{|y|^{(\alpha_ {i}-2)/\tau}}\Big{)}\] uniformly for \(x\in A_{i}\) and \(i\) even. 
Hence, for any \(q\geq 1\) and for \(i\) odd there holds \[\|W-K\|_{L^{q}(A_{i})}^{q}\leq C\bigg{[}\delta_{i}^{2-q}\int_{\frac{A_{i}}{\delta_{i}}}\left| \frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-1}}{(1+|y|^{\alpha_{i}})^{2}}\right|^{q} dy+\rho^{\eta q}\delta_{i}^{2-2q}\int_{\frac{A_{i}}{\delta_{i}}}\left| \frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}\right|^{q}dy\] \[+\rho^{(1+\tau)q}\delta_{i}^{2+2\tau q}\int_{\frac{A_{i}}{\delta_{ i}}}\left|\frac{(1+|y|^{\alpha_{i}})^{2\tau}}{|y|^{(\alpha_{i}-2)\tau}}\right|^{q} dy+\sum_{j<i}\delta_{i}^{2-2q}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{ \alpha_{j}q}\int_{\frac{A_{i}}{\delta_{i}}}\left|\frac{1}{|y|^{\alpha_{j}+2}} \right|^{q}dy\] \[+\sum_{j>i}\delta_{i}^{2-2q}\Big{(}\frac{\delta_{j}}{\delta_{j}} \Big{)}^{\alpha_{j}q}\int_{\frac{A_{i}}{\delta_{i}}}|y|^{(\alpha_{j}-2)q}dy \bigg{]}\leq C\rho^{\eta\eta^{\prime}_{1,q}}\] for some \(\eta^{\prime}_{1,q}>0\), see Lemma A.1. Similarly, there holds \(\|W-K\|_{L^{q}(A_{i})}^{q}\leq C\rho^{\eta\eta^{\prime}_{2,q}}\) for any \(q\geq 1\) and for \(i\) even for some \(\eta^{\prime}_{2,q}>0\). Hence, we get that \[\|\Lambda(\phi)\|_{p}\leq\|(W-K)\,\phi\|_{p}\leq\|W-K\|_{pr}\,\|\phi\|_{ps}\leq C \rho^{\eta^{\prime}_{p}}\|\phi\|,\] where \(\eta^{\prime}_{p}=\min\left\{\eta^{\prime}_{1,pr},\eta^{\prime}_{2,ps}\right\}\) with \(r\), \(s\) satisfying \(\frac{1}{r}+\frac{1}{s}=1\). Furthermore, we have used the Holder's inequality \(\|uv\|_{q}\leq\|u\|_{qr}\|v\|_{qs}\) with \(\frac{1}{r}+\frac{1}{s}=1\) and the inclusions \(L^{p}(\Omega_{\epsilon})\hookrightarrow L^{pr}(\Omega_{\epsilon})\) for any \(r>1\) and \(H^{1}_{0}(\Omega_{\epsilon})\hookrightarrow L^{q}(\Omega_{\epsilon})\) for any \(q>1\). Let us stress that we can choose \(p\), \(r\) and \(s\) close enough to \(1\) such that \(\eta^{\prime}_{p}>0\). \(\square\) **Lemma 3.3**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\) it holds_ \[\|N(\phi_{1})-N(\phi_{2})\|_{p}\leq C\rho^{\eta^{\prime\prime}_{p}}\|\phi_{1}- \phi_{2}\| \tag{3.21}\] _for all \(\phi_{i}\in H^{1}_{0}(\Omega_{\epsilon})\) with \(\|\phi_{i}\|\leq M\rho^{\eta_{p}}\|\log\rho|\), \(i=1,2\), and for some \(\eta^{\prime\prime}_{p}>0\). In particular, we have that_ \[\|N(\phi)\|_{p}\leq C\rho^{\eta^{\prime\prime}_{p}}\ \|\phi\| \tag{3.22}\] _for all \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) with \(\|\phi\|\leq M\rho^{\eta_{p}}|\log\rho|\)._ **Proof:** We will argue in the same way as in [1, Lemma 5.1], see also [13, Lemma 3.4]. First, denoting \(f_{i}(\phi)=V_{i}(x)e^{(-\tau)^{i}(U+\phi)}\) we point out that \[N(\phi)=\sum_{i=0}^{1}\rho\left\{f_{i}(\phi)-f_{i}(0)-f_{i}^{\prime}(0)[\phi] \right\}\quad\text{and}\quad N(\phi_{1})-N(\phi_{2})=\sum_{i=0}^{1}\rho f_{i}^{ \prime\prime}(\tilde{\phi}_{\mu_{i}})[\phi_{\theta_{i}},\phi_{1}-\phi_{2}], \tag{3.23}\] by the mean value theorem, where \(\phi_{\theta_{i}}=\theta_{i}\phi_{1}+(1-\theta_{i})\phi_{2}\), \(\tilde{\phi}_{\mu_{i}}=\mu_{i}\phi_{\theta_{i}}\) for some \(\theta_{i},\mu_{i}\in[0,1]\), \(i=0,1\), and \(f_{i}^{\prime\prime}(\phi)[\psi,v]=\tau^{2i}V_{i}(x)e^{(-\tau)^{i}(U+\phi)}\psi v\). Using Holder's inequalities we get that \[\|f_{i}^{\prime\prime}(\phi[\psi,v]\|_{p}\leq|\tau|^{2i}\|V_{i}e^{(-\tau)^{i}(U+ \phi)}\|_{pri}\|\psi\|_{ps_{i}}\|v\|_{pt_{i}} \tag{3.24}\] with \(\frac{1}{r_{i}}+\frac{1}{s_{i}}+\frac{1}{t_{i}}=1\). We have used the Holder's inequality and \(\|uvw\|_{q}\leq\|u\|_{qr}\|v\|_{qs}\|w\|_{qt}\) with \(\frac{1}{r}+\frac{1}{s}+\frac{1}{t}=1\). 
Now, let us estimate \(\|V_{i}e^{(-\tau)^{i}(U+\phi)}\|_{pr_{i}}\) with \(\phi=\tilde{\phi}_{\mu_{i}}\), \(i=0,1\). By (3.5) and the change of variable \(x=\delta_{i}y\) let us estimate \[\int_{A_{i}}\left|V_{0}e^{U}\right|^{q}dx=O\left(\rho^{-q}\delta_{i}^{2-2q} \int_{\frac{A_{i}}{\delta_{i}}}\left|\frac{|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{ i}})^{2}}\right|^{q}\left[1+O\big{(}\rho^{\eta}+\delta_{i}|y|\big{)}\right]^{q} \right)=O\left(\rho^{\frac{\delta_{i}}{\delta_{i}}(2-2q)-q}\right) \tag{3.25}\] for any \(i\) odd and similarly, by (3.7) we get that \[\int_{A_{i}}\left|V_{1}e^{-\tau U}\right|^{q}dx=O\left(\rho^{\frac{\beta_{i}}{ \alpha_{i}}(2-2q)-q}\right) \tag{3.26}\] for any \(i\) even, in view of (2.2). By (3.6) we get the estimate \[\int_{A_{i}}\left|V_{0}e^{U}\right|^{q}dx = O\Big{(}\rho^{\frac{q}{\alpha}}\delta_{i}^{2+\frac{2q}{\gamma}} \int\limits_{\frac{A_{i}}{\delta_{i}}}\Big{[}\frac{|y|^{\alpha_{i}-2}}{(1+|y| ^{\alpha_{i}})^{2}}\Big{]}^{-\frac{q}{\gamma}}dy\Big{)}=O\big{(}\rho^{-q+q \eta_{q}}\big{)}\] for all \(i\) even, in view of Lemma A.1. Similarly, by (3.8) we deduce that \[\int_{A_{i}}\left|V_{1}e^{-\tau U}\right|^{q}dx=O(\rho^{-q+q\eta_{q}}) \tag{3.27}\] for \(i\) odd, again in view of Lemma A.1. Therefore, by using (3.25) and (3.27) and that \(\frac{\beta_{i}}{\alpha_{i}}\) are decreasing, we deduce that \[\left\|V_{0}e^{U}\right\|_{q}^{q}=\sum_{\underset{i\text{ odd}}{i=1}}^{m}O \left(\rho^{\frac{\beta_{i}}{\alpha_{i}}(2-2q)-q}\right)+O\big{(}\rho^{-q+q \eta_{q}}\big{)}=O\left(\rho^{\frac{\beta_{1}}{\alpha_{1}}(2-2q)-q}\right) \quad\text{for any }q\geq 1 \tag{3.28}\] and, by using (3.26) and (3.27) we obtain that \[\left\|V_{1}e^{-\tau U}\right\|_{q}^{q}=O\left(\rho^{\frac{\beta_{1}}{\alpha_ {1}}(2-2q)-q}\right)\quad\text{for any }q\geq 1. \tag{3.29}\] On the other hand, using the estimate \(|e^{a}-1|\leq C|a|\) uniformly for any \(a\) in compact subsets of \({\rm I\!R}\), Holder's inequality with \(\frac{1}{s_{i}^{\prime}}+\frac{1}{t_{i}^{\prime}}=1\), \(i=0,1\), \(\|\tilde{\phi}_{\mu_{i}}\|\leq M\rho^{\eta_{p}}|\log\rho|\leq C\), \(i=0,1\) and triangle inequality we find that \[\left\|V_{i}e^{(-\tau)^{i}(U+\tilde{\phi}_{\mu_{i}})}\right\|_{q}=O\big{(} \rho^{\eta_{0,q\varkappa_{i}^{\prime}}-1+\eta_{p}}|\log\rho|+\rho^{\eta_{0,q} -1}\big{)}, \tag{3.30}\] where for \(q>1\) we denote \(\eta_{0,q}=\frac{\beta_{1}(2-2q)}{\alpha_{1}q}\) (on the line of [17, eq.(3.14) proof of Lemma 3.3]). Note that \(\eta_{0,q}<0\) for any \(q>1\). Also, choosing \(q\) and \(s_{i}^{\prime}\), \(i=0,1\), close enough to \(1\), we get that \(0<\eta_{p}+\eta_{0,qs_{i}^{\prime}}\). Now, we can conclude the estimate by using (3.23)-(3.30) to get \[\|N(\phi_{1})-N(\phi_{2})\|_{p}\,\leq\,C\sum_{i=0}^{1}\rho\|V_{i}e^{(-\tau)^{i }(U+\tilde{\phi}_{\mu_{i}})}\|_{pr_{i}}\|\phi_{\theta_{i}}\|\|\phi_{1}-\phi_{2} \|\leq\,C\sum_{i=0}^{1}\rho^{\eta_{p}+\eta_{0,pr_{i}}}|\log\rho|\|\phi_{1}- \phi_{2}\|\] and (3.21) follows, where \(\eta_{p}^{\prime\prime}=\frac{1}{2}\min\{\eta_{p}+\eta_{0,pr_{i}}\ :\ i=0,1\}>0\) choosing \(r_{i}\) close to \(1\) so that \(\eta_{p}+\eta_{0,pr_{i}}>0\) for \(i=0,1\). Let us stress that \(p>1\) is chosen so that \(\eta_{p}>0\) and \(\eta_{p}^{\prime}>0\). 
**Proof of the Proposition 3.2.** Taking into account Proposition 3.1, estimates (3.1), (3.20), (3.21), (3.22) and standard arguments it turns out that for all \(\rho>0\) sufficiently small \(\mathcal{A}\) is a contraction mapping of \(\mathcal{F}_{M}\) (for \(M\) large enough), and therefore a unique fixed point of \(\mathcal{A}\) exists in \(\mathcal{F}_{M}\), where \(\mathcal{A}(\phi):=-T(R+\Lambda(\phi)+N(\phi))\) and \(\mathcal{F}_{M}=\{\phi\in H^{1}_{0}(\Omega_{\epsilon}):\|\phi\|\leq M\rho^{ \eta_{p}}|\log\rho|\}\). See the proof of Proposition 3.2 in [13] for more details. **Proof of the Theorem 1.1.** Taking into account (1.10) and the definition of \(U\), the existence of a solution to equation (1.1) follows directly by Proposition 3.2. The asymptotic behavior of \(u_{\rho}\) as \(\rho\to 0^{+}\) follows from (2.23) in Lemma 2.6 and estimate for \(\phi\) in Proposition 3.2. Furthermore, we have that \(u_{\rho}\) has the desired concentration properties (1.6) as \(\rho\to 0^{+}\) locally uniformly in \(\bar{\Omega}\setminus\{0\}\). ## 4. Problem in mean field form (1.2) ### Error estimate Assume that \(\delta_{i}\)'s are given by (2.2) with \(\alpha_{i}\)'s, \(\beta_{i}\)'s and \(d_{i}\)'s defined in (2.4), (2.9)-(2.10) and (2.15), respectively, and \(\epsilon\) is given by (2.21). Let us evaluate the error term in \(\|\cdot\|_{p}\), encoded in (1.13). **Lemma 4.1**.: _There exist \(\rho_{0}>0\), a constant \(C>0\) and \(1<p_{0}<2\), such that for any \(\rho\in(0,\rho_{0})\) and \(p\in(1,p_{0})\) it holds_ \[\|\mathcal{R}\|_{p}\leq C\rho^{\eta_{p}} \tag{4.1}\] _for some \(\eta_{p}>0\)._ **Proof:** By using (3.5) and (3.8) we have that \[\begin{split}\int_{\Omega_{\epsilon}}V_{0}(x)e^{U}=\sum_{j=1}^{ m}\int_{\frac{A_{j}}{\delta_{j}}}\delta_{j}^{2}V_{0}(\delta_{j}y)e^{U(\delta_{j}y)} dy=\sum_{j=1\atop j\text{ odd}}^{m}\frac{1}{\rho}\int_{\frac{A_{j}}{\delta_{j}}}\frac{2\alpha_{j}^{2}|y|^{ \alpha_{j}-2}}{\delta_{j}^{2}(1+|y|^{\alpha_{j}})^{2}}\left[1+O(\delta_{j}|y|+ \rho^{\eta})\right]dy\\ +\sum_{j=1\atop j\text{ even}}^{m}O\bigg{(}\rho^{1/\tau}\delta_{j }^{2+2/\tau}\int_{\frac{A_{j}}{\delta_{j}}}\frac{(1+|y|^{\alpha_{j}})^{2/\tau} }{|y|^{(\alpha_{j}-2)/\tau}}dy\bigg{)}=\frac{1}{\rho}\bigg{[}4\pi\sum_{j=1 \atop j\text{ odd}}^{m}\alpha_{j}+O(\rho^{\eta})\bigg{]}\end{split} \tag{4.2}\] and similarly, from (3.6) and (3.7) we get that \[\int_{\Omega_{\epsilon}}V_{1}(x)e^{-\tau U}=\!\frac{1}{\rho\tau}\bigg{[}4\pi \sum_{j=1\atop j\text{ even}}^{m}\alpha_{j}+O(\rho^{\eta})\bigg{]}, \tag{4.3}\] in view of \(\int\limits_{\mathbb{R}^{2}}\frac{|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2} }dy=\frac{2\pi}{\alpha_{i}}.\) From assumptions (1.8)-(1.9) it follows that \(\lambda_{0}=4\pi\sum_{j=1\atop j\text{ odd}}^{m}\alpha_{j}\) and \(\lambda_{1}\tau^{2}=4\pi\sum_{j=1\atop j\text{ even}}^{m=j=1\atop j\text{ even}}\alpha_{j}\) so that, by using (4.2)-(4.3) we find that for \(y\in\frac{A_{j}}{\delta_{j}}\) and any \(j\) odd \[\lambda_{0}\frac{V_{0}(\delta_{j}y)e^{U(\delta_{j}y)}}{\int_{\Omega_{\epsilon} }V_{0}(x)e^{U}}=\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta_{j}^{2}(1+|y| ^{\alpha_{j}})^{2}}\left[1+O(\delta_{j}|y|+\rho^{\eta})\right] \tag{4.4}\] and \[\lambda_{1}\tau\frac{V_{1}(\delta_{j}y)e^{-\tau U(\delta_{j}y)}}{\int_{\Omega_ {\epsilon}}V_{1}(x)e^{-\tau U}}=O\bigg{(}\rho^{1+\tau}\delta_{j}^{2\tau}\frac{ (1+|y|^{\alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}\bigg{)}. 
\tag{4.5}\] Similarly, for any \(y\in\frac{A_{j}}{\delta_{j}}\) and \(j\) even \[\lambda_{0}\frac{V_{0}(\delta_{j}y)e^{U(\delta_{j}y)}}{\int_{\Omega_{\epsilon} }V_{0}(x)e^{U}}=O\bigg{(}\rho^{1+1/\tau}\delta_{j}^{2/\tau}\frac{(1+|y|^{ \alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}\bigg{)} \tag{4.6}\] and \[\lambda_{1}\tau\frac{V_{1}(\delta_{j}y)e^{-\tau U(\delta_{j}y)}}{\int_{\Omega_ {\epsilon}}V_{1}(x)e^{-\tau U}}=\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{\delta _{j}^{2}\tau(1+|y|^{\alpha_{j}})^{2}}\left[1+O(\delta_{j}|y|+\rho^{\eta}) \right]. \tag{4.7}\] Hence, similar to the proof of Lemma 3.1 by using (3.2)-(3.4) and (4.4)-(4.7) for \(\delta_{1}y\in A_{1}\) we get that \[\mathcal{R}(\delta_{1}y)=\!\frac{1}{\delta_{1}^{2}}\frac{2\alpha_{1}^{2}|y|^{ \alpha_{1}-2}}{(1+|y|^{\alpha_{1}})^{2}}O(\delta_{1}|y|+\rho^{\eta})+O\bigg{(} \rho^{1+\tau}\delta_{1}^{2\tau}\frac{(1+|y|^{\alpha_{1}})^{2\tau}}{|y|^{(\alpha _{1}-2)\tau}}+\sum_{i=2}^{m}\Big{(}\frac{\delta_{1}}{\delta_{i}}\Big{)}^{\alpha _{i}}\frac{|y|^{\alpha_{i}-2}}{\delta_{1}^{2}}\bigg{)}, \tag{4.8}\] for \(1<j<m\), \(j\) odd and \(\delta_{j}y\in A_{j}\) \[\begin{split}\mathcal{R}(\delta_{j}y)=&\frac{1}{ \delta_{j}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2 }}O(\delta_{j}|y|+\rho^{\eta})\\ &+O\bigg{(}\rho^{1+\tau}\delta_{j}^{2\tau}\frac{(1+|y|^{\alpha_{j} })^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}+\sum_{i=1}^{j-1}\Big{(}\frac{\delta_{i} }{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^{2}|y|^{\alpha_{i}+2}}+ \sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{i}}\frac{|y |^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)}\end{split} \tag{4.9}\] for \(1<j<m\), \(j\) even and \(\delta_{j}y\in A_{j}\) \[\begin{split}\mathcal{R}(\delta_{j}y)=&\frac{1}{ \delta_{j}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2 }}O(\delta_{j}|y|+\rho^{\eta})\\ &+O\bigg{(}\rho^{1+1/\tau}\delta_{j}^{2/\tau}\frac{(1+|y|^{\alpha_ {j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}+\sum_{i=1}^{j-1}\Big{(}\frac{\delta_{i} }{\delta_{j}}\Big{)}^{\alpha_{i}}\frac{1}{\delta_{j}^{2}|y|^{\alpha_{i}+2}}+ \sum_{i=j+1}^{m}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{i}}\frac{|y |^{\alpha_{i}-2}}{\delta_{j}^{2}}\bigg{)}\end{split} \tag{4.10}\] and for \(\delta_{m}y\in A_{m}\) \[\begin{split}\mathcal{R}(\delta_{m}y)=&\,\frac{1}{ \delta_{m}^{2}\tau^{\nu(m)}}\frac{2\alpha_{m}^{2}|y|^{\alpha_{m}-2}}{(1+|y|^{ \alpha_{m}})^{2}}O(\delta_{m}|y|+\rho^{\eta})\\ &+O\bigg{(}\rho^{1+\tau^{(-1)^{m+1}}}\delta_{m}^{2\tau^{(-1)^{m+1 }}}\frac{(1+|y|^{\alpha_{m}})^{2\tau^{(-1)^{m+1}}}}{|y|^{(\alpha_{m}-2)\tau^{(- 1)^{m+1}}}}+\sum_{i=1}^{m-1}\Big{(}\frac{\delta_{i}}{\delta_{m}}\Big{)}^{\alpha _{i}}\frac{1}{\delta_{m}^{2}|y|^{\alpha_{i}+2}}\bigg{)}\end{split} \tag{4.11}\] By (4.8)-(4.11) and similar ideas to get (3.1) we conclude the proof. ### The nonlinear problem and proof of Theorem 1.2 In this section we shall study the following nonlinear problem: \[\left\{\begin{array}{ll}\mathcal{L}(\phi)=-[\mathcal{R}+\Lambda_{0}(\phi)+ \mathcal{N}(\phi)]&\mbox{in }\Omega_{\epsilon}\\ \phi=0,&\mbox{on }\partial\Omega_{\epsilon},\end{array}\right. 
\tag{4.12}\] where the linear operators \(\mathcal{L},\Lambda_{0}\) are defined as \[\mathcal{L}(\phi)=\Delta\phi+K_{0}\left(\phi-\frac{1}{\lambda_{0}}\int_{ \Omega_{\epsilon}}K_{0}\phi dx\right)+K_{1}\left(\phi-\frac{1}{\lambda_{1}\tau ^{2}}\int_{\Omega_{\epsilon}}K_{1}\phi dx\right) \tag{4.13}\] and \[\begin{split}\Lambda_{0}(\phi)=&\,\lambda_{0}\frac{V_{ 0}e^{U}}{\int_{\Omega_{\epsilon}}V_{0}e^{U}dx}\left(\phi-\frac{\int_{\Omega_{ \epsilon}}V_{0}e^{U}\phi dx}{\int_{\Omega_{\epsilon}}V_{0}e^{U}dx}\right)+ \lambda_{1}\tau^{2}\frac{V_{1}e^{-\tau U}}{\int_{\Omega_{\epsilon}}V_{1}e^{- \tau U}dx}\left(\phi-\frac{\int_{\Omega_{\epsilon}}V_{1}e^{-\tau U}\phi dx}{ \int_{\Omega_{\epsilon}}V_{1}(x)e^{-\tau U}dx}\right)\\ &-K_{0}\left(\phi-\frac{1}{\lambda_{0}}\int_{\Omega_{\epsilon}}K_{ 0}\phi dx\right)-K_{1}\left(\phi-\frac{1}{\lambda_{1}\tau^{2}}\int_{\Omega_{ \epsilon}}K_{1}\phi dx\right)\end{split} \tag{4.14}\] with \[K_{0}=\sum_{k=1\atop k\text{ odd}}^{m}|x|^{\alpha_{k}-2}e^{w_{k}},\quad K_{1}= \sum_{k=1\atop k\text{ even}}^{m}|x|^{\alpha_{k}-2}e^{w_{k}}. \tag{4.15}\] The nonlinear term \(\mathcal{N}(\phi)\) is given by \[\begin{split}\mathcal{N}(\phi)=&\lambda_{0}\left[ \frac{V_{0}e^{U+\phi}}{\int_{\Omega_{\epsilon}}V_{0}e^{U+\phi}dx}-\frac{V_{0}e^ {U}}{\int_{\Omega_{\epsilon}}V_{0}e^{U}dx}-\frac{V_{0}e^{U}}{\int_{\Omega_{ \epsilon}}V_{0}e^{U}dx}\left(\phi-\frac{\int_{\Omega_{\epsilon}}V_{0}e^{U} \phi dx}{\int_{\Omega_{\epsilon}}V_{0}e^{U}dx}\right)\right]\\ &-\lambda_{1}\tau\left[\frac{V_{1}e^{-\tau(U+\phi)}dx}{\int_{ \Omega_{\epsilon}}V_{1}e^{-\tau(U+\phi)}dx}-\frac{V_{1}e^{-\tau U}}{\int_{ \Omega_{\epsilon}}V_{1}e^{-\tau U}dx}+\tau\frac{V_{1}e^{-\tau U}}{\int_{\Omega _{\epsilon}}V_{1}e^{-\tau U}dx}\left(\phi-\frac{\int_{\Omega_{\epsilon}}V_{1}e^ {-\tau U}\phi dx}{\int_{\Omega_{\epsilon}}V_{1}e^{-\tau U}dx}\right)\right]. \end{split} \tag{4.16}\] It is readily checked that \(\phi\) is a solution to (3.13) if and only if \(u_{\epsilon}\) given by (1.11) is a solution to (3.1). In Section 5 we will prove the following result. **Proposition 4.1**.: _For any \(p>1,\) there exists \(\rho_{0}>0\) and \(C>0\) such that for any \(\rho\in(0,\rho_{0})\) and \(h\in L^{p}(\Omega_{\epsilon})\) there exists a unique \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) solution of_ \[\mathcal{L}(\phi)=h\ \ \mbox{in}\ \ \Omega_{\epsilon},\quad\phi=0\ \ \mbox{on}\ \ \partial\Omega_{\epsilon}, \tag{4.17}\] _which satisfies_ \[\|\phi\|\leq C|\log\rho|\,\|h\|_{p}. \tag{4.18}\] **Lemma 4.2**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\) it holds_ \[\|\Lambda_{0}(\phi)\|_{p}\leq C\rho^{\sigma_{p}^{\prime}}\|\phi\|, \tag{4.19}\] _for all \(\phi\in H^{1}_{0}(\Omega_{\epsilon})\) with \(\|\phi\|\leq M\rho^{\sigma_{p}}|\log\rho|\), for some \(\sigma_{p}^{\prime}>0\)._ **Proof:** Arguing in the same way as in [13, Lemma 3.3], denote \(W_{i}=\frac{\lambda_{i}\tau^{2i}V_{i}e^{(-\tau)^{i}U}}{\int_{\Omega_{\epsilon}}V_ {i}e^{(-\tau)^{i}U}dx}\) for \(i=0,1\). By using (4.4), (4.6), Lemma A.1 and similar computations to prove (3.20), we find that \(\|W_{0}-K_{0}\|_{L^{q}(A_{i})}^{q}\leq C\rho^{q\sigma_{0,q}^{\prime}}\) for any \(i\) for some \(\sigma_{0,q}^{\prime}\). Similarly, we find that \(\|W_{1}-K_{1}\|_{L^{q}(A_{i})}^{q}\leq C\rho^{q\sigma_{1,q}^{\prime}}\) for any \(i\) for some \(\sigma_{1,q}^{\prime}\). It is possible to see that taking \(q>1\) close enough to \(1\), we get that \(\sigma_{i,q}^{\prime}>0\) for \(i=0,1\). 
Notice that \(\Lambda_{0}\) is a linear operator and we re-write \(\Lambda_{0}(\phi)\) as \[\Lambda_{0}(\phi)=\sum_{i=0}^{1}\bigg{[}\left(W_{i}-K_{i}\right)\phi-\frac{1}{ \lambda_{i}\tau^{2i}}\left(W_{i}-K_{i}\right)\int_{\Omega_{\epsilon}}W_{i}\phi +\frac{1}{\lambda_{i}\tau^{2i}}K_{i}\int_{\Omega_{\epsilon}}\left(K_{i}-W_{i} \right)\phi\bigg{]}.\] Hence, we get that \[\|\Lambda_{0}(\phi)\|_{p} \leq\sum_{i=0}^{1}\bigg{[}\left\|W_{i}-K_{i}\right\|_{pr_{i0}}\| \phi\|_{ps_{i0}}+\frac{\|W_{i}\|_{r_{i1}}}{\lambda_{i}\tau^{2i}}\left\|W_{i}-K _{i}\right\|_{p}\|\phi\|_{s_{i1}}+\frac{\|K_{i}\|_{p}}{\lambda_{i}\tau^{2(i-1) }}\left\|K_{i}-W_{i}\right\|_{r_{i2}}\left\|\phi\right\|_{s_{i2}}\bigg{]}\] \[\leq C\sum_{i=0}^{1}\bigg{[}\rho^{\sigma_{i,pr_{i0}}^{\prime}}\|\phi \|+\rho^{\sigma_{i,p}^{\prime}+\sigma_{3,r_{i1}}}\|\phi\|+\rho^{\sigma_{i,r_{ i2}}^{\prime}+\sigma_{3,p}}\|\phi\|\bigg{]}\] and (4.19) follows, where \(\sigma_{p}^{\prime}=\min\left\{\sigma_{i,pr_{i0}}^{\prime};\sigma_{i,p}^{ \prime}+\sigma_{3,r_{i1}};\sigma_{i,r_{i2}}^{\prime}+\sigma_{3,p}\mid i=0,1\right\}\) with \(r_{ij}\), \(s_{ij}\), \(i=0,1\), \(j=0,1,2\) satisfying \(\frac{1}{r_{ij}}+\frac{1}{s_{ij}}=1\). We have used that \[\|W_{0}\|_{r_{01}}^{r_{01}}\] \[\leq C\rho^{\sigma_{3,r_{01}}}\] and similarly, \(\|W_{1}\|_{r_{11}}^{r_{11}}\leq C\rho^{\sigma_{3,r_{11}}}\), where \(\sigma_{3,q}=\frac{\beta_{1}}{\alpha_{1}}(2-2q)\) and similarly that \(\|K_{i}\|_{p}^{p}\leq C\rho^{\sigma_{3,p}}\), \(i=0,1\). Note that \(\frac{2-2q}{\alpha_{j}q}<1\) for any \(j=1,\ldots,m\). Furthermore, we have used the Holder's inequality \(\|uv\|_{q}\leq\|u\|_{qr}\|v\|_{qs}\) with \(\frac{1}{r}+\frac{1}{s}=1\) and the inclusions \(L^{p}(\Omega_{\epsilon})\hookrightarrow L^{pr}(\Omega_{\epsilon})\) for any \(r>1\) and \(H_{0}^{1}(\Omega_{\epsilon})\hookrightarrow L^{q}(\Omega_{\epsilon})\) for any \(q>1\). Let us stress that we can choose \(p\), \(r_{ij}\) and \(s_{ij}\), \(i=1,2\), \(j=0,1,2\), close enough to \(1\) such that \(\sigma_{p}^{\prime}>0\). **Lemma 4.3**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\) it holds_ \[\|\mathcal{N}(\phi_{1})-\mathcal{N}(\phi_{2})\|_{p}\leq C\rho^{\sigma_{p}^{ \prime\prime}}\|\phi_{1}-\phi_{2}\| \tag{4.20}\] _for all \(\phi_{i}\in H_{0}^{1}(\Omega_{\epsilon})\) with \(\|\phi_{i}\|\leq M\rho^{\sigma_{p}}|\log\rho|\), \(i=1,2\), and for some \(\sigma_{p}^{\prime\prime}>0\). 
In particular, we have that_ \[\|\mathcal{N}(\phi)\|_{p}\leq C\rho^{\sigma_{p}^{\prime\prime}}\left\|\phi\right\| \tag{4.21}\] _for all \(\phi\in H_{0}^{1}(\Omega_{\epsilon})\) with \(\|\phi\|\leq M\rho^{\sigma_{p}}|\log\rho|\)._ **Proof:** Arguing in the same way as in [1, Lemma 5.1], we denote \(g_{i}(\phi)=\frac{V_{i}(x)e^{(-\gamma)^{i}(U+\phi)}}{\int_{\Omega_{\epsilon}}V _{i}(x)e^{(-\gamma)^{i}(U+\phi)}}\) and point out that \[\mathcal{N}(\phi)=\sum_{i=0}^{1}\lambda_{i}(-\tau)^{i}\left\{g_{i}(\phi)-g_{i} (0)-g_{i}^{\prime}(0)[\phi]\right\}\quad\text{and by the mean value theorem we get that}\] \[\mathcal{N}(\phi_{1})-\mathcal{N}(\phi_{2})=\sum_{i=1}^{2}\lambda_{i}(-\tau)^{ i}g_{i}^{\prime\prime}(\tilde{\phi}_{\mu_{i}})[\phi_{\theta_{i}},\phi_{1}-\phi_{2}], \tag{4.22}\] where \(\phi_{\theta_{i}}=\theta_{i}\phi_{1}+(1-\theta_{i})\phi_{2}\), \(\tilde{\phi}_{\mu_{i}}=\mu_{i}\phi_{\theta_{i}}\) for some \(\theta_{i},\mu_{i}\in[0,1]\), \(i=0,1\), and \[g_{i}^{\prime\prime}(\phi)[\psi,v]= \tau^{2i}\bigg{[}\frac{V_{i}(x)e^{u_{i}}\psi v}{\int_{\Omega_{ \epsilon}}V_{i}(x)e^{u_{i}}}-\frac{V_{i}(x)e^{u_{i}}v\int_{\Omega_{\epsilon}}V _{i}(x)e^{u_{i}}\psi}{\big{(}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}\big{)}^ {2}}-\frac{V_{i}(x)e^{u_{i}}\psi\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}v}{ \big{(}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}\big{)}^{2}}\] \[-\frac{V_{i}(x)e^{u_{i}}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}} \psi v}{\big{(}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}\big{)}^{2}}+2\frac{V_{ i}(x)e^{u_{i}}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}\psi\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}} \psi}{\big{(}\int_{\Omega_{\epsilon}}V_{i}(x)e^{u_{i}}\big{)}^{3}}\bigg{]},\] where for simplicity we denote \(u_{i}=(-\tau)^{i}(U+\phi)\). Using Holder's inequalities we get that \[\left\|g_{i}^{\prime\prime}(\phi)[\psi,v]\right\|_{p}\leq C\left[\frac{\|V_{i}e^{u_{i}}\|_{pr_{i}}}{\|V_{i}e^{u_{i}}\| _{1}}+\frac{\|V_{i}e^{u_{i}}\|_{pr_{i}}^{2}}{\|V_{i}e^{u_{i}}\|_{1}^{2}}+\frac{ \|V_{i}e^{u_{i}}\|_{pr_{i}}^{3}}{\|V_{i}e^{u_{i}}\|_{1}^{3}}\right]\|\psi\|\| \|v\|, \tag{4.23}\] with \(\frac{1}{r_{i}}+\frac{1}{s_{i}}+\frac{1}{t_{i}}=1\), \(\frac{1}{r_{i}}+\frac{1}{q_{i}}=1\), \(\frac{1}{pr_{i}}+\frac{1}{r_{i}}=1\). We have used the Holder's inequality, the inclusions presented in the previous Lemma and \(\|uvw\|_{q}\leq\|u\|_{q\tau}\|v\|_{qs}\|w\|_{qt}\) with \(\frac{1}{r}+\frac{1}{s}+\frac{1}{t}=1\). Now, let us estimate \(\frac{\|V_{i}e^{u_{i}}\|_{pr_{i}}}{\|V_{i}e^{u_{i}}\|_{1}}\) with \(\phi=\tilde{\phi}_{\mu_{i}}\), \(i=0,1\). Taking into account (3.28)-(3.29) and (4.2)-(4.3), we obtain that for \(i=0,1\) \[\left\|V_{i}e^{(-\tau)^{i}U}\right\|_{q}^{q}=O\left(\rho^{\frac{\beta_{1}}{ \alpha_{1}}(2-2q)-q}\right)\quad\text{for any }q\geq 1\quad\text{and}\quad\left\|V_{i}e^{(-\tau)^{i}U} \right\|_{1}\geq\frac{C_{i}}{\rho}\quad\text{for some }C_{i}>0.\] Note that \(\sigma_{3,q}-1\leq\frac{2-(\alpha_{i}+2)q}{\alpha_{i}q}\quad\text{for any}\quad i=1,\ldots,m\). By the previous estimates we find that \(\left\|V_{i}e^{(-\tau)^{i}U+\tilde{\phi}_{\mu_{i}}}\right\|_{q}=O\big{(}\rho^ {\eta_{0,q_{i}^{*}}-1+\eta_{p}}|\log\rho|+\rho^{\eta_{0,q}-1}\big{)}\) (on the line of [17, eq. (3.14) proof of Lemma 3.3]. 
Also, choosing \(s_{i}\), \(i=0,1\), close enough to \(1\), we get that \(\sigma_{p}+\eta_{0,s_{i}}>0\) and \[\left\|V_{i}e^{(-\tau)^{i}(U+\tilde{\phi}_{\mu_{i}})}\right\|_{1}\geq\frac{C_ {i}}{\rho}-C\rho^{\eta_{0,s_{i}}-1+\sigma_{p}}|\log\rho|\geq\frac{1}{\rho} \left(C_{i}-C\rho^{\eta_{0,s_{i}}+\sigma_{p}}\ |\log\rho|\right)\geq\frac{C_{i}}{2\rho}.\] Taking \(q=pr_{i}\), we obtain the estimate for \(i=0,1\) \[\frac{\|V_{i}e^{(-\tau)^{i}(U+\tilde{\phi}_{\mu_{i}})}\|_{pr_{i}}}{\|V_{i}e^{ (-\tau)^{i}(U+\tilde{\phi}_{\mu_{i}})}\|_{1}}=O\left(\rho^{\sigma_{3,pr_{i}}} \Big{[}\rho^{\sigma_{p}+\sigma_{3,pr_{i}s_{i}}-\sigma_{pr_{i}}}|\log\rho|+1 \Big{]}\right)=O\left(\rho^{\sigma_{3,pr_{i}}}\right) \tag{4.24}\] choosing \(s_{i}>1\) close enough to \(1\) so that \(\sigma_{p}+\sigma_{3,pr_{i}s_{i}}-\sigma_{pr_{i}}>0\), \(i=1,2\). Now, we can conclude the estimate by using (4.22)-(4.24) to get \[\|\mathcal{N}(\phi_{1})-\mathcal{N}(\phi_{2})\|_{p}\,\leq\,C\sum_{i=0}^{1} \lambda_{i}\rho^{\sigma_{p}+3\sigma_{3,pr_{i}}}|\log\rho|\|\phi_{1}-\phi_{2}\| \leq\,C\rho^{\sigma_{p}^{\prime\prime}}\|\phi_{1}-\phi_{2}\|,\] where \(\sigma_{p}^{\prime\prime}=\frac{1}{2}\min\{\sigma_{p}+3\sigma_{3,pr_{i}}\ :\ i=0,1\}>0\) choosing \(r_{i}\) close to \(1\) so that \(\sigma_{p}+3\sigma_{3,pr_{i}}>0\) for \(i=0,1\). Let us stress that \(p>1\) is chosen so that \(\sigma_{p}>0\). Taking into account previous estimates (4.1), (4.19), (4.20) and (4.21), and by using the same argument as in the proof of Proposition 3.2 we conclude the following **Proposition 4.2**.: _There exist \(p_{0}>1\) and \(\rho_{0}>0\) so that for any \(1<p<p_{0}\) and all \(0<\rho\leq\rho_{0}\), the problem (4.12) admits a unique solution \(\phi(\rho)\in H^{1}_{0}(\Omega_{\epsilon})\), where \(\mathcal{L}\), \(\mathcal{R}\), \(\Lambda_{0}(\phi)\) and \(\mathcal{N}\) are given by (4.13), (1.13), (4.14) and (4.16), respectively. Moreover, there is a constant \(C>0\) such that_ \[\|\phi\|_{\infty}\leq C\rho^{\sigma_{p}}|\log\rho|,\] _for some \(\sigma_{p}>0\)._ **Proof of the Theorem 1.2.** Arguing in the same way as in the proof of Theorem 1.1, the existence of a solution (1.10) to equation (1.2) follows directly by Proposition 4.2 and the definition of \(U\) in (1.11). ## 5. The linear theory In this section we present the invertibility of the linear operators \(L\) and \(\mathcal{L}\) defined in (3.14) and (4.13) respectively. Roughly speaking in the scale annulus \(\frac{A_{i}}{\delta_{j}}\) the operator \(L\) apporaches to the following linear operator in \(\mathrm{I\!R}^{2}\) \[L_{j}(\phi)=\Delta\phi+\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_ {j}})^{2}}\phi,\qquad j=1,\ldots,m.\] It is well known that, in case \(\alpha_{j}\in 2{\rm I\!N}\), the bounded solutions of \(L_{j}(\phi)=0\) in \({\rm I\!R}^{2}\) are precisely linear combinations of the functions \[Y_{1j}(y)=\frac{|y|^{\frac{\alpha_{j}}{2}}}{1+|y|^{\alpha_{j}}}\cos\Big{(}\frac {\alpha_{j}}{2}\theta\Big{)},\ \ \ Y_{2j}(y)=\frac{|y|^{\frac{\alpha_{j}}{2}}}{1+|y|^{\alpha_{j}}}\sin\Big{(} \frac{\alpha_{j}}{2}\theta\Big{)}\ \ \ {\rm and}\ \ \ Y_{0j}(y)=\,\frac{1-|y|^{\alpha_{j}}}{1+|y|^{ \alpha_{j}}},\] which are written in polar coordinates for \(j=1,\ldots,m\). See [10] for a proof. In our case, we will consider solutions of \(L_{j}(\phi)=0\) with \(\alpha_{j}\notin 2{\rm I\!N}\) for all \(j=1,\ldots,m\) such that \(\int_{{\rm I\!R}^{2}}|\nabla\phi(y)|^{2}\,dy<+\infty\), which are multiples of \(Y_{0j}\), see [1, Theorem A.1] for a proof. 
Another key element in the study of \({\mathcal{L}}\), which shows technical details, is to get rid of the presence of \[\tilde{c}_{j}(\phi)=-\frac{1}{\lambda_{j}\tau^{2j}}\int_{\Omega_{\epsilon}}K_ {j}\phi\qquad j=0,1. \tag{5.1}\] Let us introduce the following Banach spaces for \(j=1,2,\ldots,m\) \[L_{\alpha_{j}}({\rm I\!R}^{2})=\bigg{\{}u\in W^{1,2}_{\rm loc}({\rm I\!R}^{2} )\ :\ \int_{{\rm I\!R}^{2}}\frac{|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}|u(y)| ^{2}\,dy<+\infty\bigg{\}} \tag{5.2}\] and \[H_{\alpha_{j}}({\rm I\!R}^{2})=\bigg{\{}u\in W^{1,2}_{\rm loc}({\rm I\!R}^{2} )\ :\ \int_{{\rm I\!R}^{2}}|\nabla u(y)|^{2}\,dy+\int_{{\rm I\!R}^{2}}\frac{|y|^{ \alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}|u(y)|^{2}\,dy<+\infty\bigg{\}} \tag{5.3}\] endowed with the norms \[\|u\|_{L_{\alpha_{j}}}:=\left(\int_{{\rm I\!R}^{2}}\frac{|y|^{\alpha_{j}-2}}{ (1+|y|^{\alpha_{j}})^{2}}|u(y)|^{2}\,dy\right)^{1/2}\] and \[\|u\|_{H_{\alpha_{j}}}:=\left(\int_{{\rm I\!R}^{2}}|\nabla u(y)|^{2}\,dy+ \int_{{\rm I\!R}^{2}}\frac{|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}|u(y) |^{2}\,dy\right)^{1/2}.\] We point out the compactness of the embedding \(i_{\alpha_{j}}:H_{\alpha_{j}}({\rm I\!R}^{2})\to L_{\alpha_{j}}({\rm I\!R}^{2})\), (see for example [20]). **Proof of Proposition 3.1.** Let us assume the existence of \(p>1\), sequences \(\rho_{n}\to 0\), \(\epsilon_{n}=\epsilon(\rho_{n})\to 0\), functions \(h_{n}\in L^{p}(\Omega_{\epsilon_{n}})\), \(\phi_{n}\in H^{1}_{0}(\Omega_{\epsilon_{n}})\) such that \[L(\phi_{n})=h_{n}\ \ {\rm in}\ \ \Omega_{\epsilon_{n}},\ \ \phi_{n}=0\ \ {\rm on}\ \ \partial\Omega_{\epsilon_{n}}\] \(\|\phi_{n}\|=1\) and \(|\log\rho_{n}|\,\|h_{n}\|_{p}=o(1)\) as \(n\to+\infty\). We will omit the subscript \(n\) in \(\delta_{i,n}=\delta_{i}\). Recall that \(\delta_{i}^{\alpha_{i}}=d_{i,n}\rho_{n}^{\beta_{i}}\). Now, define \(\Phi_{i,n}(y):=\phi_{i,n}(\delta_{i}y)\) for \(y\in\Omega_{i,n}:=\delta_{i}^{-1}\Omega_{\epsilon_{n}}\). Thus, extending \(\phi_{n}=0\) in \({\rm I\!R}^{2}\setminus\Omega_{\epsilon_{n}}\) and arguing in the same way as in [17, Claim 1, section 4] we can prove the following fact. We provide a sketch of the proof. **Claim 5.1**.: _There holds that the sequence \(\Phi_{i,n}\) converges (up to subsequence) to \(\Phi_{i}^{*}=a_{i}Y_{0i}\) for \(i=1,\ldots,m\), weakly in \(H_{\alpha_{i}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{i}}({\rm I\!R}^{2})\) as \(n\to+\infty\) for some constant \(a_{i}\in{\rm I\!R}\), \(i=1,\ldots,m\)._ **Proof:** First, notice that \(\|\Phi_{i,n}\|_{H^{1}_{0}(\Omega_{i,n})}=1\), for \(i=1,\ldots,m\). Then, we want to prove that there is a constant \(M>0\) such for all \(n\) (up to a subsequence) \(\|\Phi_{i,n}\|^{2}_{L_{\alpha_{i}}}\leq M\). Notice that for any \(i\in\{1,\ldots,m\}\) we find that in \(\Omega_{i,n}\) \[\Delta\Phi_{i,n}+\delta_{i}^{2}K(\delta_{i}y)\Phi_{i,n}=\delta_{i}^{2}h_{n}( \delta_{i}y). \tag{5.4}\] Furthermore, it follows that \(\Phi_{i,n}\to\Phi_{i}^{*}\) weakly in \(H^{1}_{0}(\Omega_{i,n})\) and strongly in \(L^{p}(K)\) for any \(K\) compact sets in \({\mathbb{R}}^{2}\). 
Now, we multiply (5.4) by \(\Phi_{i,n}\) for any \(i\in\{1,\ldots,m\}\), integrate by parts and obtain that \[\sum_{i=1}^{m}2\alpha_{i}^{2}\|\Phi_{i,n}\|^{2}_{L_{\alpha_{i}}}=\,1+o(1) \tag{5.5}\] since \[\delta_{i}^{2}K(\delta_{i}y)=\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+O\bigg{(}\sum_{j<i}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+\sum_{i<j}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{j}}\bigg{)}\ \ \ {\rm for\ all}\ i=1,\ldots,m \tag{5.6}\] uniformly for \(y\) on compact subsets of \(\mathrm{I\!R}^{2}\setminus\{0\}\). Thus, we deduce that \(\|\Phi_{i,n}\|_{L_{\alpha_{i}}}^{2}=O(1)\). Therefore, the sequence \(\{\Phi_{i,n}\}_{n}\) is bounded in \(H_{\alpha_{i}}(\mathrm{I\!R}^{2})\), so that there is a subsequence \(\{\Phi_{i,n}\}_{n}\) and functions \(\Phi_{i}^{*}\), \(i=1,\ldots,m\), such that \(\{\Phi_{i,n}\}_{n}\) converges to \(\Phi_{i}^{*}\) weakly in \(H_{\alpha_{i}}(\mathrm{I\!R}^{2})\) and strongly in \(L_{\alpha_{i}}(\mathrm{I\!R}^{2})\). Hence, taking into account (5.4) and (5.6), we deduce that \(\Phi_{i}^{*}\) is a solution to \[\Delta\Phi+\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}\Phi=0,\qquad i=1,\ldots,m,\qquad\mbox{in }\mathrm{I\!R}^{2}\setminus\{0\}.\] It is standard that each \(\Phi_{i}^{*}\), \(i=1,\ldots,m\), extends to a solution in the whole \(\mathrm{I\!R}^{2}\). Hence, by using symmetry assumptions if necessary, we get that \(\Phi_{i}^{*}=a_{i}Y_{0i}\) for some constant \(a_{i}\in\mathrm{I\!R}\), \(i=1,\ldots,m\).

For the next step, consider for any \(j\in\{1,\ldots,m\}\) the functions \(Z_{0j}(x)=Y_{0j}(\delta_{j}^{-1}x)=\frac{\delta_{j}^{\alpha_{j}}-|x|^{\alpha_{j}}}{\delta_{j}^{\alpha_{j}}+|x|^{\alpha_{j}}}\), so that \(-\Delta Z_{0j}=|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}.\) By arguments similar to those used to obtain the expansion in Lemma 2.1, it follows that \(P_{\epsilon}Z_{0j}=Z_{0j}+1-\gamma_{0}G(x,0)+O(\rho^{\tilde{\sigma}})\) uniformly in \(\Omega_{\epsilon}\) for some \(\tilde{\sigma}>0\), as a consequence of [13, Lemma 4.1] with \(m=1\) and \(\xi_{1}=0\). Now, introduce the coefficient \(\gamma_{0}\) as \[\gamma_{0}\left[-\frac{1}{2\pi}\log\epsilon+H(0,0)\right]=2, \tag{5.7}\] so that from (2.21), \(\gamma_{0}=-\frac{4\pi(\alpha_{1}-2)}{\beta_{1}+1}\cdot\frac{1}{\log\rho}+O\big{(}\frac{1}{|\log\rho|^{2}}\big{)}\). For simplicity, we shall denote \(\Pi_{i}(y)=\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}\), for \(i=1,\ldots,m\).

**Claim 5.2**.: _For all \(j=1,\ldots,m\), the following limit exists, up to subsequences,_ \[\mu_{j}=\lim_{n\to+\infty}\log\rho_{n}\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}(y)\,dy. \tag{5.8}\] _Furthermore, it holds that_ \[2\sum_{i=1}^{m}a_{i}=\bigg{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\sum_{i=1}^{m}(-1)^{i+1}\frac{\beta_{i}}{2\pi\alpha_{i}}\bigg{)}\mu_{1}, \tag{5.9}\] _and, for all \(j=1,\ldots,m-1\),_ \[\mu_{j}=(-1)^{j-1}\mu_{1}. \tag{5.10}\]

**Proof:** Consider as test functions the \(\gamma_{0}^{-1}PZ_{0j}\)'s, together with the assumption on \(h_{n}\), namely \(|\log\rho_{n}|\,\|h_{n}\|_{*}=o(1)\).
Hence, multiplying equation (5.4) by \(P_{\epsilon}Z_{0j}\) and integrating by parts we obtain that \[\int_{\Omega_{\epsilon}}hPZ_{0j}=-\int_{\Omega_{\epsilon}}\phi|x| ^{\alpha_{j}-2}e^{w_{j}}Z_{0j}+\int_{\Omega_{\epsilon}}K\phi P_{\epsilon}Z_{0j }=\int_{\Omega_{\epsilon}}\left(K-|x|^{\alpha_{j}-2}e^{w_{j}}\right)P_{ \epsilon}Z_{0j}\phi\] \[+\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\left(PZ_{0j} -Z_{0j}\right)\phi=\int_{\Omega_{\epsilon}}\phi\sum_{i=1\atop i\neq j}^{m}|x| ^{\alpha_{i}-2}e^{w_{i}}PZ_{0j}+\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{ w_{j}}\left[1-\gamma_{0}G(x,0)+O(\rho^{\beta})\right]\phi\] in view of \(P_{\epsilon}Z_{0j}=0\) on \(\partial\Omega_{\epsilon_{n}}\) and \(\int_{\Omega_{\epsilon}}\Delta\phi P_{\epsilon}Z_{0j}=\int_{\Omega_{\epsilon}} \phi\ \Delta P_{\epsilon}Z_{0j}=\int_{\Omega_{\epsilon}}\phi\ \Delta Z_{0j}\). Now, multiplying by \(\gamma_{0}^{-1}\) and estimating every integral term we find that \(\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}hPZ_{0j}=O\left(\,|\log\rho|\,\|h\|_{p} \right)=o\left(1\right),\) in view of \(PZ_{0j}=O(1)\) and \(\gamma_{0}^{-1}=-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}\log\rho+O(1)=O(|\log \rho|)\) in (5.7). Next, scaling we obtain that \[\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}\phi|x|^{\alpha_{j}-2}e^{w_{j}}=-\frac{ \beta_{1}+1}{4\pi(\alpha_{1}-2)}\log\rho\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2 }|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Phi_{j,n}\,dy+o(1),\] in view of \[\lim\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{ j}})^{2}}\Phi_{j,n}\,dy=a_{j}\int_{\mathrm{I\!R}^{2}}\frac{2\alpha_{j}^{2}|y|^{ \alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}Y_{0,j}\,dy=0\] Also, by using (5.7) and expansion of \(P_{\epsilon}Z_{0j}\), we get that for \(i<j\) \[\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i}} \phi PZ_{0j}=\int_{\Omega_{i,n}}\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y| ^{\alpha_{i}})^{2}}\Phi_{i,n}\left[\frac{2\gamma_{0}^{-1}}{1+(\frac{\delta_{i} |y|}{\delta_{j}})^{\alpha_{j}}}+\frac{1}{2\pi}\log(\delta_{i}|y|)-H(\delta_{i}| y|,0)\right]\,dy\] \[+O(\rho^{\tilde{\sigma}}|\log\rho|)=2\gamma_{0}^{-1}\int_{\Omega_ {i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy+\frac{1}{2\pi}\log\delta_{i}\int_{\Omega_{i,n}} \Pi_{i}\Phi_{i,n}(y)\,dy\] \[+\frac{1}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\log|y|\, dy+o(1),\] and for \(i>j\) we have that \[\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i }}\phi PZ_{0j}=\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}\left[\left(\frac{\delta_{j }}{\delta_{i}}\right)^{\alpha_{j}}\frac{2\gamma_{0}^{-1}}{(\frac{\delta_{j}} {\delta_{i}})^{\alpha_{j}}+|y|^{\alpha_{j}}}+\frac{1}{2\pi}\log(\delta_{i}|y|)- H(\delta_{i}|y|,0)\right]\,dy\] \[+O(\rho^{\tilde{\sigma}}|\log\rho|)=\frac{1}{2\pi}\log\delta_{i} \int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy+\frac{1}{2\pi}\int_{\Omega_{i,n}} \Pi_{i}\Phi_{i,n}(y)\log|y|\,dy+o(1),\] in view of \(\left(\frac{\delta_{j}}{\delta_{i}}\right)^{\alpha_{j}}\gamma_{0}^{-1}=o(1)\). 
Furthermore, it follows that \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}G(x,0)\phi= -\frac{1}{2\pi}\log\delta_{j}\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}(y)\,dy- \frac{1}{2\pi}\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}(y)\log|y|\,dy+o(1).\] From the choice of \(\delta_{j}\)'s in (2.2), it follows that \(\log\delta_{j}=\frac{1}{\alpha_{j}}\big{[}\log d_{j}+\beta_{j}\log\rho\big{]}\) and from previous expansions we deduce that for all \(j=2,\ldots,m-1\) \[\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}hPZ_{0j}=\sum_{i<j}\left[ \Big{(}-\frac{\beta_{1}+1}{2\pi(\alpha_{1}-2)}+\frac{\beta_{i}}{2\pi\alpha_{i} }\Big{)}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}\,dy+\frac{1}{2\pi}\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\log|y|\,dy\right]\] \[+\sum_{i>j}\left[\frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy+\frac{1}{2\pi}\int_{\Omega_{i,n}}\Pi_{i} \Phi_{i,n}(y)\log|y|\,dy\right]-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}\log\rho \int_{\Omega_{j,n}}\Pi_{j}\Phi_{j,n}\,dy\] \[+\frac{\beta_{j}}{2\pi\alpha_{j}}\log\rho\int_{\Omega_{j,n}}\Pi_{ j}(y)\Phi_{j,n}\,dy+\frac{1}{2\pi}\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}(y)\log|y| \,dy+o(1),\] Since for all \(i=1,\ldots,m\) \[\lim\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\log|y|\,dy=a_{i}\int_{\mathbbm{R}^ {2}}\Pi_{i}Y_{0,i}(y)\log|y|\,dy=-4\pi a_{i} \tag{5.11}\] we obtain that for all \(i=2,\ldots,m-1\) \[2\sum_{i=1}^{m}a_{i}=\sum_{i<j}\left[\Big{(}-\frac{\beta_{1}+1} {2\pi(\alpha_{1}-2)}+\frac{\beta_{i}}{2\pi\alpha_{i}}\Big{)}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}\,dy\right]+\sum_{i>j}\left[\frac{\beta_{i}}{2 \pi\alpha_{i}}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy\right] \tag{5.12}\] \[\qquad+\Big{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\frac{\beta_ {j}}{2\pi\alpha_{j}}\Big{)}\log\rho\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}\,dy+ o(1).\] Choosing \(j\in\{2,\ldots,m-2\}\) we rewrite previous expression for \(j+1\) as \[2\sum_{i=1}^{m}a_{i}=\sum_{i<j}\left[\Big{(}-\frac{\beta_{1}+1}{ 2\pi(\alpha_{1}-2)}+\frac{\beta_{i}}{2\pi\alpha_{i}}\Big{)}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}\right]+\Big{(}-\frac{\beta_{1}+1}{2\pi(\alpha_{1 }-2)}+\frac{\beta_{j}}{2\pi\alpha_{j}}\Big{)}\log\rho\int_{\Omega_{j,n}}\Pi_{j} \Phi_{j,n} \tag{5.13}\] \[+\sum_{i>j+1}\left[\frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}\right]+\Big{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1 }-2)}+\frac{\beta_{j+1}}{2\pi\alpha_{j+1}}\Big{)}\log\rho\int_{\Omega_{j+1,n}} \Pi_{j+1}\Phi_{j+1,n}+o(1).\] Hence, by subtracting (5.12) from (5.13) we obtain that \[0=-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}\log\rho\left[\int_{\Omega_{j,n}}\Pi_{ j}(y)\Phi_{j,n}\,dy+\int_{\Omega_{j+1,n}}\Pi_{j+1}(y)\Phi_{j+1,n}\,dy\right]+o(1),\] so that, for all \(j=2,\ldots,m-2\) \[\lim\log\rho_{n}\left[\int_{\Omega_{j,n}}\Pi_{j}(y)\Phi_{j,n}\,dy+\int_{\Omega_{ j+1,n}}\Pi_{j+1}(y)\Phi_{j+1,n}\,dy\right]=0. 
\tag{5.14}\] Similarly as above it extends to \(j=1\) \[2\sum_{i=1}^{m}a_{i}=\sum_{i>1}\left[\frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho \int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}dy\right]+\Big{(}-\frac{\beta_{1}+1}{4\pi( \alpha_{1}-2)}+\frac{\beta_{1}}{2\pi\alpha_{1}}\Big{)}\log\rho\int_{\Omega_{1,n}}\Pi_{1}\Phi_{1,n}dy+o(1)\] and \[\lim\log\rho_{n}\left[\int_{\Omega_{1,n}}\Pi_{1}(y)\Phi_{1,n}\,dy+\int_{\Omega _{2,n}}\Pi_{2}(y)\Phi_{2,n}\,dy\right]=0;\] and to \(j=m\) \[2\sum_{i=1}^{m}a_{i} =\sum_{i<m}\left[\Big{(}-\frac{\beta_{1}+1}{2\pi(\alpha_{1}-2)}+ \frac{\beta_{i}}{2\pi\alpha_{i}}\Big{)}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Phi_ {i,n}\,dy\right]\] \[\quad+\Big{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\frac{\beta _{m}}{2\pi\alpha_{m}}\Big{)}\log\rho\int_{\Omega_{m,n}}\Pi_{m}(y)\Phi_{m,n}\, dy+o(1)\] and \[\lim\log\rho_{n}\left[\int_{\Omega_{m-1,n}}\Pi_{m-1}(y)\Phi_{m-1,n}\,dy+\int_{ \Omega_{m,n}}\Pi_{m}(y)\Phi_{m,n}\,dy\right]=0.\] Now, note that for \(j=1\) we deduce that \[2\sum_{i=1}^{m}a_{i}=\lim\Big{[}\sum_{i=2}^{m}\frac{\beta_{i}}{2 \pi\alpha_{i}}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i,n}dy+\Big{(}-\frac{ \beta_{1}+1}{4\pi(\alpha_{1}-2)}+\frac{\beta_{1}}{2\pi\alpha_{1}}\Big{)}\log \rho\int_{\Omega_{1,n}}\Pi_{1}\Phi_{1,n}dy\Big{]}\] \[=\lim\Big{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\sum_{i=1}^{m }(-1)^{i+1}\frac{\beta_{i}}{2\pi\alpha_{i}}\Big{)}\log\rho\int_{\Omega_{1,n}} \Pi_{1}\Phi_{1,n}\,dy=\bigg{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\sum_{i= 1}^{m}(-1)^{i+1}\frac{\beta_{i}}{2\pi\alpha_{i}}\Big{)}\mu_{1},\] and there exists \(\mu_{1}=\lim\log\rho_{n}\int_{\Omega_{1,n}}\Pi_{1}(y)\Phi_{1,n}\,dy\). In fact, by using (5.14) we have that \[\sum_{i=1}^{m} \frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho\int_{\Omega_{i,n}}\Pi_{i }\Phi_{i,n}(y)\,dy\] \[=\sum_{i=1}^{m-2}\frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy+\Big{(}\frac{\beta_{m-1}}{2\pi\alpha_{m- 1}}-\frac{\beta_{m}}{2\pi\alpha_{m}}\Big{)}\log\rho\int_{\Omega_{m-1,n}}\Pi_{ m-1}(y)\Phi_{m-1,n}\,dy\] \[+\frac{\beta_{m}}{2\pi\alpha_{m}}\underbrace{\log\rho\bigg{(}\int _{\Omega_{m-1,n}}\Pi_{m-1}(y)\Phi_{m-1,n}\,dy+\int_{\Omega_{m,n}}\Pi_{m}(y) \Phi_{m,n}\,dy\bigg{)}}_{=o(1)}\] \[=\sum_{i=1}^{m-3}\frac{\beta_{i}}{2\pi\alpha_{i}}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i,n}(y)\,dy+\frac{1}{2\pi}\Big{(}\frac{\beta_{m-2}}{ \alpha_{m-2}}-\frac{\beta_{m-1}}{\alpha_{m-1}}+\frac{\beta_{m}}{\alpha_{m}} \Big{)}\log\rho\int_{\Omega_{m-2,n}}\Pi_{m-2}(y)\Phi_{m-2}(y)\,dy\] \[+\Big{(}\frac{\beta_{m-1}}{2\pi\alpha_{m-1}}-\frac{\beta_{m}}{2 \pi\alpha_{m}}\Big{)}\underbrace{\log\rho\bigg{(}\int_{\Omega_{m-2,n}}\Pi_{m-2}( y)\Phi_{m-2}(y)\,dy+\int_{\Omega_{m-1,n}}\Pi_{m-1}(y)\Phi_{m-1,n}\,dy\bigg{)}}_{=o(1)}+o(1)\] \[\vdots\] \[= \frac{1}{2\pi}\Big{(}\frac{\beta_{1}}{\alpha_{1}}-\frac{\beta_{2} }{\alpha_{2}}+\frac{\beta_{3}}{\alpha_{3}}+\cdots+(-1)^{m+1}\frac{\beta_{m}}{ \alpha_{m}}\Big{)}\log\rho\int_{\Omega_{1,n}}\Pi_{1}(y)\Phi_{1,n}(y)\,dy+o(1).\] Thus, we get (5.9). From the existence of \(\mu_{1}\) and (5.14), it follows the existence of \(\mu_{2},\ldots,\mu_{m}\) satisfying \(\mu_{j}+\mu_{j+1}=0\) for all \(j=1,\ldots,m-1\). Therefore, it is readily checked that (5.10) it holds. **Claim 5.3**.: _There hold that \(a_{j}=0\) for all \(j=1,\ldots,m\)._ **Proof:** To this aim we will use as tests functions \(P_{\epsilon}w_{j}\) and the assumption on \(h_{n},|\log\rho_{n}|\ \|h_{n}\|_{*}=o(1)\). 
Hence, multiplying equation (5.4) by \(P_{\epsilon}w_{j}\) and integrating by parts we obtain that \[\int_{\Omega_{\epsilon}} hPw_{j}=\int_{\Omega_{\epsilon}}\Delta w_{j}\phi+\int_{\Omega_{ \epsilon}}K\phi P_{\epsilon}w_{j}=-\int_{\Omega_{\epsilon}}\phi|x|^{\alpha_{j }-2}e^{w_{j}}+\int_{\Omega_{\epsilon}}K\phi P_{\epsilon}w_{j}=-\int_{\Omega_{ \epsilon}}\phi|x|^{\alpha_{j}-2}e^{w_{j}}\] \[+\sum_{i=1}^{m}\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i} }\phi\left[-2\log\big{(}\delta_{j}^{\alpha_{j}}+|x|^{\alpha_{j}}\big{)}+(4\pi \alpha_{j}-\gamma_{j})H(x,0)+\frac{\gamma_{j}}{2\pi}\log|x|+O(\rho^{\eta})\right]\] in view of \(P_{\epsilon}w_{j}=0\) on \(\partial\Omega_{\epsilon_{n}}\) and \(\int_{\Omega_{\epsilon}}\Delta\phi P_{\epsilon}w_{j}=\int_{\Omega_{\epsilon}} \phi\ \Delta P_{\epsilon}w_{j}\). Now, estimating every integral term we find that \[\int_{\Omega_{\epsilon}}hPw_{j}=O\left(\,|\log\rho|\,\|h\|_{p}\right)=o\left( 1\right),\] in view of \(Pw_{j}=O(|\log\rho|)\) and \(G(x,0)=O(|\log\epsilon|)=O(|\log\rho|)\). Next, scaling we obtain that \[\int_{\Omega_{\epsilon}}\phi|x|^{\alpha_{j}-2}e^{w_{j}}=\int_{\Omega_{j,n}} \frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Phi_{j,n}= a_{j}\int_{\mathds{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{ \alpha_{j}})^{2}}Y_{0j}\,dy+o(1)=o(1).\] For \(i=j\) we have that \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\phi P_{\epsilon }w_{j}=\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\Big{[}-2\alpha_{j}\log\delta_{j}-2 \log(1+|y|^{\alpha_{j}})+(4\pi\alpha_{j}-\gamma_{j})H(\delta_{j}y,0)+\frac{ \gamma_{j}}{2\pi}\log|\delta_{j}y|\] \[+O(\rho^{\eta})\Big{]}=-2\alpha_{j}\log\delta_{j}\int_{\Omega_{j,n }}\Pi_{j}\Phi_{j}-2\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\log(1+|y|^{\alpha_{j}})+ (4\pi\alpha_{j}-\gamma_{j})\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}H(\delta_{j}y,0)\] \[+\frac{\gamma_{j}\log\delta_{j}}{2\pi}\int_{\Omega_{j,n}}\Pi_{j} \Phi_{j}+\frac{\gamma_{j}}{2\pi}\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\log|y|+o(1),\] similarly, for \(i>j\) we obtain that \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i}}\phi P_{ \epsilon}w_{j}=\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\Big{[}-2\alpha_{j}\log\delta _{i}-2\log\Big{(}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y| ^{\alpha_{j}}\Big{)}+(4\pi\alpha_{j}-\gamma_{j})H(\delta_{i}y,0)\] \[+\frac{\gamma_{j}}{2\pi}\log|\delta_{i}y|+O(\rho^{\eta})\Big{]}=-2 \alpha_{j}\log\delta_{i}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}-2\int_{\Omega_{i,n }}\Pi_{i}\Phi_{i}\log\Big{(}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{ \alpha_{j}}+|y|^{\alpha_{j}}\Big{)}\] \[+(4\pi\alpha_{j}-\gamma_{j})\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}H( \delta_{i}y,0)+\frac{\gamma_{j}\log\delta_{i}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i} \Phi_{i}+\frac{\gamma_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log|y|+o(1),\] and for \(i<j\) \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i}}\phi P_{ \epsilon}w_{j}=\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\Big{[}-2\alpha_{j}\log\delta _{j}-2\log\left(1+\Big{(}\frac{\delta_{i}|y|}{\delta_{j}}\Big{)}^{\alpha_{j}} \right)+(4\pi\alpha_{j}-\gamma_{j})H(\delta_{i}y,0)+\frac{\gamma_{j}}{2\pi} \log|\delta_{i}y|\] \[+O(\rho^{\eta})\Big{]}=-2\alpha_{j}\log\delta_{j}\int_{\Omega_{i,n} }\Pi_{i}\Phi_{i}-2\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log\left(1+\Big{(}\frac{ \delta_{i}|y|}{\delta_{j}}\Big{)}^{\alpha_{j}}\right)+(4\pi\alpha_{j}-\gamma_{ j})\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}H(\delta_{i}y,0)\] \[+\frac{\gamma_{j}\log\delta_{i}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i} 
\Phi_{i}+\frac{\gamma_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log|y|+o(1),\] Now, by using the definition of \(\gamma_{j}\)'s in (2.20) and Lemma 2.1 we find that \(\frac{\gamma_{j}}{2\pi}=2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}+O\big{(} \frac{1}{|\log\rho|}\big{)}\) so that from the choice of \(\delta_{j}\)'s we obtain that \(\frac{\gamma_{j}}{2\pi}\log\delta_{i}=2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1} \frac{\beta_{i}}{\alpha_{j}}\log\rho+O(1).\) Hence, from previous expansions we deduce that for all \(j=2,\ldots,m-1\) \[\begin{split} o(1)&=\sum_{i<j}\Bigg{[}\Big{(}-2\alpha_ {j}\log\delta_{j}+\frac{\gamma_{j}\log\delta_{i}}{2\pi}\Big{)}\int_{\Omega_{i,n }}\Pi_{i}\Phi_{i}+\frac{\gamma_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log |y|\Big{]}\\ &+\Big{(}-2\alpha_{j}\log\delta_{j}+\frac{\gamma_{j}\log\delta_{j }}{2\pi}\Big{)}\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}+\frac{\gamma_{j}}{2\pi}\int_ {\Omega_{j,n}}\Pi_{j}\Phi_{j}\log|y|-2\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\log(1+ |y|^{\alpha_{j}})\\ &+\sum_{i>j}\Bigg{[}\Big{(}-2\alpha_{j}\log\delta_{i}+\frac{ \gamma_{j}\log\delta_{i}}{2\pi}\Big{)}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}+ \frac{\gamma_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log|y|\\ &-2\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log\Big{(}\Big{(}\frac{ \delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}\Big{)}\bigg{]}=-2 \int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\log(1+|y|^{\alpha_{j}})\\ &+\sum_{i\leq j}\Big{(}-2\beta_{j}+2\beta_{j}\frac{\alpha_{1}-2}{ \beta_{1}+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}\log\rho\int_{\Omega_{i,n}}\Pi_{ i}\Phi_{i}+\sum_{i=1}^{m}2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i}\log|y|+o(1)\\ &+\sum_{i>j}\bigg{[}\Big{(}-2\alpha_{j}\frac{\beta_{i}}{\alpha_{ i}}+2\beta_{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}\log\rho\int_{ \Omega_{i,n}}\Pi_{i}\Phi_{i}-2\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log\Big{(} \Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}} \Big{)}\bigg{]}.\end{split} \tag{5.15}\] Since for all \(j=1,\ldots,m\) \[\lim\int_{\Omega_{j,n}}\Pi_{j}\Phi_{j}\log(1+|y|^{\alpha_{j}})=a_{j}\int_{ \mathds{R}^{2}}\Pi_{j}Y_{0j}\log(1+|y|^{\alpha_{j}})=-2\pi\alpha_{j}a_{j}, \tag{5.16}\] by using (5.8) \[\lim\int_{\Omega_{i,n}}\Pi_{i}\Phi_{i}\log\Big{(}\Big{(}\frac{\delta_{j}}{ \delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}\Big{)}=\alpha_{j}a_{i}\int_{ \mathds{R}^{2}}\Pi_{i}Y_{0i}\log|y|=-4\pi\alpha_{j}a_{i},\] in view of \(\frac{\delta_{j}}{\delta_{i}}=o(1)\), and (5.11), as \(n\to+\infty\) we get that \[\begin{split} 0=& 4\pi\alpha_{j}a_{j}+\sum_{i\leq j} \Big{(}-2\beta_{j}+2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{ \alpha_{i}}\Big{)}\mu_{i}-8\pi\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\sum_{ i=1}^{m}a_{i}\\ &+\sum_{i>j}\Big{[}\Big{(}-2\alpha_{j}\frac{\beta_{i}}{\alpha_{i} }+2\beta_{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}\mu_{i}-2(- 4\pi\alpha_{j}a_{i})\Big{]}\end{split}\] so that \[\begin{split} 0=& 4\pi\alpha_{j}a_{j}+\sum_{i\leq j} \Big{(}-2\beta_{j}+2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{ \alpha_{i}}\Big{)}\mu_{i}-8\pi\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\sum_{ i=1}^{m}a_{i}\\ &+\sum_{i>j}\Big{(}-2\alpha_{j}\frac{\beta_{i}}{\alpha_{i}}+2\beta _{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}\mu_{i}+8 \pi\alpha_{j}\sum_{i>j}a_{i}.\end{split}\] It is readily checked that \(8\pi\alpha_{j}\sum_{i>j}a_{i}=4\pi\alpha_{j}\sum_{i>j}a_{i}+4\pi\alpha_{j}\sum_ {i\geq j+1}a_{i}\), so we rewrite 
\[\begin{split} 0=& 4\pi\alpha_{j}\sum_{i\geq j}a_{i}+4\pi\alpha_{j}\sum_{i\geq j+1}a_{i}-8\pi\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\sum_{i=1}^{m}a_{i}-2\beta_{j}\mu_{1}\sum_{i\leq j}(-1)^{i+1}-2\alpha_{j}\mu_{1}\sum_{i>j}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}\\ &+2\beta_{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\mu_{1}\sum_{i=1}^{m}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}=4\pi\alpha_{j}\sum_{i\geq j}a_{i}+4\pi\alpha_{j}\sum_{i\geq j+1}a_{i}-8\pi\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\sum_{i=1}^{m}a_{i}\\ &+\mu_{1}\bigg{[}-2\beta_{j}\frac{1+(-1)^{j+1}}{2}-2\alpha_{j}\sum_{i>j}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}+2\beta_{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\sum_{i=1}^{m}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}\bigg{]},\end{split}\] in view of (5.10). For simplicity we shall denote, for \(j=1,\ldots,m\), \[x_{j}=\sum_{i=j}^{m}a_{i},\qquad A_{j}=-\frac{\beta_{j}}{2\pi}\frac{1+(-1)^{j+1}}{2}-\frac{\alpha_{j}}{2\pi}\sum_{i>j}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}+\frac{\beta_{j}}{2\pi}\ \frac{\alpha_{1}-2}{\beta_{1}+1}B,\] \[\text{and}\quad A_{m+1}=-\frac{1}{2\pi}\Big{(}-\frac{\beta_{1}+1}{2(\alpha_{1}-2)}+B\Big{)},\quad\text{with}\quad B=\sum_{i=1}^{m}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}.\] Hence, taking into account (5.9) and dividing by \(4\pi\), we have the following linear system in the \(m+1\) variables \(X=[x_{2}\ x_{3}\ \ldots\ x_{m}\ x_{1}\ \mu_{1}]^{T}\) \[0 = \alpha_{j}x_{j}+\alpha_{j}x_{j+1}-2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}x_{1}+A_{j}\mu_{1},\quad j=2,\ldots,m-1\] \[0 = \alpha_{m}x_{m}-2\beta_{m}\frac{\alpha_{1}-2}{\beta_{1}+1}x_{1}+A_{m}\mu_{1}\] \[0 = \alpha_{1}x_{2}+\Big{(}\alpha_{1}-2\beta_{1}\frac{\alpha_{1}-2}{\beta_{1}+1}\Big{)}x_{1}+A_{1}\mu_{1}\] \[0 = 2x_{1}+A_{m+1}\mu_{1}\] which in matrix form becomes \(AX=\vec{0}\) with \[A=\left[\begin{array}{ccccccc}\alpha_{2}&\alpha_{2}&0&\cdots&0&-2\beta_{2}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{2}\\ 0&\alpha_{3}&\alpha_{3}&\cdots&0&-2\beta_{3}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{3}\\ 0&0&\alpha_{4}&\cdots&0&-2\beta_{4}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{4}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&\alpha_{m}&-2\beta_{m}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{m}\\ \alpha_{1}&0&0&\cdots&0&\alpha_{1}-2\beta_{1}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{1}\\ 0&0&0&\cdots&0&2&A_{m+1}\end{array}\right]=\left[\begin{array}{c}f_{1}\\ f_{2}\\ \vdots\\ f_{m}\\ f_{m+1}\end{array}\right]\] Performing on the rows \(f_{j}\) of \(A\) the operations \[f_{m}\to f_{m}+(-1)^{j}\frac{\alpha_{1}}{\alpha_{j+1}}f_{j},\qquad j=1,\ldots,m-1,\] we are led to consider the matrix \[\left[\begin{array}{ccccccc}\alpha_{2}&\alpha_{2}&0&\cdots&0&-2\beta_{2}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{2}\\ 0&\alpha_{3}&\alpha_{3}&\cdots&0&-2\beta_{3}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{3}\\ 0&0&\alpha_{4}&\cdots&0&-2\beta_{4}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{4}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&\alpha_{m}&-2\beta_{m}\frac{\alpha_{1}-2}{\beta_{1}+1}&A_{m}\\ 0&0&0&\cdots&0&\alpha_{1}-2\alpha_{1}B\frac{\alpha_{1}-2}{\beta_{1}+1}&\frac{\alpha_{1}}{2\pi}B\Big{(}-1+\frac{\alpha_{1}-2}{\beta_{1}+1}B\Big{)}\\ 0&0&0&\cdots&0&2&A_{m+1}\end{array}\right]\] since \[\alpha_{1}-2\beta_{1}\frac{\alpha_{1}-2}{\beta_{1}+1}+\sum_{j=1}^{m-1}(-1)^{j}\frac{\alpha_{1}}{\alpha_{j+1}}(-2\beta_{j+1})\frac{\alpha_{1}-2}{\beta_{1}+1}=\alpha_{1}-2\alpha_{1}B\frac{\alpha_{1}-2}{\beta_{1}+1}\] and \[A_{1}+\sum_{j=1}^{m-1}(-1)^{j}\frac{\alpha_{1}}{\alpha_{j+1}}A_{j+1}=-\frac{\alpha_{1}}{2\pi}\underbrace{\Big{(}\frac{\beta_{1}}{\alpha_{1}}+\sum_{i=2}^{m}(-1)^{i+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}}_{=B}+\underbrace{\frac{\beta_{1}}{2\pi}\ \frac{\alpha_{1}-2}{\beta_{1}+1}B+\frac{\alpha_{1}}{2\pi}\frac{\alpha_{1}-2}{\beta_{1}+1}B\sum_{j=1}^{m-1}(-1)^{j}\frac{\beta_{j+1}}{\alpha_{j+1}}}_{=\frac{\alpha_{1}}{2\pi}\frac{\alpha_{1}-2}{\beta_{1}+1}B^{2}}\] \[+\frac{\alpha_{1}}{2\pi}\underbrace{\sum_{j=1}^{m-1}(-1)^{j}\Big{[}-\frac{\beta_{j+1}}{\alpha_{j+1}}\frac{1+(-1)^{j}}{2}+\sum_{i>j+1}(-1)^{i}\frac{\beta_{i}}{\alpha_{i}}\Big{]}}_{=0}=\frac{\alpha_{1}}{2\pi}B\Big{(}-1+\frac{\alpha_{1}-2}{\beta_{1}+1}B\Big{)}.\] Notice that \(A\) is invertible, since \[\det\left[\begin{array}{cc}\alpha_{1}-2\alpha_{1}B\frac{\alpha_{1}-2}{\beta_{1}+1}&\frac{\alpha_{1}}{2\pi}B\Big{(}-1+\frac{\alpha_{1}-2}{\beta_{1}+1}B\Big{)}\\ 2&A_{m+1}\end{array}\right]=\frac{\alpha_{1}}{4\pi}\cdot\frac{\beta_{1}+1}{\alpha_{1}-2}>0.\] Therefore, \(x_{2}=x_{3}=\cdots=x_{m}=x_{1}=\mu_{1}=0\), which implies our claim.

Now, by using Claims 5.1-5.3 we deduce that \(\Phi_{j,n}\) converges to zero weakly in \(H_{\alpha_{j}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{j}}({\rm I\!R}^{2})\) as \(n\to+\infty\). Thus, we arrive at a contradiction with (5.5), and the a priori estimate \(\|\phi\|\leq C|\log\rho|\,\|h\|_{p}\) follows. It only remains to prove the solvability assertion. As usual, expressing (3.17) in weak form, with the aid of Riesz's representation theorem and Fredholm's alternative, the existence of a unique solution follows from the a priori estimate (3.18). This finishes the proof of Proposition 3.1.

**Proof of Proposition 4.1.** Arguing as above, it is enough to prove the a priori estimate (4.18). Let us assume by contradiction the existence of \(p>1\), sequences \(\rho_{n}\to 0\), and functions \(h_{n}\in L^{p}(\Omega_{\epsilon_{n}})\), \(\phi_{n}\in W^{2,2}(\Omega_{\epsilon_{n}})\) such that \[\mathcal{L}(\phi_{n})=h_{n}\ \ \text{in}\ \ \Omega_{\epsilon_{n}},\ \ \phi_{n}=0\ \ \text{on}\ \ \partial\Omega_{\epsilon_{n}}, \tag{5.17}\] with \(\|\phi_{n}\|=1\) and \(|\log\rho_{n}|\ \|h_{n}\|_{p}=o(1)\) as \(n\to+\infty\). We will omit the subscript \(n\) in \(\delta_{i,n}=\delta_{i}\). Recall that \(\delta_{i}^{\alpha_{i}}=d_{i,n}\rho_{n}^{\beta_{i}}\). Now, define \(\Phi_{i,n}(y):=\phi_{n}(\delta_{i}y)\) for \(y\in\Omega_{i,n}:=\delta_{i}^{-1}\Omega_{\epsilon_{n}}\), \(i=1,\ldots,m\), and extend \(\phi_{n}=0\) in \({\rm I\!R}^{2}\setminus\Omega_{\epsilon_{n}}\). Arguing in the same way as in [13, Claim 1, section 4], we can prove the following fact. We provide a sketch of the proof.

**Claim 5.4**.: _The sequence \(\{\Phi_{i,n}\}_{n}\) converges (up to a subsequence) to \(\Phi_{i}^{*}\) weakly in \(H_{\alpha_{i}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{i}}({\rm I\!R}^{2})\)._

**Proof:** First, we shall show that the sequence \(\{\Phi_{i,n}\}_{n}\) is bounded in \(H_{\alpha_{i}}({\rm I\!R}^{2})\). Notice that \(\|\Phi_{i,n}\|_{H^{1}_{0}(\Omega_{i,n})}=1\) for \(i=1,\ldots,m\).
Thus, we want to prove that there is a constant \(M>0\) such for all \(n\) (up to a subsequence) \(\|\Phi_{i,n}\|_{L_{\alpha_{i}}}^{2}\leq M.\) Notice that for any \(i\in\{1,\ldots,m\}\) we find that in \(\Omega_{i,n}\) \[\Delta\Phi_{i,n}+\delta_{i}^{2}K_{0}(\delta_{i}y)\left(\Phi_{i,n}+\tilde{c}_{0,n}\right)+\delta_{i}^{2}K_{1}(\delta_{i}y)\left(\Phi_{i,n}+\tilde{c}_{1,n} \right)=\delta_{i}^{2}h_{n}(\delta_{i}y), \tag{5.18}\] where for simplicity we denote \(\tilde{c}_{j,n}=\tilde{c}_{j}(\phi_{n})\), with \(\tilde{c}_{j}\) given by (5.1). Furthermore, it follows that \(\Phi_{i,n}\to\Phi_{i}^{*}\) weakly in \(H^{1}_{0}(\Omega_{i,n})\) and strongly in \(L^{p}(K)\) for any \(K\) compact sets in \(\mathbb{R}^{2}\). Now, let \(\chi\) a smooth function with compact support in \(\mathbb{R}^{2}.\) We multiply (5.18) by \(\chi\), integrate by parts and we obtain that \[-\int_{\Omega_{i,n}} \nabla\Phi_{i,n}\nabla\chi+\int_{\Omega_{i,n}}\bigg{[}\frac{2 \alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+o(1)\bigg{]}\Phi_{ i,n}\chi+\tilde{c}_{0,n}\int_{\Omega_{i,n}}\bigg{[}\frac{2\alpha_{i}^{2}|y|^{ \alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+o(1)\bigg{]}\chi\] \[+\int_{\Omega_{i,n}}o(1)\Phi_{i,n}\chi+\tilde{c}_{1,n}\int_{\Omega _{i,n}}o(1)\chi=\int_{\Omega_{i,n}}\delta_{i}^{2}h_{n}(\delta_{i}y)\chi\] for \(i\) odd and \[-\int_{\Omega_{i,n}} \nabla\Phi_{i,n}\nabla\chi+\int_{\Omega_{i,n}}o(1)\Phi_{i,n}\chi +\tilde{c}_{0,n}\int_{\Omega_{i,n}}o(1)\chi+\int_{\Omega_{i,n}}\bigg{[}\frac{2 \alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+o(1)\bigg{]}\Phi_{ i,n}\chi\] \[+\tilde{c}_{1,n}\int_{\Omega_{i,n}}\bigg{[}\frac{2\alpha_{i}^{2}| y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+o(1)\bigg{]}\chi=\int_{\Omega_{i,n}} \delta_{i}^{2}h_{n}(\delta_{i}y)\chi\] for \(i\) even, in view of \[\delta_{i}^{2}K_{0}(\delta_{i}y)=\begin{cases}\frac{2\alpha_{i}^{2}|y|^{ \alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+O\bigg{(}\sum_{j<i\atop j\ \text{odd}}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+\sum_{i< j\atop j\ \text{odd}}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{j}}\bigg{)}&\text{if $i$ is odd}\\ O\bigg{(}\sum_{j<i\atop j\ \text{odd}}\Big{(}\frac{\delta_{j}}{\delta_{i}} \Big{)}^{\alpha_{j}}+\sum_{i<j\atop j\ \text{odd}}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{j}}\bigg{)}& \text{if $i$ is even}\end{cases} \tag{5.19}\] and \[\delta_{i}^{2}K_{1}(\delta_{i}y)=\begin{cases}O\bigg{(}\sum_{j<i\atop j\ \text{even}}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+\sum_{i< j\atop j\ \text{even}}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{j}}\bigg{)}&\text{if $i$ is odd}\\ \frac{2\alpha_{i}^{2}|y|^{\alpha_{i}-2}}{(1+|y|^{\alpha_{i}})^{2}}+O\bigg{(} \sum_{j<i\atop j\ \text{even}}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+\sum_{i<j \atop j\ \text{even}}\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{j}}\bigg{)}& \text{if $i$ is even}\end{cases} \tag{5.20}\] uniformly on compact subsets of \({\rm I\!R}^{2}\setminus\{0\}\). We re-write the system for \(\tilde{c}_{0,n}\) and \(\tilde{c}_{1,n}\) as a diagonal dominant one as \(n\to+\infty\) \[\tilde{c}_{0,n}\int_{\Omega_{i,n}}\bigg{[}\frac{2\alpha_{i}^{2}|y|^{\alpha_{i}- 2}}{(1+|y|^{\alpha_{i}})^{2}}+o(1)\bigg{]}\chi+o(1)\tilde{c}_{1,n}= \,O(1)\] \[o(1)\ \tilde{c}_{0,n}+\tilde{c}_{1,n}\int_{\Omega_{j,n}}\bigg{[}\frac{2 \alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}+o(1)\bigg{]}\chi= \,O(1),\] choosing \(i\) odd and \(j\) even. 
Thus, if we choose \(\chi\) so that \(\int_{\mathbb{R}^{2}}\frac{2\alpha_{k}^{2}|y|^{\alpha_{k}-2}}{(1+|y|^{\alpha_{k}})^{2}}\chi\ dy\neq 0\) for \(k=i,j\), then we obtain that \(\tilde{c}_{i,n}=O(1)\), for \(i=0,1\). Now, we multiply (5.18) by \(\Phi_{i,n}\) for any \(i\in\{1,\ldots,m\}\), integrate by parts and get \[\sum_{i=1}^{m}2\alpha_{i}^{2}\|\Phi_{i,n}\|_{L_{\alpha_{i}}}^{2}=\,1+\lambda_{0}(\tilde{c}_{0,n})^{2}+\lambda_{1}\tau^{2}(\tilde{c}_{1,n})^{2}+o(1). \tag{5.21}\] Therefore, the sequence \(\{\Phi_{i,n}\}_{n}\) is bounded in \(H_{\alpha_{i}}({\rm I\!R}^{2})\), so that there is a subsequence \(\{\Phi_{i,n}\}_{n}\) and functions \(\Phi_{i}^{*}\), \(i=1,\ldots,m\), such that \(\{\Phi_{i,n}\}_{n}\) converges to \(\Phi_{i}^{*}\) weakly in \(H_{\alpha_{i}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{i}}({\rm I\!R}^{2})\). That proves our claim.

Define the sequences \(\psi_{i,n}=\phi_{n}+\tilde{c}_{i,n}\), \(i=0,1\). Notice that clearly \[\Delta\psi_{i,n}+K_{0}\psi_{0,n}+K_{1}\psi_{1,n}=h_{n}\quad\mbox{in}\ \ \Omega_{\epsilon_{n}},\qquad i=0,1. \tag{5.22}\] Now, define \(\Psi_{i,j,n}(y):=\psi_{i,n}(\delta_{j}y)\) for \(y\in\Omega_{j,n}\), \(i=0,1\) and \(j=1,\ldots,m\). Note that \(\Psi_{i,j,n}=\Phi_{j,n}+\tilde{c}_{i,n}\). Thus, we can prove the following fact.

**Claim 5.5**.: \(\Psi_{0,j,n}\to a_{j}Y_{0j}\) _for \(j\) odd and \(\Psi_{1,j,n}\to a_{j}Y_{0j}\) for \(j\) even, weakly in \(H_{\alpha_{j}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{j}}({\rm I\!R}^{2})\) as \(n\to+\infty\), for some constant \(a_{j}\in{\rm I\!R}\), \(j=1,\ldots,m\)._

**Proof:** From the previous computations, it is clear that in \(\Omega_{j,n}\) \[\Delta\Psi_{0,j,n}+\delta_{j}^{2}K_{0}(\delta_{j}y)\Psi_{0,j,n}+\delta_{j}^{2}K_{1}(\delta_{j}y)\left(\Psi_{0,j,n}-\tilde{c}_{0,n}+\tilde{c}_{1,n}\right)=\delta_{j}^{2}h_{n}(\delta_{j}y)\] and \[\Delta\Psi_{1,j,n}+\delta_{j}^{2}K_{0}(\delta_{j}y)\left(\Psi_{1,j,n}-\tilde{c}_{1,n}+\tilde{c}_{0,n}\right)+\delta_{j}^{2}K_{1}(\delta_{j}y)\Psi_{1,j,n}=\delta_{j}^{2}h_{n}(\delta_{j}y).\] Furthermore, \(\{\tilde{c}_{i,n}\}\) is a bounded sequence in \({\rm I\!R}\), so it follows that \(\{\Psi_{i,j,n}\}_{n}\) is bounded in \(H_{\alpha_{j}}({\rm I\!R}^{2})\) for \(i=0,1\) and \(j=1,\ldots,m\). Also, we have that \[\int_{\Omega_{j,n}}(\delta_{j}^{2}|h_{n}(\delta_{j}y)|)^{p}\,dy=\delta_{j}^{2p-2}\int_{\Omega_{\epsilon_{n}}}|h_{n}(x)|^{p}\,dx=\delta_{j}^{2p-2}\|h_{n}\|_{p}^{p}=o(1).\] Therefore, taking into account (5.19) and (5.20), we deduce that \(\Psi_{i,j,n}\to\Psi_{j}^{*}\) as \(n\to+\infty\), with \(i=0\) if \(j\) is odd and \(i=1\) if \(j\) is even, where \(\Psi_{j}^{*}\) is a solution to \[\Delta\Psi+\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Psi=0,\qquad j=1,\ldots,m,\qquad\mbox{in}\ {\rm I\!R}^{2}\setminus\{0\}.\] It is standard that \(\Psi_{j}^{*}\), \(j=1,\ldots,m\), extends to a solution in the whole \({\rm I\!R}^{2}\). Hence, by using symmetry assumptions if necessary, we get that \(\Psi_{j}^{*}=a_{j}Y_{0j}\) for some constant \(a_{j}\in{\rm I\!R}\), \(j=1,\ldots,m\). Denote \(\tilde{c}_{i}=\lim\limits_{n\to+\infty}\tilde{c}_{i,n}\) for \(i=0,1\), up to a subsequence if necessary.
Hence, we get that \[\Phi_{j,n}\to a_{j}Y_{0j}-\tilde{c}_{0},\ \ \mbox{for}\ j\ \mbox{odd}\quad\mbox{and} \quad\Phi_{j,n}\to a_{j}Y_{0j}-\tilde{c}_{1},\ \ \mbox{for}\ j\ \mbox{even}, \tag{5.23}\] weakly in \(H_{\alpha_{j}}({\rm I\!R}^{2})\) and strongly in \(L_{\alpha_{j}}({\rm I\!R}^{2})\), since \(\Phi_{j,n}=\Psi_{i,j,n}-\tilde{c}_{i,n}\). **Claim 5.6**.: _For all \(j=1,\ldots,m\), there exist the limit, up to subsequences,_ \[\mu_{j}=\lim_{n\to+\infty}\log\rho_{n}\int_{\Omega_{j,n}}\Pi_{j}(y)\Psi_{l,j,n}( y)\,dy=0, \tag{5.24}\] _where \(l=0\) for \(j\) odd and \(l=1\) for \(j\) even. Furthermore, it holds_ \[\sum_{i=1}^{m}a_{i}=0. \tag{5.25}\] **Proof:** To this aim we use the test function \(P_{\epsilon}Z_{0j}\), where \(Z_{0j}\) as in Claim 5.3. Thus, from the assumption on \(h_{n}\), \(|\log\rho_{n}|\ \|h_{n}\|_{*}=o(1)\), we get (5.24)-(5.25). Assume that either \(l=0\) for all \(j\) odd or \(l=1\) for all \(j\) even. Multiplying equation (5.17) by \(P_{\epsilon}Z_{0j}\) and integrating by parts we obtain that \[\int_{\Omega_{\epsilon}}hP_{\epsilon}Z_{0j}=\left.\int_{\Omega_{\epsilon}} \Delta Z_{0j}\left[\psi_{l}-\tilde{c}_{l,n}\right]+\int_{\Omega_{\epsilon}} \left[K_{0}\psi_{0}+K_{1}\psi_{1}\right]P_{\epsilon}Z_{0j},\right.\] in view of \(P_{\epsilon}Z_{0j}=0\) and \(\psi_{l}=\tilde{c}_{l,n}\) on \(\partial\Omega_{\epsilon}\) and \[\int_{\Omega_{\epsilon}}\Delta\psi_{l}P_{\epsilon}Z_{0j}=\left.\int_{\Omega_ {\epsilon}}\psi_{l}\Delta P_{\epsilon}Z_{0j}-\psi_{l}\right|_{\partial\Omega_ {\epsilon}}\int_{\Omega_{\epsilon}}\Delta PZ_{0j}=\int_{\Omega_{\epsilon}} \Delta Z_{0j}\left[\psi_{l}-\tilde{c}_{l,n}\right].\] Furthermore, we have that \[\int_{\Omega_{\epsilon}}hPZ_{0j}=-\int_{\Omega_{\epsilon}}\left[ \psi_{l}-\tilde{c}_{l,n}\right]|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}+\int_{\Omega_ {\epsilon}}\left[K_{0}\psi_{0}+K_{1}\psi_{1}\right]P_{\epsilon}Z_{0j}\] \[= \int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\psi_{l}\left( PZ_{0j}-Z_{0j}\right),+\tilde{c}_{l,n}\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^ {w_{j}}Z_{0j}+\int_{\Omega_{\epsilon}}\left(K_{0}\psi_{0}+K_{1}\psi_{1}-|x|^{ \alpha_{j}-2}e^{w_{j}}\psi_{l}\right)P_{\epsilon}Z_{0j}.\] Multiplying by \(\gamma_{0}^{-1}\) and arguing in the same way as in Claim 5.2 (replacing \(\Psi_{l,i,n}\) by \(\Phi_{i,n}\) with \(l=0\) for \(j\) odd and \(l=1\) for \(j\) even in (5.12) and (5.13)) we deduce \[2\sum_{i=1}^{m}a_{i}=\bigg{(}-\frac{\beta_{1}+1}{4\pi(\alpha_{1}-2)}+\sum_{i= 1}^{m}(-1)^{i+1}\frac{\beta_{i}}{2\pi\alpha_{i}}\bigg{)}\mu_{1},\] and \(\mu_{j}=(-1)^{j-1}\mu_{1}\) for all \(j=1,\ldots,m-1\), in view of \[\gamma_{0}^{-1}\int_{\Omega_{\epsilon}}hPZ_{0j}=O\left(\,|\log\rho|\,\|h\|_{p }\right)=o\left(1\right)\quad\text{and}\quad\gamma_{0}^{-1}\int_{\Omega_{ \epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}=O(\rho^{\tilde{\sigma}}|\log\rho|).\] On the other hand, it is readily checked that \[\int_{\Omega_{\epsilon}}K_{0}=\sum_{k=1\atop k\text{ odd}}^{m}\int_{\Omega_{ \epsilon}}|x|^{\alpha_{k}-2}e^{w_{k}}\ dx=\sum_{k=1\atop k\text{ odd}}^{m}\left[4\pi\alpha_{k}+O(\delta_{k}^{\alpha_{k}})+O \Big{(}\frac{\epsilon^{2}}{\delta_{k}^{2}}\Big{)}\right]=\lambda_{0}+O(\rho^{ \tilde{\sigma}})\] and similarly \(\int_{\Omega_{\epsilon}}K_{1}=\lambda_{2}\tau^{2}+O(\rho^{\tilde{\sigma}})\) for some \(\tilde{\sigma}>0\), so that for \(\psi_{0}\) and \(\psi_{1}\) we have that \[\int_{\Omega_{\epsilon}}K_{i}\psi_{i}=\left(1-\frac{1}{\lambda_{i}\tau^{2i}} \int_{\Omega_{\epsilon}}K_{i}\right)\int_{\Omega_{\epsilon}}K_{i}\phi=O(\rho^{ 
\tilde{\sigma}})\int_{\Omega_{\epsilon}}K_{i}\phi=O(\rho^{\tilde{\sigma}}).\] Also, we get that \[\int_{\Omega_{\epsilon}}K_{0}\psi_{0}=\sum_{j=1\atop j\text{ odd}}^{m}\int_{ \Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\psi_{i}=\sum_{j=1\atop j\text{ odd}}^{m_{1}}\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{ \alpha_{j}})^{2}}\Psi_{0,j,n}(y)\,dy\] and \[\int_{\Omega_{\epsilon}}K_{1}\psi_{1}=\sum_{j=1\atop j\text{ even}}^{m}\int_{ \Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2} }\Psi_{1,j,n}(y)\,dy.\] Hence, it follows that \[\lim_{n\to+\infty}\log\rho_{n}\int_{\Omega_{\epsilon_{n}}}K_{0}\psi_{0}=0=\sum_{ \genfrac{}{}{0.0pt}{}{j=1}{j\text{ odd}}}^{m}\mu_{j}=\mu_{1}\sum_{\genfrac{}{}{0.0 pt}{}{j=1}{j\text{ odd}}}^{m}1,\] so that, \(\mu_{j}=0\) for all \(j=1,\ldots,m.\) **Claim 5.7**.: _There hold that_ \[0=\tilde{c}_{l}+a_{j}+2\sum_{i>j}a_{i}, \tag{5.26}\] _where \(l=0\) for \(j\) odd and \(l=1\) for \(j\) even._ **Proof:** To this aim we will use as tests functions \(P_{\epsilon}w_{j}\). Hence, multiplying equation (5.4) by \(P_{\epsilon}w_{j}\) and integrating by parts as in the previous Claim we obtain that \[\int_{\Omega_{\epsilon}}hPw_{j}= -\int_{\Omega_{\epsilon}}[\psi_{l}-\tilde{c}_{l,n}]|x|^{\alpha_{ j}-2}e^{w_{j}}+\int_{\Omega_{\epsilon}}[K_{0}\psi_{0}+K_{1}\psi_{j}]P_{ \epsilon}w_{j}\] \[= -\int_{\Omega_{\epsilon}}\psi_{l}|x|^{\alpha_{j}-2}e^{w_{j}}+ \tilde{c}_{l,n}\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}+\int_{ \Omega_{\epsilon}}[K_{0}\psi_{0}+K_{1}\psi_{1}]P_{\epsilon}w_{j}\] in view of \(P_{\epsilon}w_{j}=0\) on \(\partial\Omega_{\epsilon_{n}}\) and \(\int_{\Omega_{\epsilon}}\Delta\psi_{l}P_{\epsilon}w_{j}=\int_{\Omega_{ \epsilon}}\psi_{l}\ \Delta P_{\epsilon}w_{j}\). Arguing in the same way as in Claim 5.3 (replacing \(\Psi_{l,i,n}\) by \(\Phi_{i,n}\) with \(l=0\) for \(j\) odd and \(l=1\) for \(j\) even in (5.15) and (5.16)) we find that \[0= \,4\pi\alpha_{j}\tilde{c}_{l}+4\pi\alpha_{j}a_{j}+\sum_{i\leq j} \Big{(}-2\beta_{j}+2\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}} {\alpha_{i}}\Big{)}\mu_{i}-8\pi\beta_{j}\frac{\alpha_{1}-2}{\beta_{1}+1}\sum_ {i=1}^{m}a_{i}\] \[+\sum_{i>j}\Big{(}-2\alpha_{j}\frac{\beta_{i}}{\alpha_{i}}+2\beta _{j}\ \frac{\alpha_{1}-2}{\beta_{1}+1}\frac{\beta_{i}}{\alpha_{i}}\Big{)}\mu_{i}+8 \pi\alpha_{j}\sum_{i>j}a_{i},\] in view of \[\int_{\Omega_{\epsilon}}hPw_{j}=O\left(\,|\log\rho|\,\|h\|_{p}\right)=o\left(1 \right),\] \[\int_{\Omega_{\epsilon}}\psi_{l}|x|^{\alpha_{j}-2}e^{w_{j}}=\int_{\Omega_{j,n} }\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Psi_{l,j,n }=a_{j}\int_{\mathbb{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{ \alpha_{j}})^{2}}Y_{0j}\,dy+o(1)=o(1)\] and \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}=\int_{\Omega_{j,n}}\frac{ 2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\,dy=\int_{ \mathbb{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^ {2}}\,dy+o(1)=4\pi\alpha_{j}+o(1).\] By using (5.24) and (5.25) we obtain that (5.26) for \(l=0\) with \(j\) odd and \(l=1\) with \(j\) even. 
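For the reader's convenience, we record the standard model integrals behind (5.11), (5.16) and the values used in the computations below; each follows, in polar coordinates, from the substitution \(t=|y|^{\alpha_{j}}\), under which \(\Pi_{j}(y)\,dy\) reduces to \(4\pi\alpha_{j}(1+t)^{-2}\,dt\): \[\int_{\mathbb{R}^{2}}\Pi_{j}\,dy=4\pi\alpha_{j},\qquad\int_{\mathbb{R}^{2}}\Pi_{j}Y_{0j}\,dy=4\pi\alpha_{j}\int_{0}^{+\infty}\frac{1-t}{(1+t)^{3}}\,dt=0,\qquad\int_{\mathbb{R}^{2}}\Pi_{j}Y_{0j}^{2}\,dy=\frac{4\pi}{3}\alpha_{j},\] \[\int_{\mathbb{R}^{2}}\Pi_{j}Y_{0j}\log|y|\,dy=4\pi\int_{0}^{+\infty}\frac{(1-t)\log t}{(1+t)^{3}}\,dt=-4\pi,\qquad\int_{\mathbb{R}^{2}}\Pi_{j}Y_{0j}\log(1+|y|^{\alpha_{j}})\,dy=-2\pi\alpha_{j},\] \[\int_{\mathbb{R}^{2}}\frac{\Pi_{j}}{1+|y|^{\alpha_{j}}}\,dy=4\pi\alpha_{j}\int_{0}^{+\infty}\frac{dt}{(1+t)^{3}}=2\pi\alpha_{j}.\]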
For the next step, consider the function \(\eta_{j}(x)=\frac{4}{3}\log(\delta_{j}^{\alpha_{j}}+|x|^{\alpha_{j}})\frac{ \delta_{j}^{\alpha_{j}}-|x|^{\alpha_{j}}}{\delta_{j}^{\alpha_{j}}+|x|^{\alpha _{j}}}+\frac{8}{3}\frac{\delta_{j}^{\alpha_{j}}}{\delta_{j}^{\alpha_{j}}+|x|^{ \alpha_{j}}}\), for any \(j\in\{1,\ldots,m\}\) so that \(\Delta\eta_{j}+|x|^{\alpha_{j}-2}e^{w_{j}}\eta_{j}=|x|^{\alpha_{j}-2}e^{w_{j}} Z_{0j}.\) Notice that \[P_{\epsilon}\eta_{j}=\eta_{j}+\frac{8\pi}{3}\alpha_{j}H(x,0)-\tilde{\gamma}_{j }G(x,0)+O(\rho^{\tilde{\sigma}})\quad\text{uniformly in }\Omega_{\epsilon}\text{ for some }\tilde{\sigma}>0,\] by using similar arguments as to obtain expansion in Lemma 2.1, as shown in [13, Lemma 4.1] with \(m=1\) and \(\xi=0\), where the coefficients \(\tilde{\gamma}_{i}\)'s, \(i=1,\ldots,m\), are given by \[\tilde{\gamma}_{i}\left[-\frac{1}{2\pi}\log\epsilon+H(0,0)\right]=\frac{4}{3} \alpha_{i}\log\delta_{i}+\frac{8}{3}+\frac{8\pi}{3}\alpha_{i}H(0,0) \tag{5.27}\] From (2.2) it follows that \(\tilde{\gamma}_{i}=-\frac{8\pi(\alpha_{1}-2)\beta_{i}}{3(\beta_{1}+1)}+O\big{(} \frac{1}{\log\rho}\big{)},\). **Claim 5.8**.: _There hold that \(a_{j}+4\sum_{i>j}a_{i}+2\tilde{c}_{l}=0\) with \(l=0\) for all \(j\) odd and \(l=1\) for all \(j\) even. Consequently, combining with (5.26) it follows that \(a_{i}=0\) for all \(i=1,\ldots,m\)._ **Proof:** We use the following test function \(P_{\epsilon}\eta_{j}\). Thus, from the assumption on \(h_{n}\), \(|\log\rho_{n}|\ \|h_{n}\|_{*}=o(1)\), we get the above relation between \(a_{j}\) and \(\tilde{c}_{i}\) either for \(l=0\) and all \(j\) odd or for \(l=1\) and all \(j\) even. Assume that \(l=0\) for all \(j\) odd or \(l=1\) for all \(j\) even. Multiplying equation (5.17) by \(P_{\epsilon}\eta_{j}\) and again integrating by parts we obtain that \[\int_{\Omega_{\epsilon}}hP\eta_{j}\!=\!\int_{\Omega_{\epsilon}}[ \psi_{l}-\tilde{c}_{l,n}]\left[|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}-|x|^{\alpha_{j }-2}e^{w_{j}}\eta_{j}\right]\!+\!\int_{\Omega_{\epsilon}}[K_{0}\psi_{0}+K_{1} \psi_{1}]\,P_{\epsilon}\eta_{j}\!=\!\int_{\Omega_{\epsilon}}\!\psi_{i}|x|^{ \alpha_{j}-2}e^{w_{j}}Z_{0j}\] \[+\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\psi_{l}\left( P\eta_{j}-\eta_{j}\right)+\int_{\Omega_{\epsilon}}\left(K_{0}\psi_{0}+K_{1} \psi_{1}-|x|^{\alpha_{j}-2}e^{w_{j}}\psi_{l}\right)P_{\epsilon}\eta_{j}-\tilde {c}_{l,n}\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\left(Z_{0j}-\eta_{ j}\right),\] Now, estimating every integral term we find that \(\int_{\Omega_{\epsilon}}hP\eta_{j}=O\left(\,|\log\rho|\,\|h\|_{p}\right)=o\left(1\right)\) for all \(j=1,\ldots,m\), in view of \(P\eta_{j}=O(|\log\rho|)\) and \(G(x,0)=O(|\log\rho|)\). 
Next, by scaling we obtain that either for \(l=0\) and all \(j\) odd or \(l=1\) and all \(j\) even, it holds \[\int_{\Omega_{\epsilon}}\psi_{l}|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}=\int_{\Omega _{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Psi _{l,j,n}Y_{0j}\,dy=a_{j}\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_ {j}-2}}{(1+|y|^{\alpha_{j}})^{2}}Y_{0j}^{2}\,dy+o(1).\] Note that \[\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha _{j}})^{2}}Y_{0j}^{2}=\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{ j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\left(\frac{1-|y|^{\alpha_{j}}}{1+|y|^{\alpha_{j}} }\right)^{2}\,dy=\frac{4\pi}{3}\alpha_{j}\] and \[\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha _{j}})^{2}}Y_{0j}\log|y|=\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{ \alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\ \frac{1-|y|^{\alpha_{j}}}{1+|y|^{\alpha_{j}}}\ \log|y|\,dy=-4\pi.\] Also, by using (5.27) we get that \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\psi_{l}\left( P_{\epsilon}\eta_{j}-\eta_{j}\right)=\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w _{j}}\psi_{l}\bigg{[}\left(\frac{8\pi}{3}\alpha_{j}-\tilde{\gamma}_{j}\right) H(x,0)+\frac{1}{2\pi}\tilde{\gamma}_{j}\log|x|+O(\rho^{\tilde{\sigma}})\bigg{]}\] \[=\left(\frac{8\pi}{3}\alpha_{j}-\tilde{\gamma}_{j}\right)\int_{ \Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2} }\Psi_{l,j,n}H(\delta_{j}y,0)\,dy+\frac{\tilde{\gamma}_{j}}{2\pi}\log\delta_{j }\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j }})^{2}}\Psi_{l,j,n}\,dy\] \[+\frac{\tilde{\gamma}_{j}}{2\pi}\int_{\Omega_{j,n}}\frac{2\alpha_ {j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Psi_{l,j,n}\log|y|dy+O( \rho^{\tilde{\sigma}})=-\frac{4(\alpha_{1}-2)\beta_{j}}{3(\beta_{!}+1)}a_{j} \int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{ j}})^{2}}Y_{0j}\log|y|dy+o(1),\] in view of (5.24). 
Furthermore, using (4.15), notice that \[\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\Psi_{l,j,n}(y)\,dy=a_{j}\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}Y_{0j}(y)\,dy+o(1)=o(1),\] since \[\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}Y_{0j}(y)\,dy=\int_{\mathbbm{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\cdot\frac{1-|y|^{\alpha_{j}}}{1+|y|^{\alpha_{j}}}\,dy=0.\] If \(i<j\) then \[P_{\epsilon}\eta_{j}(\delta_{i}y)= \bigg{[}\frac{4}{3}\alpha_{j}\log\delta_{j}+\frac{4}{3}\log\left(1+\left(\frac{\delta_{i}|y|}{\delta_{j}}\right)^{\alpha_{j}}\right)\bigg{]}\frac{1-\left(\frac{\delta_{i}|y|}{\delta_{j}}\right)^{\alpha_{j}}}{1+\left(\frac{\delta_{i}|y|}{\delta_{j}}\right)^{\alpha_{j}}}+\frac{8}{3}\cdot\frac{1}{1+\left(\frac{\delta_{i}|y|}{\delta_{j}}\right)^{\alpha_{j}}}\] \[+\frac{\tilde{\gamma}_{j}}{2\pi}\log(\delta_{i}|y|)+\Big{(}\frac{8\pi}{3}\alpha_{j}-\tilde{\gamma}_{j}\Big{)}H(\delta_{i}y,0)+O(\rho^{\tilde{\sigma}})\] and \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i}}\psi_{l}P_{\epsilon}\eta_{j} =\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}P_{\epsilon}\eta_{j}(\delta_{i}y)=\frac{\tilde{\gamma}_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}(y)\log|y|+o(1)\] \[= \frac{16\pi(\alpha_{1}-2)\beta_{j}}{3(\beta_{1}+1)}a_{i}+o(1)\] in view of \(\frac{\delta_{i}|y|}{\delta_{j}}=o(1)\) uniformly for \(y\) on compact subsets, (5.24) and dominated convergence. Similarly, if \(i>j\) then \[P_{\epsilon}\eta_{j}(\delta_{i}y)= \left[\frac{4}{3}\alpha_{j}\log\delta_{j}+\frac{4}{3}\log\Big{(}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}\Big{)}\right]\frac{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}-|y|^{\alpha_{j}}}{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}+\frac{8}{3}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}\cdot\frac{1}{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}\] \[+\frac{\tilde{\gamma}_{j}}{2\pi}\log(\delta_{i}|y|)+\Big{(}\frac{8\pi}{3}\alpha_{j}-\tilde{\gamma}_{j}\Big{)}H(\delta_{i}y,0)+O(\rho^{\tilde{\sigma}})\] and \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{i}-2}e^{w_{i}}\psi_{l}P_{\epsilon}\eta_{j}=\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}P_{\epsilon}\eta_{j}(\delta_{i}y)=\frac{4}{3}\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\log\Big{(}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}\Big{)}\frac{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}-|y|^{\alpha_{j}}}{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}\] \[+\frac{\tilde{\gamma}_{j}}{2\pi}\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}(y)\log|y|+o(1)=-\frac{4}{3}\alpha_{j}a_{i}(-4\pi)+\frac{16\pi(\alpha_{1}-2)\beta_{j}}{3(\beta_{1}+1)}a_{i}+o(1)\] in view of \(\frac{\delta_{j}}{\delta_{i}}=o(1)\), \[\frac{4}{3}\alpha_{j}\log\delta_{j}\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\frac{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}-|y|^{\alpha_{j}}}{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}} =\frac{4}{3}\log d_{j}\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\frac{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}-|y|^{\alpha_{j}}}{\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}-\frac{4}{3}\beta_{j}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\] \[\quad+\frac{8}{3}\beta_{j}\Big{(}\frac{\delta_{j}}{\delta_{i}}
\Big{)}^{\alpha_{j}}\log\rho\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\frac{1}{ \big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}=o(1),\] \[\int_{\Omega_{i,n}}\Pi_{i}\Psi_{l,i,n}\log\Big{(}\Big{(}\frac{ \delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}\Big{)}\frac{\big{(} \frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}-|y|^{\alpha_{j}}}{\big{(} \frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{j}}+|y|^{\alpha_{j}}}=-\,\alpha _{j}a_{i}\int_{\mathds{R}^{2}}\Pi_{i}Y_{0i}\log|y|+o(1),\] (5.24) and dominated convergence. If \(l=0\) for \(j\) odd and \(l=1\) for \(j\) even, we get that \[\int_{\Omega_{\epsilon}}\left[K_{0}\psi_{0}+K_{1}\psi_{1}-|x|^{ \alpha_{j}-2}e^{w_{j}}\psi_{l}\right]P_{\epsilon}\eta_{j}=\sum_{i<j}\frac{16\pi( \alpha_{1}-2)\beta_{j}}{3(\beta_{1}+1)}a_{i}+\frac{16\pi(\alpha_{1}-2)\beta_{ j}}{3(\beta_{1}+1)}a_{j}\] \[+\sum_{i>j}\left[\frac{16\pi(\alpha_{1}-2)\beta_{j}}{3(\beta_{1} +1)}a_{i}+\frac{16\pi}{3}\alpha_{j}a_{i}\right]+o(1)=\frac{16\pi(\alpha_{1}-2) \beta_{j}}{3(\beta_{1}+1)}\sum_{i=1\atop i\neq j}^{m}a_{i}+\frac{16\pi}{3} \alpha_{j}\sum_{i>j}a_{i}+o(1).\] Besides, similarly as above we obtain that \[\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\left(Z_{0j}- \eta_{j}\right)=\int_{\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}Z_{0j}-\int_ {\Omega_{\epsilon}}|x|^{\alpha_{j}-2}e^{w_{j}}\eta_{j}=\int_{B_{\frac{\pi}{ \delta_{j}}}(0)\setminus B_{\frac{\pi}{\delta_{j}}}(0)}\frac{2\alpha_{j}^{2}|y| ^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\frac{1-|y|^{\alpha_{j}}}{1+|y|^{ \alpha_{j}}}\] \[-\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y| ^{\alpha_{j}})^{2}}\left[\frac{4}{3}\log\big{(}\delta_{j}^{\alpha_{j}}+\delta_{ j}^{\alpha_{j}}|y|^{\alpha_{j}}\big{)}\,Y_{0j}(y)+\frac{8}{3}\frac{1}{1+|y|^{ \alpha_{j}}}\right]dy+O(\delta_{j}^{\alpha_{j}})\] \[=O(\rho^{\beta}|\log\rho|)-\frac{4}{3}\alpha_{j}\log\delta_{j}\int_ {\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2} }Y_{0j}(y)\,dy-\frac{4}{3}\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}- 2}}{(1+|y|^{\alpha_{j}})^{2}}Y_{0j}(y)\log\left(1+|y|^{\alpha_{j}}\right)\,dy\] \[-\frac{8}{3}\int_{\Omega_{j,n}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}- 2}}{(1+|y|^{\alpha_{j}})^{2}}\frac{1}{1+|y|^{\alpha_{j}}}\,dy=-\frac{8\pi}{3} \alpha_{j}+o(1),\] in view of \[\int_{\mathds{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^ {2}}Y_{0j}(y)\log\left(1+|y|^{\alpha_{j}}\right)\,dy=-2\pi\alpha_{j}\qquad \text{and}\qquad\int_{\mathds{R}^{2}}\frac{2\alpha_{j}^{2}|y|^{\alpha_{j}-2}}{(1+|y |^{\alpha_{j}})^{2}}\frac{1}{1+|y|^{\alpha_{j}}}\,dy=2\pi\alpha_{j}.\] Therefore, we conclude that \[o(1)=a_{j}\left(\frac{4\pi}{3}\alpha_{j}+o(1)\right)+\frac{16\pi(\alpha_{1}-2) \beta_{j}}{3(\beta_{1}+1)}a_{j}+\frac{16\pi(\alpha_{1}-2)\beta_{j}}{3(\beta_{1} +1)}\sum_{i=1\atop i\neq j}^{m}a_{i}+\frac{16\pi}{3}\alpha_{j}\sum_{i>j}a_{i }-\tilde{c}_{l,n}\left(-\frac{8\pi}{3}\alpha_{j}+o(1)\right),\] and the conclusion follows. Now, by using Claims 5.4-5.8 and arguing similarly to the proof of Proposition 3.1, we deduce the a-priori estimate (4.18). This finishes the proof. 
## Appendix A

**Lemma A.1**.: _There exists \(p_{0}>1\) close to \(1\) such that all the following integrals are of order \(O\left(\rho^{p\eta_{p}}\right)\) for any \(1<p\leq p_{0}\) and for some \(\eta_{p}>0\): \((i)\)\(\delta_{j}^{2-p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}\frac{|y|^{\alpha_{j}-1}}{(1+|y|^{\alpha_{j}})^{2}}\big{|}^{p}dy\); \((ii)\)\(\rho^{\eta}\delta_{j}^{2-2p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}\frac{|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\big{|}^{p}dy\); \((iii)\)\(\delta_{j}^{2-2p}\big{(}\frac{\delta_{i}}{\delta_{j}}\big{)}^{\alpha_{i}p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}\frac{1}{|y|^{\alpha_{i}+2}}\big{|}^{p}dy\), for \(i<j\); \((iv)\)\(\delta_{j}^{2-2p}\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{i}p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}|y|^{\alpha_{i}-2}\big{|}^{p}dy\), for \(j<i\); \((v)\)\(\rho^{(1+\tau)p}\delta_{j}^{2+2\tau p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}\frac{(1+|y|^{\alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}\big{|}^{p}dy\), for \(j\) odd; and \((vi)\)\(\rho^{(1+1/\tau)p}\delta_{j}^{2+2p/\tau}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}\frac{(1+|y|^{\alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}\big{|}^{p}dy\), for \(j\) even._

**Proof:** It is readily checked that for any \(p>1\) and any \(j=1,\ldots,m\) \[\delta_{j}^{2-p}\int_{\frac{A_{j}}{\delta_{j}}}\bigg{|}\frac{|y|^{\alpha_{j}-1}}{(1+|y|^{\alpha_{j}})^{2}}\bigg{|}^{p}\ dy=O(\delta_{j}^{2-p})=O(\rho^{(2-p)\beta_{j}/\alpha_{j}})\] and \[\rho^{\eta}\delta_{j}^{2-2p}\int_{\frac{A_{j}}{\delta_{j}}}\bigg{|}\frac{|y|^{\alpha_{j}-2}}{(1+|y|^{\alpha_{j}})^{2}}\bigg{|}^{p}\ dy=O(\rho^{\eta}\delta_{j}^{2-2p})=O(\rho^{\eta+(2-2p)\beta_{j}/\alpha_{j}}).\] Furthermore, fixing \(j=1,\ldots,m\), for any \(i<j\) we have that \[\int_{\frac{A_{j}}{\delta_{j}}}\bigg{|}\frac{1}{|y|^{\alpha_{i}+2}}\bigg{|}^{p}\ dy=O\bigg{(}\Big{(}\frac{\delta_{j-1}}{\delta_{j}}\Big{)}^{1-p-\frac{\alpha_{i}p}{2}}\bigg{)}.\] Hence, there exists \(\eta_{p}>0\) such that \(\delta_{j}^{2-2p}\big{(}\frac{\delta_{i}}{\delta_{j}}\big{)}^{\alpha_{i}p}\big{(}\frac{\delta_{j-1}}{\delta_{j}}\big{)}^{1-p-\frac{\alpha_{i}p}{2}}=O(\rho^{p\eta_{p}})\), since for \(p=1\) we have that \[\Big{(}\frac{\delta_{i}}{\delta_{j}}\Big{)}^{\alpha_{i}}\Big{(}\frac{\delta_{j-1}}{\delta_{j}}\Big{)}^{-\frac{\alpha_{i}}{2}}=\Big{(}\frac{\delta_{i}}{\delta_{j-1}}\Big{)}^{\alpha_{i}}\Big{(}\frac{\delta_{j-1}}{\delta_{j}}\Big{)}^{\frac{\alpha_{i}}{2}}=O\big{(}\rho^{\alpha_{i}(\frac{\beta_{i}}{\alpha_{i}}-\frac{\beta_{j-1}}{\alpha_{j-1}})+\frac{\alpha_{i}}{2}(\frac{\beta_{j-1}}{\alpha_{j-1}}-\frac{\beta_{j}}{\alpha_{j}})}\big{)}\] and \(\alpha_{i}\Big{(}\frac{\beta_{i}}{\alpha_{i}}-\frac{\beta_{j-1}}{\alpha_{j-1}}\Big{)}+\frac{\alpha_{i}}{2}\Big{(}\frac{\beta_{j-1}}{\alpha_{j-1}}-\frac{\beta_{j}}{\alpha_{j}}\Big{)}>0\), in view of \(i\leq j-1<j\) and the fact that \(\frac{\beta_{l}}{\alpha_{l}}\) is decreasing in \(l\).
Similarly, for \(j<i\) we have that \[\delta_{j}^{2-2p}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{i}p}\int_{\frac{A_{j}}{\delta_{j}}}\big{|}|y|^{\alpha_{i}-2}\big{|}^{p}\ dy=O\bigg{(}\delta_{j}^{2-2p}\Big{(}\frac{\delta_{j}}{\delta_{i}}\Big{)}^{\alpha_{i}p}\Big{(}\frac{\delta_{j+1}}{\delta_{j}}\Big{)}^{1-p+\frac{\alpha_{i}p}{2}}\bigg{)}\] and for \(p=1\) we find that \(\big{(}\frac{\delta_{j}}{\delta_{i}}\big{)}^{\alpha_{i}}\big{(}\frac{\delta_{j+1}}{\delta_{j}}\big{)}^{\frac{\alpha_{i}}{2}}=\big{(}\frac{\delta_{j+1}}{\delta_{i}}\big{)}^{\alpha_{i}}\big{(}\frac{\delta_{j}}{\delta_{j+1}}\big{)}^{\frac{\alpha_{i}}{2}}=O\big{(}\rho^{\alpha_{i}(\frac{\beta_{j+1}}{\alpha_{j+1}}-\frac{\beta_{i}}{\alpha_{i}})+\frac{\alpha_{i}}{2}(\frac{\beta_{j}}{\alpha_{j}}-\frac{\beta_{j+1}}{\alpha_{j+1}})}\big{)}.\) Now, if \(j\) is odd we have that \[\int_{\frac{A_{j}}{\delta_{j}}}\bigg{|}\frac{(1+|y|^{\alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}\bigg{|}^{p}\ dy=O\bigg{(}\Big{(}\frac{\delta_{j-1}}{\delta_{j}}\Big{)}^{1+p\tau-\frac{\alpha_{j}p\tau}{2}}+\Big{(}\frac{\delta_{j+1}}{\delta_{j}}\Big{)}^{1+p\tau+\frac{\alpha_{j}p\tau}{2}}\bigg{)}.\] Then, we get that \(\rho^{(1+\tau)p}\delta_{j}^{2+2\tau p}\Big{(}\frac{\delta_{j-1}}{\delta_{j}}\Big{)}^{1+p\tau-\frac{\alpha_{j}p\tau}{2}}=O\big{(}\rho^{(1+\tau)p+\frac{\beta_{j}}{\alpha_{j}}(1+\tau p+\frac{\alpha_{j}p\tau}{2})+\frac{\beta_{j-1}}{\alpha_{j-1}}(1+p\tau-\frac{\alpha_{j}p\tau}{2})}\big{)}\), and for \(p=1\) we find that \[1+\tau+\frac{\beta_{j}}{\alpha_{j}}\Big{(}1+\tau+\frac{\alpha_{j}\tau}{2}\Big{)}+\frac{\beta_{j-1}}{\alpha_{j-1}}\Big{(}1+\tau-\frac{\alpha_{j}\tau}{2}\Big{)}>0. \tag{A.1}\] Indeed, \(\alpha_{j}=\frac{\alpha_{j-1}+2}{\tau}+2\) implies \(\frac{\alpha_{j}\beta_{j-1}\tau}{2\alpha_{j-1}}=\frac{\beta_{j-1}}{2}+\frac{\beta_{j-1}}{\alpha_{j-1}}(1+\tau)\). Also, \(\beta_{j-1}=\frac{\tau\beta_{j}+\tau+1}{\tau}\) and \(\frac{1}{2}(1+\tau)+\frac{\beta_{j}}{\alpha_{j}}(1+\tau)>0\) imply that \[1+\tau+\frac{\beta_{j}}{\alpha_{j}}\Big{(}1+\tau+\frac{\alpha_{j}\tau}{2}\Big{)}=1+\tau+\frac{\beta_{j}}{\alpha_{j}}(1+\tau)+\frac{\beta_{j}\tau}{2}>\frac{\beta_{j}\tau}{2}+\frac{1}{2}(1+\tau)\] \[\geq\frac{\beta_{j-1}}{2}=\frac{\alpha_{j}\beta_{j-1}\tau}{2\alpha_{j-1}}-\frac{\beta_{j-1}}{\alpha_{j-1}}(1+\tau)=-\frac{\beta_{j-1}}{\alpha_{j-1}}\Big{(}1+\tau-\frac{\alpha_{j}\tau}{2}\Big{)}\] and (A.1) follows. Similarly, we get that \[\rho^{(1+\tau)p}\delta_{j}^{2+2\tau p}\Big{(}\frac{\delta_{j+1}}{\delta_{j}}\Big{)}^{1+p\tau+\frac{\alpha_{j}p\tau}{2}}=O\big{(}\rho^{(1+\tau)p+\frac{\beta_{j}}{\alpha_{j}}(1+\tau p-\frac{\alpha_{j}p\tau}{2})+\frac{\beta_{j+1}}{\alpha_{j+1}}(1+p\tau+\frac{\alpha_{j}p\tau}{2})}\big{)},\] and for \(p=1\), \(1+\tau+\frac{\beta_{j}}{\alpha_{j}}\big{(}1+\tau-\frac{\alpha_{j}\tau}{2}\big{)}+\frac{\beta_{j+1}}{\alpha_{j+1}}\big{(}1+\tau+\frac{\alpha_{j}\tau}{2}\big{)}>0\), by using that \(\alpha_{j}=\frac{\alpha_{j+1}-2}{\tau}-2\) and \(\beta_{j+1}=\tau\beta_{j}-\tau-1\). Therefore, there exists \(\eta_{p}>0\) such that \[\rho^{(1+\tau)p}\delta_{j}^{2+2\tau p}\int_{\frac{A_{j}}{\delta_{j}}}\Big{|}\frac{(1+|y|^{\alpha_{j}})^{2\tau}}{|y|^{(\alpha_{j}-2)\tau}}\Big{|}^{p}\ dy=O\big{(}\rho^{p\eta_{p}}\big{)}.\] Next, similar arguments to those above lead us to obtain that, for \(j\) even, \[1+\frac{1}{\tau}+\frac{\beta_{j}}{\alpha_{j}}\Big{(}1+\frac{1}{\tau}+\frac{\alpha_{j}}{2\tau}\Big{)}+\frac{\beta_{j-1}}{\alpha_{j-1}}\Big{(}1+\frac{1}{\tau}-\frac{\alpha_{j}}{2\tau}\Big{)}>0,\] by using \(\alpha_{j}=(\alpha_{j-1}+2)\tau+2\) and \(\beta_{j-1}=\frac{\beta_{j}+\tau+1}{\tau}\).
Also, by using that \(\alpha_{j}=(\alpha_{j+1}-2)\tau-2\) and \(\beta_{j}=\tau\beta_{j+1}+\tau+1\), it follows that \(1+\frac{1}{\tau}+\frac{\beta_{j}}{\alpha_{j}}\big(1+\frac{1}{\tau}-\frac{\alpha_{j}}{2\tau}\big)+\frac{\beta_{j+1}}{\alpha_{j+1}}\big(1+\frac{1}{\tau}+\frac{\alpha_{j}}{2\tau}\big)>0\). Thus, there exists \(\eta_{p}>0\) such that

\[\rho^{(1+1/\tau)p}\delta_{j}^{2+2p/\tau}\int_{\frac{A_{j}}{\delta_{j}}}\Big|\frac{(1+|y|^{\alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}\Big|^{p}\,dy=O\big(\rho^{p\eta_{p}}\big),\]

in view of

\[\int_{\frac{A_{j}}{\delta_{j}}}\Big|\frac{(1+|y|^{\alpha_{j}})^{2/\tau}}{|y|^{(\alpha_{j}-2)/\tau}}\Big|^{p}\,dy=O\bigg(\bigg(\frac{\delta_{j-1}}{\delta_{j}}\bigg)^{1+\frac{p}{\tau}-\frac{\alpha_{j}p}{2\tau}}+\bigg(\frac{\delta_{j+1}}{\delta_{j}}\bigg)^{1+\frac{p}{\tau}+\frac{\alpha_{j}p}{2\tau}}\bigg).\]

This completes the proof.

**Acknowledgements**

The author would like to thank Professor Angela Pistoia (U. Roma "La Sapienza", Italy) for proposing problem (1.2) to him. Moreover, the author would like to express his gratitude to Professors Angela Pistoia and Pierpaolo Esposito (U. Roma Tre, Italy) for many stimulating discussions about this problem and related ones. This work has been supported by grant Fondecyt Regular N\({}^{\rm o}\) 1201884, Chile.
2308.13792
Out-of-distribution detection using normalizing flows on the data manifold
A common approach for out-of-distribution detection involves estimating an underlying data distribution, which assigns a lower likelihood value to out-of-distribution data. Normalizing flows are likelihood-based generative models providing a tractable density estimation via dimension-preserving invertible transformations. Conventional normalizing flows are prone to fail in out-of-distribution detection, because of the well-known curse of dimensionality problem of the likelihood-based models. According to the manifold hypothesis, real-world data often lie on a low-dimensional manifold. This study investigates the effect of manifold learning using normalizing flows on out-of-distribution detection. We proceed by estimating the density on a low-dimensional manifold, coupled with measuring the distance from the manifold, as criteria for out-of-distribution detection. However, individually, each of them is insufficient for this task. The extensive experimental results show that manifold learning improves the out-of-distribution detection ability of a class of likelihood-based models known as normalizing flows. This improvement is achieved without modifying the model structure or using auxiliary out-of-distribution data during training.
Seyedeh Fatemeh Razavi, Mohammad Mahdi Mehmanchi, Reshad Hosseini, Mostafa Tavassolipour
2023-08-26T07:35:16Z
http://arxiv.org/abs/2308.13792v1
# Out-of-distribution detection using normalizing flows on the data manifold

###### Abstract

A common approach for out-of-distribution detection involves estimating an underlying data distribution, which assigns a lower likelihood value to out-of-distribution data. Normalizing flows are likelihood-based generative models providing a tractable density estimation via dimension-preserving invertible transformations. Conventional normalizing flows are prone to fail in out-of-distribution detection, because of the well-known curse of dimensionality problem of the likelihood-based models. According to the manifold hypothesis, real-world data often lie on a low-dimensional manifold. This study investigates the effect of manifold learning using normalizing flows on out-of-distribution detection. We proceed by estimating the density on a low-dimensional manifold, coupled with measuring the distance from the manifold, as criteria for out-of-distribution detection. However, individually, each of them is insufficient for this task. The extensive experimental results show that manifold learning improves the out-of-distribution detection ability of a class of likelihood-based models known as normalizing flows. This improvement is achieved without modifying the model structure or using auxiliary out-of-distribution data during training.

keywords: Generative models, Manifold learning, Normalizing flows, Out-of-distribution +

Footnote †: journal: Arxiv

## 1 Introduction

Out-Of-Distribution (OOD) detection classifies test data into in-/out-of-distribution data (Yang et al., 2022). Generative models that rely on likelihood estimation are a prominent candidate for detecting OOD data: it is expected that they assign low likelihood values to OOD data. However, high-dimensional likelihood-based generative models are susceptible to failure in OOD detection (Nalisnick et al., 2019).

Normalizing Flows (NFs), as likelihood-based generative models with tractable likelihood, appear to be well-suited for addressing the OOD detection problem. However, their likelihood does not provide a significant distinction for OOD data. In fact, NFs tend to assign high likelihood to OOD data (Nalisnick et al., 2019; Kirichenko et al., 2020). Consequently, it has been argued that they learn local transformations instead of semantics. This problem is fundamental because estimating the likelihood in high-dimensional spaces is challenging (Theis et al., 2016). Using an alternative measurement instead of the pure likelihood is a candidate solution pursued by researchers, such as the likelihood ratio (Ren et al., 2019), likelihood regret (Xiao et al., 2020), and Input Complexity (IC) (Serra et al., 2020).

To the best of our knowledge, the role of manifold learning in tackling the aforementioned problem has not yet been studied for NFs. An important structural limitation of common NFs is that they cannot learn the manifolds in which the data are embedded. Real data usually lie on low-dimensional manifolds, and powerful generative models exploit this intuition (Kingma and Welling, 2014; Goodfellow et al., 2014). NFs use bijective transformations and therefore preserve dimensionality between the input and transformed spaces. One might think that this could easily be solved by using a non-bijective transformation that maps the data space to a lower-dimensional one, and then maximizing the likelihood of the data on the low-dimensional manifold obtained by such a transformation.
Unfortunately, this optimization problem cannot be solved exactly; this is also the case for some other well-known likelihood-based methods on manifolds, which instead optimize a lower bound of the likelihood (Kingma and Welling, 2014). Recently, several researchers have proposed solutions to the problem of maximizing the likelihood of the data on a manifold using NFs. Some use an injective transformation in NFs (Brehmer and Cranmer, 2020; Caterini et al., 2021; Huang et al., 2021); using injective transformations makes the optimization computationally expensive. Brehmer and Cranmer (2020) used a two-step training procedure: first the manifold is learned using an NF, then the density is estimated using another NF. This procedure simplifies the training, but it can lead to poor density estimation, since learning only the manifold during the first training step while ignoring the density estimation objective may yield a manifold that introduces difficulties, sometimes irreparable, into the density estimation of the second training step. Horvat and Pfister (2021) introduced a method that learns a low-dimensional manifold with NFs in a single-step training procedure without using injective transformations. First, the learning of the manifold coupled with density estimation is achieved using an NF. This is followed by another NF that accurately estimates the density on the learned manifold. This procedure keeps manifold learning aware of the density estimation objective, which distinguishes it from Brehmer and Cranmer (2020). Inspired by the previously mentioned methods, we provide an approach that overcomes the aforementioned inherent shortcoming of NFs as likelihood-based generative models for estimating a manifold, particularly in high dimensions. We then employ the model for OOD detection.

Fig. 1 is a toy example that illustrates the intuition behind our proposed method. Suppose that we have a semicircle dataset with unit radius where the data points are generated according to a truncated Gaussian distribution \(\mathcal{N}(\mu=\frac{\pi}{2},\sigma=0.85,a=\frac{-\pi}{1.7},b=\frac{\pi}{1.7})\). The data in this two-dimensional representation clearly reside on a one-dimensional manifold (i.e., a straight finite line). To find this line, for every \(x\in\mathbb{R}\) and \(y\in\mathbb{R}^{+}\), we can project the point \((x,y)\) to \(\theta\), where \(0\leq\theta\leq\pi\). By using the equations \(x^{\prime}=\cos(\theta)\) and \(y^{\prime}=\sin(\theta)\), we can then transform this line back to the semicircle and obtain the reconstructed point \((x^{\prime},y^{\prime})\). If the initial point \((x,y)\) lies on the arc, then \((x^{\prime},y^{\prime})=(x,y)\). Otherwise, \((x,y)\) will be mapped to the nearest point \((x^{\prime},y^{\prime})\) on the arc. So, we can define a reconstruction loss between each point and its reconstructed point based on a distance function. Fig. 1(a) shows the reconstruction loss for points on a grid, where we used the Mean Squared Error (MSE) loss (squared Euclidean distance). The NLL on the manifold for points on the grid is shown in Fig. 1(b). Now consider points \(A\) and \(C\). Point \(A\) has low likelihood and point \(C\) has high likelihood (see Fig. 1(b)). If we only consider the likelihood on the manifold, we may incorrectly consider \(A\) an OOD sample and \(C\) an In-Distribution (ID) sample. To address this issue, we add the reconstruction loss to the NLL term with a positive constant factor; a minimal sketch of the resulting combined score on this toy dataset is given below.
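The following is a minimal, self-contained sketch of the combined score on the semicircle toy dataset. The helper name `ood_score`, the weight `lam`, and the use of SciPy's standardized truncation bounds (which make the reported parameters span \([0,\pi]\)) are our own illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.stats import truncnorm

# Angle distribution on the manifold; scipy's truncnorm takes standardized
# bounds, so the support is loc + [a, b] * scale = [0, pi] for these parameters.
theta_dist = truncnorm(a=-np.pi / 1.7, b=np.pi / 1.7, loc=np.pi / 2, scale=0.85)

def ood_score(x, y, lam=1.0):
    """Combined OOD score: NLL on the manifold + lam * reconstruction loss."""
    theta = np.clip(np.arctan2(y, x), 0.0, np.pi)   # project onto the arc (y >= 0)
    x_rec, y_rec = np.cos(theta), np.sin(theta)     # reconstructed on-manifold point
    rec_loss = (x - x_rec) ** 2 + (y - y_rec) ** 2  # squared Euclidean distance
    nll = -theta_dist.logpdf(theta)                 # NLL of the projected point
    return nll + lam * rec_loss

# A point on the arc in a high-density region scores low; a far-away point scores high.
print(ood_score(0.0, 1.0), ood_score(2.0, 2.0))
```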
This way, our OOD score includes both the NLL and the reconstruction loss. We have presented the proposed OOD score in Fig. 1(c). Based on the provided example, it may be tempting to use only the reconstruction loss for OOD detection. However, we should consider points \(B\) and \(D\) to fully understand the importance of the likelihood term. Although points \(B\) and \(D\) have the same reconstruction loss, point \(D\) is near a region with high density, unlike point \(B\). Therefore, it is reasonable to consider point \(D\) an ID sample and point \(B\) an OOD sample. In other words, we can tolerate a larger reconstruction loss for samples near high-density regions compared to those near low-density regions. This intuition can be clearly observed in Fig. 1(c).

To jointly estimate the density and learn the manifold, the objective is to optimize the likelihood within the on-manifold region while imposing penalties on the off-manifold region. To achieve this, a penalty function is employed, which encourages the inverse mapping to reconstruct the original data based only on the on-manifold subspace. We combine the obtained NLL and the manifold score (reconstruction loss) to create an OOD detection score. We find this approach promising as it enhances the performance of common NFs in OOD detection tasks without increasing architectural complexity or using auxiliary OOD training data. Accordingly, our main contributions are summarized as follows:

1. We explore the impact of incorporating manifold learning into NFs for OOD detection and demonstrate that the use of NFs with manifold learning can enhance OOD detection in some cases.
2. While manifold learning algorithms based on NFs typically employ the MSE penalty function, we also investigate the impact of using the Huber penalty function (Huber, 1964) during the manifold learning phase on OOD detection.

The paper is organized as follows. Section 2 reviews the recently proposed methods related to manifold learning in NFs and OOD detection. The preliminaries are introduced in Section 3. Section 4 introduces the proposed method. Experimental results are presented in Section 5. Finally, the conclusion is discussed in Section 6.

Figure 1: A semicircle toy dataset to illustrate the proposed score for OOD detection. The score is a combination of the NLL and a measure of distance to the manifold (reconstruction loss).

## 2 Related Work

Studies on manifold learning in NF methods and on OOD detection methods are reviewed in Section 2.1 and Section 2.2, respectively.

### Manifold learning in NFs

This section provides a brief overview of the most relevant research on manifold learning in NFs. In general, the methods fall into two classes: NFs on prescribed manifolds and NFs on learnable manifolds. The first class focuses on learning flows on prescribed manifolds, while the second includes learning the manifold itself, which aligns closely with our study. Recently, several solutions to the problem of manifold learning and density estimation with NFs have been proposed (Kim et al., 2020; Kothari et al., 2021; Huang et al., 2021; Brehmer and Cranmer, 2020; Caterini et al., 2021; Horvat and Pfister, 2021; Kalatzis et al., 2022; Ross and Cresswell, 2021). In the following, we review the works most related to our research and highlight their key properties. One proposed line of work separates manifold learning from density estimation.
The pioneering work in this direction for NFs is a method named \(\mathcal{M}\)-flow (Brehmer and Cranmer, 2020). In \(\mathcal{M}\)-flow, an injective NF transformation is used to transform the data onto a manifold; after that, another NF is used to estimate the density on the manifold. Manifold learning and density estimation are done separately to avoid the computation of the Jacobian term induced by the injective transformation (two-phase training). An extension of this study, called multi-chart flow (Kalatzis et al., 2022), employs multiple mappings instead of one to find a manifold with multiple charts; it also suffers from two-phase training. Another follow-up to \(\mathcal{M}\)-flow, named Rectangular flow (Caterini et al., 2021), handles the computation induced by the injective transformation by relying on automatic differentiation and linear algebra tricks. A recent study, named Denoising NF (DNF) (Horvat and Pfister, 2021), overcomes the limitations of using injective transformations and of separating the training phases by splitting the transformed space of an NF into two parts, noise-insensitive and noise-sensitive. These parts are modeled by another NF and a low-variance Gaussian distribution, respectively. Noise is also added to the input training data. Using this structure, two-phase training is no longer needed and the two NFs are trained simultaneously.

### Out-of-distribution detection

A brief overview of recent and prominent developments in OOD detection is provided in the following. It should be noted that while there is a wide range of topics for this discussion (Yang et al., 2022), the main focus of the current research is OOD detection through density estimation.

#### Non-density-based methods.

Contrary to the main focus of this paper, OOD detection methods are not limited to density-based models. Data labels can also guide this problem. Applying an appropriate threshold on a pre-trained classifier, decreasing the model's overconfidence, changing the training approach, and using auxiliary OOD training data are common approaches (Lee et al., 2018; Hendrycks et al., 2019; DeVries and Taylor, 2018; Liang et al., 2018). Lee et al. (2018) propose a pre-training algorithm that uses generated auxiliary OOD training data. Since classifiers are not trained for the purpose of OOD detection, but instead to increase accuracy at inference time, a targeted pre-training approach can help the model enrich its features and decrease its confidence on OOD data. Similarly, a related method (called outlier exposure) defines a specific loss function that discriminates between ID data and auxiliary OOD data with a ranking loss to enhance the pre-training approach (Hendrycks et al., 2019). Confidence estimation alongside the main objective function (DeVries and Taylor, 2018) and using a temperature-based Softmax function (Liang et al., 2018) are other solutions pursued in the literature.

#### Density-based methods.

Likelihood-based generative models seem to be good candidates for OOD detection, since they should assign less probability to OOD data. However, a frequent observation about likelihood-based models such as Auto-Regressive (AR) models (Ren et al., 2019), Variational Auto-Encoders (VAEs) (Xiao et al., 2020), and NFs (Nalisnick et al., 2019; Kirichenko et al., 2020) is that the likelihood is not a discriminative score for OOD data (Zhang et al., 2021; Nagarajan et al., 2021).
The common weakness of all the mentioned models is that they also learn the background and irrelevant information; in other words, they attend to the whole space equally. Using the likelihood ratio instead of the pure likelihood in AR models is a well-known solution, first proposed for sequential genomics OOD detection (Ren et al., 2019). It considers a background/semantic decomposition of the data. The ratio score is obtained by dividing the likelihood value of the model by the likelihood value of a background model. The background model is trained on perturbed data so that it learns the background information; the defined ratio then implicitly ignores irrelevant information. Despite the success of this model in genomic OOD detection, it has not achieved state-of-the-art results on image datasets. A remarkable solution for VAEs is named likelihood regret (Xiao et al., 2020). This method is based on the principle that OOD data can drastically shift the likelihood after several fine-tuning steps; the authors employ a VAE and avoid overfitting during the fine-tuning steps by freezing the decoder block. According to the literature, NFs only learn local transformations rather than semantics (Nalisnick et al., 2019; Kirichenko et al., 2020). Therefore, despite having a tractable likelihood, they suffer from shortcomings in OOD detection. Previously proposed methods try to integrate high-level information in various ways to avoid this failure mode. For instance, the results of training NFs on embedded data instead of raw data confirm this claim (Kirichenko et al., 2020). Moreover, improving OOD detection can be achieved by redesigning the model's architecture; using attention units (Kumar et al., 2021) or deep residual flows (Zisselman and Tamar, 2020) can be mentioned as pioneering works. As with non-density-based methods, changing the training strategy (such as using information theory (Ardizzone et al., 2020)) can be effective for OOD detection. Employing a test-time OOD score based on input complexity is another valuable measurement (Serra et al., 2020). It can be viewed as a likelihood ratio between the learned density and a universal lossless compressor: simple data are coded with fewer bits, so the ratio between the trained model and the compressor yields a likelihood ratio reflecting the actual number of bits. A further line of research addresses this limitation by ensembling models (Choi et al., 2019). Surprisingly, this approach was successful despite its over-parametrization: although no single density estimation model makes an accurate diagnosis, the mentioned research shows that, contrary to expectations, the combination of such models in aggregate form is successful. To the best of our knowledge, investigating the effect of manifold learning in NFs on OOD detection has not been covered in the literature.

## 3 Preliminaries

This section introduces our notation and the related preliminary work to make it easier for the reader to follow the subject. The rest of this section is arranged as follows: first, the standard normalizing flow is discussed in Section 3.1; after that, a robust reconstruction loss function is presented in Section 3.2.

### Normalizing Flow

There is a variety of well-known deep likelihood-based generative methods, such as VAEs (Kingma and Welling, 2014), NFs (Rezende and Mohamed, 2015), AR models (Murphy, 2022), energy-based models (Murphy, 2022), and diffusion models (Sohl-Dickstein et al., 2015). Among these models, only ARs and NFs can compute the likelihood exactly.
VAEs and diffusion models optimize a lower bound on the likelihood, while energy-based models approximate it. Sampling in common AR models is computationally expensive due to the sequential nature of these models. Sampling in NFs is not sequential, but they have a structural limitation (dimension preservation) that limits their applicability and generative power.

An NF is a parametric diffeomorphism \(f_{\phi}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\), i.e., a bijective transformation that is differentiable in both directions. By choosing a random variable \(Z\in\mathbb{R}^{D}\) with a pre-defined distribution and transforming it back by \(f_{\phi}^{-1}\), the distribution of the data is given by the change-of-variables formula

\[p_{X}(x)=p_{Z}(f_{\phi}(x))\,|\det J_{f_{\phi}}(x)|, \tag{1}\]

where \(J_{f_{\phi}}\in\mathbb{R}^{D\times D}\) is the Jacobian matrix of the transformation \(f_{\phi}\). The practical computability of this term is the implicit constraint imposed on NFs. Moreover, the model parameters are estimated by the NLL criterion

\[\phi^{*}=\arg\min_{\phi}\big{(}-\log p_{X}(x)\big{)}, \tag{2}\]

where \(x=\{x_{n}\}_{n=1}^{N}\) is the available training data from the distribution \(p_{X}\).

### Reconstruction loss functions

During the training of NFs on a manifold, all off-manifold data have a likelihood of 0. In the literature, penalizing the off-manifold part through a quadratic reconstruction function, typically MSE, is a common approach to addressing this problem (Brehmer and Cranmer, 2020; Caterini et al., 2021; Horvat and Pfister, 2021; Ross and Cresswell, 2021). A weakness of quadratic functions is their sensitivity to off-manifold data points. In contrast, a regularized reconstruction function utilizes both linear and quadratic forms, with the form switching depending on a comparison between the error value and a threshold \(\delta\). Accordingly, this switching ensures that points far off the manifold are not penalized as severely as under a purely quadratic loss. Eq. 3 defines such a switching function, named the Huber function (Huber, 1964), with a strong background in robust statistics:

\[H_{\delta}(x,y)=\begin{cases}0.5(x-y)^{2},&\text{if }|x-y|<\delta\\ \delta(|x-y|-0.5\delta),&\text{otherwise}\end{cases} \tag{3}\]

## 4 Proposed method

Considering a standard NF \(f_{\phi}\), the density of the data is estimated by Eq. 1. It would be appealing if we could calculate the density of the data on the manifold and measure its impact on OOD detection. However, density estimation on manifolds is not easy. In this paper, we take inspiration from existing methods of manifold learning in NFs (\(\mathcal{M}\)-flow (Brehmer and Cranmer, 2020) and DNF (Horvat and Pfister, 2021)) and propose a new one for detecting OOD data. From the manifold perspective, the transformed (or latent) space \(z\) can be disentangled into two spaces: an on-manifold space (\(u\)) and an off-manifold space (\(v\)). This is because placing all real-valued data on a manifold is not necessarily a correct assumption, and some of the data generally lie outside the manifold. The distribution of the transformed data is a joint distribution of these two parts, \(p_{Z}(z)=p_{Z}(u,v)\).
Assuming independence between the two spaces (\(u\perp v\)), the distribution of the transformed space decomposes into the product of the two sub-spaces' distributions, \(p_{Z}(z)=p_{U}(u)\times p_{V}(v)\), where \(U:\mathcal{M}_{M}\subseteq\mathbb{R}^{d}\) and \(V:\mathcal{M}_{O}\subseteq\mathbb{R}^{D-d}\) are the on-manifold and the off-manifold sub-spaces, respectively. Our goal is to rearrange the transformed space of an NF in such a manner that one portion corresponds to the on-manifold part, while the other corresponds to the off-manifold part. Let \(f_{\phi}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}\) be a standard NF. We denote the first \(d\) components of the output of \(f_{\phi}\) as \(u\) and the remaining ones as \(v\), where \(u\) stands for the on-manifold part of the data and \(v\) for the off-manifold part. Formally, \(z=f_{\phi}(x)=(z_{1},z_{2},...,z_{D})\), \(u=(z_{1},z_{2},...,z_{d})\), \(v=(z_{d+1},z_{d+2},...,z_{D})\), where \(x\) is the input and \(z\) is the corresponding data transformed by the NF. We choose \(p_{Z}(z)=\mathcal{N}(0,I_{D})\) and factorize \(p_{Z}(z)\) such that \(p_{Z}(z)=p_{U}(u)\times p_{V}(v)\). It should be mentioned that we can use another NF \(h_{\theta}\) to fit a more complex distribution than a Gaussian on the on-manifold part (\(u\)). This idea was followed by DNF (Horvat and Pfister, 2021) and \(\mathcal{M}\)-flow (Brehmer and Cranmer, 2020) for density estimation. Briefly, in this case, \(p_{U}(u)\) is replaced by \(p_{U^{\prime}}(u^{\prime})\,|\det J_{h_{\theta}}(u)|\), where \(u^{\prime}=h_{\theta}(u)\) is the space transformed by \(h_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) and \(p_{U^{\prime}}(u^{\prime})=\mathcal{N}(0,I_{d})\). Moreover, DNF fits a tight Gaussian distribution \(p_{V}\sim\mathcal{N}(0,\epsilon I_{D-d})\) on the off-manifold part based on the value \(\epsilon\) of the noise added to the data. However, the added noise in DNF was not reported to have a significant influence. This is likely because the model is trained on real-valued data that have inherent quantization noise, making the addition of more noise unnecessary. Consequently, we do not add noise to the data in our framework, so there are some minor differences with DNF.

The mentioned changes to standard NFs are not sufficient for manifold learning. It is necessary to penalize the off-manifold sub-space so that the model is able to reconstruct the data from the on-manifold sub-space. Accordingly, a constrained optimization problem on the NLL appears:

\[\min_{\phi}\text{NLL}(x;\phi)\text{ s.t. }\mathcal{C}(x,\tilde{x})\leq\tau, \tag{4}\]

where \(x=(x_{1},x_{2},...,x_{D})\), \(\tilde{x}=(\tilde{x}_{1},\tilde{x}_{2},...,\tilde{x}_{D})=f_{\phi}^{-1}(\text{proj}(f_{\phi}(x)))\), \(\mathcal{C}\), and \(\tau\) are the input data, the corresponding reconstruction (computed through the inverse of the normalizing flow \(f_{\phi}\)), the penalty function, and the penalization threshold, respectively. Moreover, \(\text{proj}(f_{\phi}(x))\) is the first \(d\) components of \(f_{\phi}(x)\) corresponding to the data manifold, padded with zeros as \(\text{proj}(f_{\phi}(x))=(u,\vec{0}_{D-d})\); a code sketch of this projection-reconstruction pipeline is given below. Existing methods such as \(\mathcal{M}\)-flow (Brehmer and Cranmer, 2020), DNF (Horvat and Pfister, 2021), and Rectangular flow (Caterini et al., 2021) penalize the off-manifold part with a quadratic constraint. An important characteristic of a quadratic function is its sensitivity to outliers, which here can be interpreted as the off-manifold parts of the data.
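To make the reconstruction \(\tilde{x}=f_{\phi}^{-1}(\text{proj}(f_{\phi}(x)))\) concrete, here is a minimal PyTorch sketch. The element-wise affine "flow" and the names `ToyAffineFlow` and `project` are our own illustrative assumptions; they stand in for a real dimension-preserving NF such as Glow or RealNVP.

```python
import torch

class ToyAffineFlow:
    """A trivially invertible element-wise affine map standing in for f_phi."""
    def __init__(self, dim):
        self.scale = torch.full((dim,), 0.5)
        self.shift = torch.zeros(dim)

    def forward(self, x):
        z = x * self.scale + self.shift
        # log|det J_f(x)| of an element-wise affine map: sum of log-|scales|
        logdet = torch.log(self.scale.abs()).sum().expand(x.shape[0])
        return z, logdet

    def inverse(self, z):
        return (z - self.shift) / self.scale

def project(z, d):
    """proj(f_phi(x)): keep the first d (on-manifold) components, zero the rest."""
    z_proj = z.clone()
    z_proj[:, d:] = 0.0
    return z_proj

flow, d = ToyAffineFlow(dim=6), 2
x = torch.randn(8, 6)
z, logdet = flow.forward(x)
x_tilde = flow.inverse(project(z, d))  # reconstruction from the manifold, as in Eq. (4)
```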
One aspect of our contribution involves using a switching penalization function, containing both linear and quadratic forms, whose form is determined by the distance to the manifold. The switching is modeled with a Huber function (Eq. (3)). In the proposed method, an element-wise Huber function with threshold \(\delta\), denoted \(H_{\delta}\), is applied to the difference between the input data (\(x\)) and the reconstruction (\(\tilde{x}\)). The reconstructed data is a function of the manifold alone (\(\tilde{x}=f_{\phi}^{-1}(\text{proj}(f_{\phi}(x)))\)) rather than of the joint on-manifold and off-manifold parts. Therefore, the penalization term is computed by averaging an element-wise function:

\[\mathcal{C}(x,\tilde{x};\delta)=\frac{1}{D}\sum_{i=1}^{D}H_{\delta}(|x_{i}-\tilde{x}_{i}|,0). \tag{5}\]

Inspired by the method of Lagrange multipliers, the proposed constrained optimization problem can be converted into an equivalent unconstrained one (a minimal code sketch follows the experimental setting below):

\[\ell=-\log p_{U}(u)-\log p_{V}(v)-\log|\det J_{f_{\phi}}(x)|+\lambda\,\mathcal{C}(x,\tilde{x};\delta), \tag{6}\]

where \(\lambda\) is the penalization hyper-parameter and the first three terms form the objective of a standard NF (the NLL). The proposed approach not only facilitates the estimation of the manifold density for NFs but also addresses the limitations of relying solely on the likelihood or the reconstruction for OOD detection, as extensively explained in Section 1 (see Fig. 1). In other words, a high likelihood or a low reconstruction error alone is not a reliable indicator for identifying ID data. However, by combining these two factors using the introduced cost function (Eq. 6), we not only estimate the manifold using NFs but also introduce an appropriate indicator for OOD detection, which is validated through comprehensive experiments.

## 5 Results

The primary goal of the experiments is to explore the significance of manifold learning in NFs for enhancing OOD detection. Moreover, the performance of the proposed method is also evaluated in terms of image generation, because it is fundamentally a generative model. As previously mentioned, \(\mathcal{M}\)-flow (Brehmer and Cranmer, 2020) and DNF (Horvat and Pfister, 2021) are the existing methods that most closely resemble our approach to manifold learning in NFs. Our proposed OOD detection score combines the exact likelihood value with the reconstruction loss. However, \(\mathcal{M}\)-flow cannot tractably evaluate the likelihood in high dimensions due to its injective transformations. Therefore, our focus is on comparing our approach with the DNF framework in terms of image generation performance. Subsequently, we compare our OOD detection results with other state-of-the-art likelihood-based OOD detection methods such as the likelihood ratio (Ren et al., 2019) and IC (Serra et al., 2020). All in all, we observed that the penalization in the proposed method leads the model to consider the dominant manifold as the ID manifold and the remainder (such as less frequent details) as OOD regions.

### Experimental setting

The datasets and the experimental setting are described in Section 5.1.1 and Section 5.1.2, respectively.

#### 5.1.1 Dataset

The seven datasets used in the current study are introduced in Table 1. We could have increased the number of layers and parameters in proportion to the original image dimensions; still, due to resource limitations, we resize the color images to \(32\times 32\) in our experiments.
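Pulling Eqs. (5) and (6) together, the following is a minimal PyTorch sketch of the training objective, reusing the toy `flow` and `project` helper from the sketch above. The standard-normal priors on \(u\) and \(v\) follow the factorization of Section 4, and `F.huber_loss` with its default mean reduction matches the element-wise average of Eq. (5); the helper names are our own assumptions.

```python
import math
import torch
import torch.nn.functional as F

def standard_normal_logprob(t):
    # log N(t; 0, I), summed over the feature dimension
    return (-0.5 * t ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=1)

def manifold_nf_loss(flow, x, d, lam=100.0, delta=1.0):
    """Unconstrained objective of Eq. (6): NLL plus lam times the Huber penalty."""
    z, logdet = flow.forward(x)
    u, v = z[:, :d], z[:, d:]
    nll = -(standard_normal_logprob(u) + standard_normal_logprob(v) + logdet)
    x_tilde = flow.inverse(project(z, d))         # on-manifold reconstruction
    rec = F.huber_loss(x_tilde, x, delta=delta)   # Eq. (5), averaged over elements
    return nll.mean() + lam * rec                 # lam = 100 as reported in Sec. 5.1.3
```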
#### 5.1.2 Architecture

For two-block models, such as standard DNF or the architecture-search experiments, we use the Glow model (Kingma and Dhariwal, 2018) for \(f_{\phi}\) and the RealNVP model (Dinh et al., 2017) for \(h_{\theta}\); for one-block models, we use either one of the two. Our Glow model has three blocks, each with 32 steps of the flow. Each step consists of an actnorm layer, an invertible \(1\times 1\) convolution layer, and an affine coupling layer (see Kingma and Dhariwal (2018) for the details of each layer). There is also a squeeze layer before each step and a split layer after each step. The RealNVP model comprises six affine coupling layers, with a masking layer before each affine coupling layer. While DNF has chosen a specific architecture in its experiments, the architectures employed in this paper are based on two well-known NFs, Glow and RealNVP. Our main goal is to show the importance of manifold learning in NFs for OOD detection, regardless of the specific architecture.

#### 5.1.3 Training

All implementations are done in the PyTorch1 (version 1.10.0+cu102) framework. We use Adam (Kingma and Ba, 2015) as our optimization algorithm with a learning rate of 1e-5 and a batch size of 64. Also, we set the value of the penalization coefficient (\(\lambda\)) to 100. All training experiments were done on a single GPU (GTX 1080 Ti, 11019 MiB of memory) with CUDA version 11.4. The implementation code of the proposed methods is attached to this file.

### Architecture searching

In this section, we consider four structures: DNF, MSE-based off-manifold penalization with two blocks (called Proposed-M), Huber-based off-manifold penalization with two blocks (called Proposed-H), and Huber-based off-manifold penalization with one block (called Proposed-D). In the first three, two NFs (named \(f_{\phi}\) and \(h_{\theta}\); Glow followed by RealNVP) are used, and in the last structure a single NF (named \(f_{\phi}\)) is used. Briefly, we employ the two blocks together as one NF to be more comparable in structure with DNF; that is, the two-block method uses a dimension-preserving NF (RealNVP in our setting) to estimate the density of the manifold part (as in DNF) instead of a Gaussian distribution. For a fair comparison, all DNF backbone models are chosen to be the same as ours in all experiments. We only present the results for CelebA here; the rest are available in the supplementary files. Based on the results reported for the CelebA dataset by Horvat & Pfister (2021) (DNF), we consider \(d=|\mathcal{M}_{M}|\sim 500\) for architecture searching. It is worth noting that we assume prior knowledge of the manifold's intrinsic dimension from existing works in the field; our goal is not to determine the intrinsic dimension of the manifold. Readers interested in dimensionality determination can refer to Horvat and Pfister (2022) and Pope et al. (2021).

\begin{table} \begin{tabular}{l l} \hline \hline Data & Description \\ \hline \hline MNIST (LeCun \& Cortes, 2010) & Containing a set of \(28\times 28\) gray-scale hand-written digit images in 10 classes (50k training images, 10k test images) \\ FMNIST (Xiao et al., 2017) & Including a set of \(28\times 28\) gray-scale fashion product images from 10 different categories (60k training images, 10k test images). The dataset’s original title is Fashion-MNIST, which we abbreviate as FMNIST. \\ SVHN (Netzer et al., 2011) & Consisting of \(32\times 32\) RGB house number plate images (about 73k training images and 26k test images) \\ CIFAR10 & Containing 60k \(32\times 32\) RGB images of 10 categories (50k training images, 10k test images) \\ CelebA (Liu et al., 2015) & Consisting of about 200k face images of 10,177 persons (180k training images, 20k test images). A resized version (\(32\times 32\)) is used in this paper. \\ LSUN (Yu et al., 2016) & Including a variety of large-scale images categorized into 10 classes. In this study, we only used bedroom-class images. This class comprises more than 3 million images in the original dataset, but we only use about 300k of them, resized to \(32\times 32\). \\ Constant & A manually created dataset of 60k \(32\times 32\) constant color images (50k training images, 10k test images) \\ \hline \hline \end{tabular} \end{table} Table 1: The information of the used datasets in the paper.

The results of the experiment on the CelebA dataset are shown in Table 2 and Fig. 2. The main goal of the designed experiment is to find an appropriate architecture for generating data from the manifold, as long as it does not decrease the likelihood of the data. The results reported in Table 2 confirm that our proposed methods (Proposed-H, Proposed-M, and Proposed-D) outperform DNF in terms of Bits-Per-Dim (BPD) (calculated as \(\frac{\text{NLL}}{D\times\log 2}\)), whereas their generated images look visually similar, according to Fig. 2. An important aspect of DNF is its reliance on incorporating noise during training; it is noteworthy that the reported inference BPD for DNF is computed using noiseless data, making it an approximation. It is worth mentioning that the best generated images in Horvat and Pfister (2021) (DNF) are based on a well-defined StyleGAN manifold (Karras et al., 2020), not on raw data; additionally, their reported generated images on the CelebA dataset are not visually perfect. Moreover, since the Huber function penalizes the off-manifold pixels linearly rather than quadratically, we anticipated that the MSE would be higher in the models penalized with the Huber function. In addition to the aforementioned factors, the integration of the reconstruction loss and the BPD value is a crucial component in our study. Although the reconstruction loss may slightly lag behind the DNF method in Table 2, our approach does not depend solely on the reconstruction or the BPD (the scaled NLL) for OOD detection. Based on the evaluation of the results in the latter three columns, we can confidently assert that the proposed method offers a promising advantage in this regard.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Criterion & DNF & Proposed-M & Proposed-H & Proposed-D \\ & & (two-block) & (two-block) & (one-block) \\ \hline \hline MSE & 0.004 \(\pm\) 0.0002 & 0.005 \(\pm\) 0.0003 & 0.01 \(\pm\) 0.001 & 0.02 \(\pm\) 0.001 \\ BPD & 3.98 \(\pm\) 0.03 & 3.52 \(\pm\) 0.04 & 3.51 \(\pm\) 0.04 & 3.52 \(\pm\) 0.04 \\ \#Params & \(\approx\) 59M & \(\approx\) 59M & \(\approx\) 59M & \(\approx\) 44M \\ \hline \hline \end{tabular} \end{table} Table 2: The best MSE/BPD scores for data lying in \(\mathcal{M}\subset\mathbb{R}^{500}\). The dimension changes of the one-block (Proposed-D) and two-block (DNF, Proposed-M, Proposed-H) methods are \(\mathbb{R}^{3072}\rightarrow\mathbb{R}^{500}\) and \(\mathbb{R}^{3072}\rightarrow\mathbb{R}^{500}\rightarrow\mathbb{R}^{500}\), respectively. \#Params means the number of trainable parameters.

Figure 2: Generated CelebA images corresponding to the experiments in Table 2.
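For reference, the BPD reported in Table 2 is simply the NLL rescaled per dimension; the tiny helper below (the name `bits_per_dim` is our own) makes the convention explicit.

```python
import math

def bits_per_dim(nll_nats, dim=3 * 32 * 32):
    # BPD = NLL / (D * log 2), with the NLL in nats and D the data dimension
    return nll_nats / (dim * math.log(2))

print(bits_per_dim(7500.0))  # an NLL of 7500 nats on a 32x32 RGB image is ~3.52 BPD
```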
Another strength is that the Proposed-D method achieves images of similar visual quality (Fig. 2) to DNF and the other two-block proposed methods, with fewer parameters and while maintaining the likelihood. Considering these outcomes, the Proposed-D model is chosen for the further experiments in the following; to summarize, we use the term "proposed" to denote the model referred to as "Proposed-D".

Having chosen Proposed-D as the base model, more experiments are presented here to evaluate the performance of the model in terms of generation and manifold learning on ID/OOD data. The experiments are ordered such that the ability of the model is measured for different ID/OOD data and different manifold dimensions \((10,50,100,500,1000)\). The results for the CelebA test data are presented here; the results for the other datasets are available in the supplementary file. Fig. 3 contains the generated and reconstructed data (learned manifold) for a model trained on CelebA. In low-dimensional manifolds, the images generated by the model are limited to displaying the main manifold of the face, and background details are not generated. However, as the manifold dimension increases, the inclusion of less frequent details (mostly related to the background) in the generated images is observed. If the data manifold has simplistic visual attributes (e.g., as observed in the SVHN results presented in the supplementary file), the likelihood can be fit better to the data, and thus sharper images can be generated as the manifold dimension increases. This trend is also observed in more complex datasets such as CIFAR10 (as noted in the supplementary file) when the likelihood is fitted using various dimensions of the manifold. The second row of Fig. 3 contains the reconstruction of CelebA test data by the model trained on CelebA for the corresponding manifold dimension. The reconstruction ability improves with increasing dimension; as expected, the model recovers the invertibility of a standard NF as the manifold dimension increases. The model trained on CelebA not only excels at generating and reconstructing ID data but also demonstrates great performance in discriminating OOD data. Regarding the reconstruction of OOD data in the third and fourth rows of Fig. 3, the model cannot reconstruct OOD data well, especially from low dimensions. In other words, learning a low-dimensional manifold for ID data leads to focusing on ID-specific features instead of global features. It is worth noting that this point concerns manifold learning; in the dimension-preserving case (as in standard NFs), the reconstruction is done without error.

**Fig. 3** Several generated (first row) and reconstructed images (CelebA as ID data in the second row; SVHN and CIFAR10 as OOD data in the third and fourth rows, respectively) from the Proposed-D method for different manifold dimensions (10, 50, 100, 500, and 1000 from left to right) for a model trained on the CelebA dataset. \(\mathcal{M}_{G}\subset\mathbb{R}^{d}\) and \(\mathcal{M}_{R}\subset\mathbb{R}^{d}\) represent image generation and image reconstruction from a manifold of dimension \(d\), respectively.

### Image OOD detection

In this section, we present the results of OOD detection for gray-scale images, and we also expand our analysis to include high-dimensional color images and a hyper-parameter (\(d\) and \(\delta\)) search. First, the pre-defined RealNVP is employed as a backbone model for manifold learning and OOD detection of gray-scale images. The purpose of the designed experiment is to evaluate the performance of the various penalty functions in cases where distinguishing between ID and OOD data becomes challenging due to visual similarity. The model is trained on the MNIST dataset as ID data with an embedded manifold in \(\mathbb{R}^{3}\). Then, it is evaluated with \(28\times 28\) gray-scale SVHN as OOD data. A hard OOD detection threshold is determined as the maximum score obtained on the MNIST test data; any sample with a score higher than this threshold is classified as OOD. The results reported in Fig. 4 confirm the considerable superiority of the Huber function over MSE in successfully identifying OOD data with high overlap. Based on Fig. 4, the Huber function makes significantly fewer mistakes on the SVHN test data compared to MSE. Furthermore, the mistakes made by the Huber function are more meaningful, occurring particularly in situations where the data visually resembles the ID data (MNIST). To clarify, the Huber function almost exclusively makes errors on images resembling the in-distribution pattern of a single digit on a dark background; it is important to highlight that MNIST is structured in such a manner that each image contains a single digit placed on a black background. When the data deviates from this pattern, as in SVHN (house license plate numbers) where multiple digits may appear in a single image, the Huber function exhibits fewer errors. The MSE function, on the other hand, makes a larger number of errors and even mistakenly recognizes images containing multiple digits as ID data.

Figure 4: All \(28\times 28\) gray-scale SVHN test images misdetected as ID by a model trained on MNIST.

The effect of manifold learning on color-image OOD detection for NFs, in comparison with other methods (Kingma and Dhariwal, 2018; Ren et al., 2019; Serra et al., 2020), is evaluated in terms of the AUROC (Area Under the Receiver Operating Characteristic Curve) criterion in Table 3 (trained on SVHN), Table 4 (trained on CIFAR10), and Table 5 (trained on CelebA). AUROC computes the area under the ROC curve from un-thresholded predictions, normalized to the range \([0,1]\), together with binary true labels. In our configuration, we assign a true label of 0 to ID data and a true label of 1 to OOD data; a higher AUROC value indicates better performance. The most important reason for selecting the methods compared in Table 3, Table 4, and Table 5 is that their assumptions are consistent with those of our proposed method: auxiliary OOD training data is not employed and the model structure is not tuned for this task, while state-of-the-art results are still achieved. Each reported table consists of two separate parts (the first for Huber and the second for MSE penalization). The first column gives the name of the OOD data used during testing. The next columns are associated with different manifold dimensions. In the dimension-preserving case (\(d=D\)), the baseline method (Kingma and Dhariwal, 2018) (with the architecture reported in Section 5.1.2), the likelihood ratio method (Ren et al., 2019) (with the same architecture as the Glow network reported in Section 5.1.2), and the input complexity method (Serra et al., 2020) (with the same architecture as the Glow network reported in Section 5.1.2) are evaluated.
The next columns, for the different manifold dimensions, all have the same partitioning: the proposed method (named P), the proposed method combined with the likelihood ratio (named P+ratio), and the proposed method combined with input complexity (named P+IC). To summarize how the proposed method is combined with the two mentioned methods (likelihood ratio and input complexity): based on the details provided in Section 2, since both are test-time scores, we incorporate only the compressor loss from the input complexity method and the background loss from the likelihood ratio method into our score.

Table 3 shows the superiority of the proposed method across different manifold dimensions on complicated datasets. However, for datasets from the simple-manifold family, the combination of the proposed method and IC is more effective. Additionally, the performance of the proposed method appears nearly indistinguishable between the two types of penalty functions; the Huber function only demonstrates a slight advantage over MSE in very low dimensions. In short, regardless of the specific penalty function used, the proposed method demonstrates clear advantages in OOD detection. The results presented in Table 4 and Table 5 demonstrate performance similar to that in Table 3. The key difference is that the proposed method, alone or in combination with the two mentioned methods (ratio, IC), outperforms the likelihood ratio and IC methods in terms of AUROC.

Finally, one notable observation, derived from analyzing the model trained on the SVHN dataset but generalizable to the other datasets, is the effect of adjusting the parameters \(d\) and \(\delta\) on the AUROC value. These results emphasize the significance of \(\delta\) and \(d\) in determining the model's performance, as depicted in Fig. 5. The left plot in Fig. 5 illustrates the changes in the model's performance as \(\delta\) is increased; the mean and standard deviation reported for each \(\delta\) are based on models trained with that specific threshold but with varying values of \(d\). It is apparent that increasing \(\delta\) leads to higher AUROC values for datasets with simpler data manifolds (like MNIST, FMNIST, and Constant). However, the impact of \(\delta\) on the AUROC value appears negligible for datasets with complex manifolds (like CelebA, CIFAR10, and LSUN). It is important to note that excessively high values of \(\delta\) may deviate from the primary objective of the problem. The right plot in Fig. 5 demonstrates the effect of \(d\) on the AUROC value; each star point on the diagram represents the mean and standard deviation of the output of models trained with different \(\delta\) for a fixed \(d\). We observe that as the dimensionality increases, the model's OOD detection ability decreases, particularly on datasets with simpler manifolds. However, excessively reducing the dimensionality by choosing a value of \(d\) smaller than the dimension of the actual data manifold is not suitable either.
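For reference, here is a minimal sketch of the AUROC evaluation protocol described above (label 0 for ID, label 1 for OOD). The helper name `auroc` is our own; note that the min-max normalization mirrors the described protocol but does not change the AUROC, which is rank-based.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc(scores_id, scores_ood):
    """AUROC over combined OOD scores, with ID labeled 0 and OOD labeled 1."""
    scores = np.concatenate([scores_id, scores_ood])
    scores = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
    labels = np.concatenate([np.zeros(len(scores_id)), np.ones(len(scores_ood))])
    return roc_auc_score(labels, scores)

# Higher combined scores (NLL + lam * reconstruction loss) should flag OOD data.
print(auroc(np.random.normal(0, 1, 1000), np.random.normal(3, 1, 1000)))  # ~0.98
```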
\begin{table} \begin{tabular}{l l l l l l l l l l} & & \multicolumn{2}{c}{\(d=3072\)} & \multicolumn{3}{c}{\(d=1000\)} & \multicolumn{3}{c}{\(d=500\)} \\ \cline{2-10} & \multicolumn{1}{c}{Glow} & \multicolumn{1}{c}{ratio} & \multicolumn{1}{c}{IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & & \\ MNIST & 0.662 & 0.588 & **1.0** & 0.722 & 0.369 & **1.0** & 0.873 & 0.391 & **1.0** \\ FMNIST & 0.863 & 0.998 & **1.0** & 0.872 & 0.626 & **0.986** & 0.932 & 0.629 & **0.986** \\ Cifar10 & 0.99 & **1.0** & 0.408 & **0.991** & 0.286 & 0.079 & **0.992** & 0.287 & 0.08 \\ LSUN & 0.996 & **1.0** & 0.647 & **0.997** & 0.174 & 0.091 & **0.997** & 0.174 & 0.091 \\ CelebA & 0.998 & **1.0** & 0.423 & **0.998** & 0.115 & 0.051 & **0.998** & 0.115 & 0.051 \\ Constant & 0.543 & 0.685 & **1.0** & 0.501 & 0.535 & **1.0** & 0.567 & 0.532 & **1.0** \\ \hline **MSE** & & & & & & & & & \\ MNIST & 0.662 & 0.588 & **1.0** & 0.751 & 0.375 & **1.0** & 0.916 & 0.433 & **1.0** \\ FMNIST & 0.863 & 0.998 & **1.0** & 0.896 & 0.632 & **0.986** & 0.948 & 0.635 & **0.986** \\ Cifar10 & 0.99 & **1.0** & 0.408 & **0.991** & 0.289 & 0.08 & **0.993** & 0.291 & 0.08 \\ LSUN & 0.996 & **1.0** & 0.647 & **0.997** & 0.176 & 0.091 & **0.997** & 0.177 & 0.091 \\ CelebA & 0.998 & **1.0** & 0.423 & **0.998** & 0.117 & 0.051 & **0.998** & 0.117 & 0.051 \\ Constant & 0.543 & 0.685 & **1.0** & 0.545 & 0.523 & **1.0** & 0.579 & 0.533 & **1.0** \\ \hline \hline & \multicolumn{2}{c}{\(d=100\)} & \multicolumn{3}{c}{\(d=50\)} & \multicolumn{3}{c}{\(d=10\)} \\ \cline{2-10} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & & \\ MNIST & 0.889 & 0.378 & **1.0** & 0.934 & 0.395 & **1.0** & 0.994 & 0.511 & **1.0** \\ FMNIST & 0.954 & 0.632 & **0.986** & 0.96 & 0.637 & **0.986** & **0.996** & 0.706 & 0.988 \\ Cifar10 & **0.992** & 0.288 & 0.08 & **0.989** & 0.29 & 0.08 & **0.979** & 0.295 & 0.08 \\ LSUN & 0.996 & 0.175 & 0.091 & 0.993 & 0.176 & 0.091 & 0.985 & 0.179 & 0.092 \\ CelebA & 0.997 & 0.116 & 0.051 & 0.995 & 0.117 & 0.051 & 0.989 & 0.122 & 0.052 \\ Constant & 0.588 & 0.53 & 1.0 & 0.671 & 0.534 & 1.0 & 0.719 & 0.538 & 1.0 \\ \hline **MSE** & & & & & & & & & \\ MNIST & 0.906 & 0.386 & **1.0** & 0.935 & 0.399 & **1.0** & 0.99 & 0.549 & **1.0** \\ FMNIST & 0.964 & 0.639 & **0.986** & 0.967 & 0.647 & **0.987** & **0.991** & 0.732 & 0.989 \\ Cifar10 & **0.992** & 0.293 & 0.08 & **0.986** & 0.296 & 0.08 & **0.986** & 0.304 & 0.081 \\ LSUN & **0.996** & 0.179 & 0.091 & **0.991** & 0.18 & 0.092 & **0.978** & 0.187 & 0.093 \\ CelebA & **0.997** & 0.119 & 0.051 & **0.993** & 0.12 & 0.051 & **0.983** & 0.13 & 0.053 \\ Constant & 0.562 & 0.525 & **1.0** & 0.681 & 0.532 & **1.0** & 0.752 & 0.542 & **1.0** \\ \hline \hline \end{tabular} \end{table} Table 3: AUROC score for Glow (Kingma & Dhariwal, 2018), likelihood ratio method (named ratio) (Ren et al., 2019), input complexity method (named IC) (Serra et al., 2020) with a lossless PNG compressor, and all cases of the proposed method, namely the pure case (named P), including ratio (P+ratio) and including IC (P+IC). Models are trained on SVHN. 
\begin{table} \begin{tabular}{l l l l l l l l l l} & & \multicolumn{2}{c}{\(d=3072\)} & \multicolumn{2}{c}{\(d=1000\)} & \multicolumn{2}{c}{\(d=500\)} \\ \cline{2-9} & \multicolumn{1}{c}{Glow} & \multicolumn{1}{c}{ratio} & \multicolumn{1}{c}{IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & & \\ MNIST & 0.002 & 0.004 & **1.0** & 0.002 & 0.992 & **1.0** & 0.003 & 0.992 & **1.0** \\ FMNIST & 0.016 & 0.014 & **0.999** & 0.021 & 0937 & **0.997** & 0.025 & 0.938 & **0.997** \\ SVHN & 0.057 & 0.004 & **0.86** & 0.055 & 0.435 & **0.921** & 0.052 & 0.435 & **0.921** \\ LSUN & 0.541 & 0.544 & **0.779** & 0.535 & 0.464 & **0.665** & 0.534 & 0.464 & **0.665** \\ CelebA & 0.486 & **0.558** & **0.558** & 0.484 & **0.571** & 0.559 & 0.49 & **0.571** & 0.559 \\ Constant & 0.112 & 0.087 & **1.0** & 0.158 & 0.687 & **1.0** & 0.12 & 0.687 & **1.0** \\ \hline **MSE** & & & & & & & & \\ MNIST & 0.002 & 0.004 & **1.0** & 0.004 & 0.992 & **1.0** & 0.008 & 0.992 & **1.0** \\ FMNIST & 0.016 & 0.014 & **0.999** & 0.031 & 0.938 & **0.997** & 0.067 & 0.938 & **0.997** \\ SVHN & 0.057 & 0.004 & **0.86** & 0.054 & 0.435 & **0.921** & 0.046 & 0.432 & **0.921** \\ LSUN & 0.541 & 0.544 & **0.779** & 0.543 & 0.465 & **0.665** & 0.542 & 0.465 & **0.665** \\ CelebA & 0.486 & **0.558** & **0.558** & 0.487 & **0.571** & 0.559 & 0.5 & **0.572** & 0.56 \\ Constant & 0.112 & 0.087 & **1.0** & 0.157 & 0.697 & **1.0** & 0.146 & 0.687 & **1.0** \\ \hline \hline & \multicolumn{2}{c}{\(d=100\)} & \multicolumn{2}{c}{\(d=50\)} & \multicolumn{2}{c}{\(d=10\)} \\ \cline{2-9} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & \\ MNIST & 0.002 & 0.992 & **1.0** & 0.004 & 0.992 & **1.0** & 0.008 & 0.992 & **1.0** \\ FMNIST & 0.027 & 0.938 & **0.997** & 0.031 & 0.938 & **0.997** & 0.049 & 0.938 & **0.997** \\ SVHN & 0.057 & 0.436 & **0.921** & 0.06 & 0.436 & **0.921** & 0.065 & 0.436 & **0.921** \\ LSUN & 0.525 & 0.464 & **0.665** & 0.521 & 0.464 & **0.665** & 0.503 & 0.464 & **0.665** \\ CelebA & 0.494 & **0.571** & 0.559 & 0.495 & **0.571** & 0.559 & 0.511 & **0.572** & 0.559 \\ Constant & 0.256 & 0.691 & **1.0** & 0.293 & 0.695 & **1.0** & 0.416 & 0.718 & **1.0** \\ \hline **MSE** & & & & & & & & \\ MNIST & 0.027 & 0.992 & **1.0** & 0.359 & 0.993 & **1.0** & 0.93 & 0.995 & **1.0** \\ FMNIST & 0.122 & 0.939 & **0.997** & 0.317 & 0.942 & **0.997** & 0.857 & 0.951 & **0.997** \\ SVHN & 0.047 & 0.429 & **0.921** & 0.067 & 0.425 & **0.921** & 0.119 & 0.418 & **0.921** \\ LSUN & 0.523 & 0.464 & **0.666** & 0.53 & 0.464 & **0.666** & 0.54 & 0.464 & **0.667** \\ CelebA & 0.488 & **0.572** & 0.56 & 0.547 & **0.574** & 0.561 & 0.688 & **0.588** & 0.566 \\ Constant & 0.197 & 0.691 & **1.0** & 0.396 & 0.703 & **1.0** & 0.538 & 0.725 & **1.0** \\ \hline \hline \end{tabular} \end{table} Table 4: AUROC score for Glow (Kingma & Dhariwal, 2018), likelihood ratio method (named ratio) (Ren et al., 2019), input complexity method (named IC) (Serra et al., 2020) with a lossless PNG compressor, and all cases of the proposed method, namely the pure case (named P), including ratio (P+ratio) and including IC (P+IC). Models are trained on CIFAR10. 
\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline & & \multicolumn{2}{c}{\(d=3072\)} & \multicolumn{4}{c}{\(d=1000\)} & \multicolumn{4}{c}{\(d=500\)} \\ \cline{2-10} & \multicolumn{1}{c}{Glow} & \multicolumn{1}{c}{ratio} & \multicolumn{1}{c}{IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & & \\ MNIST & 0.0 & 0.001 & **1.0** & 0.0 & **1.0** & **1.0** & 0.0 & **1.0** & **1.0** \\ FMNIST & 0.13 & 0.004 & **1.0** & 0.014 & 0.906 & **1.0** & 0.017 & 0.906 & **1.0** \\ Cifar10 & **0.7** & 0.624 & 0.655 & **0.703** & 0.276 & 0.442 & **0.707** & 0.276 & 0.442 \\ LSU & 0.698 & 0.619 & **0.871** & **0.696** & 0.292 & 0.635 & **0.699** & 0.292 & 0.635 \\ SVHN & 0.061 & 0. & **0.935** & 0.061 & 0.262 & **0.95** & 0.06 & 0.262 & **0.95** \\ Constant & 0.111 & 0.062 & **1.0** & 0.07 & 0.062 & **1.0** & 0.149 & 0.562 & **1.0** \\ \hline **MSE** & & & & & & & & \\ MNIST & 0.0 & 0.001 & **1.0** & 0.0 & **1.0** & **1.0** & 0.0 & **1.0** & **1.0** \\ FMNIST & 0.13 & 0.004 & **1.0** & 0.024 & 0.906 & **1.0** & 0.055 & 0.907 & **1.0** \\ Cifar10 & **0.7** & 0.624 & 0.655 & **0.71** & 0.276 & 0.442 & **0.737** & 0.277 & 0.442 \\ LSUN & 0.698 & 0.619 & **0.871** & **0.707** & 0.293 & 0.635 & **0.738** & 0.293 & 0.635 \\ SVHN & 0.061 & 0. & **0.935** & 0.062 & 0.262 & **0.95** & 0.059 & 0.261 & **0.95** \\ Constant & 0.111 & 0.062 & **1.0** & 0.077 & 0.565 & **1.0** & 0.187 & 0.572 & **1.0** \\ \hline \hline & & \multicolumn{2}{c}{\(d=100\)} & \multicolumn{4}{c}{\(d=50\)} & \multicolumn{4}{c}{\(d=10\)} \\ \cline{2-10} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{P+ratio} & \multicolumn{1}{c}{P+IC} \\ \hline **Huber** & & & & & & & & \\ MNIST & 0.0 & **1.0** & **1.0** & 0.0 & **1.0** & 0.0 & **1.0** & **1.0** \\ FMNIST & 0.018 & 0.906 & **1.0** & 0.019 & 0.906 & **1.0** & 0.029 & 0.907 & **1.0** \\ Cifar10 & **0.709** & 0.276 & 0.442 & **0.71** & 0.276 & 0.442 & **0.722** & 0.276 & 0.442 \\ LSUN & **0.699** & 0.292 & 0.635 & **0.696** & 0.292 & 0.635 & **0.703** & 0.292 & 0.635 \\ SVHN & 0.063 & 0.262 & **0.95** & 0.067 & 0.262 & **0.95** & 0.085 & 0.263 & **0.95** \\ Constant & 0.175 & 0.565 & **1.0** & 0.138 & 0.564 & **1.0** & 0.51 & 0.59 & **1.0** \\ \hline **MSE** & & & & & & & & \\ MNIST & 0.082 & **1.0** & **1.0** & 0.527 & **1.0** & **1.0** & 0.894 & **1.0** & **1.0** \\ FMNIST & 0.16 & 0.909 & **1.0** & 0.337 & 0.911 & **1.0** & 0.835 & 0.932 & **1.0** \\ Cifar10 & **0.782** & 0.278 & 0.444 & **0.79** & 0.28 & 0.445 & **0.745** & 0.285 & 0.447 \\ LSUN & **0.785** & 0.295 & 0.637 & **0.801** & 0.297 & 0.639 & **0.75** & 0.301 & 0.641 \\ SVHN & 0.076 & 0.258 & **0.95** & 0.122 & 0.256 & **0.95** & 0.135 & 0.252 & **0.951** \\ Constant & 0.155 & 0.563 & **1.0** & 0.448 & 0.576 & **1.0** & 0.659 & 0.605 & **1.0** \\ \hline \hline \end{tabular} \end{table} Table 5: AUROC score for Glow (Kingma & Dhariwal, 2018), likelihood ratio method (named ratio) (Ren et al., 2019), input complexity method (named IC) (Serra et al., 2020) with a lossless PNG compressor, and all cases of the proposed method, namely the pure case (named P), including ratio (P+ratio) and including IC (P+IC). Models are trained on CelebA. 
## 6 Conclusion

NFs cannot directly provide an estimate of the likelihood on the data manifold. To address this, we adopted an algorithmic approach to jointly estimate the likelihood and the data manifold, and used it for OOD detection. Our proposed method is based on the intuition that relying on either the likelihood value on the manifold or the reconstruction loss alone is insufficient for OOD detection. Accordingly, we found that using these two indicators jointly improves OOD detection significantly. Through extensive experiments, we showed the strength of the proposed method in finding the data manifold and estimating its likelihood, and demonstrated excellent performance for image OOD detection. The proposed method can also generate samples comparable in visual quality to the latest state-of-the-art normalizing-flow-based manifold learning models. A per-pixel (or element-wise) Huber loss function was used in this paper for manifold learning. The Huber function was employed due to its transition from linear to quadratic behavior: it penalizes off-manifold parts linearly, so the model focuses on learning the on-manifold subspace well. Per-object (a group of pixels) and per-sample (all pixels) losses are interesting directions for future work. It should be noted that a primary assumption of the current study is that the data lie on a manifold with a single chart. In future work, we aim to extend the current method to a multi-chart version with possibly improved OOD detection ability.

Figure 5: The effect of changing the manifold dimension (\(d\)) and threshold value of the Huber loss (\(\delta\)) on AUROC for trained models on SVHN.
2310.09580
Where to Decide? Centralized vs. Distributed Vehicle Assignment for Platoon Formation
Platooning is a promising cooperative driving application for future intelligent transportation systems. In order to assign vehicles to platoons, some algorithm for platoon formation is required. Such vehicle-to-platoon assignments have to be computed on-demand, e.g., when vehicles join or leave the freeways. In order to get the best results from platooning, individual properties of involved vehicles have to be considered during the assignment computation. In this paper, we explore the computation of vehicle-to-platoon assignments as an optimization problem based on similarity between vehicles. We define the similarity and, vice versa, the deviation among vehicles based on the desired driving speed of vehicles and their position on the road. We create three approaches to solve this assignment problem: centralized solver, centralized greedy, and distributed greedy, using a Mixed Integer Programming (MIP) solver and greedy heuristics, respectively. Conceptually, the approaches differ in both knowledge about vehicles as well as methodology. We perform a large-scale simulation study using PlaFoSim to compare all approaches. While the distributed greedy approach seems to have disadvantages due to the limited local knowledge, it performs as well as the centralized solver approach across most metrics. Both outperform the centralized greedy approach, which suffers from synchronization and greedy selection effects. The centralized solver approach, however, assumes global knowledge and requires a complex MIP solver to compute vehicle-to-platoon assignments. Overall, the distributed greedy approach achieves close-to-optimal results while requiring the least assumptions and complexity. Therefore, we consider the distributed greedy approach the best approach among all presented approaches.
Julian Heinovski, Falko Dressler
2023-10-14T13:09:49Z
http://arxiv.org/abs/2310.09580v3
# Where to Decide? Centralized vs. Distributed Vehicle Assignment for Platoon Formation ###### Abstract Platooning is a promising cooperative driving application for future intelligent transportation systems. In order to assign vehicles to platoons, some algorithm for platoon formation is required. Such vehicle-to-platoon assignments have to be computed on-demand, e.g., when vehicles join or leave the freeways. In order to get the best results from platooning, individual properties of involved vehicles have to be considered during the assignment computation. In this paper, we explore the computation of vehicle-to-platoon assignments as an optimization problem based on similarity between vehicles. We define the similarity and, vice versa, the deviation among vehicles based on the desired driving speed of vehicles and their position on the road. We create three approaches to solve this assignment problem: _centralized solver_, _centralized greedy_, and _distributed greedy_, using a Mixed Integer Programming solver and greedy heuristics, respectively. Conceptually, the approaches differ in both knowledge about vehicles as well as methodology. We perform a large-scale simulation study using PlaFoSim to compare all approaches. While the _distributed greedy_ approach seems to have disadvantages due to the limited local knowledge, it performs as well as the _centralized solver_ approach across most metrics. Both outperform the _centralized greedy_ approach, which suffers from synchronization and greedy selection effects. Since the _centralized solver_ approach assumes global knowledge and requires a complex Mixed Integer Programming solver to compute vehicle-to-platoon assignments, we consider the _distributed greedy_ approach to have the best performance among all presented approaches. Intelligent Transportation Systems, Platoon Formation, Vehicle-to-Platoon Assignment. ## I Introduction The amount of road traffic has been constantly growing in recent years, leading to increasing road congestion and environmental pollution. To cope with these negative effects, vehicle manufacturers and researchers are striving to make today's driving more efficient and comfortable. Vehicles are equipped with more and more technology to assist the driver in their tasks in order to make driving more efficient and safe, transforming them into Intelligent Transportation Systems (ITS). Using Inter-Vehicle Communication (IVC) technologies like 5G-based Cellular Vehicle-to-Everything (C-V2X) communication or Dedicated Short-Range Communication (DSRC), vehicles are now able to cooperate with each other. This allows applications such as cooperative driving [1, 2]. One such application is vehicular platooning, where multiple vehicles are grouped into a convoy and electronically coupled via Cooperative Adaptive Cruise Control (CACC) [3, 4, 5]. This coupling allows small safety gaps (e.g., 5 m) between platoon members while still enabling string-stable and safe operation. Platooning thus promises to enhance today's driving by improving traffic flow [6]. The technical feasibility of string-stable platooning has been investigated in depth in the literature [3, 5, 7, 8, 9]. Most studies typically consider pre-configured and well-defined platoons.
This assumption, however, is somewhat unrealistic as some form of bootstrapping a platoon is required: Vehicles will initially drive individually until they encounter an appropriate (existing) platoon to join or another individual vehicle to form a new platoon with. In order to perform the necessary driving tasks (e.g., approaching, lane changing, switching to CACC) to become a platoon member, cooperative maneuvers need to be executed [10, 11]. Deciding which vehicles should form a platoon together is part of the general challenge of assigning vehicles to platoons, which requires computing so-called _vehicle-to-platoon assignments_. While simple ad-hoc approaches enable fast setup of platooning [12], vehicle-to-platoon assignments typically require more complex computation in order to optimize for certain factors [13]. Some first solutions for computing such assignments in an optimal way are limited to, e.g., a certain set of vehicles with known trips and requirements, allowing a priori computations [14]. However, the trips and requirements of vehicles are not always known, which requires on-demand and en route computation of assignments based on the current situation. Thus, new vehicles can only be considered for platooning after entering the freeway and announcing their interest in platooning. Only now, vehicles or other entities can start searching for appropriate platoon candidates and compute corresponding assignments. While there are ideas to wait for other vehicles to form platoons before entering the actual freeway [15], doing so will unnecessarily delay the trips. Therefore, the entire platoon formation process including the computation of vehicle-to-platoon assignments should happen during the vehicles' trips on the freeway. To avoid assigning vehicles to platoons with very heterogeneous properties, the individual vehicles' properties should be considered during the assignment computation [16, 17]. Thus, the platooning benefits for the individual vehicles can be optimized without giving up too much of their own requirements, rather than optimizing only system-level aspects such as traffic flow. Therefore, solving the challenge of vehicle-to-platoon assignments with respect to these aspects is the next important step towards large-scale deployment of platooning [18]. In this paper, inspired by our earlier work [17], we explore the computation of vehicle-to-platoon assignments as an optimization problem based on similarity between vehicles. We define the similarity and, vice versa, the deviation among vehicles based on the desired driving speed of vehicles and their position on the road, thereby considering their individual requirements. We aim to increase vehicles' similarity, thereby minimizing their deviation in desired driving speed and position. We create three approaches to solve this assignment problem: _centralized solver_, _centralized greedy_, and _distributed greedy_, using a Mixed Integer Programming (MIP) solver and greedy heuristics, respectively. Conceptually, the approaches differ in both knowledge about vehicles as well as methodology. We assume that such coordination only requires very limited network access, which can easily be provided by modern technologies such as 5G-based C-V2X or IEEE 802.11p-based DSRC. We perform a large-scale simulation study with PlaFoSim to compare both greedy approaches to the optimal solution. We report extensive simulation results to evaluate the impact of the knowledge as well as the performance of the approximation by the heuristics.
Our main contributions can be summarized as follows: * We explore the problem of vehicle-to-platoon assignments as an optimization problem based on similarity between vehicles. * We develop three approaches to solve this optimization problem: _centralized solver_, _centralized greedy_, and _distributed greedy_, using a MIP solver and greedy heuristics, respectively. * We perform a large-scale simulation study with PlaFoSim to compare both greedy approaches to the optimal solution. * We show that the _distributed greedy_ approach leads to the best results while requiring the least assumptions and complexity among all presented approaches. The remainder of this paper is structured as follows: First, we review related work from the literature in Section II. We then propose our idea of computing vehicle-to-platoon assignments based on similarity between vehicles in Section III. Afterwards, we illustrate our methodology for evaluating the proposed idea and report corresponding results in Section IV. Finally, we summarize and conclude our findings in Section V. ## II Related Work A simple approach to forming platoons is to perform spontaneous (ad-hoc) platoon formation with other vehicles, using only limited consideration of vehicles' properties and avoiding complex decision making [12]. Vehicles can join close platoons based on their current position without considering a particular constraint or properties of other vehicles [12]. Adding a little more complexity, vehicles can evaluate whether the estimated benefit of catching up or slowing down and joining a platoon is more fuel-efficient than driving alone [19]. Combining catch-up and slow-down, Saeednia and Menendez [20] propose a hybrid platooning strategy targeting the formation of truck platoons with the highest possible platooning speed. Woo and Skabardonis [21] propose a flow-aware strategy for platoon organization, which performs formation conditionally on the local traffic state (i.e., flow and speed) in order to avoid degrading its performance. Ad-hoc approaches often form platoons under the assumption that platooning is desired and to improve macroscopic metrics such as lane capacity and traffic flow. However, when considering a more microscopic level, i.e., metrics that directly influence and are of interest to a driver, this might not be optimal as individual properties and capabilities of the trips and vehicles, i.e., destination, desired driving speed, trip duration, fuel consumption, are not considered. In the following, we separately report on centralized, decentralized, and distributed platoon formation solutions in the literature. ### _Centralized Coordination_ In _centralized_ coordination, vehicle-to-platoon assignments are computed from a global perspective for all vehicles. Thus, it is assumed that details such as trip information are known to the coordination system. Many studies consider platooning of trucks, where trips and schedules or deadlines are known beforehand. This allows using static planning models, where a central coordination instance computes vehicle-to-platoon assignments based on trucks' travel information a priori [14]. Considering trucks from a single fleet, many studies propose approaches for offline optimization of the trucks' fuel consumption by using platooning [22, 23, 24, 25]. In addition to the vehicle-to-platoon assignments, the algorithms compute speed profiles, routes, and departure times [26] for the trucks in order to fulfill the desired platoons.
Recently, such optimization was proposed also for trucks from multiple fleets with different objectives, utilizing the trucks' computational resources [27]. The complexity of such centralized optimization has been shown to be NP-hard [22]. Since these offline approaches are limited to known sets of vehicles including their properties and trip data, they do not work well for general traffic. Thus, on-demand and en route computation of vehicle-to-platoon assignments based on the current situation on the freeway is required. Liang et al. [28] study fuel-efficient platooning for trucks and form platoons on the fly. They consider an optimization problem for pairwise coordination of vehicles in a centralized manner, where the leading vehicle slows down and the trailing vehicle speeds up. Hoef et al. [29] define a combinatorial optimization problem that uses the transport assignments of trucks for en route formation of truck platoons. The process is repeated whenever assignments change or deviations from the plans are detected. Considering normal passenger vehicles, Krupitzer et al. [30] propose a centralized platoon coordination system that receives information (i.e., destination) from drivers via Vehicle-to-Everything (V2X) communication, searches for a feasible platoon, and performs the corresponding join maneuver. Liu et al. [31] split the highway into zones of 2 km and propose an optimization problem that decides whether or not each single vehicle should join a specific platoon within each zone. ### _Decentralized Coordination_ In _decentralized_ coordination, vehicle-to-platoon assignments are computed in multiple locations (e.g., Roadside Units) from limited global perspectives for a subset of vehicles. Thus, decentralized approaches have only limited global knowledge about an area (e.g., a road section) or other vehicles in V2X communication proximity. Several studies used a decentralized approach that sorts vehicles into platoons at the entrance ramp of the freeway [15, 32]. These approaches group vehicles according to no or only limited constraints (e.g., their destination) and let them enter the freeway as an already constructed platoon at a certain time. Similar to ramp-grouping, platoons can be formed with trucks waiting at hubs along the freeway. Larsen et al. [33] and Johansson et al. [34] propose to use local coordinators at hubs for dispatching multiple trucks at the same time in order to form a platoon. The goal is to optimize fuel savings and minimize the cost of waiting at the hub, while adhering to the waiting time windows of trucks. Larson et al. [35, 36] deploy a distributed network of local controllers at junctions in a road network. The controllers monitor approaching trucks and coordinate platoon formation among all vehicles in proximity. Using vehicles' information such as speed, position, and destination, an optimization problem for required speed adjustments and expected fuel savings is solved via heuristics. Other studies divide the highway into sections and coordinate platooning within those sections with local centralized controllers. Krupitzer et al. [37] extend their centralized platoon coordination system with subsystems that individually coordinate vehicles within sections in a regional planner-like approach.
While only limited details are provided, their approach uses Vehicle-to-Infrastructure (V2I) communication for exchanging vehicles' desired driving speed and route data with the coordination system, which forms platoons among vehicles with a similar route or destination. Similarly, Burov et al. [38] use local centralized controllers for highway sections of 300 m to coordinate platooning. The controllers collect and transmit data between platoons and vehicles via V2I communication. Going one step further, Zhu et al. [39] use local controllers to provide general driving guidance and platoon coordination for vehicles within a controlling zone (i.e., a freeway section). They create an optimization problem for jointly considering speed, safety, and energy efficiency, in order to maximize overall traffic velocity and minimize collision risk and fuel consumption. ### _Distributed Coordination_ In _distributed_ coordination, vehicle-to-platoon assignments are computed within the individual vehicles. Here, the vehicle itself is the actor taking the decision about forming a platoon with other vehicles. Thus, distributed approaches have only limited local knowledge about other vehicles in V2X communication proximity. Khan and Boloni [40] develop a system which evaluates the cost and benefit of forming a platoon with other vehicles in proximity. The system continuously evaluates the utility of platooning in terms of deviation from desired driving speed and fuel consumption; if successful, it indicates the decision to the driver using an LED to adjust the Adaptive Cruise Control (ACC) accordingly. Since vehicles' communication range is limited to 20 m, vehicles continue evaluation after joining a platoon, allowing them to switch to a platoon with higher utility if encountered. One way of grouping vehicles is based on their destination, thereby maximizing the distance a platoon stays intact and vehicles can share platoon benefits [41, 42]. Vehicles on entrance ramps of a freeway use V2X to communicate with other vehicles and platoons in range to find feasible platooning opportunities. They select a platoon with members that have similar destinations (within a certain range), thereby increasing lane capacity and traffic throughput [41] as well as reducing fuel consumption [42]. If vehicles cannot immediately find a feasible platoon, they can temporarily join a non-feasible one and perform a one-time change to a better fitting platoon later on [43]. Also using vehicles' destination and route, Dokur et al. [44] propose a system that exchanges this data with broadcasts via DSRC. If vehicles are heading to the same destination or to different destinations but share a common path, they start negotiations for forming a platoon via messages. A negotiation resolver running on the vehicles settles the negotiations between the vehicles to initiate platooning, also deciding about the platoon leader. In contrast, many studies use vehicles' speed and position for forming platoons. Su and Ahn [45] propose a distributed algorithm that uses other vehicles' speed and position data from IEEE 802.11p broadcasts. Vehicles calculate the difference between their own value and the average of all neighbors in speed and position. They also choose suitable platoon leaders for new platoons based on an exponential distribution of the speed difference and announce their result using a slotted persistence approach.
### _Open Research Problems_ The aforementioned strategies for platoon formation show that optimal groupings substantially improve the performance gain. However, many studies are targeted towards trucks using known transport assignments and deadlines. We target individual vehicles, where vehicles' properties are not known beforehand and are quite diverse (e.g., variation in desired driving speed). Thus, vehicle-to-platoon assignments need to be computed on-demand and en route, based on similarity (e.g., in driving speed) among individual vehicles instead of total cost (such as fuel consumption). While some studies perform a comparison of their centralized optimal solution with a centralized heuristic, a proper comparison between centralized and distributed approaches is still missing from the literature. In earlier work [17], we studied en route platoon formation for individual passenger vehicles based on their similarity in desired driving speed and position on the road, thereby optimizing vehicle-to-platoon assignments regarding individual properties. Besides formally describing the platoon formation problem, we introduced both a centralized and a distributed heuristic for solving the assignment problem. Simulations already indicated that the choice of the approach, and the willingness to deviate from individual objectives, have a huge impact on the resulting assignments. In this paper, we go one step further and add an approach to optimally solve our similarity-based optimization problem using a MIP solver. For this, we refined the entire problem formulation to make it more applicable for centralized optimization. We compare both greedy approaches to the optimal solution in detail using a wide range of performance metrics in a large-scale simulation study with realistic freeway traffic in PlaFoSim. ## III Computing vehicle-to-platoon Assignments In this section, we define the problem of computing vehicle-to-platoon assignments, which has to be solved as part of the platoon formation process. We describe our perspective on this problem as well as our approach for solving the problem using the concept of similarity among vehicles, which we base on their individual properties. ### _Assumptions_ We focus on individual traffic, where vehicles have different properties (such as driving speed) and start their trips unsynchronized. Thus, the trips and requirements of vehicles are not known beforehand, which requires dynamic on-demand computation of assignments based on the current situation. Vehicles will initially drive individually until they encounter an appropriate (existing) platoon to join or an individual vehicle to form a new platoon with. We generally assume that driving in a platoon is preferred because of the expected benefits such as driving efficiency as well as increased safety and traffic throughput. When vehicles start their trip (outside of the freeway), they have no knowledge about other already existing vehicles and vice versa. We assume that the entire platoon formation process including the computation of vehicle-to-platoon assignments happens during the vehicles' trips on the freeway (en route). After assignment, vehicles create a new platoon or join an existing one by performing a maneuver. Vehicles can communicate with each other by means of 5G-based C-V2X or DSRC to exchange platooning information and maneuver control.
As platoon coordination has very low requirements on the communication, we neglect this part in the evaluation to make the approach also technology agnostic. After successful completion of the join maneuver, vehicles stay within the platoon until they reach their destination, at which a leave maneuver is performed. Assuming a fully operational Cooperative Adaptive Cruise Controller, vehicles in a platoon always mirror the behavior of the platoon leader and keep a constant gap of 5 m [9, 46]. ### _Problem Formulation_ Early approaches to find candidate vehicles to construct platoons consider different constraints and optimization goals, such as grouping by destination or route, or (pair-wise) by fuel efficiency. To avoid assigning vehicles to platoons with very heterogeneous properties, we aim to consider the individual vehicles' properties during the assignment computation. Thus, the platooning benefits for the individual vehicles can be optimized without giving up too much of their own requirements, rather than focusing only on system-level aspects such as traffic flow. Assigning vehicles to platoons based on their properties is similar to clustering vehicles according to some similarity metric corresponding to the constraints and goals introduced by a formation strategy. Inspired by our earlier work [17], we are using the desired driving speed as a primary similarity metric. Additionally, we also take the position of the vehicles on the freeway into account. Since joining a platoon which is far away can add a lot of overhead in fuel consumption, it is not useful to consider such cases as possible options. In order to come up with a formation strategy, we formalize the problem as follows. Let a vehicle or platoon be represented by the set \[\left\{n,D,p,l\right\}\,, \tag{1}\] where \(n\) is the identifier, \(D\) is the desired driving speed, \(p\) is the current position, and \(l\) is the current position of the last vehicle in the platoon (\(l=p\) if \(n\) is driving individually). We define \(C\) as the set of all individual vehicles that need to be assigned to a vehicle or platoon and \(P\) as the set of all already existing platoons. The union thereof \(T:=C\cup P\) defines the set of all potential assignment targets. We define \(I\) and \(J\) as index sets of \(C\) and \(T\), thus \(c_{i}\in C\) and \(t_{j}\in T\) define the \(i\)-th and \(j\)-th element of \(C\) and \(T\), respectively. We define \(A\) as a \(\left|C\right|\times\left|T\right|\) matrix that contains a decision variable \(a_{ij}\) for every possible vehicle-to-platoon assignment between elements of \(C\) and \(T\). The decision variable is used to determine whether an arbitrary \(c_{i}\in C\) is assigned to an arbitrary \(t_{j}\in T\), using the definition of \(a_{ij}\) as \[a_{ij}=\begin{cases}1,&\text{if \ $c_{i}$ is assigned to $t_{j}$}\\ 0,&\text{otherwise}\end{cases}\,. \tag{2}\] We call an assignment \(a_{ij}=1\) with \(c_{i}=t_{j}\) a _self-assignment_ of vehicle \(c_{i}\), indicating that the vehicle \(c_{i}\) is not assigned to another vehicle or platoon but will keep driving individually. Note that \(A\) is in general not symmetric and that not all assignments of \(c_{i}\) and \(t_{j}\) are technically possible (see below).
We also define \(F\) as a \(\left|C\right|\times\left|T\right|\) matrix that contains the similarity (or rather the deviation) \(f_{ij}=f\left(c_{i},t_{j}\right)\) in their properties between elements of \(C\) and \(T\), using the definition of \(f\left(c,t\right)\) as \[f\left(c,t\right)=\alpha\cdot d_{s}\left(c,t\right)+\left(1-\alpha\right)\cdot d_{p}\left(c,t\right)\,, \tag{3}\] where \(\alpha\in\left[0,1\right]\) is a weighting coefficient. The deviation between \(c_{i}\) and \(t_{j}\) in desired speed \(d_{s}\left(c_{i},t_{j}\right)\) as well as in position on the freeway \(d_{p}\left(c_{i},t_{j}\right)\) are defined as \[d_{s}\left(c,t\right)=\frac{\left\|D_{c}-D_{t}\right\|}{m\cdot D_{c}},\quad m\in\left[0,1\right]\,, \tag{4}\] \[d_{p}\left(c,t\right)=\frac{\min\left(\left\|p_{c}-p_{t}\right\|,\left\|l_{t}-p_{c}\right\|\right)}{r},\quad r\in\mathbb{N}\,, \tag{5}\] where \(m\) is the maximum allowed deviation from the desired driving speed of vehicle \(c\) and \(r\) is the maximum allowed deviation from the position on the freeway of vehicle \(c\) (i.e., the search range) for vehicles being considered as potential candidates. Equations (4) and (5) both calculate a deviation relative to the respective maximum value (i.e., \(m\) and \(r\)), thereby defining a window of allowed deviation in \(\left[0,1\right]\). While Equation (4) simply uses the desired driving speed of \(c\) and \(t\), Equation (5) considers the location of vehicle or platoon \(t\) in relation to vehicle \(c\). Typical approaches for calculating user similarity include the Jaccard coefficient, the cosine similarity, and the multi-dimensional Euclidean distance [47]. These, however, cannot be applied to our problem as they either work only on sets (Jaccard) or do not support defining a maximum deviation as well as an importance per property (cosine and Euclidean). Thus, our model uses a one-dimensional Euclidean distance that is normalized based on a maximum allowed deviation (a deviation window) and weighted using a coefficient per property. This way, Equation (3) can flexibly be extended with arbitrary other properties while keeping a total value in \(\left[0,1\right]\). The previous equations are subject to the following constraints: \[d_{s}\left(c,t\right)\leq 1.0\,, \tag{6}\] \[d_{p}\left(c,t\right)\leq 1.0\,, \tag{7}\] \[p_{c}\leq l_{t}\,, \tag{8}\] where \(p_{c}\) is the position of vehicle \(c\) and \(l_{t}\) is the position of the last vehicle in the platoon \(t\) (\(l_{t}=p_{t}\) if \(t\) is driving individually). It is important to mention that Equation (8) requires that a vehicle or a platoon \(t\) is in front of the searching vehicle \(c\) in order to be considered as a potential candidate. Thereby, without loss of generality, we currently only allow joining at the back of a vehicle or platoon, which requires less coordination [43, p. 98] and does not influence other vehicles or already existing platoon members as much (assuming only the joining vehicle adjusts its driving speed in order to approach the platoon). "If an existing platoon [had] to slow down to allow a following single vehicle to join (faster), there would be significant wasted energy as multiple vehicles [had] to undergo braking and subsequent acceleration." [31, p. 7574]. Considering also vehicles in the front can increase the probability of finding a candidate and will potentially influence the role the vehicles will have in the resulting platoon (leader vs. follower). For conciseness, we do not consider those effects in this work. Note that, like \(A\), \(F\) is in general not symmetric.
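To make these definitions concrete, a minimal Python sketch of Equations (3) to (8) could look as follows (the class and function names as well as the units are illustrative assumptions on our part, not taken from the actual implementation):

```
from dataclasses import dataclass

@dataclass
class Vehicle:
    n: int      # identifier
    D: float    # desired driving speed (assumed m/s)
    p: float    # current position (assumed m)
    l: float    # position of the last vehicle in the platoon (l == p if individual)

def deviation(c, t, alpha, m, r):
    """Deviation f(c, t) of target t from searching vehicle c, Eqs. (3)-(8).
    Returns None if t is not a feasible candidate for c."""
    if c.p > t.l:                                  # Eq. (8): target must be in front
        return None
    d_s = abs(c.D - t.D) / (m * c.D)               # Eq. (4): relative speed deviation
    d_p = min(abs(c.p - t.p), abs(t.l - c.p)) / r  # Eq. (5): relative position deviation
    if d_s > 1.0 or d_p > 1.0:                     # Eqs. (6) and (7): deviation windows
        return None
    return alpha * d_s + (1 - alpha) * d_p         # Eq. (3): weighted total deviation
```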
Using the previous definitions, we can describe the objective of our optimization problem as \[\text{minimize}\sum_{i\in I}\sum_{j\in J}a_{ij}f_{ij}\,, \tag{9}\] subject to the following constraints \[\forall i\in I:\sum_{j\in J}a_{ij}=1\,, \tag{10}\] \[\forall j\in J:\sum_{i\in I}a_{ij}\leq 1\,, \tag{11}\] \[\forall i\in I:\;a_{ij}=1\wedge c_{i}=t_{j},\quad\text{if }\exists k\in I,l\in J:\;a_{kl}=1\wedge c_{i}=t_{l}\wedge c_{k}\neq t_{j}\,. \tag{12}\] To summarize, we try to find the best-fitting platoon candidate (i.e., the candidate with the biggest similarity, or, in other words, with the smallest deviation) \(t_{j}\) for each vehicle \(c_{i}\) to form a platoon with. Equation (10) assures that vehicle \(c_{i}\) is assigned to exactly one target vehicle or platoon \(t_{j}\). Here, a self-assignment indicates that the vehicle continues to drive individually. Equation (11) assures that at most one vehicle is assigned to every target vehicle or platoon \(t_{j}\). This is related to the join maneuver that needs to be executed in order to successfully implement an assignment. Equation (12) assures that vehicle \(c_{i}\) is assigned to itself (i.e., a self-assignment) if another vehicle \(c_{k}\) (\(\neq t_{j}\)) is already assigned to it (\(t_{l}=c_{i}\)). After assignment (\(a_{ij}=1\)), vehicle \(c_{i}\) performs a join maneuver towards its designated target platoon or vehicle \(t_{j}\). Assuming a successful join maneuver, vehicle \(c_{i}\) becomes part of a platoon with vehicle \(t_{j}\) (or of the platoon that \(t_{j}\) was already part of). Once vehicles become platoon members, they stay in the platoon until they reach their destination. ### _Example Scenario_ As an example for the optimization problem, consider the scenario depicted in Figure 1, where four vehicles are driving individually on an arbitrary road with two lanes (e.g., a freeway) and now try to find a platoon. The vehicles in the example are defined by their set of properties, \(\{5,121\,\mathrm{km}/\mathrm{h},430\,\mathrm{m},430\,\mathrm{m}\},\)\(\{13,89\,\mathrm{km}/\mathrm{h},270\,\mathrm{m},270\,\mathrm{m}\},\)\(\{20,107\,\mathrm{km}/\mathrm{h},250\,\mathrm{m},250\,\mathrm{m}\},\)\(\{37,93\,\mathrm{km}/\mathrm{h},70\,\mathrm{m},70\,\mathrm{m}\},\) and the algorithm in this case uses the following set of parameters: \[\alpha=0.6,m=0.4,r=400\,\mathrm{m}\,.\] By using these properties and parameters, the list of possible platoon candidates and their corresponding deviation \(f\left(c,t\right)\) can be calculated as \[f\left(13,5\right)=0.6\cdot 0.599+0.4\cdot 0.4=0.519\] \[f\left(20,5\right)=0.6\cdot 0.218+0.4\cdot 0.45=0.31\] \[f\left(20,13\right)=0.6\cdot 0.28+0.4\cdot 0.05=0.188\] \[f\left(37,5\right)=0.6\cdot 0.501+0.4\cdot 0.9=0.66\] \[f\left(37,13\right)=0.6\cdot 0.071+0.4\cdot 0.5=0.242\] \[f\left(37,20\right)=0.6\cdot 0.25+0.4\cdot 0.45=0.33\,.\] From the list of possible candidates and their corresponding deviations, the (optimal) solution minimizing the overall deviation is \[f\left(37,13\right)=0.242\] \[f\left(20,5\right)=0.31\] with a total deviation of \(0.242+0.31=0.552\). Here, selecting a candidate pair blocks both involved vehicles, making them unavailable for further selection. Since a vehicle can only be in one maneuver at a time, at most two assignments can be executed in parallel. After these maneuvers are finished, the vehicles in the scenario are grouped into two platoons: \(\{13,37\}\) and \(\{5,20\}\).
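As a sanity check of this example (our own addition, not part of the original evaluation), a brute-force enumeration over the six candidate pairs above confirms that this selection is indeed the cheapest conflict-free assignment:

```
from itertools import combinations

# the six candidate pairs and deviations f(c, t) listed above
pairs = {(13, 5): 0.519, (20, 5): 0.31, (20, 13): 0.188,
         (37, 5): 0.66, (37, 13): 0.242, (37, 20): 0.33}

def feasible(subset):
    used = [v for c, t in subset for v in (c, t)]
    return len(used) == len(set(used))  # every vehicle in at most one assignment

best = min((s for k in range(1, 3) for s in combinations(pairs, k) if feasible(s)),
           # an unmatched vehicle costs the maximum (self-assignment) deviation 1.0;
           # 4 vehicles in total, each selected pair covers 2 of them
           key=lambda s: sum(pairs[p] for p in s) + 1.0 * (4 - 2 * len(s)))
print(best, round(sum(pairs[p] for p in best), 3))
# -> ((20, 5), (37, 13)) 0.552, i.e., the platoons {5, 20} and {13, 37}
```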
Figure 1: Example scenario: Four vehicles are driving individually on an arbitrary road with two lanes (e.g., a freeway) and try to find a platoon.

### _Solution using centralized solver approach_

In this approach, the optimization problem is solved periodically for every vehicle in the scenario at the same time. We aim to minimize the overall deviation from the desired metrics while assigning as many vehicles to platoons as possible. A central server has global knowledge about all vehicles and their corresponding properties, which is collected by means of an infrastructure-based network such as 5G-based C-V2X. Using the vehicles' information, a mathematical solver can compute an optimal solution. For all possible vehicle-to-platoon assignments between \(c_{i}\in C\) and \(t_{j}\in T\), the decision variable \(a_{ij}\) (see Equation (2)) needs to be created. This also includes vehicles that cannot be assigned (at the time being). We show the process of creating the decision variables in Algorithm 1. Note that not all combinations of \(c_{i}\) and \(t_{j}\) are technically possible (see Equations (6) to (8)). In such cases, no decision variable \(a_{ij}\) is created. Besides the list of decision variables and corresponding deviations, the solver needs some further constraints in order to compute a useful solution (see Equations (10) to (12)). We depict the process of adding the constraints to the model for the solver in Algorithm 2. After the solver has computed a solution, it has to be applied to the vehicles. The solution is stored within the decision variables of the model, which are used to trigger corresponding join maneuvers among involved vehicles. In case of a self-assignment, no maneuver is triggered and the vehicle stays individual.

```
Input: all vehicles in the scenario
for all vehicles c_i in the scenario do
    if c_i in platoon or c_i in maneuver then
        next                        ▷ c_i is (currently) not available
    for all vehicles / platoons t_j in the scenario do
        if (t_j in platoon and t_j not platoon leader) or t_j in maneuver then
            next                    ▷ t_j is (currently) not available
        if p_{c_i} > l_{t_j} then
            next                    ▷ do not consider vehicles / platoons behind
        if d_s(c_i, t_j) > 1.0 or d_p(c_i, t_j) > 1.0 then
            next                    ▷ too large deviation in speed or position
        if c_i = t_j then
            f_ij = 1.0              ▷ use large deviation for self-assignment
        else
            calculate f_ij based on d_s and d_p
        add decision variable and deviation {a_ij, f_ij}
Output: all decision variables and deviations: list({a_ij, f_ij})
```
**Algorithm 1** Creation of decision variables for the solver in the _centralized solver_ approach

**Algorithm 2** Addition of constraints (Equations (10) to (12)) for the solver in the _centralized solver_ approach
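Section IV notes that this approach is implemented with a MIP solver from Google's OR-Tools library. The following self-contained sketch is our own illustration of how Equations (9) to (12) can be handed to such a solver for the example scenario (the CBC backend choice and the linearization of Equation (12) are our assumptions; we also exclude self-assignments from Equation (11) so that it remains compatible with Equation (12)):

```
from ortools.linear_solver import pywraplp

# deviations f_ij from Algorithm 1 for the example of Section III-C;
# (i, i) entries are the self-assignments with the maximum deviation 1.0
f = {(5, 5): 1.0, (13, 13): 1.0, (20, 20): 1.0, (37, 37): 1.0,
     (13, 5): 0.519, (20, 5): 0.31, (20, 13): 0.188,
     (37, 5): 0.66, (37, 13): 0.242, (37, 20): 0.33}

solver = pywraplp.Solver.CreateSolver("CBC")
a = {ij: solver.BoolVar(f"a_{ij[0]}_{ij[1]}") for ij in f}
searchers = {i for i, _ in f}
targets = {j for _, j in f}

for i in searchers:  # Eq. (10): exactly one assignment per searching vehicle
    solver.Add(sum(a[i, j] for j in targets if (i, j) in f) == 1)
for j in targets:    # Eq. (11): at most one (non-self) joining vehicle per target
    solver.Add(sum(a[i, j] for i in searchers if i != j and (i, j) in f) <= 1)
for (k, i) in f:     # Eq. (12), linearized: a joined vehicle must self-assign
    if k != i and (i, i) in f:
        solver.Add(a[k, i] <= a[i, i])

solver.Minimize(sum(f[ij] * a[ij] for ij in f))
assert solver.Solve() == pywraplp.Solver.OPTIMAL
print([ij for ij in f if a[ij].solution_value() > 0.5 and ij[0] != ij[1]])
# -> [(20, 5), (37, 13)], i.e., the optimal platoons {5, 20} and {13, 37}
```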
### _Solution using centralized greedy approach_

The complexity of centralized optimization in general has been shown to be NP-hard [22]. Thus, we propose the _centralized greedy_ approach that follows the idea of the _centralized solver_ approach: it computes vehicle-to-platoon assignments for every vehicle in the scenario at the same time using full knowledge about all vehicles. It, however, uses greedy heuristics to compute the assignments. In this approach, we first calculate the deviation \(f\left(c_{i},t_{j}\right)\) for all vehicles in the neighborhood which do not violate the constraints given in Equations (6) to (8) and add an entry for them to a list of possible assignments, using Algorithm 3. An entry \(\{c_{i},t_{j},f\left(c_{i},t_{j}\right)\}\) in this list contains vehicle \(c_{i}\in C\) and vehicle or platoon \(t_{j}\in T\) as well as the total deviation of vehicle or platoon \(t_{j}\) from vehicle \(c_{i}\) (see Equation (3)). Note that the deviation of \(t_{j}\) from \(c_{i}\) is not symmetric, as the opposite direction produces a different value. Once all possible assignments and the corresponding deviations are computed, we use Algorithm 4 to select the best target \(t_{j}\) for every searching vehicle \(c_{i}\) from this list and trigger a corresponding join maneuver. In particular, we select the \(t_{j}\) with the smallest deviation \(f\left(c_{i},t_{j}\right)\) and remove all entries which contain vehicles \(c_{i}\) or \(t_{j}\), because they become unavailable for further selection. This heuristic is greedy as it makes decisions solely from the perspective of a searching vehicle \(c_{i}\), without considering consequences for other searching vehicles, which could join the same vehicle or platoon \(t_{j}\). Due to its nested for-loop, the computational complexity of this approach is \(O\left(n^{2}\right)\). Looking again at the example scenario from Section III-B, we observe that the _centralized greedy_ heuristic also produces two platoons, but it does not necessarily compute the aforementioned optimal solution. Instead, it might select \[f\left(13,5\right)=0.519\] \[f\left(37,20\right)=0.33\] with a total deviation of \(0.519+0.33=0.849\), given the current order of possible assignments. Here, selecting a candidate pair blocks both involved vehicles, making them unavailable for further selection. Since a vehicle can only be in one maneuver at a time, at most two assignments can be executed in parallel. After these maneuvers are finished, the vehicles in the scenario are grouped into two platoons: \(\left\{5,13\right\}\) and \(\left\{20,37\right\}\).
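A minimal sketch of this selection step (our own reading of Algorithms 3 and 4, iterating searching vehicles in identifier order, not the actual implementation) reproduces the greedy outcome described above:

```
# candidate list from Algorithm 3 for the example scenario: (c, t) -> f(c, t)
candidates = {(13, 5): 0.519, (20, 5): 0.31, (20, 13): 0.188,
              (37, 5): 0.66, (37, 13): 0.242, (37, 20): 0.33}

def greedy_select(candidates):
    """Algorithm 4 (one plausible reading): each searching vehicle, in identifier
    order, joins its best still-available target; both then become unavailable."""
    unavailable, selected = set(), []
    for c in sorted({c for c, _ in candidates}):
        if c in unavailable:
            continue
        options = {t: f_ct for (cc, t), f_ct in candidates.items()
                   if cc == c and t not in unavailable}
        if not options:
            continue                       # no target left: keep driving alone
        t = min(options, key=options.get)  # smallest deviation wins
        selected.append((c, t))
        unavailable.update((c, t))
    return selected

print(greedy_select(candidates))
# -> [(13, 5), (37, 20)] with total deviation 0.519 + 0.33 = 0.849
```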
### _Solution using distributed greedy approach_

In contrast to the _centralized greedy_ approach, our _distributed greedy_ approach works fully distributed: every vehicle \(c_{i}\in C\) tries to compute a vehicle-to-platoon assignment to a vehicle or platoon in its neighborhood \(\Omega_{i}\subset T\) individually. In contrast to both centralized approaches, vehicles in the _distributed greedy_ case only have local knowledge of the scenario: they know other vehicles within a certain (communication) range, assuming some general neighbor management, e.g., from regular DSRC beacons [17], and can directly access other vehicles' information (i.e., speed, position, platoon state, and maneuver state). In all approaches, we assume that this information is always up-to-date; thus, vehicles which are not applicable anymore cannot be used as platooning opportunities. Using the information about nearby vehicles, the heuristic given in Algorithm 5 is executed to prepare the list of possible assignments. A heuristic very similar to Algorithm 4 is then used to select a candidate vehicle or platoon \(t_{j}\) with the smallest deviation to join. Conceptually, the same assignments as in the _centralized greedy_ approach are selected. However, the selection of possible candidates is limited by the restricted nature of the local knowledge and, therefore, depends on the time the heuristic is evaluated. Also, the vehicles are not synchronized and thus perform the assignment process asynchronously, leading to assignments and maneuvers not being triggered at the same time. The computational complexity of this approach is \(O\left(n\right)\). Looking again at the example scenario from Section III-B, we observe that the _distributed greedy_ heuristic can trigger join attempts for the following selection: \[f\left(13,5\right)=0.519\] \[f\left(20,13\right)=0.188\] \[f\left(37,20\right)=0.33\,.\] The outcome of this depends on the order and the timing of the join attempts and can vary between one and two resulting platoons.
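Conceptually, the only structural change compared to the centralized greedy sketch above is that the candidate list is built from local knowledge only. A minimal sketch (our own illustration, reusing the `deviation` helper from Section III-B; the communication range parameter is an assumption):

```
def local_candidates(me, neighbors, alpha, m, r, comm_range=500.0):
    """Algorithm 5 (sketch): build the candidate list from local knowledge only."""
    options = {}
    for t in neighbors:                     # neighbor table, e.g., from DSRC beacons
        if abs(t.p - me.p) > comm_range:    # outside communication range: unknown
            continue
        f_ct = deviation(me, t, alpha, m, r)  # feasibility checks of Eqs. (6)-(8)
        if f_ct is not None and t.n != me.n:
            options[t.n] = f_ct
    return options  # the vehicle then joins min(options, key=options.get), if any
```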
## IV Evaluation

Within this work, we evaluate the proposed platoon formation approaches in an extensive simulation study. We do so by comparing them to each other and to two baseline approaches without platooning, using typical metrics for traffic modeling as well as platooning. In the evaluation, we first report details of our three formation algorithms to show the impact of knowledge (local vs. global). We then discuss results from a macroscopic, system-level perspective of the scenario to show the impact on the general traffic system. Finally, we report results for typical platooning metrics from a microscopic, vehicle-level perspective of the individual vehicles.

### _Methodology_

To observe effects of platooning and platoon formation algorithms as well as their impact, large-scale simulation studies with large scenarios and many vehicles are required. Many studies use Plexe [48] for this purpose. While being perfectly suited for fine-grained simulation of platoon controllers including the necessary wireless communication link, the approach suffers from limited scalability. Therefore, we use PlaFoSim1 in our study, as we are more interested in large-scale effects of platoon management rather than microscopic control of the involved vehicles [49]. Besides validating its underlying mobility models in a comparison to SUMO and Plexe, we ran the experiments from our earlier work [17] to validate the algorithm behavior using PlaFoSim, producing qualitatively equal results. We were able to increase the scenario in size and simulation time as well as in the number of parameter configurations. We use PlaFoSim version 0.15.4 with some additional changes, which we will also integrate into its upstream version.2 A screenshot from the SUMO-based live GUI of PlaFoSim is shown in Figure 2.

Footnote 1: [https://www.plafosim.de](https://www.plafosim.de)

Footnote 2: [https://github.com/heinovski/plafosim](https://github.com/heinovski/plafosim)

In our simulation study, we consider a 3-lane freeway (see Figure 2) of 100 km length with periodic on-/off-ramps every 10 km, which allow vehicles to enter and leave the freeway. Vehicles perform trips of 50 km between a pair of randomly selected on-/off-ramps. The desired driving speed is sampled from a normal distribution with a mean of 120 km/h (33 m/s), a variation of 10%, and limited to values from \([80\,\mathrm{km/h},160\,\mathrm{km/h}]\).4 Vehicles start their trips individually, using either the popular Krauss model [50] or a standard ACC [46].

Footnote 4: Only very few vehicles (0.4%) choose a desired driving speed equal to the artificial limits of our distribution.

In our study, we compare our three proposed platoon formation algorithms with two baseline approaches without platooning:

* human driving (following the Krauss model)
* vehicles controlled by a standard ACC
* vehicle platooning using our distributed (greedy) approach (see Section III-F) for platoon formation
* vehicle platooning using our centralized (greedy) approach (see Section III-E) for platoon formation
* vehicle platooning using our optimal approach (see Section III-D) for platoon formation

We model vehicle demand in a macroscopic way via a flow with constant insertion (i.e., departure rate) to roughly keep a given target density of vehicles on the road in order to achieve a stable traffic situation. We do so because we are only interested in a relative comparison between non-platooning and platooning approaches and not in the maximum possible traffic flow for every approach. The corresponding departure rate for a given target density is calculated as \[t_{\text{expected}}=\frac{d_{\text{trip}}}{v_{\text{desired}}}\,, \tag{13}\] \[P_{\text{arrival}}=\frac{1}{t_{\text{expected}}}\,, \tag{14}\] \[R_{\text{departure}}=\left[P_{\text{arrival}}\cdot D_{\text{desired}}\cdot L\cdot d_{\text{road}}\cdot 3600\right]\,, \tag{15}\] where \(t_{\text{expected}}\) is the expected travel time of the vehicles, given their fixed trip length and their desired driving speed (in m/s), \(P_{\text{arrival}}\) is the probability of vehicle arrivals per second, and \(R_{\text{departure}}\) is the corresponding departure rate, given the target density \(D_{\text{desired}}\), the number of lanes \(L\), and the length of the road \(d_{\text{road}}\). A summary of all simulation parameters for the road network and traffic can be found in Table I. We chose 25 veh/km/lane as the highest target vehicle density, as initial simulations showed that at higher densities the scenario is too crowded to insert further vehicles in all simulated approaches. Platoon formation is performed in regular intervals every 60 s. Forming platoons always consists of the following steps: (1) data collection of available vehicles and platoons, (2) computation of vehicle-to-platoon assignments depending on the selected approach, and (3) execution of join maneuvers to implement the computed assignment(s). As the focus of this work is on the assignment process, platooning mobility, wireless communication, and platooning maneuvers are implemented in an abstracted way [49]. A summary of all simulation parameters for the platoon formation can be found in Table II. In order to consider deviation in desired speed and position equally, we set \(\alpha=0.5\) within this study. Further, we set the maximal allowed distance between vehicles to \(r=1000\,\mathrm{m}\) to investigate the benefit of the centralized approaches, which are not limited by the communication range (500 m). At the same time, it avoids assignments to far-away platoon candidates that cannot be reached within a reasonable time.
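As a quick sanity check of Equations (13) to (15), the following sketch computes the departure rate for the largest configuration (our own unit assumptions: trip length in m, desired speed in m/s, road length in km; we read the square brackets in Equation (15) as rounding):

```
def departure_rate(d_trip_m, v_desired_ms, density, lanes, d_road_km):
    """Eqs. (13)-(15): departure rate (veh/h) sustaining a target density."""
    t_expected = d_trip_m / v_desired_ms  # Eq. (13): expected travel time in s
    p_arrival = 1.0 / t_expected          # Eq. (14): vehicle arrivals per second
    return round(p_arrival * density * lanes * d_road_km * 3600)  # Eq. (15)

# largest configuration: 50 km trips at 33 m/s, 25 veh/km/lane, 3 lanes, 100 km road
print(departure_rate(50_000, 33, 25, 3, 100))  # -> 17820 veh/h
```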
The _centralized solver_ approach is implemented by using a MIP solver from Google's OR-Tools library.5 The constraints for the assignment defined in Section III-D are modeled by row constraints, and an integer decision variable with a coefficient corresponding to the respective deviation is added for every possible vehicle-to-platoon assignment. Since self-assignments are inherently allowed but not desired, a deviation value of \(1.0\) (the maximum) is used for these cases. To achieve a finite but useful execution time, we limit the solver's execution time to \(600\,\mathrm{s}\). While the solver is running, the simulation time does not advance and its state does not change. Thus, the solver's execution time does not impact the vehicles' travel time and corresponding statistics.

Figure 2: Screenshot from the SUMO-based live GUI of PlaFoSim, showing a small region of our simulation scenario: 3-lane freeway, 5 individually driving vehicles, and 1 formed platoon consisting of 3 members. All members of one platoon are shown in the same color, which makes it easy to recognize them.

It is important to mention that we currently only allow joining at the rear of a vehicle or platoon and thus only vehicles in front are considered as potential platooning candidates (cf. Section III-B). After successful completion of the join maneuver, vehicles stay within the platoon until they reach their destination ramp, at which a leave maneuver is performed. If the platoon leader leaves the platoon, the next remaining vehicle within the formation becomes the new leader, keeping all properties of the platoon. We simulate \(7200\,\mathrm{s}\) (\(2\,\mathrm{h}\)) of traffic in multiple repetitions for every configuration and approach. We first pre-fill the road network with the desired number of vehicles before the simulation starts. To cut off the initial transient period, we use the first \(1800\,\mathrm{s}\) (\(0.5\,\mathrm{h}\)) of the simulation time as a warm-up period and ignore all results in this interval, only considering vehicles that departed after this period. Using all approaches and parameter combinations defined in Tables I and II, we simulate a total of \(165\) individual runs. Using an AMD Ryzen 9 5900X 12-core processor @3.7 GHz, we observe the shortest simulation time (\(50\,\mathrm{min}\) wall-clock time) with the smallest configuration (speed window 0.1, density 5 veh/km/lane) for the ACC approach. In contrast, we observe the longest simulation time (\(4\,\mathrm{h}\) wall-clock time) with the largest configuration (speed window 0.3, density \(25\,\mathrm{veh}\)/km/lane) using the _centralized solver_ approach. In the following, we omit showing confidence intervals for any reported average value, but we made sure that they are reasonably small.

### _Validation_

We performed an extensive validation of the implemented solutions. As an example, we show the total number of vehicles in platoons for an exemplary simulation run over its entire simulation time in Figure 3. Starting with 0 vehicles at \(0\,\mathrm{s}\), the total number of vehicles grows over time until it reaches an almost constant value of \(\approx 8000\). Slowly, platoons are formed, and we can see that platoons with sizes of up to 14 vehicles are created. After the warm-up period of \(1800\,\mathrm{s}\) (dashed line), the total number of vehicles in the simulation as well as the distribution of platoon sizes stay almost constant.
Other approaches and configurations show similar simulation behavior, but the total number of vehicles in the simulation as well as the distribution of platoon sizes are slightly different, as we will report in the following.

Figure 3: Total number of vehicles in platoons over the entire simulation time for the _distributed greedy_ approach using the largest configuration (speed window 0.3, density 25 veh/km/lane). The platoon size is indicated by color and the warm-up period by the dashed line.

### _Available Candidates_

The first step in the process of forming platoons is to find available platooning opportunities (candidates). These can be either individual vehicles or already existing platoons. We show the results of the found candidates metric (i.e., the average number of potential platooning opportunities known to the platoon formation algorithm) in Figure 4. The found candidates metric counts the number of possible assignments for platoon formation as identified for a single vehicle. A fundamental difference between the approaches is the available knowledge about other vehicles. While vehicles using the _distributed greedy_ approach only have local knowledge of other vehicles in their vicinity, limited by a certain communication range, both centralized approaches have global knowledge of all vehicles within the entire scenario. Obviously, the more vehicles are known to the respective approach, the more vehicles can be used within the formation algorithm, increasing the probability to find a possible candidate. Of course, non-platooning approaches such as _human_ and _ACC_ do not have any knowledge of other vehicles since no V2X communication is used. Thus, these approaches are not shown here.

Figure 4: The average number of potential platooning candidates known to the platoon formation algorithm in every iteration (with standard deviation) per vehicle and for all platooning approaches. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario.

As expected, both centralized approaches similarly find a lot more potential candidates to use for the platoon formation than the _distributed greedy_ approach due to their overall knowledge of the scenario. In fact, the _centralized greedy_ approach finds a few more candidates than the _centralized solver_ approach, starting at medium densities, because the _centralized solver_ is able to fit more vehicles to platoons; thus, fewer are available for the next assignment rounds. The number of found candidates for all approaches is also impacted by the speed window (i.e., the allowed speed deviation) as well as the desired vehicle density in the scenario. Larger speed deviation thresholds lead to a less strict definition of similarity (in desired driving speed), which leads to more potential platooning candidates being found. Similarly, higher desired vehicle densities lead to more vehicles in the overall scenario and thus more platooning opportunities being available in general. This effect is more prominent for both centralized approaches, as their global knowledge of the scenario allows for a significant increase in candidates with a bigger density. The variance in values (we show the standard deviation) for all approaches is related to the desired driving speed (distribution). Vehicles with a lower desired driving speed also have a smaller absolute (in m/s) speed window and thus fewer candidates.
Further, vehicles with a desired driving speed from the outer area of the normal distribution (e.g., \(\leq 90\,\mathrm{km/h}\) and \(\geq 150\,\mathrm{km/h}\)) that we use to assign the speed will have fewer candidates due to the limiting of the possible values, which are also less frequent. In contrast, vehicles with a desired driving speed of a frequent value (e.g., 120 km/h) have a lot of candidates. As expected, a larger speed window and a higher desired vehicle density increase the variance as well. This effect is amplified by the global knowledge of both centralized approaches. Even though the general pattern of the found candidates metric is in line with our expectations, the actual numbers are smaller than expected. The reason for this is that some vehicles are not available (anymore) to be considered for vehicle-to-platoon assignments. They are either already in a platoon (i.e., a follower) or currently in the process of forming a platoon (i.e., in a maneuver). Similarly to the found candidates metric, we count these vehicles within the filtered candidates metric for all vehicles and in all approaches. Vehicles that are not available because of the limited communication range with the _distributed greedy_ approach are not counted as filtered. The results are shown in Figure 5. Due to the global knowledge and decision making for all vehicles in the scenario, both centralized approaches filter many more candidates than the _distributed greedy_ approach, which has only limited local knowledge. In the _centralized solver_ approach, more vehicles are already in a platoon and thus filtered in comparison to the _centralized greedy_ approach, due to the synchronized and balancing (leader vs. follower) decision making for all vehicles in the scenario.

Figure 5: The average number of filtered platooning candidates in every iteration (with standard deviation) per vehicle and for all platooning approaches. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario. The y-axis is split and does not show values between 80 and 800.

### _Formation Process_

After analyzing the number of platooning opportunities, we now take a look at the actual formation process.

#### IV-D1 Time to Platoon

Figure 6 shows the average time required to become a platoon member per vehicle for all platooning approaches. This time is measured from the departure of a vehicle until a join maneuver is completed successfully. Thus, it includes execution of the formation algorithm to compute a vehicle-to-platoon assignment as well as the corresponding join process itself. Similarly, this time is measured if a vehicle becomes a leader of a newly formed platoon. We immediately observe that the _distributed greedy_ approach experiences the longest time for all configurations. This is expected, as this approach has only limited local knowledge about available platooning opportunities. This effect becomes less prominent for bigger speed windows and larger vehicle densities.

Figure 6: The average time (with standard deviation) required to become a platoon member per vehicle for all platooning approaches. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario.
This is expected, as both are using global knowledge about the vehicles and can thus compute successful assignments already at the first execution of the algorithm. The standard deviations arise from the variance in the number of available candidates (see Figure 4) as well as from the duration between departure and execution of the assignment process.

#### Platoon Size

Another typical metric in the context of platooning is the platoon size and its distribution. Unfortunately, the interpretation of the platoon size is difficult and requires specific context of the underlying design goals of an algorithm. In general, however, we aim at a low number of vehicles that are not in a platoon. We report the distribution of the platoon size over various desired traffic densities in Figure 7. First, we observe that all platooning approaches produce platoons (platoon size \(>1\)) and only a few vehicles do not find a platoon. While the _centralized greedy_ approach builds slightly smaller platoons than the _centralized solver_, the _distributed greedy_ approach builds slightly larger ones. For all approaches, most of the vehicles are in platoons with a size of 2-5.

### _System-level Metrics (Macroscopic)_

After reporting details of the formation process itself, we now take a look at traffic-related metrics from a broader, macroscopic perspective.

#### Traffic Flow

Since platooning is a cooperative driving application, the performance of our algorithms and the platooning itself is influenced by the general traffic in the scenario. All approaches will have an impact on the resulting traffic, and so does the general configuration of our simulation scenario (e.g., the departure rate). In order to get an overview of the general traffic behavior in the scenario, Figure 8 shows the actual departure rate (Figure 8a) as well as the resulting vehicle density (Figure 8b) observed from the simulation for all of our simulated approaches and target vehicle densities. Since both metrics are directly related to each other, the observed effects correlate as well. The observed values for departure rate and vehicle density increase with higher desired densities. This is consistent, as we construct our departure rate from the desired vehicle density using Equation (15). With human and ACC driven vehicles, it is possible to meet the desired departure rates and vehicle densities at low and medium desired vehicle densities. Starting at a desired density of 20 veh/km/lane, however, the target cannot be met anymore. Here, the crowded traffic makes it very difficult to insert new vehicles, as there is little remaining capacity (free space) on the road. Although the platooning approaches also suffer from the same effect, they are better able to fulfill the desired departure rates and vehicle densities for all traffic configurations and produce a smaller deviation from the desired vehicle density.
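Equation (15) itself is not reproduced in this part of the paper; purely for illustration, a departure rate can be derived from a target density via the fundamental relation of traffic flow (flow = density × speed), which we assume here as a sketch (function and parameter names are our own, not necessarily the paper's Equation (15)):

```python
def departure_rate(density_veh_per_km_lane: float, lanes: int,
                   avg_speed_kmh: float) -> float:
    """Approximate departure rate (veh/h) for a target density,
    assuming the fundamental relation flow = density * speed."""
    return density_veh_per_km_lane * lanes * avg_speed_kmh

# e.g., 25 veh/km/lane on a hypothetical 4-lane freeway at ~120 km/h
# yields 25 * 4 * 120 = 12000 veh/h in total
```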
#### Driving Speed

When looking at the average driving speed of all vehicles in the scenario (see Figure 9), we can observe the effect of high traffic very prominently. As expected, the driving speed is reduced for all approaches when the scenario becomes more crowded due to natural traffic effects. The effect is generally worse for non-platooning approaches due to larger safety gaps between vehicles, resulting in stop-and-go traffic [51]. For all platooning approaches, the speed is similar for the smallest speed window, as options are limited. A larger speed window, however, allows more deviation and leads to an overall higher speed in both greedy approaches. In comparison, the _centralized solver_ approach minimizes the deviation in speed and position for all vehicles simultaneously and, thus, leads to a slightly reduced driving speed. Since larger windows allow more vehicles to become platoon leaders (instead of platoon followers), and a platoon is always driving at the speed of its leader, the deviation (in driving speed) stays small for more vehicles. The _centralized solver_ approach also keeps a slightly decreasing trend over windows, staying more similar to the original objective of vehicles. The trend for the platooning approaches remains similar for a desired vehicle density of up to 15 veh/km/lane. After that, the overall driving speed in the scenario decreases more noticeably for both greedy approaches than for the optimal approach; at the largest density, despite global knowledge, the _centralized greedy_ approach performs the worst due to synchronization and greedy selection effects.

Figure 7: The distribution of platoon sizes depicted by the number of vehicles in platoons with their respective size for all platooning approaches. The facets show various values for the desired vehicle density in the scenario. The horizontal arrows and corresponding numbers indicate selected full width at half maximum values of the distribution with the same color.

### _Vehicle-level Metrics (Microscopic)_

We now analyze results for platooning-related effects from a microscopic (i.e., the individual vehicles') perspective.

#### Deviation From Desired Driving Speed

Vehicles are inherently required to deviate from their desired driving speed in order to form platoons with other vehicles, which likely drive at different speeds. The main goal of our formation algorithms is to minimize the vehicles' deviation from their own desired driving speed. Figure 10 shows the average relative deviation from the desired driving speed per vehicle for all simulated approaches. The platooning approaches have both a negative and a positive deviation, which is due to the compromise that vehicles need to make when forming platoons (i.e., they are adjusting their driving speed to the platoon leader). All platooning approaches perform more or less similarly, and their positive deviation (driving faster than desired) mostly stays within the corresponding defined speed window (see Figure 10a). Naturally, the value of the speed window impacts the actual deviation from the desired driving speed. As an example, we show results for a selected desired vehicle density of 5 veh/km/lane in Figure 10a. As expected, both greedy heuristics lead to a larger spread in deviation values, with the _distributed greedy_ approach having the largest. This is due to the smaller number of platooning opportunities in this approach, which requires vehicles to join platoons with large deviations in order to fulfill the goal of platooning. Increasing the speed window allows for even more deviation, thus also increasing the spread of values. While the values for the _centralized solver_ approach are mostly symmetrically located around 0, both greedy heuristics tend to deviate more positively. Independent of platooning, the amount of vehicles in the scenario has an impact on the deviation from the desired driving speed (see Figure 10b).
When comparing the smallest and the largest simulated densities (5 and 25 veh/km/lane), the negative deviation increases from nearly 0% to \(-13\)% on average for the non-platooning approaches. This is expected, as the freeway becomes more crowded with larger densities and the vehicles need to adjust their driving speed based on the decreasing gaps to other vehicles in order to avoid crashes. The platooning approaches cope better with a crowded scenario. Their negative deviation reaches only \(-5\)%, \(-6\)%, and \(-9\)% on average at the largest desired vehicle density for the _distributed greedy_, the _centralized solver_, and the _centralized greedy_ approach, respectively. In fact, more vehicles even lead to a slightly lower positive deviation, as more platooning opportunities become available and vehicles have to make fewer compromises.

Figure 8: The general traffic behavior observed from the simulation (with standard deviations) over all simulated approaches. The x-axis shows various values for the desired vehicle density in the scenario.

Figure 9: The average driving speed (with standard deviation) of all vehicles in the simulation for all simulated approaches. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario.

In order to study the effects of the platooning approaches in more detail, we show the ratio of vehicles deviating from the allowed speed window in Figure 11. For low densities, i.e., a small amount of available platooning opportunities, only few vehicles deviate from their desired driving speed for all approaches. When increasing the vehicle density, the deviation increases and a difference between approaches becomes visible. As expected, the _centralized solver_ approach has the lowest deviation, as it tries to minimize the deviation for all vehicles in an optimal way. With the largest speed window (0.3), all approaches perform much better because enough platooning opportunities are available to them.

#### Travel Time

Naturally, the aforementioned effects on the driving speed can be seen when looking at vehicles' trip duration, as their trip lengths are equal. When starting the trip, every vehicle estimates the time it is going to travel to its destination, assuming a constant speed at the desired value. Upon arrival, vehicles also record their real travel time and calculate the travel time ratio (i.e., the ratio of real to expected travel time). We use this metric to show the impact of the traffic and the use of platooning in general on vehicles' trip duration, which is a frequently-used metric when assessing platooning systems. We show the travel time ratio for all vehicles in Figure 12.

Figure 11: The ratio of vehicles deviating from the allowed speed window for all platooning approaches. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario.

Figure 12: The ratio of real to expected travel time for all simulated approaches. The x-axis shows various values for the desired vehicle density in the scenario. The triangle within a box depicts the mean value. A value smaller than 1.0 indicates that the vehicle reached its destination faster than expected; a value larger than 1.0 in contrast indicates that the vehicle was slower than expected.
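For clarity, the travel time ratio can be computed as in the following sketch (our own illustration, not the simulator's code):

```python
def travel_time_ratio(trip_length_m: float, desired_speed_ms: float,
                      real_travel_time_s: float) -> float:
    """Ratio of real to expected travel time; the expected time assumes
    driving the whole trip at the desired speed."""
    expected_s = trip_length_m / desired_speed_ms
    return real_travel_time_s / expected_s  # > 1.0: slower than expected
```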
Figure 10: The average relative deviation from the desired driving speed per vehicle for all simulated approaches. The triangle within a box depicts the mean value.

The general tendency follows our expectations: the ratio increases with vehicle density. This indicates that vehicles generally need longer for their trips than expected. Both non-platooning approaches (i.e., _human_ and _ACC_) share 1.0 as minimum value, as their trips can only get influenced negatively by delays due to traffic. This effect is increased with higher vehicle densities, as more vehicles are more impacted by the traffic. Similarly, the traffic also impacts all three platooning approaches and vehicles' trips are delayed (the ratio is larger than 1.0 on average). When platooning is enabled, deviations from the initially expected travel time in both directions can be observed. Since vehicles will adjust their driving speed to the target platoon (based on the configured speed window of the formation algorithm), their speed and the resulting travel time can also be faster than expected.

#### Fuel Consumption

A major argument for platooning is the reduced air drag between vehicles and the resulting reduced pollutant emissions and fuel consumption. While similar results hold for electrified vehicles, we fall back to gas consumption simply because of the availability of validated and widely accepted consumption models. For modeling general vehicle emissions, we use the Handbook Emission Factors for Road Transport (HBEFA)6 version 3.1, following the approach implemented in SUMO. In particular, we currently use the _PC_G_EU4_ emission class, which represents a gasoline-driven passenger car with an engine corresponding to the European norm version 4. For vehicles that are driving in a platoon, we calculate a reduction in emissions and fuel consumption based on their position within the platoon [52], applying the approach from our earlier work [17].

Footnote 6: [https://www.hbefa.net/e/index.html](https://www.hbefa.net/e/index.html)

We show the fuel consumption in Figure 13. The values plotted represent the fuel consumption of all vehicles in liters per 100 km, which is computed by using the total fuel consumption and the trip distance of the vehicles. Following the effects shown in Figure 9, a higher target vehicle density in general reduces the fuel consumption due to the slower driving speed with more traffic in the scenario. For human driving, starting at 20 veh/km/lane, the traffic transforms into stop-and-go, which leads to a lot of accelerations and decelerations, increasing the average value as well as the spread of the fuel consumption. Synchronized driving using _ACC_ already helps to reduce fuel consumption, in fact resulting in the lowest values, but also suffers from slower driving speed in general (cf. Figure 9). All platooning approaches result in similar values, but with increasing densities, platooning seemingly performs slightly worse than _ACC_. This is due to the higher average speed that is achieved with platooning.

### _Performance of the MIP Solver_

We finally take a look at the performance of the MIP solver for the _centralized solver_ approach. Figure 14 shows the average runtime (with standard deviation) of all executions of the solver. This parameter measures the time that the solver uses to solve the optimization problem after all decision variables and constraints are defined (see Algorithms 1 and 2).
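In practice, this corresponds to timing only the solve call once the model has been fully built; a minimal sketch (our illustration; we assume a generic solver object exposing an `optimize()` method, as provided, e.g., by the python-mip package):

```python
import time

def timed_solve(model) -> float:
    """Measure only the solving time, after all variables and
    constraints have already been added to the model."""
    start = time.perf_counter()
    model.optimize()                  # e.g., mip.Model.optimize()
    return time.perf_counter() - start
```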
As expected, the runtime increases with a larger speed window and a higher vehicle density, as the number of vehicles that the solver needs to consider as source and destination increases. It should be noted that in all configurations, the reported solution quality was well above 99.99%.

## V Conclusion

In this paper, we introduced and explored three approaches for vehicle-to-platoon assignments: _centralized solver_, _centralized greedy_, and _distributed greedy_, using a MIP solver and greedy heuristics, respectively. Conceptually, the approaches differ in both the knowledge about other vehicles as well as the used methodology. We first define a similarity goal and, conversely, the deviation among vehicles, thereby considering their individual requirements. The aim is to increase vehicles' similarity, thus minimizing their deviation in desired driving speed and position.

Figure 13: The fuel consumption per vehicle for all simulated approaches. The x-axis shows various values for the desired vehicle density in the scenario. The triangle within a box depicts the mean value.

Figure 14: The average runtime (with standard deviation) of all executions of the MIP solver in the _centralized solver_ approach. The x-axis shows various values of the speed window (parameter \(m\)) for our platoon formation algorithms. The facets show various values for the desired vehicle density in the scenario.

Our results show that all presented platooning approaches help in comparison to _human_ and _ACC_ driving. However, the selection of the formation algorithm is important, as it significantly influences the platoon assignment. Our simulations show that the _centralized solver_ approach performs best in terms of individual platooning benefits but is conservative in terms of deviating from individual objectives. In contrast, both presented greedy heuristics are less conservative and lead to a larger deviation from individual objectives. We also see that the willingness to compromise can pay off, as more vehicles are able to benefit from platooning. While the _distributed greedy_ approach has some disadvantages due to the limited local knowledge, it performs as well as the _centralized solver_ approach in most metrics. Both outperform the _centralized greedy_ approach, which suffers from synchronization and greedy selection effects. Since the _centralized solver_ approach assumes global knowledge and requires a complex MIP solver to compute vehicle-to-platoon assignments, we consider the _distributed greedy_ approach to have the best performance among all presented approaches. In future work, we want to investigate how a decentralized approach can increase vehicle knowledge while reducing synchronization effects at the same time. Also, we plan to focus more on the heterogeneity of vehicles by considering more of their properties, such as their destination and capabilities, within the similarity function.

## VI Acknowledgements

The authors would like to thank Max Schettler, Dominik S. Buse, and Agon Memedi for their support and feedback during the creation of this work.
2304.05225
Is $f_2(1950)$ the tensor glueball?
Glueballs remain an experimentally undiscovered expectation of QCD. Lattice QCD (as well as other theoretical approaches) predicts a spectrum of glueballs, with the tensor ($J^{PC}=2^{++}$) glueball being the second lightest, behind the scalar glueball. Here, using a chiral hadronic model, we compute decay ratios of the tensor glueball into various meson decay channels. We find the tensor glueball to primarily decay into two vector mesons, dominated by $\rho \rho$ and $K^*K^*$ channels. These results are compared to experimental data on decay rates of isoscalar tensor mesons. Based on this comparison, we make statements on the eligibility of these mesons as potential tensor glueball candidates: the resonance $f_2(1950)$ turns out to be, at present, the best match as being predominantly a tensor glueball.
Arthur Vereijken, Shahriyar Jafarzade, Milena Piotrowska, Francesco Giacosa
2023-04-11T13:51:56Z
http://arxiv.org/abs/2304.05225v3
# Is \(f_{2}(1950)\) the tensor glueball?

###### Abstract

Glueballs remain an experimentally undiscovered expectation of QCD. Lattice QCD (as well as other theoretical approaches) predicts a spectrum of glueballs, with the tensor (\(J^{PC}=2^{++}\)) glueball being the second lightest, behind the scalar glueball. Here, using a chiral hadronic model, we compute decay ratios of the tensor glueball into various meson decay channels. We find the tensor glueball to primarily decay into two vector mesons, dominated by \(\rho\rho\) and \(K^{*}K^{*}\) channels. These results are compared to experimental data on decay rates of isoscalar tensor mesons. Based on this comparison, we make statements on the eligibility of these mesons as potential tensor glueball candidates: the resonance \(f_{2}(1950)\) turns out to be, at present, the best match as being predominantly a tensor glueball.

## I Introduction

Glueballs are mesons made up by gluons only. They are one of the earliest predictions of Quantum Chromodynamics (QCD) that follow from the non-abelian nature of the \(SU(3)\) gauge symmetry and confinement [1]. The existence of glueballs has since been confirmed by various theoretical approaches [2; 3; 4; 5], most notably Lattice QCD (LQCD), according to which a whole tower of such states has been computed [6; 7; 8; 9; 10; 11; 12] in the Yang-Mills (YM) sector of QCD (that is, QCD without dynamical quarks; for an unquenched study see Ref. [13]). Similar outcomes were found in recent functional approaches, e.g., in [14; 15]. The lightest gluonic state is a scalar (\(J^{PC}=0^{++}\)), the second-lightest a tensor (\(J^{PC}=2^{++}\); for the tensor/scalar mass ratio see [16]), and the third-lightest a pseudoscalar (\(J^{PC}=0^{-+}\)). Quite interestingly, as shown in Ref. [17], the glueball spectrum of the recent LQCD compilation of Ref. [8] generates a glueball resonance gas which is well consistent with the pressure of pure YM below \(T_{c}\) as also evaluated in LQCD [18], thus being a further confirmation of the existence and accuracy of the LQCD masses. Yet, although glueballs are such a long-standing prediction of QCD, their experimental status is still inconclusive. Admittedly, some notable candidates do exist, especially for the three lightest ones, see e.g. Ref. [4]. Nevertheless, the problem of a definitive identification of glueballs in experiments is made difficult by the mixing with nearby conventional quark-antiquark mesons and by their poorly known decay strength(s). In this work, we concentrate on the tensor glueball, which is interesting for various reasons. First, as already mentioned, it is the second-lightest gluonium. Second, the experimental observation of several isoscalar-tensor resonances such as \(f_{2}(1430)\), \(f_{2}(1565)\), \(f_{2}(1640)\), \(f_{2}(1810)\), \(f_{2}(1910)\), \(f_{2}(1950)\), \(f_{2}(2010)\), \(f_{2}(2150)\), \(f_{J}(2220)\), etc., implies that one of them (especially close to 2 GeV, see below) could be the tensor glueball. The attempt to interpret one of those \(f_{2}\) states as a tensor glueball has a long history, see e.g. [1; 19; 20; 21]. In particular, the \(f_{J}(2220)\) was historically considered as a good candidate for the tensor glueball [22], but, as we shall see, this is no longer the case. In LQCD simulations various physical masses for the tensor glueball are predicted, such as: 2150(30)(100) MeV [9], 2324(42)(32) MeV [10], 2376(32) MeV [8], 2390(30)(120) MeV [6], 2400(25)(120) MeV [7].
Moreover, QCD sum rules predict the tensor glueball mass in the range of 2000-2300 MeV [23; 24]. A recent functional-method result for the tensor glueball mass is around 2610 MeV [14]. Holographic methods are also used to study glueballs, for instance in [25; 26; 27; 28; 29; 30; 31; 32; 33], with tensor glueball masses ranging from 1900 to 4300 MeV depending on the model [30; 32]. Thus, besides differences, the results converge on the existence of a tensor glueball in the region near 2000 MeV. In [27; 28] the decays of the tensor glueball are computed in the so-called Witten-Sakai-Sugimoto model and it is found that this state is broad if its mass is above the \(\rho\rho\) threshold. The decays of the tensor glueball have also been studied in other hadronic approaches [34; 35], where different decay ratios were presented. Yet, an explicit inclusion of chiral symmetry was not considered there. The tensor glueball is also speculated to be related to the occurrence of OZI-forbidden processes such as \(\pi^{-}p\to\phi\phi n\) [36]. A novel data analysis of \(J/\psi\) decays is looking promising for the identification of the scalar and tensor glueballs [1; 37; 38; 39], with a tensor glueball mass of around 2210 MeV. Furthermore, measurable production of tensor glueballs in upcoming experiments is predicted in [40; 41]. In view of this revival of both theoretical and experimental interest in the enigmatic tensor glueball, a quite natural question is what a chiral hadronic approach can tell about this state, especially in connection to certain decay ratios. The present paper deals exactly with this question: we study the tensor glueball via a suitable extension of the Linear Sigma Model (LSM), called the extended Linear Sigma Model (eLSM), e.g. [42; 43], in which chiral symmetry is linearly realized and undergoes both explicit and spontaneous symmetry breaking (SSB) and where, besides (pseudo)scalar mesons, also (axial-)vector states and the dilaton are included from the very beginning. Previous applications of the eLSM to the scalar [43; 44], pseudoscalar [45], and vector [46] glueballs were performed. It is then natural to apply the formalism to the tensor gluonium. Quite recently, we have employed the eLSM to study the tensor and axial-tensor mesons in Ref. [47], which sets the formal framework to model the tensor glueball, as it has the same quantum numbers \(J^{PC}=2^{++}\) as the tensor mesons (details of the fields and Lagrangians in Sec. II). Within this framework, we evaluate (Sec. III) various decay rates, most importantly the \(\rho\rho/\pi\pi\) decay ratio, which turns out to be a prediction of the chiral approach (and its SSB). We then compare the outcomes to the isoscalar-tensor states in the range close to 2000 MeV and find that the resonance \(f_{2}(1950)\) is, according to present data, our best candidate. We also speculate (Sec. IV) about the partial decay widths of the glueball and the overall assignment of excited tensor states. Finally, conclusions are outlined in Sec. V.

## II Chiral model and nonets

The eLSM is an effective model based on chiral symmetry in the linear form, together with explicit and spontaneous breaking, aimed to reproduce - at the hadronic level - basic QCD features. In this section we shall describe the fields entering the Lagrangian as well as the interaction terms. While the eLSM can contain many different fields and interactions, in this manuscript we shall only highlight those relevant for the decays of a tensor glueball.
For a more complete review of the eLSM we refer to [42; 43; 48; 49] and references therein (applications at nonzero temperature and density can be found in, e.g., Refs. [50; 51; 52]).

### Meson nonets

The mesonic \(\bar{q}q\) fields are contained in nonets. The list of decay products of the tensor glueball is, in matrix form, the following:

* Vector mesons \(\{\rho(770),\,K^{*}(892),\,\omega(782),\,\phi(1020)\}\) with the quantum numbers \(J^{PC}=1^{--}\) (\({}^{3}S_{1}\)) can be collected into the following nonet \[V^{\mu}=\frac{1}{\sqrt{2}}\begin{pmatrix}\frac{\omega_{N}^{\mu}+\rho^{0\mu}}{\sqrt{2}}&\rho^{+\mu}&K^{*+\mu}\\ \rho^{-\mu}&\frac{\omega_{N}^{\mu}-\rho^{0\mu}}{\sqrt{2}}&K^{*0\mu}\\ K^{*-\mu}&\bar{K}^{*0\mu}&\omega_{S}^{\mu}\end{pmatrix}\,,\] (1) where \(\omega_{N}\) and \(\omega_{S}\) are purely non-strange and strange, respectively. The physical fields arise upon mixing \[\left(\begin{array}{c}\omega(782)\\ \phi(1020)\end{array}\right)=\left(\begin{array}{cc}\cos\beta_{V}&\sin\beta_{V}\\ -\sin\beta_{V}&\cos\beta_{V}\end{array}\right)\left(\begin{array}{c}\omega_{N}\\ \omega_{S}\end{array}\right)\,\] (2) where the isoscalar-vector mixing angle \(\beta_{V}=-3.9^{\circ}\) [53] is small. The physical states \(\phi\) and \(\omega\) are predominantly strange and non-strange components, respectively. This is a reflection of the "homochiral" nature of these states [54].
* The chiral partners of the vector mesons [55], having the quantum numbers \(J^{PC}=1^{++}\) (\({}^{3}P_{1}\)), form the following nonet \[A_{1}^{\mu}=\frac{1}{\sqrt{2}}\begin{pmatrix}\frac{f_{1,N}^{\mu}+a_{1}^{0\mu}}{\sqrt{2}}&a_{1}^{+\mu}&K_{1A}^{+\mu}\\ a_{1}^{-\mu}&\frac{f_{1,N}^{\mu}-a_{1}^{0\mu}}{\sqrt{2}}&K_{1A}^{0\mu}\\ K_{1A}^{-\mu}&\bar{K}_{1A}^{0\mu}&f_{1,S}^{\mu}\end{pmatrix}\,.\] (3) The isoscalar sector reads \[\left(\begin{array}{c}f_{1}(1285)\\ f_{1}(1420)\end{array}\right)=\left(\begin{array}{cc}\cos\beta_{A_{1}}&\sin\beta_{A_{1}}\\ -\sin\beta_{A_{1}}&\cos\beta_{A_{1}}\end{array}\right)\left(\begin{array}{c}f_{1,N}^{\mu}\\ f_{1,S}^{\mu}\end{array}\right)\,\] (4) where the mixing angle \(\beta_{A_{1}}\) is expected to be small because of the homochiral nature of the multiplet [54], see also Ref. [56]. In the computations we shall set this angle to zero for simplicity.
* Pseudoscalar mesons with \(J^{PC}=0^{-+}\) (\({}^{1}S_{0}\)), consisting of three pions, four kaons, the \(\eta(547)\), and the \(\eta^{\prime}(958)\). They are collected into the following nonet with light-quark elements \(P_{ij}\equiv 2^{-1/2}\bar{q}_{j}i\gamma^{5}q_{i}\): \[P=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}\frac{\eta_{N}+\pi^{0}}{\sqrt{2}}&\pi^{+}&K^{+}\\ \pi^{-}&\frac{\eta_{N}-\pi^{0}}{\sqrt{2}}&K^{0}\\ K^{-}&\bar{K}^{0}&\eta_{S}\end{array}\right)\,,\] (5) where \(\eta_{N}\equiv\sqrt{1/2}\,(\bar{u}u+\bar{d}d)\) stands for the purely non-strange state and \(\eta_{S}\equiv\bar{s}s\) for the purely strange one. The physical isoscalar fields appear upon mixing [57] as: \[\left(\begin{array}{c}\eta(547)\\ \eta^{\prime}\equiv\eta(958)\end{array}\right)=\left(\begin{array}{cc}\cos\beta_{P}&\sin\beta_{P}\\ -\sin\beta_{P}&\cos\beta_{P}\end{array}\right)\left(\begin{array}{c}\eta_{N}\\ \eta_{S}\end{array}\right)\,\] (6) with mixing angle \(\beta_{P}=-43.4^{\circ}\). This sizable mixing angle is a consequence of the \(U_{A}(1)\) axial anomaly [58; 59]. Namely, pseudoscalar mesons belong to a "heterochiral" multiplet [54].
* The axial-vector matrix is shifted due to spontaneous chiral symmetry breaking.
This breaking induces a mixing of pseudoscalar and axial-vector fields which allows the decay into pseudoscalar mesons. The shift is as follows: \[A_{1\,\mu}\to A_{1\,\mu}+\partial_{\mu}{\cal P},\ {\cal P}:=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}\frac{Z_{\pi}w_{\pi}(\eta_{N}+\pi^{0})}{\sqrt{2}}&Z_{\pi}w_{\pi}\pi^{+}&Z_{K}w_{K}K^{+}\\ Z_{\pi}w_{\pi}\pi^{-}&\frac{Z_{\pi}w_{\pi}(\eta_{N}-\pi^{0})}{\sqrt{2}}&Z_{K}w_{K}K^{0}\\ Z_{K}w_{K}K^{-}&Z_{K}w_{K}\bar{K}^{0}&Z_{\eta_{S}}w_{\eta_{S}}\eta_{S}\end{array}\right)\,\] (7) where the numerical values \(Z_{\pi}=Z_{\eta_{N}}=1.709\), \(Z_{K}=1.604\), \(Z_{\eta_{S}}=1.539\) and \(w_{\pi}=w_{\eta_{N}}=0.683\,\mathrm{GeV}^{-1}\), \(w_{K}=0.611\,\mathrm{GeV}^{-1}\), \(w_{\eta_{S}}=0.554\,\mathrm{GeV}^{-1}\) are taken from [42]. Note, this shift will be particularly important in this work since it allows us to link different decay channels (such as \(\rho\rho/\pi\pi\)) that otherwise could not be connected by flavor symmetry alone.
* The \(J^{PC}=2^{++}\) (\({}^{3}P_{2}\)) tensor states with elements \(T_{ij}^{\mu\nu}=2^{-1/2}\bar{q}_{j}(i\gamma^{\mu}\partial^{\nu}+\cdots)q_{i}\): \[T^{\mu\nu}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}\frac{f_{2,N}^{\mu\nu}+a_{2}^{0\mu\nu}}{\sqrt{2}}&a_{2}^{+\mu\nu}&K_{2}^{*+\mu\nu}\\ a_{2}^{-\mu\nu}&\frac{f_{2,N}^{\mu\nu}-a_{2}^{0\mu\nu}}{\sqrt{2}}&K_{2}^{*0\mu\nu}\\ K_{2}^{*-\mu\nu}&\bar{K}_{2}^{*0\mu\nu}&f_{2,S}^{\mu\nu}\end{array}\right)\,.\] (8) The physical isoscalar-tensor states are \[\left(\begin{array}{c}f_{2}(1270)\\ f_{2}^{\prime}(1525)\end{array}\right)=\left(\begin{array}{cc}\cos\beta_{T}&\sin\beta_{T}\\ -\sin\beta_{T}&\cos\beta_{T}\end{array}\right)\left(\begin{array}{c}f_{2,N}\\ f_{2,S}\end{array}\right)\,\] (9) where \(\beta_{T}\simeq 5.7^{\circ}\) is the small mixing angle reported in the PDG. Tensor mesons belong to a homochiral multiplet, just as (axial-)vector states. We have no further experimental information about the chiral partner of this nonet, \(A_{2}^{\mu\nu}\) (see e.g. [47] for recent phenomenological analyses of this nonet).

### Interaction terms

In this subsection, we list the chiral interactions involving the tensor glueball field \(G_{2,\mu\nu}\). A possible way to understand how the required terms emerge is to consider the interaction terms that describe the decays of (axial-)tensor mesons in the recent Ref. [47]. What one needs to apply is the following replacement in the Lagrangians with the ordinary tensor meson nonet: \[T_{\mu\nu}\longrightarrow\frac{1}{\sqrt{6}}G_{2,\mu\nu}\cdot{\bf 1}_{3}\, \tag{10}\] thus effectively realizing flavor blindness. Of course, the coupling constants must be renamed and are, at first, not known. Note, we follow the same convention, which implies the normalization w.r.t. the flavor singlet mode. Yet, the chirally invariant interaction terms of the tensor glueball with other mesons that we are going to introduce stand on their own and can be formally introduced without resorting to (axial-)tensor mesons.
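As a side illustration, the isoscalar mixing pattern introduced above (Eqs. (2), (4), (6), and (9)) is in each sector the same two-dimensional rotation; a small numerical sketch (our own code, not part of the model):

```python
import math

def mix(beta_deg: float, nonstrange: float, strange: float):
    """Rotate (nonstrange, strange) amplitudes into the physical pair,
    as in Eqs. (2), (4), (6) and (9)."""
    b = math.radians(beta_deg)
    light = math.cos(b) * nonstrange + math.sin(b) * strange
    heavy = -math.sin(b) * nonstrange + math.cos(b) * strange
    return light, heavy

# beta_P = -43.4 deg gives eta(547) a sizable strange admixture,
# while beta_V = -3.9 deg leaves omega(782)/phi(1020) almost unmixed.
```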
The first term in the eLSM leading to tensor glueball decays involves solely left- and right-handed chiral fields: \[{\cal L}_{\lambda}=\frac{\lambda}{\sqrt{6}}G_{2,\mu\nu}\Big{(}{\rm Tr}\Big{[}\{L^{\mu},L^{\nu}\}\Big{]}+{\rm Tr}\Big{[}\{R^{\mu},R^{\nu}\}\Big{]}\Big{)}\, \tag{11}\] where the chiral fields consist of the vector and axial-vector mesons, which we define in terms of the nonets \[L^{\mu}:=V^{\mu}+A_{1}^{\mu}\ \,\ R^{\mu}:=V^{\mu}-A_{1}^{\mu}\, \tag{12}\] which transform as \(L^{\mu}\to U_{L}L^{\mu}U_{L}^{\dagger}\), \(R^{\mu}\to U_{R}R^{\mu}U_{R}^{\dagger}\) under the chiral transformations of \(U_{L}(3)\times U_{R}(3)\). The Lagrangian (11) leads to three kinematically allowed decay channels with the following expressions for the decay rates, with the three-momentum \[|\vec{k}_{a,b}|:=\frac{1}{2\,m_{G_{2}}}\sqrt{(m_{G_{2}}^{2}-m_{a}^{2}-m_{b}^{2})^{2}-4m_{a}^{2}m_{b}^{2}}\ . \tag{13}\]

* Decays of the tensor glueball into two vector mesons have the following decay rate formula \[\Gamma_{G_{2}\to V^{(1)}V^{(2)}}(m_{G_{2}},m_{v^{(1)}},m_{v^{(2)}})=\frac{\kappa_{gvv\,,i}\lambda^{2}|\vec{k}_{v^{(1)},v^{(2)}}|}{120\,\pi\,m_{G_{2}}^{2}}\Big{(}15+\frac{5|\vec{k}_{v^{(1)},v^{(2)}}|^{2}}{m_{v^{(1)}}^{2}}+\frac{5|\vec{k}_{v^{(1)},v^{(2)}}|^{2}}{m_{v^{(2)}}^{2}}+\frac{2|\vec{k}_{v^{(1)},v^{(2)}}|^{4}}{m_{v^{(1)}}^{2}m_{v^{(2)}}^{2}}\Big{)}\,\Theta(m_{G_{2}}-m_{v^{(1)}}-m_{v^{(2)}})\,;\] (14)
* while into two pseudoscalar mesons (SSB-driven shift of Eq. (7) applied twice) \[\Gamma_{G_{2}\longrightarrow P^{(1)}P^{(2)}}^{tl}(m_{G_{2}},m_{p^{(1)}},m_{p^{(2)}})=\frac{\kappa_{gpp\,,i}\,\lambda^{2}\,|\vec{k}_{p^{(1)},p^{(2)}}|^{5}}{60\,\pi\,m_{G_{2}}^{2}}\Theta(m_{G_{2}}-m_{p^{(1)}}-m_{p^{(2)}})\,;\] (15)
* and into axial-vector and pseudoscalar mesons (SSB-driven shift of Eq. (7) applied once) \[\Gamma_{G_{2}\longrightarrow A_{1}P}^{tl}(m_{G_{2}},m_{a_{1}},m_{p})=\frac{\kappa_{gap\,,i}\,\lambda^{2}\,|\vec{k}_{a_{1},p}|^{3}}{120\,\pi\,m_{G_{2}}^{2}}\left(5+\frac{2\,|\vec{k}_{a_{1},p}|^{2}}{m_{a_{1}}^{2}}\right)\,\Theta(m_{G_{2}}-m_{a_{1}}-m_{p})\,.\] (16)

The coupling constant \(\lambda\) is not known and thus we are limited to computing branching ratios rather than decay rates. The (sometimes dimensionful) coefficients \(\kappa_{g\circ\,,i}\) are shown in Table 1. For the two-pseudoscalar channel, \(\kappa\) has mass dimension \(-4\); in the vector-vector channel it is dimensionless; in the axial-vector plus pseudoscalar channel it has mass dimension \(-2\); for the tensor plus pseudoscalar channel it has mass dimension \(2\).

The second chirally invariant term we will use for tensor glueball decays is of the form \[{\cal L}_{\alpha}=\frac{\alpha}{\sqrt{6}}G_{2,\mu\nu}\Big{(}{\rm Tr}\Big{[}\Phi{\bf R}^{\mu\nu}\Phi^{\dagger}\Big{]}+{\rm Tr}\Big{[}\Phi^{\dagger}{\bf L}^{\mu\nu}\Phi\Big{]}\Big{)}\, \tag{17}\] where \(\Phi=S+iP\) is the linear combination of scalar\({}^{1}\) and pseudoscalar nonets, and \({\bf L}^{\mu\nu}=T^{\mu\nu}+A_{2}^{\mu\nu}\), \({\bf R}^{\mu\nu}=T^{\mu\nu}-A_{2}^{\mu\nu}\) combine the tensor and axial-tensor nonets. Their chiral transformation rules are \(\Phi\to U_{L}\Phi U_{R}^{\dagger}\), \({\bf R}^{\mu\nu}\to U_{R}{\bf R}^{\mu\nu}U_{R}^{\dagger}\), and \({\bf L}^{\mu\nu}\to U_{L}{\bf L}^{\mu\nu}U_{L}^{\dagger}\).

Footnote 1: We do not study the decay to scalar mesons and so do not discuss the scalar nonet in this work.
The linear combination of the scalar and pseudoscalar nonets contains the chiral condensate (\(\Phi\to\Phi+\Phi_{0}\)), and so this term leads to the decay of the tensor glueball into a tensor meson and a pseudoscalar meson. The decay rate for this process is given by \[\Gamma_{G_{2}\longrightarrow TP}^{tl}(m_{G_{2}},m_{t},m_{p})=\frac{\alpha^{2}|\vec{k}_{t,p}|}{2\,m_{G_{2}}^{2}\pi}\Big{(}1+\frac{4|\vec{k}_{t,p}|^{4}}{45m_{t}^{4}}+\frac{2|\vec{k}_{t,p}|^{2}}{3m_{t}^{2}}\Big{)}\,\kappa_{g_{2}tp\,,i}\,\Theta(m_{G_{2}}-m_{t}-m_{p})\,. \tag{18}\] The coupling \(\alpha\) is not fixed, so branching ratios are only calculated for decays in this channel.
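For concreteness, the kinematic formulas above are straightforward to evaluate numerically. The following minimal sketch (our own code, not part of the original paper; masses in GeV, with \(Z_{\pi}\) and \(w_{\pi}\) as quoted in Sec. II) implements Eq. (13) together with the \(VV\) and \(PP\) rate formulas of Eqs. (14) and (15), up to the overall unknown \(\lambda^{2}\), which cancels in ratios such as \(\rho\rho/\pi\pi\); combined with the \(\kappa\) coefficients of Table 1, this is how the ratios of Table 2 are obtained, and the resulting numbers should be taken as indicative only:

```python
import math

def momentum(mG: float, ma: float, mb: float) -> float:
    """Three-momentum of the decay products, Eq. (13); masses in GeV."""
    s = (mG**2 - ma**2 - mb**2) ** 2 - 4.0 * ma**2 * mb**2
    return math.sqrt(s) / (2.0 * mG) if s > 0 else 0.0

def gamma_vv(mG, m1, m2, kappa):
    """Eq. (14): G2 -> two vector mesons, in units of lambda**2."""
    k = momentum(mG, m1, m2)
    bracket = (15.0 + 5.0 * k**2 / m1**2 + 5.0 * k**2 / m2**2
               + 2.0 * k**4 / (m1**2 * m2**2))
    return kappa * k * bracket / (120.0 * math.pi * mG**2)

def gamma_pp(mG, m1, m2, kappa):
    """Eq. (15): G2 -> two pseudoscalar mesons, in units of lambda**2."""
    k = momentum(mG, m1, m2)
    return kappa * k**5 / (60.0 * math.pi * mG**2)

# Example for m_G2 = 2210 MeV: rho-rho (kappa = 1) versus pi-pi, whose
# kappa = (Z_pi**2 * w_pi**2)**2 follows from Table 1.
Z_pi, w_pi = 1.709, 0.683
ratio = (gamma_vv(2.210, 0.775, 0.775, 1.0)
         / gamma_pp(2.210, 0.138, 0.138, (Z_pi**2 * w_pi**2) ** 2))
```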
The third chiral Lagrangian describes the decay of a tensor glueball into a vector and a pseudoscalar meson: \[\mathcal{L}_{c^{\mathrm{em}}}=\frac{c_{1}}{\sqrt{6}}\,\partial^{\mu}G_{2}^{\nu\alpha}\mathrm{Tr}\Big{[}\tilde{L}_{\mu\nu}\,\partial_{\alpha}\Phi\,\Phi^{\dagger}-\Phi^{\dagger}\,\partial_{\alpha}\Phi\tilde{R}_{\mu\nu}-\tilde{R}_{\mu\nu}\partial_{\alpha}\Phi^{\dagger}\Phi+\Phi\partial_{\alpha}\Phi^{\dagger}\tilde{L}_{\mu\nu}\Big{]}+\frac{c_{2}}{\sqrt{6}}\,\partial^{\mu}G_{2}^{\nu\alpha}\mathrm{Tr}\Big{[}\partial_{\alpha}\Phi\tilde{R}_{\mu\nu}\,\Phi^{\dagger}-\Phi^{\dagger}\,\tilde{L}_{\mu\nu}\partial_{\alpha}\Phi-\partial_{\alpha}\Phi^{\dagger}\tilde{L}_{\mu\nu}\Phi+\Phi\tilde{R}_{\mu\nu}\partial_{\alpha}\Phi^{\dagger}\Big{]}\;, \tag{19}\] where \(\tilde{L}_{\mu\nu}:=\frac{\epsilon_{\mu\nu\rho\sigma}}{2}(\partial^{\rho}L^{\sigma}-\partial^{\sigma}L^{\rho})\) and similarly for \(\tilde{R}_{\mu\nu}\). Defining \(c:=c_{1}+c_{2}\), the tree-level decay rate formula reads \[\Gamma_{G_{2}\longrightarrow VP}^{tl}(m_{G_{2}},m_{v},m_{p})=\frac{c^{\,2}\,|\vec{k}_{v,p}|^{5}}{40\,\pi}\,\kappa_{gvp\,,i}\,\Theta(m_{G_{2}}-m_{v}-m_{p})\,. \tag{20}\] The decay of \(G_{2}\) into a vector and a pseudoscalar is suppressed; namely, the only nonzero \(\kappa\) is for the \(KK^{*}\) decay products, which is suppressed by a factor of \((\phi_{N}-\sqrt{2}\phi_{S})\) and vanishes in the chiral limit.

## III Results for the decay ratios

A recent investigation devoted to the search for the tensor glueball in the BESIII data obtained an enhancement at a mass around 2210 MeV [37]. Thus, for illustrative purposes, we first present our decay ratios for this mass value.

\begin{table} \begin{tabular}{|c|c|} \hline Decay process & \(\kappa_{g\circ\,,i}\) \\ \hline \hline \(G_{2}\longrightarrow\rho(770)\,\rho(770)\) & \(1\) \\ \hline \(G_{2}\longrightarrow\overline{K}^{*}(892)\,K^{*}(892)\) & \(\frac{4}{3}\) \\ \hline \(G_{2}\longrightarrow\omega(782)\,\omega(782)\) & \(\frac{1}{3}\) \\ \hline \(G_{2}\longrightarrow\omega(782)\,\phi(1020)\) & \(0\) \\ \hline \(G_{2}\longrightarrow\phi(1020)\,\phi(1020)\) & \(\frac{1}{3}\) \\ \hline \hline \(G_{2}\longrightarrow\pi\,\pi\) & \((Z_{\pi}^{2}w_{\pi}^{2})^{2}\) \\ \hline \(G_{2}\longrightarrow\bar{K}\,K\) & \(\frac{4}{3}\times(Z_{K}^{2}w_{K}^{2})^{2}\) \\ \hline \(G_{2}\longrightarrow\eta\,\eta\) & \(\frac{1}{3}\times(Z_{\eta_{N}}^{2}w_{\eta_{N}}^{2}\cos^{2}\beta_{P}+Z_{\eta_{S}}^{2}w_{\eta_{S}}^{2}\sin^{2}\beta_{P})^{2}\) \\ \hline \(G_{2}\longrightarrow\eta\,\eta^{\prime}(958)\) & \(\frac{1}{3}\times((Z_{\eta_{N}}^{2}w_{\eta_{N}}^{2}-Z_{\eta_{S}}^{2}w_{\eta_{S}}^{2})\cos\beta_{P}\sin\beta_{P})^{2}\) \\ \hline \(G_{2}\longrightarrow\eta^{\prime}(958)\,\eta^{\prime}(958)\) & \(\frac{1}{18}\times(Z_{\eta_{S}}^{2}w_{\eta_{S}}^{2}\cos^{2}\beta_{P}+Z_{\eta_{N}}^{2}w_{\eta_{N}}^{2}\sin^{2}\beta_{P})^{2}\) \\ \hline \hline \(G_{2}\longrightarrow a_{1}(1260)\,\pi\) & \(\frac{1}{2}\times(Z_{\pi}w_{\pi})^{2}\) \\ \hline \(G_{2}\longrightarrow f_{1}(1285)\,\eta\) & \(\frac{1}{6}(Z_{\eta_{S}}w_{\eta_{S}}\sin\beta_{A_{1}}\sin\beta_{P}+Z_{\eta_{N}}w_{\eta_{N}}\cos\beta_{A_{1}}\cos\beta_{P})^{2}\) \\ \hline \(G_{2}\longrightarrow K_{1,A}\,K\) & \(\frac{2}{3}\times(Z_{K}w_{K})^{2}\) \\ \hline \(G_{2}\longrightarrow f_{1}(1420)\,\eta^{\prime}(958)\) & \(\frac{1}{6}(Z_{\eta_{N}}w_{\eta_{N}}\sin\beta_{A_{1}}\sin\beta_{P}+Z_{\eta_{S}}w_{\eta_{S}}\cos\beta_{A_{1}}\cos\beta_{P})^{2}\) \\ \hline \(G_{2}\longrightarrow f_{1}(1285)\,\eta^{\prime}(958)\) & \(\frac{1}{6}(Z_{\eta_{S}}w_{\eta_{S}}\sin\beta_{A_{1}}\cos\beta_{P}-Z_{\eta_{N}}w_{\eta_{N}}\cos\beta_{A_{1}}\sin\beta_{P})^{2}\) \\ \hline \hline \(G_{2}\longrightarrow f_{2}(1270)\,\eta\) & \(\frac{1}{24}\big{(}\phi_{N}\cos\beta_{P}\cos\beta_{T}+\phi_{S}\sin\beta_{P}\sin\beta_{T}\big{)}^{2}\) \\ \hline \(G_{2}\longrightarrow a_{2}(1320)\,\pi\) & \(\frac{\phi_{N}^{2}}{8}\) \\ \hline \hline \(G_{2}\longrightarrow K^{*}(892)\,K+\text{c.c.}\) & \(\frac{1}{48}\left(\sqrt{2}\phi_{N}-2\phi_{S}\right)^{2}\) \\ \hline \end{tabular} \end{table} Table 1: Coefficients for the decay channel \(G_{2}\longrightarrow A+B\). \(\phi_{N}\approx 0.158\,\text{GeV}\) and \(\phi_{S}\approx 0.138\,\text{GeV}\) are due to the chiral condensate. The \(\eta\eta^{\prime}\) coefficient is small because it only occurs due to flavor symmetry breaking.

Using the \(\kappa_{g\circ\,,i}\) in Table 1 and Eqs. (14), (15), and (16), we obtain the decay ratios shown in Table 2. The decays are primarily to two vector mesons, with \(\rho\rho\) and \(K^{*}K^{*}\) being the two largest. The two \(\rho\) mesons would further decay into four pions before reaching the detector in an experiment. Likewise, the \(K^{*}K^{*}\) pair decays to two \(K\pi\) pairs. It should be stressed that we expect a quite large indeterminacy of our results, in particular in connection to the \(\rho\rho/\pi\pi\) ratio, which is at least 50 percent. In fact, as discussed in Ref. [47], the \(\rho\rho\) mode involves (because of the SSB shift) the factor \(Z_{\pi}^{4}\), thus even a quite small indeterminacy of \(Z_{\pi}\) may generate large changes. The error on \(Z_{\pi}\) is of the order of 20%, which translates into a factor of two for \(\rho\rho\). Yet, the main point of our study is that the ratio \(\rho\rho/\pi\pi\) is expected to be large, as shown in Figure 1. A similar dominance of \(\rho\rho\) and \(K^{*}K^{*}\) decay products was found in [27] in a holographic model, but to a somewhat lesser extent (with a ratio of around 10, depending on the parameters).

Potential glueball candidates and some relevant data are given in Table 3 and in Fig. 2. While the mass value in Table 2 is not in line with every glueball candidate, it gives an overall picture of the decay ratios. One should also keep in mind that lattice determinations still have sizable errors (of the order of 100-200 MeV) and do not include the role of meson-meson loops, which can be quite relevant if the tensor glueball turns out to be broad (this being our favored scenario). Below we discuss each tensor-glueball candidate separately. The experimental data and theoretical predictions used for this discussion are shown in Table 4.

* The meson \(f_{2}(1910)\) has a width of \(167\pm 21\) MeV and it decays into (among others) \(\eta\eta\) and \(K\overline{K}\). The decay ratio \(\rho(770)\rho(770)/\omega(782)\omega(782)\) of about \(2.6\pm 0.4\) is not far from the theoretical result of 3.1, and the data on the decay ratio of \(f_{2}(1270)\eta/a_{2}(1320)\pi\) also agrees with the theory. Yet, this state cannot be mainly gluonic since the experimental ratio \(\eta\eta/\eta\eta^{\prime}(958)\) is less than 0.05, while the theoretical result is much larger (about 8), and the ratio of \(\omega\omega\) to \(\eta\eta^{\prime}\) is very large in theory, but only \(2.6\pm 0.6\) in data. Summarizing, the \(\eta\eta^{\prime}(958)\) mode is a clear drawback for \(f_{2}(1910)\) being predominantly gluonic.
* The resonance \(f_{2}(1950)\): as reported in Table 4, the measured ratios \(\eta\eta/\pi\pi\) and \(K\overline{K}/\pi\pi\) are compatible with the theoretical predictions. Moreover, the experimental lower bound on the \(4\pi/\eta\eta\) ratio can be confronted with the theoretical \(\rho\rho/\eta\eta\) one - where we assume \(\rho\rho\) decays into 4 pions - but both agree that this ratio is large. While the theory fits the data well, its large total decay width of 460 MeV would imply a broad tensor glueball candidate. Quite interestingly, the decay \(J/\psi\to\gamma K^{*}(892)\bar{K}^{*}(892)\) shows a relatively large branching ratio of \((7.0\pm 2.2)\times 10^{-4}\). A strong coupling to \(J/\psi\) is an expected feature of a tensor glueball.
* The resonance \(f_{2}(2010)\) has a total decay width of \(202\pm 60\) MeV. Yet, only \(K\overline{K}\) and \(\phi(1020)\phi(1020)\) decays have been seen. This fact suggests a large strange-antistrange content for this resonance, rather than a predominantly gluonic state, see also the discussions in Sec. IV.
* In view of the LQCD prediction for the tensor glueball mass around 2.2 GeV, one of the closest resonances is \(f_{2}(2150)\). Yet, the ratio \(K\overline{K}/\eta\eta\) is \(1.28\pm 0.23\), while the theoretical prediction is about 4. Similarly, the ratio of \(\pi\pi/\eta\eta\) is experimentally less than 0.33, while the theoretical estimate is about 10. The predicted ratio of 0.1 for \(f_{2}(1270)\eta/a_{2}(1320)\pi\) also does not fit the data of \(0.79\pm 0.11\).
* The meson \(f_{J}(2220)\) (with \(J=2\) or 4) was historically treated as a good candidate [21]. However, in light of the new PDG compilation, most of the decays that the theory predicts are not seen experimentally. Moreover, the only channel denoted as 'seen' is the \(\eta\eta^{\prime}(958)\) mode, which is expected to be extremely suppressed for a glueball state. The PDG offers the decay ratio \(\pi\pi/K\bar{K}\) of \(1.0\pm 0.5\) (even though in the 'decay modes' table these channels are marked as not seen), while the theoretical prediction is about 2.5. For these reasons, we conclude that \(f_{J}(2220)\) should not be considered as a good candidate any longer.
* The resonance \(f_{2}(2300)\), with a total width of \(149\pm 41\) MeV, decays only into \(K\overline{K}\) and \(\phi\phi\), thus suggesting that it is predominantly a strange-antistrange object.

Figure 2: Masses and widths of isoscalar-tensor resonances heavier than 1.9 GeV [53].

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Resonances & Masses (MeV) & Decay Widths (MeV) & Decay Channels \\ \hline \(f_{2}(1910)\) & \(1900\pm 9\) & \(167\pm 21\) & \(\pi\pi\), \(KK\), \(\eta\eta\), \(\omega\omega\), \(\eta\eta^{\prime}\), \(\eta^{\prime}\eta^{\prime}\), \(\rho\rho\), \(a_{2}(1320)\pi\), \(f_{2}(1270)\eta\) \\ \hline \(f_{2}(1950)\) & \(1936\pm 12\) & \(464\pm 24\) & \(\pi\pi\), \(KK\), \(\eta\eta\), \(K^{*}K^{*}\), \(4\pi\) \\ \hline \(f_{2}(2010)\) & \(2011^{+60}_{-80}\) & \(202\pm 60\) & \(KK\), \(\phi\phi\) \\ \hline \(f_{2}(2150)\) & \(2157\pm 12\) & \(152\pm 30\) & \(\pi\pi\), \(\eta\eta\), \(KK\), \(a_{2}(1320)\pi\), \(f_{2}(1270)\eta\) \\ \hline \(f_{J}(2220)\) & \(2231.1\pm 3.5\) & \(23^{+8}_{-7}\) & \(\eta\eta^{\prime}\) \\ \hline \(f_{2}(2300)\) & \(2297\pm 28\) & \(149\pm 41\) & \(KK\), \(\phi\phi\) \\ \hline \(f_{2}(2340)\) & \(2345^{+50}_{-40}\) & \(322^{+70}_{-60}\) & \(\eta\eta\), \(\phi\phi\) \\ \hline \end{tabular} \end{table} Table 3: Spin-2 resonances heavier than 1.9 GeV listed in the PDG [53].
Yet, both latter resonances should be investigated in more details in the future, especially for what concerns other possible decay modes of these resonances. We also refer to the discussion in the next Section for additional discussions. From the considerations above, it turns out that the resonances \(f_{2}(1950)\) seems to be the best fit, although this would imply a broad tensor glueball state. Namely, all other states seem disfavored for various reasons, such as specific branching or decay ratios with available data, (\(f_{2}(1910)\) and the \(f_{2}(2150)\)) or decays in strange states only (\(f_{2}(2010),f_{2}(2300)\), and \(f_{2}(2340)\)). It is of primary importance to monitor the experimental status of the states above in the future. In particular, the analysis for the states \(f_{J}(2220),f_{2}(2300)\), and \(f_{2}(2340)\) would benefit from more experimental data, with special attention to the latter broad one. ## IV Discussions In this section, we discuss two additional important points. The first one addresses the actual partial decay widths of the tensor glueball; while a rigorous treatment is not possible within our framework, a 'guess' is achieved by using large-\(N_{c}\) arguments and the decays of the conventional ground-state tensor mesons. The second point discusses the assignment of various tensor states as radially and orbitally excited conventional tensor mesons. In this framework, the tensor glueball should be a'supernumerary' state that does not fit into the quark-antiquark nonet picture. For the first point, let us consider the conventional mesons \(f_{2}\equiv f_{2}(1270)\simeq\sqrt{1/2}(\bar{u}u+\bar{d}d)\) and \(f^{\prime}_{2}\equiv f^{\prime}_{2}(1525)\simeq\bar{s}s\), whose decays into \(\pi\pi\) are well known: \(\Gamma_{f_{2}\to\pi\pi}=157.2\) MeV and \(\Gamma_{f^{\prime}_{2}\to\pi\pi}=0.71\) MeV (for our qualitative purposes, we neglect the anyhow small errors). The amplitude for the decay \(A_{f_{2}\to\pi\pi}\) requires the creation of a single \(\bar{q}q\) pair from the vacuum and scales as \(1/\sqrt{N_{c}}\), where \(N_{c}\) is the number of colors. On the other hand, the amplitude \(A_{f^{\prime}_{2}\to\pi\pi}\) implies that \(\bar{s}s\) first converts into two gluons (\(gg\)) that subsequently transforms into \(\sqrt{1/2}(\bar{u}u+\bar{d}d)\) (the very same mechanism is responsible for a small (about \(3^{\circ}\)) but nonzero mixing angle of the physical states in the strange-nonstrange basis [47]). Schematically: \[\bar{s}s\to gg\to\sqrt{1/2}(\bar{u}u+\bar{d}d). \tag{21}\] The amplitude \(A_{f^{\prime}_{2}\to\pi\pi}\) scales as \(1/N_{c}^{3/2}\) and is OZI [60; 61; 62] suppressed w.r.t. the previous one. 
In order to be more \begin{table} \begin{tabular}{|c|c|c|c|} \hline Resonances & Branching Ratios & PDG & Model Prediction \\ \hline \hline \(f_{2}(1910)\) & \(\rho(770)\rho(770)/\omega(782)\omega(782)\) & \(2.6\pm 0.4\) & \(3.1\) \\ \hline \(f_{2}(1910)\) & \(f_{2}(1270)\eta/a_{2}(1320)\pi\) & \(0.09\pm 0.05\) & \(0.07\) \\ \hline \(f_{2}(1910)\) & \(\eta\eta/\eta\eta^{\prime}(958)\) & \(<0.05\) & \(\sim 8\) \\ \hline \(f_{2}(1910)\) & \(\omega(782)\omega(782)/\eta\eta(958)\) & \(2.6\pm 0.6\) & \(\sim 200\) \\ \hline \hline \(f_{2}(1950)\) & \(\eta\eta/\pi\pi\) & \(0.14\pm 0.05\) & \(0.081\) \\ \hline \(f_{2}(1950)\) & \(K\overline{K}/\pi\pi\) & \(\sim 0.8\) & \(0.32\) \\ \hline \(f_{2}(1950)\) & \(4\pi/\eta\eta\) & \(>200\) & \(>700\) \\ \hline \hline \(f_{2}(2150)\) & \(f_{2}(1270)\eta/a_{2}(1320)\pi\) & \(0.79\pm 0.11\) & \(0.1\) \\ \hline \(f_{2}(2150)\) & \(K\overline{K}/\eta\eta\) & \(1.28\pm 0.23\) & \(\sim 4\) \\ \hline \(f_{2}(2150)\) & \(\pi\pi/\eta\eta\) & \(<0.33\) & \(\sim 10\) \\ \hline \hline \(f_{J}(2220)\) & \(\pi\pi/K\overline{K}\) & \(1.0\pm 0.5\) & \(\sim 2.5\) \\ \hline \end{tabular} \end{table} Table 4: Decay ratios for the decay channels with available data. specific, let us consider the transition Hamiltonian \(H_{int}=\lambda\left(\left|\bar{u}u\right\rangle\left\langle gg\right|+\left| \bar{d}d\right\rangle\left\langle gg\right|+\left|\bar{s}s\right\rangle\left\langle gg \right|+h.c.\right)\), where \(\lambda\) controls the mixing and therefore scales as \(1/\sqrt{N_{c}}\). Then: \(A_{f_{2}^{\prime}\to\pi\pi}\simeq\sqrt{2}\lambda^{2}A_{f_{2}\to\pi\pi}\), hence \(\Gamma_{f_{2}^{\prime}\to\pi\pi}\simeq 2\lambda^{4}\Gamma_{f_{2}\to\pi\pi}\), implying \(\lambda\simeq 0.22\). Next, let us consider the tensor glueball decay into \(\pi\pi\). Intuitively speaking, it is at an 'intermediate stage', since it starts with a \(gg\) pair. One has: \(A_{G_{2}\to\pi\pi}\simeq\sqrt{2}\lambda A_{f_{2}\to\pi\pi}\), then \(\Gamma_{G_{2}\to\pi\pi}\simeq 2\lambda^{2}\Gamma_{f_{2}\to\pi\pi}\simeq\sqrt{2} \sqrt{\Gamma_{f_{2}\to\pi\pi}\Gamma_{f_{2}^{\prime}\to\pi\pi}}\simeq 15\) MeV. (Note, for a similar idea for the estimate of the coupling of glueballs to mesons, see Refs. [63, 64].) \(\Gamma_{G_{2}\to\pi\pi}\) scales as \(1/N_{c}^{2}\) (as expected for glueballs), thus realizing the expectation \(\Gamma_{f_{2}^{\prime}\to\pi\pi}<\Gamma_{G_{2}\to\pi\pi}<\Gamma_{f_{2}\to\pi\pi}\). Of course, the estimate \(\Gamma_{G_{2}\to\pi\pi}\simeq 15\) MeV is only a very rough approximation for various reasons: it does not take into account phase space (that would increase the glueball width) or form factors (that would decrease it). It also avoids a microscopic evaluation (which is a difficult task). Yet, it gives an idea on how large the \(\pi\pi\) mode (and as a consequence other \(PP\) channels) could be, see Table 5. It is interesting to point out that our \(\pi\pi\) decay rate is comparable to the one obtained within so-called Witten-Sakai-Sugimoto model [27], in which \(\Gamma_{G_{2}\to\pi\pi}/m_{G(2000)}\simeq 0.014\), thus implying \(\Gamma_{G_{2}\to\pi\pi}\simeq 28\) MeV. As a consequence of such decay width into \(\pi\pi\), a large decay width into \(\rho\rho\to\pi\pi\pi\pi\) is expected due to the evaluated large \(\rho\rho/\pi\pi\) ratio. Next, the second point to discuss relates to the assignment of quark-antiquark tensor states. 
Next, the second point to discuss relates to the assignment of quark-antiquark tensor states. Namely, up to now, we have considered all isoscalar \(f_{2}\) states in the energy region around 2 GeV 'democratically' as putative tensor glueball candidates. Yet, it is clear that many (if not all) of them should rather be interpreted as standard \(\bar{q}q\) objects. For this reason, it is useful to classify them accordingly. While the ground-state tensor mesons (with spectroscopic notation \(1^{3}P_{2}\)) are well established as \(a_{2}(1320)\), \(K_{2}^{*}(1430)\), \(f_{2}(1270)\), and \(f_{2}^{\prime}(1525)\), one expects a nonet of radially excited tensor states with \(2^{3}P_{2}\) as well as a nonet of orbitally excited tensor states with \(1^{3}F_{2}\). The former are predicted to be lighter than the latter [65], which is in agreement with the excited vector mesons [66]. The nonet of radially excited tensor mesons contains the isotriplet \(a_{2}(1700)\), the isodoublet states \(K_{2}^{*}(1980)\), and the isoscalar (mostly nonstrange) \(f_{2}(1640)\). The \(\bar{s}s\) member of the nonet may be assigned to \(f_{2}(1910)\), \(f_{2}(1950)\), or \(f_{2}(2010)\). Indeed, the quark model review of the PDG [53] considers \(f_{2}(1950)\) as a possible \(\bar{s}s\) state. However, the decays of that state do not fit well with a predominantly \(\bar{s}s\) object. For instance, the experimental \(\pi\pi/KK\) ratio is of order unity, while an \(\bar{s}s\) state should imply a small ratio. In fact, the radially excited tensor states should display a small isoscalar mixing angle, just as the ground states (see the discussion above). Moreover, the mass is too close to the tensor member \(K_{2}^{*}(1980)\). Note that the resonance "\(f_{2}(2000)\)", included in the further states list of the PDG, was proposed as a tensor glueball candidate in Ref. [21] due to its poorly fitted location in the Regge trajectory of spin-2 states. Similar to \(f_{2}(1950)\), it also has a broad decay width. The state \(f_{2}(1910)\) is presently omitted from the summary table and it is not clear if it corresponds to an independent pole (for instance, the PDG notes that the \(\bar{K}K\) mode could well correspond to \(f_{2}(1950)\)). Its mass is even lighter than \(K_{2}^{*}(1980)\), which disfavors this state as being an \(\bar{s}s\). The decays - which are quite uncertain - confirm this view: the modes \(\rho\rho\) and \(a_{2}(1320)\pi\) should be suppressed, and the latter should be smaller than \(f_{2}(1270)\eta\), contrary to data (see above). On the other hand, the state \(f_{2}(2010)\) is well established and decays into \(\phi\phi\) and \(KK\), which indicates a strange content. Summarizing the discussion above, we tentatively identify the nonet of radially excited tensor mesons as \[a_{2}(1700),\ K_{2}^{*}(1980),\ f_{2}(1640),\ f_{2}(2010)\ \mbox{with}\ 2^{3}P_{2}\ \mbox{and}\ J^{PC}=2^{++}. \tag{22}\] Within this assignment, and upon neglecting the unsettled \(f_{2}(1910)\), the state \(f_{2}(1950)\) may be seen as supernumerary. This argumentation can be an additional hint toward the exotic nature of \(f_{2}(1950)\). Next, what about the orbitally excited states? Here the situation is much more unclear. There are no isotriplet or isodoublet states that could be used to identify the nonet. In the listing of the PDG one has \(f_{2}(2150)\) (status unclear, omitted from the summary table) as well as \(f_{2}(2300)\) and \(f_{2}(2340)\). The latter two states have a prominent decay into \(\phi\phi\); then, due to the vicinity of their masses, one may regard them as a unique state corresponding to an \(\bar{s}s\) resonance.
On the other hand, \(f_{2}(2150)\) could tentatively correspond to a nonstrange isoscalar state belonging to the next radially excited nonet \(3^{3}P_{2}\). Definitely, more data and studies are needed for these excited tensor states. ## V Conclusions In this work, we have applied the eLSM to the study of the tensor glueball, constructing chirally invariant Lagrangians describing the tensor glueball decays. From this we computed tensor glueball decay ratios, with the dominant decay channels being the vector-vector decay products, especially \(\rho\rho\) and \(K^{*}\bar{K}^{*}.\) A quite large total width of the tensor glueball follows from the large predicted \(\rho\rho/\pi\pi\) ratio, with the \(\pi\pi\) mode being of the order of 10 MeV. Moreover, we also predict a very small decay width of the tensor glueball into \(K^{*}(892)K\), which renders this mode potentially interesting for excluding eventual glueball candidates in the future. We compared the theoretical predictions to the experimental data. At present, the best match is for the resonance \(f_{2}(1950)\), implying that the tensor glueball is a relatively broad state. The \(f_{2}(1950)\) might be thought of as too light to be the tensor glueball, which - according to most lattice studies - has its mass in the range 2.2-2.4 GeV. However, unquenching effects included in additional meson-meson loop corrections are expected to bring the glueball mass down. The large width of \(f_{2}(1950)\) is indeed in agreement with this view. Here, the mixing of the tensor glueball with other conventional meson states was not taken into account. While the mixing with the ground-state tensor mesons is expected to be small due to the large mass difference (recently, a small mixing of the pseudoscalar glueball and the \(\eta\) was studied on the lattice in [67]), this may not be the case for the nearby excited tensor states. The corresponding generalization of the eLSM is in principle possible and can be undertaken once better data, both from experiments and from the lattice, become available. In the future, more information on the decays of all tensor states into vector-vector, pseudoscalar-pseudoscalar, as well as ground-state tensor-pseudoscalar channels would be very helpful to constrain models and falsify different scenarios. Moreover, the decays of \(J/\psi\) into tensors as well as radiative (such as photon-photon) decays of tensor states could be of great use. In particular, more information about the broad state \(f_{2}(2340)\) could shed light on its nature. Another interesting future line of research is the study of glueball molecules [68; 69]. While two scalar glueballs may interact and form a bound state (which is stable in pure YM), the question for the analogous tensor-tensor case (and also tensor-scalar and heavier glueballs, such as the pseudoscalar one) is unsettled. ## Acknowledgements A.V. and F.G. acknowledge financial support from the Polish National Science Centre (NCN) via the OPUS project 2019/33/B/ST2/00613. S.J.
acknowledges the financial support through the project "Development Accelerator of the Jan Kochanowski University of Kielce", co-financed by the European Union under the European Social Fund, with no. POWR.03.05.00-00-Z212/18.

\begin{table} \begin{tabular}{|c|c|} \hline Resonances & Interpretation status \\ \hline \(f_{2}(1910)\) & Agreement with some data, but excluded by \(\eta\eta/\eta\eta^{\prime}\) and \(\omega\omega/\eta\eta^{\prime}\) ratios \\ \hline \(f_{2}(1950)\) & \(\eta\eta/\pi\pi\) agrees with data, no contradictions with data, but implies broad tensor glueball \\ & Best fit as predominantly glueball among considered resonances \\ \hline \(f_{2}(2010)\) & Likely primarily strange-antistrange content \\ \hline \(f_{2}(2150)\) & All available data contradicts theoretical prediction \\ \hline \(f_{J}(2220)\) & Data on \(\pi\pi/K\bar{K}\) disagrees with theory, largest predicted decay channels are not seen \\ \hline \(f_{2}(2300)\) & Likely primarily strange-antistrange content \\ \hline \(f_{2}(2340)\) & Likely primarily strange-antistrange content, would also imply a broad glueball \\ \hline \end{tabular} \end{table} Table 6: Spin-2 resonances and their status as tensor glueball candidates.
2305.12470
Quasi-Monte Carlo Graph Random Features
We present a novel mechanism to improve the accuracy of the recently-introduced class of graph random features (GRFs). Our method induces negative correlations between the lengths of the algorithm's random walks by imposing antithetic termination: a procedure to sample more diverse random walks which may be of independent interest. It has a trivial drop-in implementation. We derive strong theoretical guarantees on the properties of these quasi-Monte Carlo GRFs (q-GRFs), proving that they yield lower-variance estimators of the 2-regularised Laplacian kernel under mild conditions. Remarkably, our results hold for any graph topology. We demonstrate empirical accuracy improvements on a variety of tasks including a new practical application: time-efficient approximation of the graph diffusion process. To our knowledge, q-GRFs constitute the first rigorously studied quasi-Monte Carlo scheme for kernels defined on combinatorial objects, inviting new research on correlations between graph random walks.
Isaac Reid, Krzysztof Choromanski, Adrian Weller
2023-05-21T14:12:02Z
http://arxiv.org/abs/2305.12470v1
# Quasi-Monte Carlo Graph Random Features ###### Abstract We present a novel mechanism to improve the accuracy of the recently-introduced class of _graph random features_ (GRFs) (Choromanski, 2023). Our method induces negative correlations between the lengths of the algorithm's random walks by imposing _antithetic termination_: a procedure to sample more diverse random walks which may be of independent interest. It has a trivial drop-in implementation. We derive strong theoretical guarantees on the properties of these _quasi-Monte Carlo GRFs_ (q-GRFs), proving that they yield lower-variance estimators of the \(2\)-regularised Laplacian kernel under mild conditions. Remarkably, our results hold for any graph topology. We demonstrate empirical accuracy improvements on a variety of tasks including a new practical application: time-efficient approximation of the graph diffusion process. To our knowledge, q-GRFs constitute the first rigorously studied quasi-Monte Carlo scheme for kernels defined on combinatorial objects, inviting new research on correlations between graph random walks.1 Footnote 1: We will make all code publicly available. ## 1 Introduction and related work Kernel methods are ubiquitous in machine learning (Canu and Smola, 2006; Smola and Scholkopf, 2002; Kontorovich et al., 2008; Campbell, 2002). Via the kernel trick, they provide a mathematically principled and elegant way to perform nonlinear inference using linear learning algorithms. The positive definite _kernel function_ \(k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}\), defined on an input domain \(\mathcal{X}\), measures the 'similarity' between two datapoints. Examples in Euclidean space include the Gaussian, linear, Matern, angular and arc-cosine kernels (Williams and Rasmussen, 2006; Cho and Saul, 2011). Though very effective on small datasets, kernel methods suffer from poor scalability. The need to materialise and invert the _kernel matrix_ typically leads to a time-complexity cubic in the size of the dataset. Substantial research has been dedicated to improving scalability by approximating this matrix, notably including _random features_ (RFs) (Rahimi and Recht, 2007, 2008; Avron et al., 2017; Liu et al., 2022). These randomised mappings \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{s}\) construct low-dimensional feature vectors whose dot product equals the kernel evaluation in expectation: \[k(\mathbf{x},\mathbf{y})=\mathbb{E}\left(\phi(\mathbf{x})^{\top}\phi(\mathbf{y})\right). \tag{1}\] This permits a low-rank decomposition of the kernel matrix which enables better time- and space-complexity than exact kernel methods. Random feature methods exist for a variety of Euclidean kernels with properties engineered for the desiderata and symmetries of the particular kernel being approximated (Dasgupta et al., 2010; Johnson, 1984; Choromanski et al., 2020; Goemans and Williamson, 2004; Rahimi and Recht, 2007). Kernels can also be defined on discrete input spaces such as _graphs_, which are the natural way to represent data characterised by local relationships (e.g. social networks or interacting chemicals (Albert and Barabasi, 2002)) or when data is restricted to a lower-dimensional manifold than the original space (Roweis and Saul, 2000; Belkin and Niyogi, 2003). We consider _graph kernels_ \(k:\mathcal{V}\times\mathcal{V}\rightarrow\mathbb{R}\) on the set of nodes \(\mathcal{V}\) of a graph \(\mathrm{G}\).
Examples include the diffusion, regularised Laplacian, \(p\)-step random walk and cosine kernels (Smola and Kondor, 2003; Kondor and Lafferty, 2002; Chung and Yau, 1999). Substantial research effort has also been devoted to developing and analysing graph kernels \(k:\mathcal{G}\times\mathcal{G}\rightarrow\mathbb{R}\), now taking entire graphs \(\mathrm{G}\in\mathcal{G}\) from graph spaces \(\mathcal{G}\) as inputs rather than their nodes (Shervashidze et al., 2009; Vishwanathan et al., 2006; Shervashidze and Borgwardt, 2009), but we stress that these are not the subject of this paper. The problem of poor kernel scalability is exacerbated in the graph domain because even computing the kernel matrix \(\mathbf{K}\) is typically of at least cubic time-complexity in the number of nodes \(N\). In contrast to kernels defined on points in \(\mathbb{R}^{d}\), random feature methods for fixed graph kernels (cf. kernel learning (Fang et al., 2021)) have proved challenging to construct. Only recently has a viable _graph random feature_ (GRF) mechanism been proposed, which uses a series of random walkers depositing 'load' at each node they pass through (Choromanski, 2023). GRFs provide a low-rank unbiased estimate of the matrix \((\mathbf{I}_{N}-\mathbf{U})^{-d}\), where \(\mathbf{U}\) is a weighted adjacency matrix of the graph and \(d\in\mathbb{N}\). The decomposition supports subquadratic time-complexity (again with respect to the number of nodes \(N\)) in downstream algorithms applying regularised Laplacian kernels. Moreover, the computation of GRFs admits a simple distributed algorithm that can be applied if a large graph needs to be split across machines. The author demonstrates the strong empirical performance of GRFs in speed tests, Frobenius relative error analysis and a \(k\)-means graph clustering task. In the Euclidean setting, significant research has been dedicated to developing _quasi-Monte Carlo_ (QMC) variants of RF methods that enjoy better convergence properties (Yang et al., 2014; Lyu, 2017; Dick et al., 2013). By using correlated ensembles rather than i.i.d. random variables in the feature maps, one can suppress the mean squared error (MSE) of the kernel estimator. For example, _orthogonal random features_ (ORFs) (Yu et al., 2016; Choromanski et al., 2020) improve the quality of approximation of the Gaussian kernel when using trigonometric or positive random features, and of the linear kernel in the _orthogonal Johnson-Lindenstrauss transformation_ (Choromanski et al., 2017). With positive random features, the recently-introduced class of _simplex random features_ (SimRFs) performs even better (Reid et al., 2023). This has been used to great effect in estimating the attention mechanism of Transformers (Vaswani et al., 2017), overcoming its prohibitive quadratic time-complexity scaling with token sequence length. This invites the central question of this work: **how can we implement a QMC mechanism for random walks on a graph?** What do we mean by a 'diverse' sample in this context? Choromanski (2023) first identified the challenge of constructing _quasi-Monte Carlo GRFs_ (q-GRFs). They suggested a high-level approach of reinforced random walks, but left its theoretical and empirical analysis to future work. In this paper, we provide a first concrete implementation of q-GRFs, proposing an unbiased scheme that correlates the length of random walks by imposing _antithetic termination_.
We derive strong theoretical guarantees on the properties of this new class, proving that the correlations reduce the variance of estimators of the \(2\)-regularised Laplacian kernel under mild conditions. Our results hold for any graph topology. We also demonstrate empirical accuracy improvements on a variety of tasks. We hope our new algorithm (hereafter referred to as 'q-GRFs' for brevity) will spur further research on correlations between graph random walks in machine learning. We emphasise that, whilst we have presented antithetic termination through the lens of q-GRFs (an analytically tractable and important use case), it is fundamentally a procedure to obtain a more diverse ensemble of graph random walks. It may be of independent interest, e.g. for estimating graphlet statistics for kernels between graphs (Chen et al., 2016; Wu et al., 2019; Ribeiro et al., 2021) or in some GNN architectures (Nikolentzos and Vazirgiannis, 2020). The remainder of the manuscript is organised as follows. **In Sec. 2** we introduce the mathematical concepts and existing algorithms to be used in the paper, including the \(d\)-regularised Laplacian and diffusion kernels and the GRF mechanism. **Sec. 3** presents our novel q-GRFs mechanism and discusses its strong theoretical guarantees - in particular, that it provides lower kernel estimator variance than its regular predecessor (GRFs) under mild conditions. We provide a brief proof-sketch for intuition but defer full technical details to App. 8.3. We conduct an exhaustive set of experiments in **Sec. 4** to compare q-GRFs to GRFs, including: (a) quality of kernel approximation via computation of the relative Frobenius norm; (b) simulation of the graph diffusion process; (c) kernelised \(k\)-means node clustering; and (d) kernel regression for node attribute prediction. q-GRFs nearly always perform better and in some applications the difference is substantial. ## 2 Graph kernels and GRFs ### The Laplacian, heat kernels and diffusion on graphs An undirected, weighted graph \(G\) is defined by a set of vertices \(\mathcal{V}\) enumerated \(1\) to \(N\) and a set of edges \(E\) given by the unordered vertex pairs \(\{v_{i},v_{j}\}\) where \(v_{i}\) and \(v_{j}\) are neighbours (denoted \(i\sim j\)), themselves associated with weights \(W_{ij}\in\mathbb{R}\). The _weighted adjacency matrix_ \(\mathbf{W}\in\mathbb{R}^{N\times N}\) has matrix elements \(W_{ij}\): that is, the associated edge weights if \(i\sim j\) and \(0\) otherwise. Denote by \(\mathbf{D}\in\mathbb{R}^{N\times N}\) the diagonal matrix with elements \(D_{ii}\coloneqq\sum_{j}W_{ij}\), the sum of edge weights connecting a vertex \(i\) to its neighbours. The _Laplacian_ of \(G\) is then defined \(\mathbf{L}\coloneqq\mathbf{D}-\mathbf{W}\). The _normalised Laplacian_ is \(\widetilde{\mathbf{L}}\coloneqq\mathbf{D}^{-\frac{1}{2}}\mathbf{L}\mathbf{D}^{-\frac{1}{2}}\), which rescales \(\mathbf{L}\) by the (weighted) number of edges per node. \(\mathbf{L}\) and \(\widetilde{\mathbf{L}}\) share eigenvectors in the case of a \(d\)-regular graph and play a central role in spectral graph theory; their analytic properties are well-understood (Chung, 1997). In classical physics, diffusion through continuous media is described by the equation \[\frac{d\mathbf{u}}{dt}=\nabla^{2}\mathbf{u} \tag{2}\] where \(\nabla^{2}=\frac{\partial^{2}}{\partial x_{1}^{2}}+\frac{\partial^{2}}{\partial x_{2}^{2}}+\ldots+\frac{\partial^{2}}{\partial x_{N}^{2}}\) is the _Laplacian operator_ on continuous spaces.
The natural analogue on discrete spaces is \(\mathbf{L}\), where we now treat \(\mathbf{L}\) as a linear operator on vectors \(\mathbf{u}\in\mathbb{R}^{N}\) (or equivalently functions \(u:V\rightarrow\mathbb{R}\)). This can be seen by noting that, if we take \(\mathbf{W}\) to be the _unweighted_ adjacency matrix, \(\langle\mathbf{u},\mathbf{L}\mathbf{u}\rangle=\mathbf{u}^{\top}\mathbf{L}\mathbf{u}=\frac{1}{2}\sum_{i\sim j}(u_{i}-u_{j})^{2}\) so, like its continuous counterpart, \(\mathbf{L}\) measures the local smoothness of its domain. In fact, in this case \(-\mathbf{L}\) is exactly the finite difference discretisation of \(\nabla^{2}\) on a square grid in \(N\)-dimensional Euclidean space. This motivates the _discrete heat equation_ on \(G\), \[\frac{d\mathbf{u}}{dt}=-\widetilde{\mathbf{L}}\mathbf{u}, \tag{3}\] where we followed literature conventions by using the normalised variant whose spectrum is conveniently contained in \([0,2]\) (Chung, 1997). This has the solution \(\mathbf{u}_{t}=\exp(-\widetilde{\mathbf{L}}t)\mathbf{u}_{0}\), where \(\exp(-\widetilde{\mathbf{L}}t)=\lim_{n\rightarrow\infty}\left(\mathbf{I}_{N}-\frac{t\widetilde{\mathbf{L}}}{n}\right)^{n}\). The symmetric and positive semi-definite matrix \[\mathbf{K}_{\text{diff}}(t)\coloneqq\exp(-\widetilde{\mathbf{L}}t) \tag{4}\] is referred to as the _heat kernel_ or _diffusion kernel_ (Smola and Kondor, 2003). The exponentiation of the generator \(\widetilde{\mathbf{L}}\), which by construction captures the _local_ structure of \(G\), leads to a kernel matrix \(\mathbf{K}_{\text{diff}}\) which captures the graph's _global_ structure. Upon discretisation of Eq. 3 with the backward Euler step (which is generally more stable than the forward), we have that \[\mathbf{u}_{t+\delta t}=(\mathbf{I}_{N}+\delta t\widetilde{\mathbf{L}})^{-1}\mathbf{u}_{t}, \tag{5}\] where the discrete time-evolution operator \(\mathbf{K}_{\text{lap}}^{(1)}=(\mathbf{I}_{N}+\delta t\widetilde{\mathbf{L}})^{-1}\) is referred to as the \(1\)_-regularised Laplacian kernel_. This is a member of the more general family of \(d\)-regularised Laplacian kernels, \[\mathbf{K}_{\text{lap}}^{(d)}=(\mathbf{I}_{N}+\delta t\widetilde{\mathbf{L}})^{-d}, \tag{6}\] for which we can construct an unbiased low-rank approximation using GRFs (Choromanski, 2023). We predominantly consider \(d=2\) with the understanding that estimators for other values of \(d\) are straightforward to obtain (App. 8.1). This demonstrates the intimate connection between the graph-diffusion and Laplacian kernels, and how a QMC scheme that improves the convergence of our randomised estimator of \(\mathbf{K}_{\text{lap}}^{(d)}\) will permit more accurate simulation of diffusion on a graph. ### Graph random features (GRFs) Here we recall the GRF mechanism, which offers a rich playground for our novel antithetic termination QMC scheme. The reader should consult Choromanski (2023) (especially Algorithm 1.1) for a full discussion, but for convenience we provide a cursory summary. Suppose we would like to estimate the matrix \((\mathbf{I}_{N}-\mathbf{U})^{-2}\), with \(\mathbf{U}\in\mathbb{R}^{N\times N}\) a weighted adjacency matrix of a graph with \(N\) nodes and no loops. Choromanski (2023) introduces a novel algorithm to construct a low-rank decomposition of this matrix. The author uses _graph random features_ (GRFs) \(\phi(i)\in\mathbb{R}^{N}\), with \(i\) the index of one of the nodes, designed such that \[(\mathbf{I}_{N}-\mathbf{U})_{ij}^{-2}=\mathbb{E}\left(\phi(i)^{\top}\phi(j)\right). \tag{7}\]
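For concreteness, the quantities defined above are easy to compute exactly on a small graph. The following numpy sketch (a toy illustration of the definitions, not code from the paper) builds the normalised Laplacian, the kernel \(\mathbf{K}_{\text{lap}}^{(2)}\) of Eq. 6, and the GRF target matrix of Eq. 7:

```python
import numpy as np

# Toy 4-node cycle graph (an assumption for illustration).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                     # graph Laplacian
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt          # normalised Laplacian

sigma = 0.1
K1 = np.linalg.inv(np.eye(4) + sigma**2 * L_norm)  # 1-regularised kernel
K_lap2 = K1 @ K1                                   # 2-regularised kernel

# Target of the GRF estimator: (I - U)^{-2} for a weighted adjacency U
# with spectral radius < 1 (here U = A/4, spectral radius 0.5).
U = A / 4.0
target = np.linalg.inv(np.eye(4) - U)
target = target @ target
```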
They construct \(\phi(i)\) by taking \(m\) random walks \(\{\bar{\Omega}(k,i)\}_{k=1}^{m}\) on the graph out of node \(i\), depositing a 'load' at every node that depends on i) the product of edge weights traversed by the subwalk and ii) the marginal probability of the subwalk. Importantly, each walk terminates with probability \(p\) at every timestep. The \(x\)th component of the \(i\)th random feature is given by \[\boldsymbol{\phi}(i)_{x}\coloneqq\frac{1}{m}\sum_{k=1}^{m}\sum_{\omega_{1}\in\Omega_{ix}}\frac{\widetilde{\omega}(\omega_{1})}{p(\omega_{1})}\mathbb{I}(\omega_{1}\in\bar{\Omega}(k,i)), \tag{8}\] where: \(k\) enumerates \(m\) random walks we sample out of node \(i\); \(\omega_{1}\) denotes a particular walk from the set of all walks \(\Omega_{ix}\) between nodes \(i\) and \(x\); \(\widetilde{\omega}(\omega_{1})\) is the product of weights of the edges traversed by the walk \(\omega_{1}\); \(p(\omega_{1})\) is the marginal probability that a walk contains a subwalk \(\omega_{1}\), given by \(p(\omega_{1})=((1-p)/d)^{\text{len}(\omega_{1})}\) in the simplest case of a walk of length \(\text{len}(\omega_{1})\) on a \(d\)-regular graph; \(\mathbb{I}(\omega_{1}\in\bar{\Omega}(k,i))\) is an indicator function that evaluates to \(1\) when the walk \(\omega_{1}\) is a subwalk of the \(k\)th random walk sampled from \(i\) (itself denoted \(\bar{\Omega}(k,i)\)) and is \(0\) otherwise. It is simple to see how Eq. 8 satisfies Eq. 7. By construction \[\mathbb{E}\left[\mathbb{I}(\omega_{1}\in\bar{\Omega}(k,i))\mathbb{I}(\omega_{2}\in\bar{\Omega}(l,j))\right]=p(\omega_{1})p(\omega_{2}) \tag{9}\] for independent walks, whereupon \[\mathbb{E}\left(\boldsymbol{\phi}(i)^{\top}\boldsymbol{\phi}(j)\right)=\sum_{x\in\mathcal{V}}\sum_{\omega_{1}\in\Omega_{ix}}\sum_{\omega_{2}\in\Omega_{jx}}\widetilde{\omega}(\omega_{1})\widetilde{\omega}(\omega_{2})=\sum_{\omega\in\Omega_{ij}}(\text{len}(\omega)+1)\widetilde{\omega}(\omega)=(\mathbf{I}-\mathbf{U})_{ij}^{-2}\,. \tag{10}\] This shows us that the estimator is unbiased. The central contribution of this work is a QMC scheme that induces correlations between the \(m\) walks out of each node to suppress the variance of the estimator \(\phi(i)^{\top}\phi(j)\) without breaking this unbiasedness. ## 3 q-GRFs and antithetic termination We will now present our novel antithetic termination mechanism. It generalises the notion of antithetic variates - a common, computationally cheap variance-reduction technique when sampling in Euclidean space [14] - to the termination behaviour of random walks. We have seen that, in the i.i.d. implementation of the GRF algorithm, each walker terminates independently with probability \(p\) at every timestep. For a pair of i.i.d. walkers out of node \(i\), this is implemented by independently sampling two _termination random variables_ (TRVs) between \(0\) and \(1\) from a uniform distribution, \(t_{1,2}\sim\text{Unif}\,(0,1)\). Each walker terminates if its respective TRV is less than \(p\), \(t_{1,2}<p\). In contrast, we define the _antithetic_ walker as follows. **Definition 3.1** (Antithetic walkers).: _We refer to a pair of walkers as antithetic if their TRVs are marginally distributed as \(t_{1,2}\sim\mathrm{Unif}(0,1)\) but are offset by \(\frac{1}{2}\),_ \[t_{2}=\mathrm{mod}_{1}\left(t_{1}+\frac{1}{2}\right), \tag{11}\] _such that we have the conditional distribution_ \[p(t_{2}|t_{1})=\delta\left(\mathrm{mod}_{1}(t_{2}-t_{1})-\frac{1}{2}\right).
\tag{12}\] **Computational cost**: we note that the computational cost of generating an antithetic TRV according to Eq. 11 is no greater than the cost of generating an independent TRV, so this drop-in replacement in the GRF algorithm is cheap. We provide a schematic in Fig. 0(a). Since the marginal distributions over \(t_{i}\) are unchanged, our estimator remains unbiased, but the couplings between TRVs lead to statistical correlations between the walkers' terminations. Denoting by \(s_{1}\) the event that walker \(1\) terminates at some timestep, \(s_{2}\) the event that walker \(2\) terminates and \(\bar{s}_{1,2}\) their complements, it is straightforward to convince oneself that for \(p\leq\frac{1}{2}\) \[p(s_{1})=p(s_{2})=p,\qquad p(\bar{s}_{1})=p(\bar{s}_{2})=1-p,\qquad p(s_{2}|s_{1})=0, \tag{13}\] \[p(\bar{s}_{2}|s_{1})=1,\qquad p(s_{2}|\bar{s}_{1})=\frac{p}{1-p},\qquad p(\bar{s}_{2}|\bar{s}_{1})=\frac{1-2p}{1-p}.\] This termination coupling modifies the joint probability over walk lengths. In the i.i.d. scheme, the walks are independent and are of expected length \[\mathbb{E}(\text{len}(\omega))=\frac{1-p}{p}. \tag{14}\] These _marginal_ expectations are preserved in the antithetic scheme, but now the expected length of one walk _conditioned on the length of the other_ is \[\mathbb{E}(\text{len}(\omega_{2})|\text{len}(\omega_{1})=m)=\frac{1-2p}{p}+2\left(\frac{1-2p}{1-p}\right)^{m}, \tag{15}\] which we derive in App. 8.2. It is straightforward to see that the two lengths are negatively correlated. Antithetic termination 'diversifies' the lengths of random walks we sample, preventing them from clustering together, and in the spirit of QMC this turns out to suppress the kernel estimator variance. See Fig. 0(b) for a schematic. We refer to random features constructed with antithetic walkers as _quasi-Monte Carlo graph random features_ (q-GRFs). ### Theoretical results In this section, we state and discuss our central theoretical results for the q-GRFs mechanism. Sec. 3.1.1 provides a sketch, but full proofs are deferred to App. 8.3. We remind the reader that results for \((\mathbf{I}_{N}-\mathbf{U})^{-2}\) are trivially applied to \(\mathbf{K}_{\text{lap}}^{(2)}\) (see App. 8.1). **Theorem 3.2** (Antithetic termination is better than i.i.d.).: _For any graph, q-GRFs will give lower variance on estimators of \((\mathbf{I}_{N}-\mathbf{U})^{-2}\) than regular GRFs provided either i) the termination probability \(p\) is sufficiently small or ii) the spectral radius \(\rho(\mathbf{U})\) is sufficiently small._ Figure 1: _Left_: schematic of the i.i.d. (GRF) and antithetic (q-GRF) mechanisms in termination space. \(f(t)\) is the probability density of the termination random variable (TRV) \(t\). Vertical arrows represent draws of \(t\), with the walker terminating if they lie in the pink region where \(t<p\). With q-GRFs the TRVs are offset by \(\frac{1}{2}\), modifying the joint distribution over walk lengths. _Right_: demonstration with \(4\) random walks on the karate graph, beginning at some node labelled \(i\). The blue pair of antithetic walks (q-GRFs) have very different lengths; they cannot terminate simultaneously. The red pair of i.i.d. walks (GRFs) have similar lengths. We prove that q-GRFs give lower variance estimators of the \(2\)-regularised Laplacian kernel. By 'sufficiently small' we mean that for a fixed \(\rho(\mathbf{U})\) there exists some value of \(p\) below which antithetic termination will outperform i.i.d.; likewise, for fixed \(p\) there exists some value of \(\rho(\mathbf{U})\) below which it will do so.
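The coupling of Def. 3.1 is simple to simulate. The following sketch (our own illustrative code, not the authors' implementation) draws antithetic walk-length pairs and checks that the marginal mean length matches Eq. 14 while the two lengths are negatively correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5  # termination probability

def antithetic_walk_lengths(n_pairs: int) -> np.ndarray:
    """At each step draw one TRV t1 ~ Unif(0,1) and set t2 = (t1 + 0.5) mod 1
    (Eq. 11); a walker terminates when its TRV falls below p."""
    lengths = np.zeros((n_pairs, 2), dtype=int)
    for i in range(n_pairs):
        alive = [True, True]
        steps = [0, 0]
        while any(alive):
            t1 = rng.uniform()
            t2 = (t1 + 0.5) % 1.0
            for w, t in enumerate((t1, t2)):
                if alive[w]:
                    if t < p:
                        alive[w] = False  # this walker terminates
                    else:
                        steps[w] += 1     # this walker takes another step
        lengths[i] = steps
    return lengths

lens = antithetic_walk_lengths(100_000)
print(lens.mean(axis=0))          # each marginal ~ (1 - p)/p = 1.0 (Eq. 14)
print(np.corrcoef(lens.T)[0, 1])  # negative: lengths are anticorrelated
```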
These conditions turn out not to be too restrictive in our experiments; antithetic termination is actually very effective at \(p=0.5\), which we use for practical applications. Considering Eq. 11 carefully, it is easy to see that the termination probabilities in Eq. 13 are not particular to a TRV offset equal to \(\frac{1}{2}\), and in fact hold for any offset \(\Delta\) satisfying \(p\leq\Delta\leq 1-p\). An immediate corollary is as follows. **Corollary 3.3** (Maximum size of an antithetic ensemble).: _For a termination probability \(p\), up to \(\lfloor p^{-1}\rfloor\) random walkers can all be conditioned to exhibit mutually antithetic termination._ This is achieved by offsetting their respective TRVs by \(\Delta=p\). The resulting _antithetic ensemble_ will have lower kernel estimator variance than the equivalent number of i.i.d. walkers or a set of mutually independent antithetic pairs. We make one further interesting remark. **Theorem 3.4** (Termination correlations beyond antithetic).: _A pair of random walkers with TRVs offset by \(p(1-p)<\Delta<p\) will exhibit lower variance on estimators of \((\mathbf{I}_{N}-\mathbf{U})^{-2}\) than independent walkers, provided either i) the termination probability \(p\) is sufficiently small or ii) the spectral radius of the weighted adjacency matrix \(\rho(\mathbf{U})\) is sufficiently small._ This provides an upper limit of \(\lfloor(p(1-p))^{-1}\rfloor\) on the number of walkers we can simultaneously correlate before we can no longer guarantee that coupling in a further walker's TRV will be better than sampling it independently. Intuitively, Theorem 3.4 tells us that we can space TRVs even more closely than \(p\), allowing us to increase the number of simultaneously anticorrelated random walkers at the cost of the strength of negative correlations between walkers with neighbouring TRVs. #### 3.1.1 Proof sketch We now outline a proof strategy for the results reported above. Full technical details are reported in App. 8.3. From its Taylor expansion, the \((ij)\)-th element of \((\mathbf{I}_{N}-\mathbf{U})^{-2}\) is nothing other than a sum over all possible paths between the nodes \(i\) and \(j\), weighted by their lengths and the respective products of edge weights. GRFs use a Monte Carlo scheme to approximate this sum by sampling such walks at random - concretely, by first sampling separate random walks out of nodes \(i\) and \(j\) and adding contributions wherever they intersect. In order to sample the space of walks between nodes \(i\) and \(j\) more efficiently, it follows that we should make our ensemble of walks out of each node more diverse. q-GRFs achieve this by inducing negative correlations such that the walks have different lengths. In more detail, it is clear that the variance of the kernel estimator will depend upon the expectation of the square of \(\phi(i)^{\top}\phi(j)\).
Each term in the resulting sum will take the form \[\begin{split}&\sum_{x,y\in\mathcal{V}}\sum_{\omega_{1}\in\Omega_{ix}}\sum_{\omega_{2}\in\Omega_{jx}}\sum_{\omega_{3}\in\Omega_{iy}}\sum_{\omega_{4}\in\Omega_{jy}}\frac{\widetilde{\omega}(\omega_{1})}{p(\omega_{1})}\frac{\widetilde{\omega}(\omega_{2})}{p(\omega_{2})}\frac{\widetilde{\omega}(\omega_{3})}{p(\omega_{3})}\frac{\widetilde{\omega}(\omega_{4})}{p(\omega_{4})}\\ &\quad\quad\cdot p(\omega_{1}\in\bar{\Omega}(k_{1},i),\omega_{3}\in\bar{\Omega}(k_{2},i))p(\omega_{2}\in\bar{\Omega}(l_{1},j),\omega_{4}\in\bar{\Omega}(l_{2},j)),\end{split} \tag{16}\] where we direct the reader to Sec. 2.2 for the symbol definitions. Supposing that all edge weights are equal (an assumption we relax later), the summand is a function of the _length_ of each of the walks \(\omega_{1,2,3,4}\). It is then natural to write the sum over all walks between nodes \(i\) and \(x\) as a sum over walk lengths \(m\), with each term weighted by a combinatorial factor that counts the number of walks of said length. This factor is \((\mathbf{A}^{m})_{ix}\), with \(\mathbf{A}\) the unweighted adjacency matrix. That is, \[\sum_{\omega_{1}\in\Omega_{ix}}\left(\cdot\right)=\sum_{m=1}^{\infty}(\mathbf{A}^{m})_{ix}\left(\cdot\right). \tag{17}\] \((\mathbf{A}^{m})_{ix}\) is readily written as its eigendecomposition, \(\sum_{p=1}^{N}\lambda_{p}^{m}k_{pi}k_{px}\), with \(\lambda_{p}\) the \(p\)th eigenvalue and \(k_{pi}\) the \(i\)-th coordinate of the \(p\)-th eigenvector \(\mathbf{k}_{p}\). We put these into Eq. 16 and perform the sums over each path length \(m_{1,2,3,4}\) from \(1\) to \(\infty\), arriving at \[\sum_{x,y\in\mathcal{V}}\sum_{k_{1},k_{2},k_{3},k_{4}}f(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})k_{1i}k_{1x}k_{2j}k_{2x}k_{3i}k_{3y}k_{4j}k_{4y} \tag{18}\] where \(f(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\) is a function of the eigenvalues of \(\mathbf{A}\) that depends on whether we correlate the terminations of the walkers. Since the eigenvectors are orthogonal we have that \(\sum_{x\in\mathcal{V}}k_{1x}k_{2x}=\delta_{12}\), so this reduces to \[\sum_{k_{1},k_{3}}f(\lambda_{1},\lambda_{1},\lambda_{3},\lambda_{3})k_{1i}k_{1j}k_{3i}k_{3j}. \tag{19}\] Our task is then to prove that this expression becomes smaller when we induce antithetic termination. We achieve this by showing that a particular matrix is negative definite. ## 4 Experiments In this section we report on empirical evaluations of q-GRFs. We confirm that they give lower kernel estimator variance than regular GRFs and show that this often leads to substantially better performance in downstream tasks, including simulation of graph diffusion, \(k\)-means node clustering and kernel regression for node attribute prediction. We use ensembles of antithetic pairs as described in Def. 3.1. ### Estimation of the \(2\)-regularised Laplacian kernel We begin with the simplest of tasks: estimation of the \(2\)-regularised Laplacian kernel, \[\mathbf{K}_{\text{lap}}^{(2)}=(\mathbf{I}_{N}+\sigma^{2}\widetilde{\mathbf{L}})^{-2}, \tag{20}\] where \(\widetilde{\mathbf{L}}\in\mathbb{R}^{N\times N}\) is the symmetrically normalised Laplacian and \(0<\sigma<1\) is a regulariser. We use both GRFs and q-GRFs to generate unbiased estimates \(\widetilde{\mathbf{K}}_{\text{lap}}^{(2)}\) (see App.
8.1), then compute the relative Frobenius norm \(\|\mathbf{K}_{\text{lap}}^{(2)}-\widetilde{\mathbf{K}}_{\text{lap}}^{(2)}\|_{\text{F}}^{2}/\|\mathbf{K}_{\text{lap}}^{(2)}\|_{\text{F}}^{2}\) between the true and approximated kernel matrices. This enables us to compare the quality of the estimators. As a benchmark, we also include an implementation of the high-level reinforced random walk QMC mechanism suggested (but not tested) by Choromanski (2023). We choose the exponential mapping as the reinforcement function \(f\) (used to downweight the probability of traversing previously-visited edges), although the optimal choice remains an open problem. We refer to this mechanism as _q-RRW-GRFs_ to disambiguate from our instantiation of q-GRFs (which uses antithetic termination). Fig. 2 presents the results for a broad class of graphs: a small Erdős-Rényi graph, a larger Erdős-Rényi graph, a binary rooted tree, a ladder, and four real-world examples available from (Ivashkin, 2023) (karate, dolphins, football and eurosis). Figure 2: Relative Frobenius norm error of estimator of the \(2\)-regularised Laplacian kernel with GRFs (red circle) and q-GRFs (green cross). Lower is better. We also include q-RRW-GRFs (Choromanski, 2023) (blue triangle), instantiated with \(f=\exp\), as a benchmark. Our novel q-GRFs perform the best on every graph considered. \(N\) is the number of nodes and, for the Erdős-Rényi (ER) graphs, \(p\) is the edge-generation probability. One standard deviation is shaded but it is too small to easily see. We consider \(2\), \(4\), \(8\) and \(16\) walks, taking \(100\) repeats to estimate the variance of the approximation error. We use the regulariser \(\sigma=0.1\) and the termination probability \(p=0.5\). The quality of kernel approximation naturally improves with the number of walkers. Inducing antithetic coupling consistently reduces estimator variance, with our q-GRF mechanism outperforming regular GRFs in every case. The exact size of the gain depends on the particular graph (according to the closed forms derived in App. 8.3), but improvement is always present. It is intriguing that tree-like and planar graphs tend to enjoy a bigger gap; we defer a rigorous theoretical analysis to future work. Meanwhile, the q-RRW-GRF variant with the exponential \(f\) is often worse than the regular mechanism and is substantially more expensive. We do not include it in later experiments. ### Scalable and accurate simulation of graph diffusion In Sec. 2.1, we noted that the Laplacian \(\mathbf{L}\) (or \(\widetilde{\mathbf{L}}\)) is the natural operator to describe diffusion on discrete spaces and that the \(1\)-regularised Laplacian kernel constitutes the corresponding discrete time-evolution operator. Here we will show how, by leveraging q-GRFs to provide a lower-variance low-rank decomposition of \(\mathbf{K}_{\text{lap}}^{(2)}\), we can simulate graph diffusion in a scalable and accurate way. Choosing a finite (even) number of discretisation timesteps \(N_{t}\), we can approximate the final state \(\mathbf{u}_{t}\) \[\mathbf{u}_{t}\simeq\widetilde{\mathbf{u}}_{t}\coloneqq\left[(\mathbf{I}_{N}+\frac{t}{N_{t}}\widetilde{\mathbf{L}})^{-2}\right]^{\frac{N_{t}}{2}}\mathbf{u}_{0}=\mathbf{K}_{\text{lap}}^{(2)\frac{N_{t}}{2}}\mathbf{u}_{0}. \tag{21}\] We can efficiently compute this using our low-rank GRF or q-GRF decomposition of \(\mathbf{K}_{\text{lap}}^{(2)}\) and compare the accuracy of reconstruction of \(\mathbf{u}_{t}\).
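A minimal dense-matrix sketch of Eq. 21 follows (illustrative only: it uses the exact kernel, whereas in practice the low-rank (q-)GRF decomposition \(\phi(i)^{\top}\phi(j)\) would replace the matrix inverse):

```python
import numpy as np

def simulate_diffusion(L_norm: np.ndarray, u0: np.ndarray,
                       t: float = 1.0, n_steps: int = 1000) -> np.ndarray:
    """Backward-Euler diffusion (Eq. 21): repeatedly apply the
    2-regularised Laplacian kernel K = (I + (t/n_steps) L_norm)^{-2}.
    n_steps is assumed even, so each K application covers 2 timesteps."""
    n = L_norm.shape[0]
    K = np.linalg.inv(np.eye(n) + (t / n_steps) * L_norm)
    K = K @ K  # exact K_lap^{(2)}; (q-)GRFs would substitute phi @ phi.T
    u = u0.copy()
    for _ in range(n_steps // 2):
        u = K @ u
    return u
```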
In particular, we take an initial one-hot state \(\mathbf{u}_{0}=(1,0,0,...,0)^{\top}\) and simulate diffusion for \(t=1\) divided into \(N_{t}=1000\) timesteps, using \(m=10\) walkers and a termination probability \(p=0.5\). Fig. 3 gives a schematic. We average the MSE of \(\widetilde{\mathbf{u}}_{t}\) over \(1000\) trials. Table 1 reports the results; q-GRFs approximate the time-evolution operator more accurately and so consistently give a lower simulation error. Since the time-evolution operator is applied repeatedly, even modest improvements in its approximation can lead to a substantially better reconstruction of the final state. In extreme cases the simulation error is halved. \begin{table} \begin{tabular}{l|c|c c} \hline \hline Graph & \(N\) & \multicolumn{2}{c}{Sim error, \(\operatorname{MSE}(\widetilde{\mathbf{u}}_{t})\)} \\ & & GRFs & q-GRFs \\ \hline Small ER & 20 & 0.0210(5) & **0.0160(3)** \\ Larger ER & 100 & 0.0179(9) & **0.0085(3)** \\ Binary tree & 127 & 0.0161(6) & **0.0106(3)** \\ Ladder & 100 & 0.0190(8) & **0.0105(3)** \\ karate & 34 & 0.066(2) & **0.054(1)** \\ dolphins & 62 & 0.0165(4) & **0.0139(3)** \\ football & 115 & 0.0170(3) & **0.0160(2)** \\ eurosis & 1272 & 0.089(1) & **0.084(1)** \\ \hline \hline \end{tabular} \end{table} Table 1: MSE of the approximation of the final state \(\mathbf{u}_{t}\) by repeated application of a low-rank decomposition of the discrete time-evolution operator, constructed with (q-)GRFs. Brackets give one standard deviation. Figure 3: Schematic of diffusion on a small graph, with heat initially localised on a single node spreading under the action of the Laplace operator. We use (q-)GRFs to estimate the quantity on the right. ### Kernelised \(k\)-means clustering for graph nodes Next we evaluate the performance of GRFs and q-GRFs on the task of assigning nodes to clusters using the graph kernel, as described by Dhillon et al. (2004). We first run the algorithm to obtain \(N_{c}=2\) clusters with the exact \(2\)-regularised Laplacian kernel, then compare the results when we use its approximation via GRFs and q-GRFs. In each case, we report the _clustering error_, defined by \[E_{c}\coloneqq\frac{\text{no. wrong pairs}}{N(N-1)/2}. \tag{22}\] This is simply the number of misclassified pairs (in the sense of being assigned to the same cluster when the converse is true or vice versa) divided by the total number of pairs. The results are less straightforward: curiously, the clustering error does not generally vary monotonically with the variance of the kernel estimator, but nonetheless in six out of eight cases q-GRFs provide equally good or better results. ### Kernel regression for node attribute prediction Lastly, we consider the problem of kernel regression on a triangular mesh graph. Each node is associated with a normal vector \(\mathbf{v}^{(i)}\) (equal to the mean of the normal vectors of its surrounding faces). We consider five meshes of different sizes, available from (Dawson-Haggerty, 2023). We predict a random \(5\%\) split of the vectors \(\mathbf{v}^{(i)}\) ('test') from the remaining \(95\%\) ('train') using \[\widetilde{\mathbf{v}}^{(i)}\coloneqq\frac{\sum_{j}\mathbf{K}_{\text{lap}}^{(2)}(i,j)\mathbf{v}^{(j)}}{\sum_{j}\mathbf{K}_{\text{lap}}^{(2)}(i,j)}, \tag{23}\] where \(j\) sums over the training vertices. This is simply a linear combination of all the training node normal vectors, weighted by their respective kernel evaluations.
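A compact sketch of this predictor and its evaluation metric is given below (our own illustrative code; `K` would be the exact or (q-)GRF-estimated Gram matrix, and all names are ours):

```python
import numpy as np

def predict_normals(K: np.ndarray, V_train: np.ndarray,
                    train_idx: np.ndarray, test_idx: np.ndarray) -> np.ndarray:
    """Kernel regression of Eq. 23: each test normal is the kernel-weighted
    average of the training normals."""
    W = K[np.ix_(test_idx, train_idx)]           # kernel evaluations K(i, j)
    preds = W @ V_train                          # weighted sums of normals
    return preds / W.sum(axis=1, keepdims=True)  # normalise the weights

def angular_error(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean 1 - cos(theta) between predicted and ground-truth vectors."""
    cos = np.sum(pred * true, axis=1) / (
        np.linalg.norm(pred, axis=1) * np.linalg.norm(true, axis=1))
    return float(np.mean(1.0 - cos))
```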
We compute the average angular error \(1-\cos\theta\) between the prediction \(\widetilde{\mathbf{v}}^{(i)}\) and groundtruth \(\mathbf{v}^{(i)}\) across the test set, comparing the result when \(\mathbf{K}_{\text{lap}}^{(2)}\) is approximated with GRFs and q-GRFs with \(m=6\) random walks at a termination probability \(p=0.5\). The regulariser is \(\sigma=0.1\). q-GRFs enjoy lower estimator variance so consistently give better predictions of the missing vectors. \begin{table} \begin{tabular}{l|c|c c} \hline \hline Graph & \(N\) & \multicolumn{2}{c}{Clustering error, \(E_{c}\)} \\ & & GRFs & q-GRFs \\ \hline karate & 34 & 0.11 & **0.05** \\ databases & 1046 & 0.17 & **0.11** \\ polbooks & 105 & **0.28** & **0.28** \\ dolphins & 62 & 0.40 & **0.38** \\ football & 115 & 0.10 & **0.09** \\ citeseer & 3300 & 0.02 & **0.01** \\ eurosis & 1285 & **0.15** & 0.19 \\ eu-core & 1005 & **0.04** & 0.05 \\ \hline \hline \end{tabular} \end{table} Table 2: Errors in kernelised \(k\)-means clustering when approximating the Gram matrix with q-GRFs and GRFs. Lower is better. \begin{table} \begin{tabular}{l|c|c c} \hline \hline Graph & \(N\) & \multicolumn{2}{c}{Pred error, \(1-\cos\theta\)} \\ & & GRFs & q-GRFs \\ \hline cylinder & 210 & 0.104(1) & **0.101(1)** \\ teapot & 480 & 0.0531(5) & **0.0493(5)** \\ idler-riser & 782 & 0.0881(6) & **0.0852(5)** \\ busted & 1941 & 0.00690(4) & **0.00661(4)** \\ torus & 4350 & 0.00131(1) & **0.00120(1)** \\ \hline \hline \end{tabular} \end{table} Table 3: \(1-\cos\theta\) between true and predicted node vectors when approximating the Gram matrix with q-GRFs and GRFs. Lower is better. Brackets give one standard deviation. ## 5 Conclusion We have proposed a novel class of quasi-Monte Carlo graph random features (q-GRFs) for unbiased and efficient estimation of kernels defined on the nodes of a graph. We have proved that our new algorithm, which induces negative statistical correlations between the lengths of graph random walks via antithetic termination, enjoys better convergence properties than its regular predecessor (GRFs). This very often permits better performance in empirical tasks and for some applications the improvement is substantial. Our work ushers in further research in this new domain of quasi-Monte Carlo methods for kernels on combinatorial objects. It may be of broader interest, including for algorithms that sample random walks. ## 6 Broader impacts and limitations We envisage several possible **impacts** of our novel algorithm. First, graphs provide a natural way to describe systems characterised by complex biological interactions such as the proteins at the proteome scale or drugs in the body (Ingraham et al., 2019). Our novel QMC algorithm might be of translational impact in this **bioinformatics** setting. Antithetic termination is at its heart a procedure to improve the sampling efficiency of random walks, so it could also help mitigate the notoriously high **energy and carbon** cost of large models (Strubell et al., 2019). Lastly, we believe our results are of **intrinsic interest** as the first (to our knowledge) rigorously studied QMC scheme defined on a combinatorial object. They might spur further research in this new domain. Our work is **foundational** with no immediate direct negative societal impacts that we can see. However, it is important to note that increases in scalability afforded by GRF and q-GRF algorithms could amplify risks of graph-based machine learning, either from bad actors or as unintended consequences.
The work also has some **limitations** and natural **directions for future research**. First, although we have derived closed-form expressions for the kernel estimator variance with GRFs and q-GRFs in App. 8.3, they are still complicated functions of the spectra of the respective graph Laplacians. Understanding **what characterises graphs that particularly benefit** from antithetic termination (empirically, tree-like and planar graphs) is an important future direction. Moreover, our scheme only correlates walk lengths. A more sophisticated mechanism that **couples walk directions** might do even better. Lastly, further work is needed to fully understand the applicability of antithetic termination **beyond the GRF setting**. ## Relative contributions and acknowledgements IR devised the antithetic termination scheme, proved all theoretical results and ran the experiments in Secs 4.1, 4.2 and 4.4. KC provided crucial support throughout, particularly: running the clustering experiment in Sec. 4.3, showing how \(d\)-regularised Laplacian kernels can be used to approximate the graph diffusion process, and proposing to apply q-GRFs to the problems in Secs 4.2 and 4.4. AW gave important guidance and feedback on the manuscript. IR acknowledges support from a Trinity College External Studentship. AW acknowledges support from a Turing AI fellowship under grant EP/V025279/1 and the Leverhulme Trust via CFI. We thank Austin Tripp and Kenza Tazi for their thoughtful feedback on earlier versions of the text, and Michael Ren for his excellent suggestion to treat the negative definite property perturbatively.
2308.03581
Towards Controllable Natural Language Inference through Lexical Inference Types
Explainable natural language inference aims to provide a mechanism to produce explanatory (abductive) inference chains which ground claims to their supporting premises. A recent corpus called EntailmentBank strives to advance this task by explaining the answer to a question using an entailment tree \cite{dalvi2021explaining}. They employ the T5 model to directly generate the tree, which can explain how the answer is inferred. However, it lacks the ability to explain and control the generation of intermediate steps, which is crucial for the multi-hop inference process. In this work, we focus on proposing a controlled natural language inference architecture for multi-premise explanatory inference. To improve control and enable explanatory analysis over the generation, we define lexical inference types based on Abstract Meaning Representation (AMR) graph and modify the architecture of T5 to learn a latent sentence representation (T5 bottleneck) conditioned on said type information. We also deliver a dataset of approximately 5000 annotated explanatory inference steps, with well-grounded lexical-symbolic operations. Experimental results indicate that the inference typing induced at the T5 bottleneck can help T5 to generate a conclusion under explicit control.
Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, Andre Freitas
2023-08-07T13:37:05Z
http://arxiv.org/abs/2308.03581v1
# Towards Controllable Natural Language Inference through Lexical Inference Types ###### Abstract Explainable natural language inference aims to provide a mechanism to produce explanatory (abductive) inference chains which ground claims to their supporting premises. A recent corpus called EntailmentBank strives to advance this task by explaining the answer to a question using an entailment tree (Dalvi et al., 2021). They employ the T5 model to directly generate the tree, which can explain how the answer is inferred. However, it lacks the ability to explain and control the generation of intermediate steps, which is crucial for the multi-hop inference process. In this work, we focus on proposing a controlled natural language inference architecture for multi-premise explanatory inference. To improve control and enable explanatory analysis over the generation, we define lexical inference types based on Abstract Meaning Representation (AMR) graphs and modify the architecture of T5 to learn a latent sentence representation (T5 bottleneck) conditioned on said type information. We also deliver a dataset of approximately 5000 annotated explanatory inference steps, with well-grounded lexical-symbolic operations. Experimental results indicate that the inference typing induced at the T5 bottleneck can help T5 to generate a conclusion under explicit control. ## 1 Introduction Explanation-based Natural Language Inference (NLI) aims to provide a mechanism to produce explanatory (abductive) inference chains which ground claims to their supporting premises (Thayaparan et al., 2020). This inference process allows a systematic way to support the construction of a step-wise explanation hierarchy, via a particular type of textual entailment which encodes claim-premise relationships at different levels of abstraction. This type of inference is particularly relevant in scientific domains, where reasoning needs to be grounded on sets of known facts and plausible and verifiable abstraction steps. One common way of organising these explanatory chains is with the support of entailment trees (Figure 1), a hierarchical structure linking multiple layers of multiple premises to claims. Figure 1: Example of entailment tree. Three steps (1, 2, 3) are present from bottom to top. Recent datasets such as the EntailmentBank (Dalvi et al., 2021) provide curated collections of such entailment trees. Although large-scale pretrained language models, such as T5 (Raffel et al., 2020), have been a fundamental component of explanation-based NLI models due to their transferability across different NLI tasks (Thayaparan et al., 2020), the controllability and step-wise explainability of their internal reasoning have not been fully addressed. For example, Dalvi et al. (2021) deploy T5 as the baseline for generating an entailment tree given a set of premises directly, which can explain how the final conclusion is inferred but cannot explain how the intermediate steps are generated. In this case, multi-hop explanation-based NLI based on T5 cannot be fully controlled and explained, which is also a potential source of semantic drift, namely when one or more inference steps do not lead to the inferred conclusion. Delivering a step-wise explainable NLI from a language model requires mechanisms which can define a quasi-symbolic inference behaviour from lexical patterns. For example, looking at each premise-claim step in Figure 1, there are clear patterns of symbolic inference behaviour such as substitution
Those lexical inference patterns (named here inference types) reflect a step-wise/localised semantic change that can guide a T5-based model to perform explainable inference operations. More importantly, we may control the inference process by manipulating those types at each inference step. For example, given two premises: _milk is a kind of liquid_ and _liquids can flow_, we can infer the conclusion _milk can flow_ by substituting _liquid_ with _milk_ or _both milk and liquids can flow_ by conjunction, when the inference types are _argument substitution_ and _conjunction_, respectively. To deliver an explainable and controllable inference process at each step of the entailment tree, this work focuses on the task of generating a next-step conclusion given two premises. We define a set of universal lexical inference types based on symbolic transformations over explanatory sentences grounded on AMR Banarescu et al. (2013). By incorporating these implicit lexical transformation patterns behind inference data as explicit labels, the reasoning process can be better controlled. To incorporate inference type information into the language model, we explore T5 as an inference model with latent sentence representation (named T5 bottleneck) as the semantic information can be controlled over the sentence space Hu et al. (2017); Logeswaran et al. (2018). More importantly, the sentence space can be carefully designed in a disentangled manner and optimized by joint loss terms Bao et al. (2019); John et al. (2019) to better guide the inference behaviour in the Encoder and the generation of the Decoder. In summary, this work proposes a mechanism to induce controllable inference behaviour for explanation-based NLI and comprises two contributions: (1) We define a set of fine-grained lexical inference types which reflect quasi-symbolic premises-claim entailment mechanisms. Said type information is injected into the language model using T5 bottleneck, a modification of the original architecture to elicit latent sentence representations. This modification preserves most of the original model's generation capabilities while allowing a more explicit internal projection of the explanatory sentences, which has the potential to better guide the inference and generation processes by manipulating the latent spaces. (2) a systematically annotated corpus of lexical inference types over entailment trees in EntailmentBank corpus Dalvi et al. (2021). ## 2 Related work Multi-step textual entailmentMulti-step textual entailment refers to the generation of reasoning mechanisms that can interpret how an answer is inferred by given pieces of step-wise evidence (as shown in Figure 1). Clark et al. (2020) first framed the use of transformer-based approaches, such as RoBERTa Liu et al. (2019), as a soft NLI reasoner. Based on their work, Tafjord et al. (2020) introduced ProofWriter, a model which is built on T5 for generating an answer to a question, taking into account facts and rules. Both works aim to control the inference process by natural language rules. E.g., given a fact: _Erin is young_ and a rule: _if someone is young then they are big_, a conclusion can be inferred _Erin is big_. Recent work Dalvi et al. (2021) focused on more natural and complex inference patterns where abstraction, specification, generalisation, and rule-based deduction are included in their corpus (the EntailmentBank dataset). Their model, EntailmentWriter (T5), can directly generate entailment trees to explain the inference process. 
However, the proposed model does not have specific mechanisms for step-wise symbolic control. In this work, we propose to control the inference process by guiding its generation through explanatory lexical inference types. _Latent sentence embedding._ Latent sentence embeddings are widely deployed in the AutoEncoder setup, such as Variational Kingma and Welling (2013), Denoising Vincent et al. (2008), and Adversarial Makhzani et al. (2016). Due to their explainability and controllability, they have been applied in various tasks, including story generation Fang et al. (2021), dialogue generation Zhao et al. (2017), text style transfer John et al. (2019); Shen et al. (2020), and text paraphrasing Bao et al. (2019), among others. However, those models are less explored in the NLI task. As a first step, this work introduces the T5 bottleneck and evaluates it on the NLI task. Exploring the utilization of sentence embeddings to enhance reasoning and generation is a novel and less-explored avenue Lee et al. (2019). One potential approach is to focus on learning disentangled sentence embeddings in which different semantic information can be separated Bao et al. (2019); John et al. (2019); Zhang et al. (2022, 2023); Carvalho et al. (2023). This separability can precisely control the generation process in the Decoder, which can avoid the semantic drift issue. Meanwhile, the inference process in the Encoder can potentially be optimized by incorporating multi-task loss terms that constrain the latent space [1] (e.g., predicting the inference behaviour according to inference types). Therefore, this work proposes incorporating the inference type information into the language model at the sentence level through the T5 bottleneck. ## 3 Defining lexical inference types Valentino et al. (2021) have demonstrated that stepwise explanation-based natural language inference cannot be directly framed as pure logical reasoning. Explanatory chains, while looking plausible at first inspection, commonly have subtler incompleteness and consistency problems from a logical point of view. At the same time, explanatory chains correspond to definable inference patterns and localised symbolic operations over the sentence structure. Motivated by this middle ground between logical representations and lexico-semantic inference patterns, in this section we introduce a set of granular inference types grounded on the abstract meaning representations of explanatory sentences. Based on the inference types described in Dalvi et al. (2021), we contribute in this work the derivation of a more granular set of sentence-level lexico-semantic inference types. In order to systematically ground these sentence-level inference types, Abstract Meaning Representation (AMR) graphs Banarescu et al. (2013) are used as a mechanism to precisely define the symbolic operations entailed in the multi-hop inference process - linking the transformations from premises to the conclusion through these granular symbolic operations. Please note that AMR is not used as a representation mechanism in the proposed architecture, but only to precisely ground these symbolic operations within a well-defined semantic representation structure. Table 1 describes the original inference types from the EntailmentBank corpus (which were more informally defined), contrasting them to the corresponding AMR-grounded and fine-grained operation inference types.
The AMR formalisation of lexical inference patterns allows for the definition of a more granular inference typing system, grounded on localised symbolic operations over semantic representations of sentences. Next, we define each lexico-semantic inference type and the corresponding symbolic forms.

**Substitution.** The substitution category refers to obtaining a conclusion by replacing a predicate/argument term from one premise with a predicate/argument term from the other premise. Possible variations of this category include _argument (ARG) substitution_, _predicate (PRED) substitution_, and _frame (PRED+ARG) substitution_. In this category, most entailment relations are straightforward: one premise is used to connect two terms, which are usually linked by _is a kind of_, _is a part of_, _is a source of_, _means_, _is_, _are_, etc. This can be symbolically represented as a subgraph substitution operation over the premise graphs, as illustrated in Figure 2. The _PRED substitution_ category works in a similar manner, but replaces a predicate term; the two predicates are usually linked by patterns such as: _\(v_{1}\) is a kind of \(v_{2}\)_, _to \(v_{1}\) something means to \(v_{2}\) something_, _\(v\) something has a negative impact on that something_, etc. The _frame (PRED+ARG) substitution_ category combines both previous categories by replacing a frame (predicate subgraph) of one of the premises with one from the other premise.

Figure 2: AMR argument substitution.

**Further specification and conjunction.** The _further specification_ or _conjunction_ categories allow for obtaining a conclusion by joining both premises. This includes _ARG insertion_, _frame insertion_ and _frame conjunction_. In the case of _ARG insertion_, the conclusion is obtained by connecting an argument from one of the premises to a frame of the other, as illustrated in Figure 3. As for _frame conjunction/disjunction_, the conclusion is obtained by joining the premise graphs through a conjunction/disjunction node (_and_) or (_or_).

Figure 3: AMR argument insertion.

**Inference from rule.** The _inference from rule_ category from Dalvi et al. (2021) comprises a specific case of insertion or substitution categorised as _conditional frame insertion/substitution_, where a frame is inserted or replaced as an argument of a premise following a conditional path in the other premise, as illustrated in Figure 4.

Figure 4: AMR conditional frame insertion.

**Generalisation and property inheritance.** The inference type _infer class from properties_ has been re-categorised as _ARG or PRED generalisation_, where a new _:domain_ relation frame is created if both premise graphs differ by a single predicate/argument term, as illustrated in Figure 5. _Property inheritance_, on the other hand, is a special case of _ARG substitution_, where one of the premises describes a _is made of_ relationship between the entity in the other premise and its replacement, as illustrated in Figure 6.

Figure 5: AMR argument generalisation.

Figure 6: AMR argument substitution (property inheritance).

| Original inference type | AMR op inference type | Prop. | Example entailment relation |
| --- | --- | --- | --- |
| Substitution | ARG substitution (ARG-SUB) | 19% | P1: a scar on the knee is a kind of scar; P2: a scar is an acquired characteristic; C: a scar on the knee is an acquired characteristic |
| Substitution | PRED substitution (PRED-SUB) | 5% | P1: food contains nutrients and energy for living things; P2: to contain something can mean to store something; C: food stores nutrients and energy for living things |
| Substitution | Frame substitution (FRAME-SUB) | 20% | P1: the formation of diamonds requires intense pressure; P2: the pressure is intense deep below earth's crust; C: the formation of diamonds occurs deep below the crust of the earth |
| Inference from Rule | Conditional frame insertion/substitution (COND-FRAME) | 12% | P1: if something is renewable then that something is not a fossil fuel; P2: wood is a renewable resource; C: wood is not a fossil fuel |
| Further Specification | ARG insertion (ARG-INS) | 18% | P1: solar energy comes from the sun; P2: solar energy is a kind of energy; C: solar energy is a kind of energy that comes from the sun |
| Further Specification | Frame conjunction (FRAME-CONJ) | 6% | P1: photosynthesis stores energy; P2: respiration releases energy; C: photosynthesis stores energy and respiration releases energy |
| Infer Class | ARG/PRED generalisation (ARG/PRED-GEN) | 1% | P1: rock is a hard material; P2: granite is a hard material; C: granite is a kind of rock |
| Property Inheritance | ARG substitution, property inheritance (ARG-SUB-PROP) | 0.4% | P1: blacktop is made of asphalt concrete; P2: asphalt has a smooth surface; C: a blacktop has a smooth surface |
| Unknown | Example (EXAMPLE) | 0.9% | P1: a shelter can be used for living in by raccoons; P2: some raccoons live in hollow logs; C: an example of a shelter is a raccoon living in a hollow log |
| Unknown | If... then... (IFT) | 0.8% | P1: an optical telescope requires visible light for human to use; P2: clouds / dusts block visible light; C: if there is clouds or dusts, then the optical telescope cannot be used |
| Unknown | Others (UNK) | 16% | P1: spiral is a kind of shape; P2: galaxies can be classified by shape; C: spiral galaxy is a type of galaxy |

Table 1: Examples of inference types. Abbreviations, used throughout the paper, are given in parentheses; Prop. represents the proportion of each category. The corpus size in our task is 5134.

Finally, the _other inference types_ category includes _example_, _if-then_, and _others_. Those inference types are defined according to the key lexical characteristic of the conclusion, as systematic AMR transformations which could be applied without rephrasing the underlying explanatory sentences could not be determined.
For example, the _if-then_ category refers to a conclusion with an _if ... then_ sentence structure that neither premise contains, and the _example_ category to a conclusion with the signalling word _example_, which does not appear in either premise. As for the other unknown categories, we do not further specify them, as they either require knowledge outside of the scope of the premises or do not have a consistent symbolic transformation expression. An additional subtype called _premise copy_ was included for the cases where the conclusion has the same graph as one of the premises.

**Annotation procedure.** Annotation was performed manually for a set of 5000 entailment triples (two premises, one conclusion) from the EntailmentBank corpus (Dalvi et al., 2021), according to the algorithm in Appendix A. Graph subset relations and root matching are relaxed for non-argument (:ARG*, :op*) edges, meaning relations such as _:manner_ or _:time_ can be ignored for this purpose. Two annotators were used in this process, on a consensus-based annotation scheme where a first annotator defined the transformations and a second annotator verified and refined the annotation scheme, in two iterations.

## 4 Controlled inference architecture

In this section, we describe the methodological framework behind the construction of the T5 sentence bottleneck, which is illustrated in Figure 7. The inference types are injected at the token level, either at the input embeddings of the encoder or at the decoder embeddings (see Section 5.2). The first case provides typing information to the sentence embeddings, while the latter provides a contextual bias for token generation by the decoder.

**Sentence embedding.** While designing the sentence bottleneck, we compare the four most frequently used methods for transforming token embeddings into sentence embeddings across current NLP tasks, illustrated in the sketch after this list: (1) Mean pooling: calculating the mean of each dimension over all token embeddings and feeding the resulting vector into a multi-layer perceptron to obtain the sentence embedding (used in SentenceBERT (Reimers and Gurevych, 2019)). (2) Multi-layer perceptron (MLP): applying an MLP to perform dimensionality reduction on each token embedding and concatenating the results as the sentence embedding, which can be described as:

\[z=\text{concat}\Big{[}\text{MLP}_{1}(x_{1});...;\text{MLP}_{T}(x_{T})\Big{]}\]

where \(\text{MLP}_{i}(x_{i})\) represents the \(i\)-th neural network applied to the input representation of token \(x_{i}\), \(z\) is the latent sentence representation, and \(T\) is 70 in the experiments. (3) Multi-head attention: feeding the token embeddings into multi-head attention and taking the first output embedding as the sentence embedding (Montero et al., 2021), described by the next equation:

\[z=\text{MultiHead}\left(XW^{q},XW^{k},XW^{v}\right)[0]\]

where \(X=[x_{1},...,x_{T}]\) and \(W^{q}\), \(W^{k}\), and \(W^{v}\) are the weights for learning the \(q\), \(k\), \(v\) embeddings in self-attention, respectively. (4) Sentence T5: reloading the weights from the pre-trained Sentence T5 (Ni et al., 2021) as the sentence encoder. This process is similar to mean pooling; however, the pre-trained sentence representations may deliver better performance.
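For concreteness, the following is a minimal PyTorch sketch of pooling strategies (1)-(3); class names, the Tanh projection, and the default sizes are illustrative assumptions, not the paper's released code. Strategy (4) instead reuses pre-trained Sentence T5 weights.

```python
import torch
import torch.nn as nn

class MeanPoolBottleneck(nn.Module):
    """Strategy (1): mean over token embeddings, then an MLP projection."""
    def __init__(self, d_model: int, d_latent: int = 768):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(d_model, d_latent), nn.Tanh())

    def forward(self, token_embs):                 # (batch, T, d_model)
        return self.proj(token_embs.mean(dim=1))   # (batch, d_latent)

class PerTokenMLPBottleneck(nn.Module):
    """Strategy (2): per-token dimensionality reduction, then concatenation."""
    def __init__(self, d_model: int, d_small: int, T: int = 70):
        super().__init__()
        self.mlps = nn.ModuleList([nn.Linear(d_model, d_small) for _ in range(T)])

    def forward(self, token_embs):                 # (batch, T, d_model)
        parts = [mlp(token_embs[:, i]) for i, mlp in enumerate(self.mlps)]
        return torch.cat(parts, dim=-1)            # (batch, T * d_small)

class MHAPoolBottleneck(nn.Module):
    """Strategy (3): self-attention over tokens; keep the first output position."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, token_embs):                 # (batch, T, d_model)
        out, _ = self.attn(token_embs, token_embs, token_embs)
        return out[:, 0]                           # (batch, d_model)
```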
**Decoder connection.** We consider four common strategies for injecting sentence embeddings into the decoder. (1) Cross-attention input embedding (CA Input): reconstructing the token embeddings from the sentence representation and feeding them directly into the cross-attention layers of the decoder. This mechanism is described in the next equation, where \(\hat{Y}\) is the reconstruction of the decoder input sequence \(Y=[y_{1},...,y_{K}]\):

\[\hat{Y}=\text{MultiHead}\left(YW^{q},\text{MLP}(z)W^{k},\text{MLP}(z)W^{v}\right)\]

(2) Cross-attention KV embedding (CA KV): instead of reconstructing the token embeddings, directly learning the Key and Value used in the cross-attention mechanism. This is the strategy used in Optimus (Li et al., 2020), formalised in the following equation, where \(\text{MLP}_{k}\) and \(\text{MLP}_{v}\) are neural layers for learning the \(k\) and \(v\) embeddings:

\[\hat{Y}=\text{MultiHead}\Big{(}YW^{q},\text{MLP}_{k}(z),\text{MLP}_{v}(z)\Big{)}\]

(3) Non-cross-attention input connection (NCA Input): reconstructing the token embeddings and adding them element-wise to the word embeddings and positional embeddings of the decoder, similarly to Fang et al. (2021). (4) Non-cross-attention output connection (NCA Output): reconstructing the token embeddings and adding them to the output embeddings of the decoder.

## 5 Experiments

### 5.1 Analysing the T5 bottleneck baselines

In the experimental analysis, we analyze the performance of the T5 bottleneck on the target NLI task (conclusion generation from premises). We use the base version as the baseline for both T5 and the sentence bottleneck T5. The sentence representation dimension is set to 768 for all experiments.

**Syllogism-style NLI.** Firstly, we compare the effect of the different architectures on the performance of the sentence bottleneck T5, evaluating each quantitatively. Table 3 reports the test losses. From it, we can observe that (1) pooling is always better than MLP and MHA; (2) cross-attention always outperforms non-cross-attention; (3) pre-trained pooling is better than non-pre-trained pooling; and (4) reconstructing the token embeddings as the input of cross-attention is better than reconstructing the Key and Value weights. Therefore, the best combination uses the pre-trained Sentence T5 as the sentence encoder and reconstructs the token embeddings as the input of cross-attention **(conclusion 1)**. We also record the loss of T5 for comparison purposes.

Additionally, we qualitatively evaluate the models' performance on this task, implementing T5 and the sentence bottleneck T5 (pre-trained Sentence T5 and CA Input) as our baselines. As illustrated in Table 2, both baselines can generate proper conclusions, although they are not identical to the gold conclusions; e.g., _food is an essential ingredient for a plant to grow_ from the original T5, compared with the gold conclusion _a plant requires food to grow_. Additional examples can be found in Appendix C.

To assess the quality of the latent sentence representations, we also compare the T5 bottleneck with Bert(base) (Devlin et al., 2019) and pretrained Optimus (large VAE; Li et al., 2020) as dense encoders for SCAR, evaluating them via mean average precision (MAP). These encoders are not fine-tuned on the target dataset (WorldTree; Jansen et al., 2018). Table 4 reports the performance of SCAR with the different dense encoders. We can observe that the T5 bottleneck outperforms Optimus at representing the explanatory sentences, indicating that it has the potential to deliver better inference and generation via the sentence space.

| Dense Encoder | t | MAP (×100) |
| --- | --- | --- |
| Bert(base, 768) | 1 | 38.62 |
| | 2 | 39.34 |
| | 3 | 38.88 |
| | 4 | 38.71 |
| T5 bottleneck(base, 768) | 1 | 34.47 |
| | 2 | 35.28 |
| | 3 | 34.50 |
| | 4 | 34.47 |
| Optimus(768) | 1 | 28.21 |
| | 2 | 29.35 |
| | 3 | 28.35 |
| | 4 | 28.27 |

Table 4: Evaluating sentence embeddings on SCAR, where t represents the depth of the entailment tree.
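As a concrete illustration of the premises-to-conclusion setup used in this subsection, the sketch below runs an off-the-shelf T5 checkpoint via the HuggingFace transformers API; the vanilla "t5-base" checkpoint and the beam-search settings are assumptions for illustration and will not reproduce the fine-tuned baselines above.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tok = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

p1 = "a scar on the knee is a kind of scar"
p2 = "a scar is an acquired characteristic"
# premises separated by </s>, following the input format used in this paper
inputs = tok(f"{p1} </s> {p2}", return_tensors="pt")

out = model.generate(**inputs, num_beams=4, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```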
### 5.2 Analysing the lexical inference types

In this experiment, we evaluate the impact of the granular inference types (using the newly annotated EntailmentBank corpus) on the performance of the baseline models, including T5(small), Bart(base) (Lewis et al., 2019), the T5 bottleneck, and Optimus (Li et al., 2020). We focus on three target questions:

**1.** Can the inference type information help model training and inference? If this is the case, a model trained with the inference types should achieve higher scores (such as BLEU (Papineni et al., 2002) and BLEURT (Sellam et al., 2020)) on the test set, and a lower test loss.

**2.** Can the inference type information be used to control the inference? If the inference type rules are learned by the model, then during inference we could potentially control the generated conclusion, while keeping it valid, by modifying the inference type.

**3.** Can the model predict the correct inference type? If the inference type information is supporting and consistent with the model's inference, the model should be able to predict the correct inference type given the paired premises and conclusion.

We consider three ways to inject inference-type information into the architecture. These three mechanisms are described below (see also the sketch after this list), where p1, p2, and con are the premises and the conclusion, respectively, and </s> is a special token in T5 for separating different sentences.

**1.** The inference type as a prefix for the premises at the Encoder. This can be used to control the generation of conclusions by modifying the inference type during inference. The input format of the encoder is: _the inference type is [type] </s> p1 </s> p2_

**2.** The inference type as a prefix for the conclusion in the Decoder, which is the common way to control the generation of transformer-based models (Li and Liang, 2021; Qian et al., 2022). The input format of the decoder is: _</s> the inference type is [type]. con_

**3.** The inference type at the end of the conclusion in the Decoder, which can help us check whether T5 generates correct inference types. The input format of the decoder is: _</s> con. the inference type is [type]_
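A minimal sketch of the three injection formats as plain string construction (the function names are illustrative; in practice these strings are tokenized and fed to the encoder or decoder as described above):

```python
def encoder_prefix(p1: str, p2: str, itype: str) -> str:
    # (1) inference type prefixed to the premises on the Encoder side
    return f"the inference type is {itype} </s> {p1} </s> {p2}"

def decoder_prefix(con: str, itype: str) -> str:
    # (2) inference type prefixed to the conclusion on the Decoder side
    return f"</s> the inference type is {itype}. {con}"

def decoder_suffix(con: str, itype: str) -> str:
    # (3) inference type appended after the conclusion on the Decoder side
    return f"</s> {con}. the inference type is {itype}"

print(encoder_prefix("milk is a kind of liquid", "liquids can flow", "ARG-SUB"))
# -> the inference type is ARG-SUB </s> milk is a kind of liquid </s> liquids can flow
```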
**Quantitative evaluation.** We first quantitatively evaluate model performance based on five metrics: test loss (cross-entropy), perplexity, BLEU, BLEURT, and the cosine similarity of Google Sentence T5. As illustrated in Table 5, all baselines with inference type information always have lower test losses and perplexities (PPLs), which means the inference types can help the training process of the model **(conclusion 2)**. Moreover, for all baselines, feeding the inference types into the model achieves a consistently positive impact on the BLEU, cosine, and BLEURT metrics, which indicates that the inference types can help the NLI process **(conclusion 3)**. We also conducted ablation studies on the inference types, which are included in the supplementary material (Appendix G).

Additionally, we observe that the T5 sentence bottleneck baseline has a lower BLEURT score. We therefore qualitatively evaluate its inference performance by inspecting cases with negative scores, and find that some conclusions with lower scores are semantically equivalent to the gold ones; for example, the predicted conclusion _burning coal and burning oil are two sulfur dioxide emitters_ achieves a score of -0.52 against the gold conclusion _burning coal and oil emit sulfur dioxide_. Therefore, we argue that BLEURT scores are not always reliable, especially for evaluating the correctness of the entailment in this task **(conclusion 4)**. Further examples are included in Table 10 in Appendix D.

**Controllability evaluation.** Secondly, we qualitatively evaluate the controllability of conclusion generation by modifying the inference type before feeding it into the Encoder. As illustrated in Tables 6 and 7, we can observe that the conclusion can be controlled according to the different inference types, which indicates that our inference type modelling approach can improve inference control **(conclusion 5)**. For example, when the type is ARG substitution (ARG-SUB), the model can generate _the blacktop is made of smooth surface_ by replacing the argument _asphalt concrete_ with _smooth surface_, and the conclusion changes to _asphalt and blacktop have the same surface_ when the inference type is conjunction (FRAME-CONJ). Table 7 also illustrates how the sentence bottleneck can incorporate contextual information relevant to the type transformation, for example generating _a pumpkin is a kind of plant_ when _plant_ is not in the premises. More examples are provided in Appendix E.

**Inference type prediction.** Lastly, we check the prediction of the inference type from the original T5(base). In terms of qualitative evaluation, the analysis of prediction errors can also help us understand how T5 infers a conclusion. In Table 8, we can observe that the predicted inference type is highly related to the conclusion. For example, the predicted (PRED) conclusion _a classroom is usually measured in a si unit for measuring length_ can be obtained by replacing _meter_ with _a si unit for measuring length_, which is also a reasonable conclusion. More examples are provided in Appendix F. As for the quantitative evaluation, we calculate the accuracy of inference-type prediction. Specifically, during inference, instead of feeding a start token </s> to trigger the Decoder, we feed the conclusion into the Decoder directly, so that it can try to generate the inference type based on the entailment representation learned from cross-attention. The accuracy is 40%, which indicates that the model cannot accurately predict the inference type from the Decoder-side information alone. This result is consistent with the results from Table 5, in that the best way to inject the inference type into the model is through the encoder instead of the decoder.

## 6 Conclusion

In this work, we focus on the development of a model for multi-hop explanatory natural language inference. At the center of the architecture is the introduction of two key modelling mechanisms: (i) the injection of granular inference types and (ii) the introduction of a T5 sentence bottleneck. The combination of these mechanisms allows effective inference control at a cost in fluency performance.
We concentrate on the task of generating a conclusion based on two premises. In the experiments, we first modify the architecture of the original T5 into the sentence bottleneck T5, as the latent sentence representation has more potential for controlling T5 sentence generation. From the qualitative and quantitative evaluation of both architectures, we found that both can perform well on our task. Secondly, we argue that the lexical inference types can help model training and control: we empirically observe improvements in controlling the conclusion generation, as well as better training results. In the future, we will explore the reasoning process via the T5 sentence bottleneck, where the structure (such as AMR) and content (such as Bag-of-Words) information are disentangled in its latent space. The disentangled latent space has the potential to better guide the inference process of the Encoder and the generation of the Decoder.

## 7 Limitations

Evaluation metrics: as described in Table 5, metrics such as BLEURT cannot accurately reflect the correctness of a generated conclusion. Therefore, how to quantitatively evaluate the correctness of predicted conclusions from the inference side should be considered in the future.
2302.07964
On Rank Energy Statistics via Optimal Transport: Continuity, Convergence, and Change Point Detection
This paper considers the use of recently proposed optimal transport-based multivariate test statistics, namely rank energy and its variant the soft rank energy derived from entropically regularized optimal transport, for the unsupervised nonparametric change point detection (CPD) problem. We show that the soft rank energy enjoys both fast rates of statistical convergence and robust continuity properties which lead to strong performance on real datasets. Our theoretical analyses remove the need for resampling and out-of-sample extensions previously required to obtain such rates. In contrast the rank energy suffers from the curse of dimensionality in statistical estimation and moreover can signal a change point from arbitrarily small perturbations, which leads to a high rate of false alarms in CPD. Additionally, under mild regularity conditions, we quantify the discrepancy between soft rank energy and rank energy in terms of the regularization parameter. Finally, we show our approach performs favorably in numerical experiments compared to several other optimal transport-based methods as well as maximum mean discrepancy.
Matthew Werenski, Shoaib Bin Masud, James M. Murphy, Shuchin Aeron
2023-02-15T22:02:09Z
http://arxiv.org/abs/2302.07964v1
On Rank Energy Statistics via Optimal Transport: Continuity, Convergence, and Change Point Detection

###### Abstract

This paper considers the use of recently proposed optimal transport-based multivariate test statistics, namely rank energy and its variant the soft rank energy derived from entropically regularized optimal transport, for the unsupervised nonparametric change point detection (CPD) problem. We show that the soft rank energy enjoys both fast rates of statistical convergence and robust continuity properties which lead to strong performance on real datasets. Our theoretical analyses remove the need for resampling and out-of-sample extensions previously required to obtain such rates. In contrast, the rank energy suffers from the curse of dimensionality in statistical estimation and moreover can signal a change point from arbitrarily small perturbations, which leads to a high rate of false alarms in CPD. Additionally, under mild regularity conditions, we quantify the discrepancy between soft rank energy and rank energy in terms of the regularization parameter. Finally, we show our approach performs favorably in numerical experiments compared to several other optimal transport-based methods as well as maximum mean discrepancy.

+ Footnote †: The first two student authors contributed equally. \({}^{(\diamond)}\) Department of Computer Science, Tufts University \({}^{(\dagger)}\) Department of Electrical and Computer Engineering, Tufts University \({}^{(\ddagger)}\) Department of Mathematics, Tufts University

## 1 Introduction

The problem of detecting changes or transitions in multivariate time series data \((X_{t})\subset\mathbb{R}^{d}\), referred to henceforth as _change point detection (CPD)_, is a central problem in a number of scientific domains [33, 52, 1, 50, 12, 13, 20]. It entails estimating time points \((\tau_{i})\) at which the underlying process that generates the data \((X_{t})\) changes in a meaningful way. Equivalently, the CPD problem amounts to partitioning the time series data into disjoint segments, with data in consecutive segments being statistically distinct. Specifically, we consider an _unsupervised_ setting in which no prior examples of change points are made available from which to learn.

Motivated by recent developments in multivariate goodness-of-fit (GoF) tests based on the theory of optimal transport (see [31] for a recent survey), we propose the use of the rank energy [23] and its numerically and sample-efficient variant, the soft rank energy [41], for performing CPD. As noted in [23, 22], there are several advantages to considering rank energy for GoF testing. First, it is distribution-free under the null, a property which we numerically observe to be approximately shared by the soft rank energy (see Figure 3) and which is lacking in other popular multivariate GoF measures such as the maximum mean discrepancy (MMD) [29], Wasserstein distances [51], and Sinkhorn divergences [26]. In the context of CPD, distribution-freeness potentially allows one to select a detection threshold independently of the underlying distribution. Furthermore, statistical testing based on rank energy is shown to be robust to outliers and has better power for heavy-tailed distributions [23]. Note that while one can consider other OT-rank based GoF measures, such as the rank MMD [23], Hotelling's \(T^{2}\) [22], and the soft rank MMD [41], in this paper we focus on the rank energy and the soft rank energy, and leave the analogous development for these cases to future investigation.
We make the following fundamental contributions in this paper, keeping in view the practical utility of these tests towards robust CPD.

1. **Wasserstein Continuity Properties of GoF Statistics**: Theorems 4.1 and 4.2 provide novel analytic insights into the rank energy and the soft rank energy which explain their behaviors in practice. We show that the soft rank energy is Lipschitz with respect to the Wasserstein-1 metric, while the rank energy fails to even be continuous. These properties translate to the smoothness (or lack thereof) of the GoF statistics in the proposed CPD algorithm with respect to small perturbations that are typical of real data.

2. **Convergence of Soft Rank Energy to Rank Energy**: In Theorem 4.4, under appropriate technical conditions, we provide an explicit convergence rate of the soft rank energy to the rank energy in terms of the regularization parameter. We also provide a non-asymptotic result under milder conditions in Theorem B.5. These results relax the conditions required to obtain convergence in existing work on the soft rank energy [41].

3. **Realistic and Fast Sample Convergence:** In Theorem 4.6 we establish the fast convergence rate of \(n^{-1/2}\) of the plug-in estimate of the soft rank energy to its population counterpart. This provides a non-asymptotic guarantee for the soft rank energy using the same setup as is used in the seminal work on rank energy [23]. Importantly, this allows our method to avoid using an out-of-sample extension as is required in [41], as well as to make the most of limited samples, since it does not require a secondary hold-out set of samples.

4. **Applications to CPD**: We numerically investigate and compare the performance of the soft rank energy with other OT-based and popular GoF statistics for CPD [12, 13, 2, 37]. Our results demonstrate the effectiveness of the soft rank energy, particularly when the data dimension \(d\) is large.

## 2 Overview of Change Point Detection

Consider a sequence of samples \((X_{t})\subset\mathbb{R}^{d}\), which may be finite or infinite, and assume that the sequence can be sequentially partitioned so that \(X_{1},...,X_{\tau_{1}}\sim P_{1}\), \(X_{\tau_{1}+1},...,X_{\tau_{2}}\sim P_{2}\), \(X_{\tau_{2}+1},...,X_{\tau_{3}}\sim P_{3}\), and so on. The distributions \(P_{i}\) and the change points \(\tau_{i}\) are not known in advance and must be uncovered. The goal of a CPD algorithm is to use the observed sequence \((X_{t})\) to output a sequence of predicted change point indices \((\hat{\tau}_{k})\) that is close to the true sequence \((\tau_{j})\). In addition to being unsupervised, we focus on the nonparametric setting in which the data generating distributions \((P_{j})\) are not assumed to belong to a parametric family of distributions.

One method to estimate change points is the "sliding window" approach [4, 13, 12], visualized in Figure 1, outlined in Algorithm 1, and sketched in code below. For each time \(t\in\{n,n+1,\ldots,T-n\}\), let \(z_{t}\) be a GoF statistic computed between the time-adjacent sets \(\{X_{t-n+1},...,X_{t}\}\) and \(\{X_{t+1},...,X_{t+n}\}\). Repeating this for every time \(t\) creates a sequence \((z_{t})\) from which a sequence \((\hat{\tau}_{k})\) of predicted change points can be extracted. For example, one can predict that \(\hat{\tau}\) is a change point when \(z_{\hat{\tau}}\) takes a large value or is a local maximizer within the sequence \((z_{t})\).
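To fix ideas, the following is a minimal sketch of the sliding-window procedure with the GoF statistic left abstract; the naive local-maximum rule below stands in for the thresholded peak search used in Section 5.

```python
import numpy as np

def sliding_window_cpd(X, n, gof_stat, eta):
    """X: (T, d) array; n: window size; gof_stat: callable taking two
    (n, d) windows and returning a scalar; eta: detection threshold."""
    T = len(X)
    z = np.full(T, -np.inf)
    for t in range(n, T - n + 1):
        # compare {X_{t-n+1},...,X_t} against {X_{t+1},...,X_{t+n}}
        z[t] = gof_stat(X[t - n:t], X[t:t + n])
    # predict a change point at local maxima of z that exceed eta
    peaks = [t for t in range(1, T - 1)
             if z[t] > eta and z[t] >= z[t - 1] and z[t] >= z[t + 1]]
    return z, peaks
```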
Since the predicted change points \((\hat{\tau}_{k})\) are extracted from the sequence \((z_{t})\), which is determined by a GoF statistic, the choice of GoF statistic makes a substantial difference in the quality of the predicted change points. The purpose of this paper is to argue theoretically and empirically that the soft rank energy (given in Definition 3.6) is a strong choice of statistic, due to its favorable theoretical and computational properties. Within the context of the sliding window approach, we discuss several ways to quantify how close the sequence \((\hat{\tau}_{k})\) is to \((\tau_{j})\), which are made precise in Section 5. Heuristically, a sequence \((\hat{\tau}_{k})\) is close to \((\tau_{j})\) if it has the following two properties.

1. **(High True Change Point Detection Rate)** For most \(\tau_{j}\), there should be a point \(\hat{\tau}_{k}\) close to it. For a pre-specified tolerance \(\xi>0\), we say that the change point \(\tau_{j}\) is detected if there is a \(\hat{\tau}_{k}\) such that \(|\tau_{j}-\hat{\tau}_{k}|\leq\xi\); otherwise it is missed. This requires the algorithm to identify and localize changes in the distribution.

2. **(Low False Alarm Rate)** For almost every \(\hat{\tau}_{k}\) there should be a point \(\tau_{j}\) such that \(|\hat{\tau}_{k}-\tau_{j}|\leq\xi\). A predicted change point \(\hat{\tau}_{k}\) that is far from every true change point \(\tau_{j}\) is considered a _false alarm_. This rules out algorithms which find true change points by proposing many spurious ones.

The CPD setting described above is idealized in two important ways. First, in practice there is often no sharp threshold where a shift in the sampling distribution occurs. The distribution may undergo a short phase transition, and one would like to register this as only a single change point instead of a change point at every time step during the transition [11]. Second, real data distributions may exhibit subtle fluctuations around a typical distribution, and only occasionally undergo meaningful transitions. This can render statistical tests which are _too_ powerful ineffective in practice, because one often does not want to register small fluctuations as change points.

Figure 1: \((X_{t})_{t=1}^{T}\) is time series data (gray) with several change points (dashed purple lines). The window of \(n\) samples between \(t\) and \(t+n\) is \(X_{t},...,X_{t+n}\).

The discrepancy between practical and theoretical change points should inform the design of a CPD algorithm. Any CPD algorithm should be _sensitive_ enough to identify potentially subtle shifts and capture all the true changes, while being _robust_ to insignificant fluctuations. In addition, one would like good sample convergence properties to mitigate the impact of statistical noise in the observed samples. In the standard GoF setting, these properties can be stated in terms of the power and confidence level of the test, in order to minimize missed true change points and false alarms, respectively [61]. For further discussion of specific approaches to CPD, see Section A in the appendix.

## 3 Background

### 3.1 Optimal Transport and Rank Energy

Let \(\mathcal{P}(\Omega)\) denote the space of probability measures over an open set \(\Omega\subseteq\mathbb{R}^{d}\), and let \(\mathcal{P}_{ac}(\Omega)\) be those measures which are absolutely continuous with respect to the Lebesgue measure on \(\Omega\) (i.e., those that admit a density function).
For two measures \(P,Q\in\mathcal{P}(\Omega)\), the optimal transport problem with squared Euclidean ground cost seeks an optimal _coupling_ \(\pi\) between the source distribution \(P\) and the target distribution \(Q\) via solving [55]

\[W_{2}^{2}(P,Q)\triangleq\min_{\pi\in\Pi(P,Q)}\int\frac{1}{2}\|x-y\|^{2}d\pi(x,y), \tag{1}\]

where \(\Pi(P,Q)\) is the set of joint probability measures on \(\mathcal{P}(\Omega\otimes\Omega)\) with marginals \(P\) and \(Q\). The connection between optimal transport and ranking can be understood starting with \(d=1\), where, when \(P\in\mathcal{P}_{ac}(\Omega)\) and \(Q=\mathrm{Unif}[0,1]\), the optimal plan is supported on \(\{(x,\mathrm{CDF}_{P}(x))\}\) [55] (\(\mathrm{CDF}_{P}(x)\) is the cumulative distribution function of \(P\)), which corresponds to a cyclically monotone rearrangement that in turn aligns with the natural ordering on \(\mathbb{R}\). In higher dimensions, as implicitly noted in the seminal papers extending the notion of ranks to higher dimensions via optimal transport [30, 14, 23], the _key geometric_ property of cyclical monotonicity is preserved: when \(P\in\mathcal{P}_{ac}(\Omega)\), by the Brenier-McCann theorem [7, 42] the optimal transport plans are supported on cyclically monotone sets, i.e., on \(\{(x,T(x)):x\in\mathrm{supp}(P)\}\) for a map \(T:\mathbb{R}^{d}\to\mathbb{R}^{d}\) which is the gradient of a convex function (hence cyclically monotone by a well-known theorem of Rockafellar [54]) and satisfies \((T\#P)[A]=P[T^{-1}(A)]\) for all measurable sets \(A\). This allows one to meaningfully interpret _multivariate ranks via optimal transport maps_ as corresponding to a cyclically monotone rearrangement with respect to a target measure \(Q\). Fixing the target measure \(Q\) to be \(\mathrm{Unif}([0,1]^{d})\) motivates the following definition of the _multivariate rank map_.

**Definition 3.1** ([23]).: Let \(P\in\mathcal{P}_{ac}(\Omega)\) and let \(Q=\mathrm{Unif}([0,1]^{d})\). The _(multivariate) rank map_ of \(P\) is defined as \(\mathtt{R}=\nabla\phi:\mathbb{R}^{d}\to\mathbb{R}^{d}\), where \(\phi\) is the convex function such that \(\nabla\phi\) optimally transports \(P\) to \(Q\).

Using this notion of rank, the rank energy is defined as follows.

**Definition 3.2** (Definition 3.2, [23]).: Let \(P_{X},P_{Y}\in\mathcal{P}_{ac}(\Omega)\) and let \(X,X^{\prime}\stackrel{{ i.i.d.}}{{\sim}}P_{X}\) and \(Y,Y^{\prime}\stackrel{{ i.i.d.}}{{\sim}}P_{Y}\). Let \(P_{\lambda}=\lambda P_{X}+(1-\lambda)P_{Y}\) denote the mixture distribution for any \(\lambda\in(0,1)\) and let \(\mathtt{R}_{\lambda}\) be the multivariate rank map of \(P_{\lambda}\) as in Definition 3.1. The _(population) rank energy_ is given by

\[\mathtt{RE}_{\lambda}(P_{X},P_{Y})^{2}\triangleq 2\mathbb{E}\left\|\mathtt{R}_{\lambda}(X)-\mathtt{R}_{\lambda}(Y)\right\|-\mathbb{E}\left\|\mathtt{R}_{\lambda}(X)-\mathtt{R}_{\lambda}(X^{\prime})\right\|-\mathbb{E}\left\|\mathtt{R}_{\lambda}(Y)-\mathtt{R}_{\lambda}(Y^{\prime})\right\|.\]

In [23] it is shown that the rank energy is (1) distribution-free under the null, (2) consistent against alternatives (under the alternative hypothesis, the probability of accepting the null hypothesis goes to zero as \(n\) goes to infinity), and (3) computationally feasible for \(n\) not too large. These properties make the rank energy a very attractive statistic for GoF testing.

### 3.2 Entropic Optimal Transport and Soft Rank Energy

In order to define the soft rank energy, we begin by introducing the entropically regularized OT (EOT) problem.
The _entropy-regularized_ version of (1) adds an additional term to the objective [18, 47, 28]. For \(\varepsilon>0\), the primal formulation of EOT is given by

\[\min_{\pi\in\Pi(P,Q)}\int\frac{1}{2}\|x-y\|^{2}\mathrm{d}\pi(x,y)+\varepsilon\mathrm{KL}(\pi\ ||\ P\otimes Q), \tag{2}\]

where

\[\mathrm{KL}(\pi\,||\,P\otimes Q)\triangleq\int\ln\left(\frac{d\pi(x,y)}{dP(x)dQ(y)}\right)d\pi(x,y).\]

Let \(\pi_{\varepsilon}\) denote the solution to (2). Extending the ideas in [19], [41] proposed the following.

**Definition 3.3** ([41]).: Let \(P\in\mathcal{P}_{ac}(\Omega)\) and \(Q=\mathrm{Unif}([0,1]^{d})\). Define the _entropic rank map_ via \(\mathtt{R}_{\varepsilon}(x)\triangleq\mathbb{E}_{Y\sim\pi_{\varepsilon}}\left[Y|X=x\right],\) the conditional expectation under the coupling \(\pi_{\varepsilon}\).

_Remark 3.4_.: We note that \(\mathtt{R}_{\varepsilon}\) is the gradient of a convex function [15], thereby maintaining the key geometric property of rank maps, namely cyclical monotonicity. For general measures \(P,Q\), the entropic map \(T_{\varepsilon}(x)\triangleq\mathbb{E}_{Y\sim\pi_{\varepsilon}}\left[Y|X=x\right]\) was considered in [49] and shown to be a good estimator of the unregularized optimal transport map under regularity conditions on \(P,Q\).

Based on this notion, and motivated by the nicer sample and computational complexity, as well as the differentiability, of entropic rank maps, the following variant of the rank energy was proposed in [41] and utilized for learning generative models.

**Definition 3.5** (Soft Rank Energy, [41]).: Let \(P_{X},P_{Y}\in\mathcal{P}_{ac}(\Omega)\) and let \(X,X^{\prime}\overset{i.i.d.}{\sim}P_{X}\), \(Y,Y^{\prime}\overset{i.i.d.}{\sim}P_{Y}\). Let \(P_{\lambda}=\lambda P_{X}+(1-\lambda)P_{Y}\) for \(\lambda\in(0,1)\) and let \(\mathtt{R}_{\lambda}^{\varepsilon}\) be the entropic rank map of \(P_{\lambda}\). The _soft rank energy_ (sRE) is defined as:

\[\mathtt{sRE}_{\lambda}^{\varepsilon}(P_{X},P_{Y})^{2}\triangleq 2\mathbb{E}\left\|\mathtt{R}_{\lambda}^{\varepsilon}(X)-\mathtt{R}_{\lambda}^{\varepsilon}(Y)\right\|-\mathbb{E}\left\|\mathtt{R}_{\lambda}^{\varepsilon}(X)-\mathtt{R}_{\lambda}^{\varepsilon}(X^{\prime})\right\|-\mathbb{E}\left\|\mathtt{R}_{\lambda}^{\varepsilon}(Y)-\mathtt{R}_{\lambda}^{\varepsilon}(Y^{\prime})\right\|.\]

Note that while \(\mathtt{sRE}_{\lambda}^{\varepsilon}\) is not a metric, it is symmetric and \(\mathtt{sRE}_{\lambda}^{\varepsilon}(P_{X},P_{Y})=0\) if \(P_{X}=P_{Y}\), which is useful in CPD applications.

### 3.3 Estimating \(\mathtt{RE}_{\lambda}\) and \(\mathtt{sRE}_{\lambda}^{\varepsilon}\) from samples

Let \(X_{1},...,X_{n}\sim P\) and \(Y_{1},...,Y_{n}\sim Q\) be jointly independent samples. Using these samples, one constructs the empirical measures \(P^{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{X_{i}},\ Q^{n}=\frac{1}{n}\sum_{j=1}^{n}\delta_{Y_{j}}\). The plug-in estimates of the OT map \(\hat{T}\) and the optimal EOT coupling \(\hat{\pi}_{\varepsilon}\) are obtained by solving

\[\hat{T}=\operatorname*{arg\,min}_{T:T\#P^{n}=Q^{n}}\frac{1}{n}\sum_{i=1}^{n}\left\|T(X_{i})-X_{i}\right\|^{2},\qquad\hat{\pi}_{\varepsilon}=\operatorname*{arg\,min}_{\pi\in\Pi(P^{n},Q^{n})}\sum_{i,j=1}^{n}\pi_{ij}\left\|X_{i}-Y_{j}\right\|^{2}+\varepsilon\pi_{ij}\log\pi_{ij}.\]

The plug-in estimate of the entropic map \(\hat{T}_{\varepsilon}\) is given by

\[\hat{T}_{\varepsilon}(X_{i})=\mathbb{E}_{Y\sim\hat{\pi}_{\varepsilon}}\left[Y|X=X_{i}\right]=n\sum_{j=1}^{n}(\hat{\pi}_{\varepsilon})_{ij}Y_{j}.\]

Note that, like \(\hat{T}\), the map \(\hat{T}_{\varepsilon}\) is only defined on the samples \(\{X_{1},...,X_{n}\}\). In [21, 49] these maps are extended to the whole space, and it is a novel feature of this work that _no extensions are required_. When \(Q=\mathrm{Unif}([0,1]^{d})\), we say that the estimate \(\hat{T}\) is the _sample rank_ and denote it by \(\mathtt{R}_{m,n}\), with the subscript referring to the number of samples used.
Analogously, the estimate \(\hat{T}_{\varepsilon}\) is referred to as the _entropic sample rank_ and denoted by \(\mathtt{R}^{\varepsilon}_{m,n}\). We can now define the _sample rank energy_ and the _sample soft rank energy_.

**Definition 3.6**.: Given two sets of samples \(X_{1},\ldots,X_{m}\sim P_{X}\) and \(Y_{1},\ldots,Y_{n}\sim P_{Y}\), define the empirical mixture of the two sets of samples \(P^{m+n}=\frac{1}{m+n}\left(\sum_{i=1}^{m}\delta_{X_{i}}+\sum_{j=1}^{n}\delta_{Y_{j}}\right)\). Let \(Q^{m+n}=\frac{1}{m+n}\sum_{i=1}^{n+m}\delta_{U_{i}}\) where \(U_{i}\sim\text{Unif}([0,1]^{d})\). Let \(\mathtt{R}_{m,n}\) be the sample rank of \(P^{m+n}\) to \(Q^{m+n}\). The _sample rank energy_ is given by

\[\mathtt{RE}_{m,n}(P_{X},P_{Y})^{2}\triangleq\frac{2}{mn}\sum_{i,j=1}^{m,n}\|\mathtt{R}_{m,n}(X_{i})-\mathtt{R}_{m,n}(Y_{j})\|-\frac{1}{m^{2}}\sum_{i,j=1}^{m}\|\mathtt{R}_{m,n}(X_{i})-\mathtt{R}_{m,n}(X_{j})\|-\frac{1}{n^{2}}\sum_{i,j=1}^{n}\|\mathtt{R}_{m,n}(Y_{i})-\mathtt{R}_{m,n}(Y_{j})\|.\]

Let \(\mathtt{R}^{\varepsilon}_{m,n}\) be the entropic sample rank of \(P^{m+n}\) to \(Q^{m+n}\). The _sample soft rank energy_ is given by

\[\mathtt{sRE}^{\varepsilon}_{m,n}(P_{X},P_{Y})^{2}\triangleq\frac{2}{mn}\sum_{i,j=1}^{m,n}\|\mathtt{R}^{\varepsilon}_{m,n}(X_{i})-\mathtt{R}^{\varepsilon}_{m,n}(Y_{j})\|-\frac{1}{m^{2}}\sum_{i,j=1}^{m}\|\mathtt{R}^{\varepsilon}_{m,n}(X_{i})-\mathtt{R}^{\varepsilon}_{m,n}(X_{j})\|-\frac{1}{n^{2}}\sum_{i,j=1}^{n}\|\mathtt{R}^{\varepsilon}_{m,n}(Y_{i})-\mathtt{R}^{\varepsilon}_{m,n}(Y_{j})\|.\]
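For concreteness, the following sketch computes both statistics of Definition 3.6 from samples, assuming the POT library for the Sinkhorn coupling and SciPy's assignment solver for the unregularized map; the uniform reference draws and the value of \(\varepsilon\) are illustrative, and very small \(\varepsilon\) may require a numerically stabilized Sinkhorn variant.

```python
import numpy as np
import ot                                    # Python Optimal Transport (POT)
from scipy.optimize import linear_sum_assignment

def _energy(RX, RY):
    """Plug-in energy statistic of Definition 3.6 on mapped samples."""
    pd = lambda A, B: np.linalg.norm(A[:, None] - B[None, :], axis=-1)
    return 2 * pd(RX, RY).mean() - pd(RX, RX).mean() - pd(RY, RY).mean()

def sample_re_sre(X, Y, eps=1.0, seed=0):
    m, n, d = len(X), len(Y), X.shape[1]
    Z = np.vstack([X, Y])                    # empirical mixture P^{m+n}
    U = np.random.default_rng(seed).random((m + n, d))  # Unif([0,1]^d) draws
    C = ot.dist(Z, U)                        # squared Euclidean costs

    # sample rank map: optimal assignment (a linear program)
    _, col = linear_sum_assignment(C)
    R = U[col]

    # entropic sample rank map: Sinkhorn coupling + conditional mean
    w = np.full(m + n, 1.0 / (m + n))
    pi = ot.sinkhorn(w, w, C, reg=eps)
    R_eps = (m + n) * pi @ U                 # R^eps(Z_i) = (m+n) sum_j pi_ij U_j

    return _energy(R[:m], R[m:]), _energy(R_eps[:m], R_eps[m:])
```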
## 4 Utilizing \(\mathtt{sRE}^{\varepsilon}_{\lambda}\) for Change Point Detection

The primary application of this paper is the use of \(\mathtt{sRE}^{\varepsilon}_{\lambda}\) as a means to solve the CPD problem introduced in Section 2, by using it as a GoF statistic (see Algorithm 1 and Figure 1). While it is now well established that entropic OT maps can be computed much faster with practical methods that are parallelizable [3], the computation of OT maps, which requires solving a linear program, still does not scale well in practice [38], thereby putting the use of RE at a computational disadvantage compared to sRE. In this section, we argue that the soft rank energy has several more important advantages over the rank energy for CPD.

### 4.1 Wasserstein Continuity Properties of Rank and Soft Rank Energy

In [23] it was shown that the rank energy is distribution-free under the null hypothesis that \(P_{X}=P_{Y}\). Given that the soft rank energy is "close" to the rank energy (as quantified by Theorems 4.4 and B.5), it is reasonable to hope that it should retain this property in an approximate sense. While the rank energy enjoys this important theoretical property, it poses issues for CPD beyond its considerable computational and statistical burdens [41]. In particular, the rank energy can be highly unstable to small Wasserstein perturbations in the underlying distributions, as shown in the following theorem.

**Theorem 4.1**.: _For any \(\lambda\in(0,1)\) and any \(\epsilon,\delta>0\) there exists a pair of measures \(P_{X},P_{Y}\) with \(W_{1}(P_{X},P_{Y})<\delta\) and_

\[\mathtt{RE}_{\lambda}(P_{X},P_{Y})\geq\sup_{Q_{X},Q_{Y}\in\mathcal{P}_{\text{ac}}(\Omega)}\mathtt{RE}_{\lambda}(Q_{X},Q_{Y})-\epsilon.\]

The proof is deferred to Section B.1, and relies on an invariance property of the OT map. This result shows that the rank energy strongly distorts the Wasserstein-1 metric, in the sense that there are no universal constants \(0<\alpha\leq\beta<\infty\) such that

\[\alpha W_{1}(P_{X},P_{Y})\leq\mathtt{RE}_{\lambda}(P_{X},P_{Y})\leq\beta W_{1}(P_{X},P_{Y})\]

for any pair \(P_{X},P_{Y}\). The nonexistence of \(\beta\) follows immediately from Theorem 4.1. The nonexistence of \(\alpha\) follows by taking a sequence \(\{(P_{X}^{i},P_{Y}^{i})\}_{i=1}^{\infty}\) so that \(W_{1}(P_{X}^{i},P_{Y}^{i})\to\infty\) and noting that, by definition, \(\mathtt{RE}_{\lambda}(P_{X}^{i},P_{Y}^{i})\leq 2\sqrt{d}\) for any \(P_{X}^{i},P_{Y}^{i}\). At this level, the rank energy fails to properly capture a standard notion of distance between measures, and can either greatly inflate or diminish relative to Wasserstein-1. This stands in contrast with several other common measures of similarity between probability measures [48, 25], including, as we will see, the soft rank energy.

We posit that the lack of an upper bound makes the rank energy overly sensitive and leads to a high false alarm rate in the CPD problem. This is because in practice there is an implicit, application-dependent threshold of distributional change which should be tolerated and not be flagged as a change point. In contrast, the rank energy aims to capture _any_ change, no matter how subtle, which leads to the identification of change points below the implicit threshold. The proof of Theorem 4.1 also suggests that the rank energy is unstable when working with distributions that are much more concentrated than \(\mathrm{Unif}([0,1]^{d})\). In contrast, the soft rank energy enjoys a stability property with respect to \(W_{1}\), which suggests that it is robust to small Wasserstein perturbations and may not raise a false alarm in these circumstances.

**Theorem 4.2**.: _For any \(\lambda\in(0,1)\) and \(P_{X},P_{Y}\in\mathcal{P}_{ac}(\mathbb{R}^{d})\) it holds that_

\[\mathtt{sRE}_{\lambda}^{\varepsilon}(P_{X},P_{Y})^{2}\leq\frac{2d}{\varepsilon}W_{1}(P_{X},P_{Y}).\]

The proof is deferred to Section B.2 and relies crucially on the Lipschitz continuity of the entropic map (see Lemma B.1).

_Remark 4.3_.: In Theorem 4.2, the factor \(2d\) in the bound is an artifact of using \(Q=\mathrm{Unif}([0,1]^{d})\). If instead one chose \(Q=\mathrm{Unif}(B_{2}^{d}(u,1))\) for any \(u\in\mathbb{R}^{d}\), then the bound above could be replaced by a dimension-free \(8\). Additionally, since \(W_{1}(P_{X},P_{Y})\leq W_{p}(P_{X},P_{Y})\) for all \(p\geq 1\), the conclusion also holds for these variants of the Wasserstein distance. We state it in terms of \(W_{1}\) since it is the strongest bound of this form.

Comparing Theorem 4.1 to Theorem 4.2, there is a clear qualitative difference between the rank energy and the soft rank energy. This sensitivity also appears empirically and is demonstrated in Figure 2: when the samples are highly concentrated, the rank energy suffers from large fluctuations while the soft rank energy remains comparatively smooth. This leads RE to produce a few false positives, while sRE shows stability against those fluctuations. Theorem 4.2 also suggests that \(\varepsilon\) may act as a sensitivity knob, with small \(\varepsilon\) leading to a highly sensitive signal while large \(\varepsilon\) is more stable against perturbations.

Figure 2: The left plot shows a sequence of the HASC-PAC2016 dataset with the detected change points marked by red dots. The right plot is a zoomed-in version of a short segment.
Both RE and sRE can detect the true changes (black dashed lines) within a certain margin (dashed purple), but RE also produces false positives due to its sensitivity to small signal fluctuations. On the other hand, sRE displays greater stability in this respect, leading to the superior performance seen in Table 1.

### 4.2 Convergence of \(\mathtt{sRE}^{\varepsilon}_{\lambda}\) to \(\mathtt{RE}_{\lambda}\)

While Theorems 4.1 and 4.2 suggest that there may be some fundamental differences between the soft rank energy and the rank energy, it is still possible to derive convergence results between them. This is to be expected, since the optimal entropic coupling \(\pi_{\varepsilon}\) between \(P\in\mathcal{P}_{ac}(\Omega)\) and \(Q\) is known to converge weakly to the unregularized coupling \(\pi=[\mathrm{Id}\otimes T]\#P\) (where \(T\) is the OT map from \(P\) to \(Q\)) as \(\varepsilon\to 0\). An asymptotic result demonstrating this is given in Theorem B.5. If one imposes further assumptions on the OT map, namely Lipschitz continuity, one may use the recent results from [8] to arrive at a _quantitative_ estimate of the difference between \(\mathtt{sRE}^{\varepsilon}_{\lambda}\) and \(\mathtt{RE}_{\lambda}\). The proof is deferred to Section B.3.

**Theorem 4.4**.: _Under assumptions of compactness of domains and \(L\)-Lipschitz continuity of \(\mathtt{R}_{\lambda}\), it holds that \(|\mathtt{sRE}^{\varepsilon}_{\lambda}(P_{X},P_{Y})^{2}-\mathtt{RE}_{\lambda}(P_{X},P_{Y})^{2}|\leq CL\sqrt{d\varepsilon\log(1/\varepsilon)+O(\varepsilon)}\) for some constant \(C\)._

Theorem 4.4 implies that for small \(\varepsilon\) the soft rank energy is a close approximation of the rank energy. This is important because the rank energy is distribution-free under the null hypothesis that \(P=Q\). It is reasonable to expect that the soft rank energy with small \(\varepsilon\) approximately inherits this property; this is empirically observed in Figure 3, where one can see that the soft rank energy is nearly distribution-free under the null.

_Remark 4.5_.: The convergence of \(\mathtt{sRE}^{\varepsilon}_{\lambda}\) to \(\mathtt{RE}_{\lambda}\) in the presence of Theorems 4.1 and 4.2 may seem surprising, since these results suggest that RE and sRE are fundamentally different. This can be reconciled by noting the bound \(\mathtt{sRE}^{\varepsilon}_{\lambda}(P_{X},P_{Y})^{2}\leq 2\sqrt{d}\) for any \(P_{X},P_{Y}\) and \(\varepsilon\): by choosing \(\varepsilon\) small enough relative to \(W_{1}(P_{X},P_{Y})\), one has \(W_{1}(P_{X},P_{Y})/\varepsilon>2\sqrt{d}\), which renders the bound in Theorem 4.2 vacuous.

Figure 3: Kernel density estimates of RE (\(\varepsilon=0\)) and sRE under the null for v1 (Cauchy), v2 (multivariate Gaussian), v3 (multivariate Gaussian with diagonal covariance), and v4 (Laplace) distributional settings, scaled by a factor of \(mn/(m+n)\). RE exhibits distribution-free behavior under the null, and sRE shows qualitatively similar behavior for small values of \(\varepsilon\). However, for larger values of \(\varepsilon\), the density curves of sRE deviate from this pattern, indicating a loss of the distribution-freeness property. Here \(m=n=200\) and the statistics are plotted using 1000 random draws.

### 4.3 Statistical Properties

Sadly, the plug-in estimate of the optimal map suffers from the curse of dimensionality.
When working in \(\mathbb{R}^{d}\) and in the absence of further assumptions, the plug-in estimate \(\hat{T}\) may converge to the true map \(T\) as slowly as \(n^{-1/d}\) [24]. In fact, [34] show that for _any (measurable) estimator_ \(T_{0}\) there exists a measure \(P\) with

\[\mathbb{E}\int_{\mathbb{R}^{d}}\left\|T_{0}(x)-T(x)\right\|^{2}dP(x)\gtrsim n^{-2/d}.\]

This says that OT truly does suffer from the curse of dimensionality unless further assumptions are placed on the measures \(P\) and \(Q\). Practically, the poor statistical convergence rates, coupled with the computational problems when working with large sample sizes, make standard map estimation difficult to perform accurately on high-dimensional data. In turn, \(\mathtt{RE}_{m,n}(P_{X},P_{Y})^{2}\) may also converge very slowly to \(\mathtt{RE}_{\lambda}(P_{X},P_{Y})^{2}\). The \(n^{-1/d}\) or \(n^{-2/d}\) rate is typical for quantities associated to OT, and this issue is often alleviated by imposing additional structural conditions on the measures, most often some type of smoothness condition [56, 46].

In stark contrast, once entropy regularization is introduced, the statistical and computational issues are largely avoided. In [41] it is shown that when \(P\) is sub-gaussian and \(Q\) has bounded support, then

\[\mathbb{E}||T_{\varepsilon}^{n,n}-T_{\varepsilon}||_{L^{2}(P)}^{2}\lesssim\log(n)n^{-1/2}.\]

When both \(P\) and \(Q\) have bounded support, it is shown in [53] that

\[\mathbb{E}||T_{\varepsilon}^{n,n}-T_{\varepsilon}||_{L^{2}(P)}^{2}\lesssim n^{-1}.\]

These fast rates of convergence are typical of entropy regularized optimal transport [27, 44]. It is also possible to show that \(\mathtt{sRE}_{m,n}^{\varepsilon}\) has a fast rate of convergence to \(\mathtt{sRE}_{\lambda}^{\varepsilon}\), even when the dimension is large. This is stated in the following result.

**Theorem 4.6**.: _Let \(P_{X},P_{Y}\in\mathcal{P}(B(0,r))\). Let \(X_{1},...,X_{n}\sim P_{X}\) and \(Y_{1},...,Y_{n}\sim P_{Y}\) be jointly independent. Then_

\[\mathbb{E}|\mathtt{sRE}_{n,n}^{\varepsilon}(P_{X},P_{Y})^{2}-\mathtt{sRE}_{1/2}^{\varepsilon}(P_{X},P_{Y})^{2}|\leq\frac{24r\sqrt{1+\varepsilon^{2}}}{\sqrt{2n}}\exp(22r^{2}/\varepsilon)+8\sqrt{\frac{d\pi}{n}}.\]

The proof is deferred to Section B.5. The first step in proving this result is to introduce an intermediary term which approximates \(\mathtt{sRE}_{\lambda}^{\varepsilon}\) by a discrete sum evaluated at the sample points \(X_{1},...,X_{n},Y_{1},...,Y_{n}\), and then to apply the triangle inequality. This breaks the estimate into two terms: the first is a Monte Carlo estimate of an expectation (which is easily controlled), while the second measures how closely \(\mathtt{R}_{m,n}^{\varepsilon}\) approximates \(\mathtt{R}_{\lambda}^{\varepsilon}\) on the sampled points. Controlling the second term is substantially complicated by two things. First, the distribution of \((X_{1},...,X_{m},Y_{1},...,Y_{n})\) is _not the same_ as \((P_{\lambda})^{m+n}\). Second, there is a dependence between the sample entropic rank \(\mathtt{R}_{m,n}^{\varepsilon}\) and the points used to estimate it, and one must ensure that the soft rank map is well behaved on these points. In [41] these issues are handled via resampling ideas; however, this approach wastes samples, requires artificially sampling from \(P_{\lambda}\), and requires an out-of-sample extension of the soft rank map, since \(\mathtt{R}_{m,n}^{\varepsilon}\) is only defined at the sample points.
However, our approach is limited to measures with compact support, while [41] is able to cover subgaussian measures; closing this gap is a question we leave to future work.

## 5 Numerical Evaluation

The main hyperparameters for Algorithm 1 are the window size \(n\) and the threshold parameter \(\eta\). While evaluating CPD on synthetic data, we study the effect of \(n\) on performance. For real-world data, on the other hand, we use our domain knowledge of the typical frequency of change points to set the window size appropriately. Once we have calculated the change point statistics, we use a standard peak finding procedure1 with thresholding to identify potential change points. Since the statistical guarantees of the various tests differ, we evaluate their performance using metrics that vary the threshold parameter \(\eta\) over all possible values2.

Footnote 1: We use scipy.signal.find_peaks from Python Scipy 1.9.1.

Footnote 2: Code to reproduce the results is available at [https://github.com/ShoaibBinMasud/CPD-using-aRE](https://github.com/ShoaibBinMasud/CPD-using-aRE)

One potential drawback of the peak search algorithm is that it may generate many small sub-peaks around the largest peaks. To prevent the detection of multiple successive change points when only one change point is present, we apply a minimal horizontal distance of \(\Delta\) samples, ensuring that every pair of predicted change points \(\hat{\tau}\neq\hat{\tau}^{\prime}\) is at least \(\Delta\) samples apart.

### 5.1 Evaluation Metrics

We consider two metrics widely used in the CPD literature [4, 12] to evaluate performance: (a) the area under the precision-recall curve, and (b) the best F1-score across all detection thresholds. The F1-score is defined as:

\[\text{F1-score}=\frac{2\cdot\text{precision}\times\text{recall}}{\text{precision}+\text{recall}},\qquad\text{precision}=\frac{TP}{TP+FP},\qquad\text{recall}=\frac{TP}{TP+FN},\]

where TP, FP, and FN represent the total number of true positive, false positive, and false negative points, respectively. To account for uncertainty in the exact annotation of true change points, we allow a margin of error \(\xi\) when declaring a point either a TP, an FP, or an FN. A predicted change point \(\hat{\tau}_{k}\) is considered a TP if it is within \(\xi\) of a true change point \(\tau_{j}\) (i.e., \(|\tau_{j}-\hat{\tau}_{k}|\leq\xi\)); otherwise it is considered an FP. A true change point \(\tau_{j}\) that does not have a detected change point within \(\xi\) is considered an FN. The choice of \(\xi\) is important for proper performance assessment: a small \(\xi\) may increase the number of FPs, while a larger \(\xi\) may misleadingly improve performance by counting detected change points far from true change points as TPs. Additionally, multiple true change points in close proximity may increase ambiguity when using a larger \(\xi\). To ensure fairness in comparison, we use the same \(\xi\) for all methods.
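A sketch of the margin-based counting described above; the one-to-one matching of predictions to true change points is our reading of the criterion, and implementations may differ in tie-breaking.

```python
def f1_with_margin(true_cps, pred_cps, xi):
    """Count TP/FP/FN with tolerance xi; each true change point can be
    matched by at most one prediction."""
    unmatched = sorted(true_cps)
    tp = 0
    for p in sorted(pred_cps):
        hit = next((t for t in unmatched if abs(t - p) <= xi), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp, fn = len(pred_cps) - tp, len(unmatched)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```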
\begin{table}
\begin{tabular}{c|c c c c c|c|c c c c c|c}
\hline
\multirow{2}{*}{Method} & \multicolumn{5}{c|}{AUC-PR} & \multirow{2}{*}{Average} & \multicolumn{5}{c|}{Best F1-score} & \multirow{2}{*}{Average} \\
\cline{2-6}\cline{8-12}
 & HASC-PAC2016 & HASC2011 & Beedance & Salinas & ECG & & HASC-PAC2016 & HASC2011 & Beedance & Salinas & ECG & \\
\hline \hline
M-stat [37] & 0.688 & 0.565 & 0.566 & 0.471 & 0.442 & 0.546 & 0.804 & 0.676 & 0.723 & 0.708 & 0.667 & 0.716 \\
SinkDiv [2] & 0.679 & 0.578 & **0.764** & 0.501 & **0.487** & 0.601 & 0.791 & 0.699 & **0.823** & 0.558 & 0.682 & 0.710 \\
W1 [13] & 0.678 & **0.652** & 0.763 & 0.252 & 0.441 & 0.557 & 0.806 & 0.702 & 0.820 & 0.525 & 0.682 & 0.707 \\
WQT [12] & 0.638 & 0.411 & 0.424 & 0.308 & 0.449 & 0.446 & 0.772 & 0.636 & 0.698 & 0.598 & 0.682 & 0.677 \\
RE & 0.596 & 0.382 & 0.367 & 0.312 & 0.482 & 0.427 & 0.779 & 0.641 & 0.646 & 0.523 & 0.684 & 0.654 \\
sRE & **0.740** & 0.598 & 0.687 & **0.714** & 0.473 & **0.647** & **0.831** & **0.709** & 0.801 & **0.772** & 0.682 & **0.756** \\
\hline
\end{tabular}
\end{table}
Table 1: Performance comparison of RE and sRE with other statistics used for CPD (bold: best).

Synthetic: In Figure 4 we compare \(\texttt{RE}_{\lambda}\) and \(\texttt{sRE}_{\lambda}^{\varepsilon}\) with various choices of regularization on a synthetic dataset. The figure shows that increasing the regularization parameter \(\varepsilon\) of \(\texttt{sRE}_{\lambda}^{\varepsilon}\) does indeed produce a smoothing effect on the generated signal, in agreement with Theorem 4.2, which in turn leads to fewer false alarms. In contrast, \(\texttt{RE}_{\lambda}\) is highly oscillatory and creates many false alarms, which we attribute to its poor continuity properties as discussed in Theorem 4.1. However, _over-regularizing_ leads to missed change points, due to the bound \(W_{1}/\varepsilon\) becoming small yet still dominating \(\texttt{sRE}_{\lambda}^{\varepsilon}\). In the appendix, an additional figure illustrating the effect of the window size (Figure 5) and numerical results comparing against other statistics (Table 3) are given.

HASC-PAC2016, HASC2011: On HASC-PAC2016, sRE performs the best on all metrics. On HASC2011, sRE also performs better overall than most of the other methods. In contrast, RE has the lowest overall performance on both datasets. This is because RE produces false positives in low amplitude regions between activities, called "rest," due to its sensitivity to any signal regardless of its amplitude (Figure 2). In contrast, sRE provides smoother statistics than RE, which is validated by Theorem 4.2, and ignores changes in those regions, resulting in a significant improvement in performance.

Salinas: On this high-dimensional hyperspectral dataset, sRE detects the change points with a small margin of error (\(\xi=2\)). Moreover, the results from Section 3.3 suggest that sRE is more easily estimable in high dimension compared to RE, which may be a contributing factor to its relatively strong performance on this dataset.

ECG: Despite being developed based on the concept of multivariate rank, both RE and sRE are effective at detecting change points even when the signal is one-dimensional. As shown in Table 1, all methods, including RE and sRE, perform similarly well on the univariate ECG signal.

Overall: sRE performs better than RE and is either superior or comparable to the other methods on all datasets.
On the high-dimensional hyperspectral image dataset, sRE outperforms all other methods, while performing competitively on low-dimensional and even one-dimensional data. In addition, sRE achieves the best average AUC-PR and the best average F1-score across all datasets, making it a strong candidate GoF statistic for use in a sliding-window based offline and unsupervised CPD method.

## 6 Conclusion and Future Work

We have established that the soft rank energy enjoys efficient statistical and computational complexity, is Lipschitz with respect to Wasserstein-1, and performs well as a GoF measure on a range of real-world CPD problems. However, these considerations are all made under compactness assumptions on all the measures involved. A problem left to future work is to extend these results to measures with unbounded support under certain concentration assumptions, namely subgaussian or subexponential distributions. Results on entropic optimal transport under these assumptions exist in the literature [44, 41] but do not appear to be directly applicable to the soft rank energy. In addition, while we have chosen the uniform distribution on the unit cube \([0,1]^{d}\) as the target measure for the rank maps in this paper, it is of interest to consider the role of this distribution and whether other distributions may allow for better convergence bounds (see the discussion following Theorem 4.4). Noting that the rank maps allow for comparisons of distributions via their transport maps to a specified target distribution, it is of interest to investigate the complementary picture, namely comparing distributions via their multivariate quantile maps [30, 14], and connections with the _linear optimal transport_ framework [60], where one compares distributions via the transport maps from a specific reference measure to these distributions as the target measures.
2307.13117
Towards $α'$-finiteness: $q$-deformed open string amplitude
Revisiting the Coon amplitude, a deformation of the Veneziano amplitude with a logarithmic generalization of linear Regge trajectories, we scrutinize its potential origins in a worldsheet theory by proposing a definition of its $q$-deformation through the integral representation of the $q$-beta function. By utilizing $q$-deformed commutation relations and vertex operators, we derive the Coon amplitude within the framework of the dual resonance model. We extend this to the open-string context by $q$-deforming the Lie algebra $\mathfrak{su}(1,1)$, resulting in a well-defined $q$-deformed open superstring amplitude. We further demonstrate that the $q$-prefactor in the Coon amplitude arises naturally from the property of the $q$-integral. Furthermore, we find that two different types of $q$-prefactors, corresponding to different representations of the same scattering amplitude, are essentially the same by leveraging the properties of $q$-numbers. Our findings indicate that the $q$-deformed string amplitude defines a continuous family of amplitudes, illustrating how string amplitudes with a finite $\alpha^\prime$ uniquely flow to the amplitudes of scalar scattering in field theory at energy scale $\Lambda$ as $q$ changes from $1$ to $0$. This happens without the requirement of an $\alpha^\prime$ expansion, presenting a fresh perspective on the connection between string and field theories.
Yuqi Li, Hao-Yu Sun
2023-07-24T20:11:38Z
http://arxiv.org/abs/2307.13117v1
# Towards \(\alpha^{\prime}\)-finiteness: \(q\)-deformed open string amplitude

###### Abstract

Revisiting the Coon amplitude, a deformation of the Veneziano amplitude with a logarithmic generalization of linear Regge trajectories, we scrutinize its potential origins in a worldsheet theory by proposing a definition of its \(q\)-deformation through the integral representation of the \(q\)-beta function. By utilizing \(q\)-deformed commutation relations and vertex operators, we derive the Coon amplitude within the framework of the dual resonance model. We extend this to the open-string context by \(q\)-deforming the Lie algebra \(\mathfrak{su}(1,1)\), resulting in a well-defined \(q\)-deformed open superstring amplitude. We further demonstrate that the \(q\)-prefactor in the Coon amplitude arises naturally from the properties of the \(q\)-integral. Furthermore, we find that two different types of \(q\)-prefactors, corresponding to different representations of the same scattering amplitude, are essentially the same by leveraging the properties of \(q\)-numbers. Our findings indicate that the \(q\)-deformed string amplitude defines a continuous family of amplitudes, illustrating how string amplitudes with a finite \(\alpha^{\prime}\) uniquely flow to the amplitudes of scalar scattering in field theory at energy scale \(\Lambda\) as \(q\) changes from \(1\) to \(0\). This happens without the requirement of an \(\alpha^{\prime}\) expansion, presenting a fresh perspective on the connection between string and field theories.

## I Introduction

Recently, the Coon amplitude [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13], considered as a generalization of the Veneziano amplitude [14] in string scatterings, has regained significant attention. Several remarkable advancements in understanding Coon amplitudes have been achieved, starting from their role in identifying the next-to-leading-order contribution to unitary amplitudes [15], to more direct investigations into their unitarity resulting in the critical dimension [16], positivity [17; 18] and uniqueness [19], their relationship with Virasoro amplitudes [20; 21], to the application of bootstrap techniques [22; 23], and even to their interpretation in terms of D-branes [24]. However, despite early attempts at identifying a string-theoretic origin of these formulas [6; 9; 25; 26; 27; 28; 29; 30; 31; 32], a full understanding of whether a worldsheet model of the Coon amplitude exists remains elusive. As established in pioneering studies [2; 4; 5; 7; 10; 11; 25; 26; 27; 28], the four-point Coon amplitude \[\mathcal{V}_{C}^{4}(\alpha_{q}(s),\alpha_{q}(t))=q^{\alpha_{q}(s)\alpha_{q}( t)}B_{q}(-\alpha_{q}(s),-\alpha_{q}(t)), \tag{1}\] where \(\alpha_{q}(x):=\ln(1-(1-q)\alpha(x))/\ln q\), and \(\alpha(x)\) represents the linear Regge trajectory with respect to the Mandelstam variable \(x\), is written as a continuous family of amplitudes parameterized by a single real parameter \(q\in[0,1]\). This is achieved with the help of the so-called \(q\)-beta function [33; 34; 35; 36; 37; 38; 39], \(B_{q}(\alpha,\beta):=\frac{\Gamma_{q}(\alpha)\Gamma_{q}(\beta)}{\Gamma_{q}( \alpha+\beta)}\) for \(\alpha,\beta>0\), where the \(q\)-gamma function is \(\Gamma_{q}(t):=\frac{(1-q)_{q}^{t-1}}{(1-q)^{t-1}}\), with \((1-q)^{t-1}_{q}\) being the \(q\)-analogue of \((1-q)^{t-1}\) [40].
It has a nice integral representation \[B_{q}(\alpha,\beta)=\int_{0}^{1}\ \mathrm{d}_{q}x\ x^{\alpha-1}(1-qx)^{\beta-1}_{q}, \tag{2}\] where \(\mathrm{d}_{q}x\) is the \(q\)-analogue of the ordinary integration measure \(\mathrm{d}x\), and \((1-qx)^{\beta-1}_{q}\) is the \(q\)-analogue of \((1-x)^{\beta-1}\) [41]. When attempting to view the integral representation of the \(q\)-beta function as a certain worldsheet integral, however, one realizes that \(x^{\alpha-1}\) and the \(q\)-analogue \((1-qx)^{\beta-1}_{q}\) cannot be treated as equals, in the same manner as in conventional string theory. Thus, the precise definition of the \(q\)-deformed string [42] remains unclear. To establish a proper definition for the \(q\)-deformed string, we first need to understand the origins of the two foregoing factors in the integrand in (2). Noting that \(x^{\alpha-1}\) is not \(q\)-deformed, we will initially assume that it originates from the propagator, and that \((1-qx)^{\beta-1}_{q}\) is derived from the vertex operators, both in the dual resonance model (DRM). After appropriately \(q\)-deforming the commutation relations and vertex operators, the Coon amplitude emerges through calculations using the DRM. This approach differs from the set-up of [29; 31], presenting a novel and distinct perspective on the \(q\)-deformation of string amplitudes. As a consequence, the prefactor \(q^{\alpha_{q}(s)\alpha_{q}(t)}\) in the Coon amplitude naturally arises directly from the properties of the polynomials in the integrand, rather than being inserted manually. In advancing to the open (super)string case [25; 26; 27], we choose to \(q\)-deform the Virasoro subalgebra \(\mathfrak{su}(1,1)\) rather than the equivalent \(\mathfrak{sl}(2,\mathbb{R})\), because their \(q\)-deformed versions, defined with \(0<q<1\) and \(q\) being a root of unity respectively, are not isomorphic [43; 44]. By modifying the vertex operators, we are able to derive a well-defined \(q\)-deformed open superstring amplitude. Interestingly, we discover that both prefactors \(q^{\alpha_{q}(s)\alpha_{q}(t)}\) and \(q^{\alpha_{q}(s)\alpha_{q}(t)-\alpha_{q}(s)-\alpha_{q}(t)}\) are correct but correspond to different representations of the scattering amplitude. In the DRM calculations, we \(q\)-deform the commutation relation between the creation \((a^{\mu\dagger}_{n})\) and annihilation \((a^{\mu}_{n})\) operators to \([a^{\mu}_{m},a^{\nu\dagger}_{n}]_{q}=g^{\mu\nu}\delta_{m,n}\), using the definition of the \(q\)-deformed commutator \([A,B]_{q}:=AB-qBA\) for two operators \(A\) and \(B\). In the superstring calculations, on the other hand, we employ \([\alpha^{\mu}_{m},\alpha^{\nu}_{n}]_{q}=[m]_{q}\,g^{\mu\nu}\delta_{m+n,0}\), with \([m]_{q}\) being a \(q\)-number, the \(q\)-analogue of an integer, defined as \([m]_{q}:=(q^{m}-1)/(q-1)\). Without changing the definitions of the \(q\)-analogues of the logarithm and exponential functions, we find that both commutation relations yield consistent results. By interpreting \(q\) as a function of \(\alpha^{\prime}\), we propose that the \(q\)-deformed string amplitude can be viewed as a continuous family that uniquely determines the transition from string theory, characterized by a fixed finite \(\alpha^{\prime}\), to scattering of scalars in field theory, determined by a certain energy scale \(\Lambda\), without any necessity of an expansion in terms of \(\alpha^{\prime}\). Finally, we discuss possible future directions based on our results.
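Since everything below rests on (1) and (2), it may help to check them numerically. The following is a small self-contained sketch (our own illustration, not from any \(q\)-calculus library): it evaluates the Jackson \(q\)-integral in (2) via \(\int_{0}^{1}f(x)\,\mathrm{d}_{q}x=(1-q)\sum_{k\geq 0}q^{k}f(q^{k})\) and compares it with \(\Gamma_{q}(\alpha)\Gamma_{q}(\beta)/\Gamma_{q}(\alpha+\beta)\), using the standard infinite-product expressions \(\Gamma_{q}(t)=(q;q)_{\infty}/(q^{t};q)_{\infty}\,(1-q)^{1-t}\) and \((a;q)_{\nu}=(a;q)_{\infty}/(aq^{\nu};q)_{\infty}\).

```python
import numpy as np
from math import gamma

def qpoch(a, q, terms=500):
    """Truncated q-Pochhammer symbol (a; q)_infinity = prod_k (1 - a q^k)."""
    return float(np.prod(1.0 - a * q ** np.arange(terms)))

def q_gamma(t, q, terms=500):
    """Gamma_q(t) = (q; q)_inf / (q^t; q)_inf * (1 - q)^(1 - t)."""
    return qpoch(q, q, terms) / qpoch(q ** t, q, terms) * (1 - q) ** (1 - t)

def q_beta(alpha, beta, q, terms=200):
    """Jackson q-integral int_0^1 x^(alpha-1) (qx; q)_(beta-1) d_q x
    = (1 - q) sum_k q^(k alpha) (q^(k+1); q)_inf / (q^(k+beta); q)_inf,
    using (a; q)_nu = (a; q)_inf / (a q^nu; q)_inf for non-integer nu."""
    s = sum(q ** (k * alpha) * qpoch(q ** (k + 1), q) / qpoch(q ** (k + beta), q)
            for k in range(terms))
    return (1 - q) * s

q, a, b = 0.7, 2.3, 1.6
print(q_beta(a, b, q))                                    # these two lines
print(q_gamma(a, q) * q_gamma(b, q) / q_gamma(a + b, q))  # print the same number

# As q -> 1, Gamma_q tends to the ordinary gamma function:
print(q_gamma(a, 0.999, terms=40000), gamma(a))
```

The agreement of the first two printed values is the identity \(B_{q}(\alpha,\beta)=\Gamma_{q}(\alpha)\Gamma_{q}(\beta)/\Gamma_{q}(\alpha+\beta)\), and the last line illustrates the \(q\to 1\) limit in which the \(q\)-beta function reduces to the ordinary Euler beta function of the Veneziano amplitude.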
## II \(q\)-deformed Operator Formalism and Coon Amplitude

One of several systematic investigations into Coon amplitudes in the early days was achieved by using the DRM in [25]. Not based on a Lagrangian density, the DRM perturbatively describes hadrons, with a set of axioms including Lorentz invariance, unitarity, \(T,C,P\) invariance, crossing symmetry, and analyticity with only poles in the complex plane of the Mandelstam variables and in that of angular momenta ("Regge behavior"), among others. For a complete list of empirical applications and approximations, see [45; 46]. It differs from conventional field theories due to an infinite number of resonances even at the lowest order in the real coupling constant. Now consider the simplest vertex operators corresponding to Veneziano amplitudes and perform the \(q\)-deformation on the operators. We obtain the \(q\)-deformed vertex operator with momentum \(k\) as follows [45], \[V(k) = \exp_{q}\left(\sqrt{2\alpha^{\prime}}k\cdot\sum_{n=1}^{\infty} \frac{a^{\dagger}_{n}}{\sqrt{[n]_{q}}}\right) \tag{3}\] \[\times\exp_{q}\left(-\sqrt{2\alpha^{\prime}}k\cdot\sum_{n=1}^{ \infty}\frac{a_{n}}{\sqrt{[n]_{q}}}\right),\] with the \(q\)-exponential defined as \(\exp_{q}(z):=\sum_{k=0}^{\infty}z^{k}/[k]_{q}!\). Here, we adopt our commutation relations as \[[a^{\mu}_{m},a^{\nu\dagger}_{n}]_{q}=g^{\mu\nu}\delta_{m,n},\quad m,n=1,2,3,\ldots, \tag{4}\] where \(g^{\mu\nu}\) is the inverse metric of the flat spacetime in Lorentzian signature. There is no a priori dimension of the spacetime [47]. After verifying the completeness of the Bargmann-Fock space, as explored in prior work [48], we then proceed to define the corresponding propagator as \[D(s)=\int_{0}^{1}x^{R-\alpha(s)-1}(1-qx)_{q}^{\alpha_{0}-1}\ \mathrm{d}_{q}x, \tag{5}\] with the \(q\)-analogue of the number operator defined as \[R=\sum_{n=1}^{\infty}[n]_{q}\,a^{\dagger}_{n}\cdot a_{n}. \tag{6}\] This leads to the \(q\)-deformed 4-point Veneziano amplitude, which is expressed as \[\mathcal{V}^{4}_{q}=\left\langle 0_{q}\left|V\left(p_{2}\right)D\left(s\right)V \left(p_{3}\right)\right|0_{q}\right\rangle, \tag{7}\] with \(a^{\mu}_{n}|0_{q}\rangle=0\) for \(n>0\) as in [29]. On substituting the expressions of the corresponding vertex operators in (3), one obtains: \[\mathcal{V}^{4}_{q}(s,t) = \int_{0}^{1}\ \mathrm{d}_{q}x\ x^{-\alpha(s)-1}(1-qx)_{q}^{\alpha_{ 0}-1}\exp_{q}\left(-2\alpha^{\prime}p_{2}\cdot p_{3}\sum_{n=1}^{\infty}\frac{ x^{n}}{[n]_{q}}\right)\] \[= \int_{0}^{1}\ \mathrm{d}_{q}x\ x^{-\alpha(s)-1}(1-q^{\alpha(t)}qx)_{q} ^{-\alpha(t)-1} \tag{8}\] \[= q^{\alpha(s)\alpha(t)}\int_{0}^{1}\ \mathrm{d}_{q}x\ x^{-\alpha(s)-1}(1-qx)_{q} ^{-\alpha(t)-1}. \tag{9}\] Here, we use the relation \(e^{A}e^{B}=e^{B}e^{A}e^{[A,B]_{q}}\) when \([A,B]_{q}\) is a \(c\)-number, and in the last line of the calculation we perform the substitution \(q^{\alpha(t)}x\to x\). In step (8), we utilized the following useful identity [35; 37]: \[\frac{(a+b)_{q}^{m}}{(a+b)_{q}^{n}}=(a+q^{n}b)_{q}^{m-n},\quad m,n\geq 0. \tag{10}\] In our calculation, we employed the more general definition \((a+b)_{q}^{m}:=\prod_{k=0}^{m-1}(a+q^{k}b)\), which immediately gives \((1-qx)_{q}^{m-1}=(1-x)_{q}^{m}/(1-x)_{q}\). This identity will be used repeatedly in our subsequent calculations. The calculations so far have been performed in the kinematic space, yielding what we refer to as the \(q\)-Veneziano amplitude.
However, this is not yet the Coon amplitude. The Coon amplitude is defined in the \(q\)-space, with its coordinates given by the logarithmic Regge trajectory, which is mapped from the linear Regge trajectory in the kinematic space [25; 26]. For any Regge trajectory \(\alpha(x)\), we define \(\alpha_{q}(x):=\frac{\ln[1-(1-q)\alpha(x)]}{\ln q}\). Finally, we obtain the Coon amplitude as shown in (1): \[\mathcal{V}_{C}^{4}(s,t) = q^{\alpha_{q}(s)\alpha_{q}(t)}\int_{0}^{1}\,\mathrm{d}_{q}x\ x^ {-\alpha_{q}(s)-1}(1-qx)_{q}^{-\alpha_{q}(t)-1} \tag{11}\] \[= q^{\alpha_{q}(s)\alpha_{q}(t)}B_{q}(-\alpha_{q}(s),-\alpha_{q}( t)).\]

## III From \(q\)-deformed \(\mathfrak{su}(1,1)\) to \(q\)-Strings

Upon appropriately normalizing \(\alpha_{0}\) in the linear Regge trajectory, the calculation using the DRM can be interpreted as the scattering amplitude of tachyons. The integral representation of four-tachyon scattering raises an intriguing question about the existence of a corresponding expression for the scattering of excited string states in terms of \(q\)-analogues of gamma functions. Building on the groundwork laid by previous research [25; 26; 27], we aim to construct a well-defined \(q\)-deformed string (\(q\)-string) theory. To start, let us first focus on the tree-level scattering amplitude of four massless open superstrings with \(\mathfrak{su}(1,1)\) as the subalgebra of the Virasoro algebra. In conventional conformal field theory (CFT) calculations, we set \((0,1,\infty)\) as the three punctures on the Riemann surface fixed by the global conformal group \(\mathrm{SU}(1,1)\), and the fourth puncture is located at \(x\) on the real axis. In the \(q\)-deformed case, since \([0]_{q}=0\) and \([1]_{q}=1\) remain unchanged under the \(q\)-deformation, we can still use the same three punctures fixed by the group corresponding to the \(q\)-deformed \(\mathfrak{su}(1,1)\) algebra. This is possible as long as the terms corresponding to the \(q\)-analogue of \(\infty\) in the calculation cancel out to yield some polynomials of degree \(0\). This is exactly the same situation as in conventional string amplitude calculations [49]. In contrast to the commutation relations used in the DRM calculations above, we will use the \(q\)-analogue of the notation often used in string amplitude calculations, to demonstrate that the commutators used in the DRM and in string theory are consistent sets of commutation relations: \[\left[\alpha_{m}^{\mu},\alpha_{n}^{\nu}\right]_{q}=\left[m\right]_{q}g^{\mu \nu}\delta_{m+n,0}. \tag{12}\] The key modifications in the calculations of the \(q\)-deformed string amplitude are listed as follows: * **The mode expansion of the string field \(X^{\mu}\) is transformed into its \(q\)-analogue.** For instance, if we assume that both the left and right movers (corresponding to \(\alpha_{n}^{\mu}\) and \(\tilde{\alpha}_{n}^{\mu}\), respectively) are deformed with the same \(q\) [50], we obtain the following mode expansion: \[X^{\mu}(z,\bar{z}) = x^{\mu}-ip^{\mu}\left[2\alpha^{\prime}\ln_{q}(z\bar{z})\right]\] (13) \[+i\sqrt{2\alpha^{\prime}}\sum_{n\neq 0}\frac{1}{[n]_{q}}\left( \alpha_{n}^{\mu}z^{-n}+\tilde{\alpha}_{n}^{\mu}\bar{z}^{-n}\right),\] with \(\ln_{q}(\cdot)\) being the inverse function of \(\exp_{q}(\cdot)\).
* **The \(q\)-deformed commutation relations imply \([p^{\mu},x^{\nu}]_{q}=-ig^{\mu\nu}\) and the \(q\)-Wick's theorem.** Note that the \(q\)-deformation only affects the indices of the worldsheet coordinates and leaves the spacetime indices untouched. The \(q\)-deformation necessitates the \(q\)-Wick's theorem, and it enforces the normal ordering of two bosonic operators such that \(:bb^{\dagger}:=qb^{\dagger}b\), as described in [51]. * **The partial derivatives acting on worldsheet coordinates are \(q\)-deformed.** This essentially involves the substitution of the corresponding \(q\)-analogue of the differential operators: \(\frac{\partial}{\partial z}\rightarrow\left(\frac{\partial}{\partial z}\right) _{q}=\frac{1}{z}\left[z\frac{\partial}{\partial z}\right]_{q}\) and \(\frac{\partial}{\partial\bar{z}}\rightarrow\left(\frac{\partial}{\partial\bar{ z}}\right)_{q}=\frac{1}{\bar{z}}\left[\bar{z}\frac{\partial}{\partial \bar{z}}\right]_{q}\). * **The string vertex operators maintain the same form with the aforementioned modified \(q\)-analogues, and the operator product expansions (OPEs) remain unchanged.** This point is straightforward, since the OPEs used in the amplitude calculations only involve terms of order \(1/(z-w)\), which are not affected by the \(q\)-analogue. * **With all the substitutions mentioned above, the kinematic factor of the \(q\)-deformed string amplitudes remains unchanged.** The modifications mentioned above enable the computation of the \(q\)-deformed string amplitudes using the standard techniques found in conventional string amplitude calculations, as demonstrated in [52]. After a series of straightforward calculations, we find that by setting \((z_{1},z_{2},z_{3},z_{4})=(0,x,1,\infty)\), the color-stripped amplitude \(\mathcal{A}_{q}(1,2,3,4)\) of four massless open superstrings at tree level can be expressed as follows: \[\mathcal{A}_{q}^{4}(s,t)=-2\alpha^{\prime}\int_{0}^{1}\mathrm{d}_{q}x \left(\frac{N_{s}}{x}-\frac{N_{t}}{1-x}\right) \tag{14}\] \[\times\exp_{q}\left[-2\alpha^{\prime}s_{12}\ln_{q}(x-0)\right]\exp _{q}\left[-2\alpha^{\prime}s_{23}\ln_{q}(1-x)\right]\] \[= -2\alpha^{\prime}\int_{0}^{1}\mathrm{d}_{q}x\left(\frac{N_{s}}{x }-\frac{N_{t}}{1-x}\right)x^{-\alpha^{\prime}s}\left(1-q^{\alpha^{\prime}t}x \right)_{q}^{-\alpha^{\prime}t}.\] Here, \(s=2s_{12}=2k_{1}\cdot k_{2}\) and \(t=2s_{23}=2k_{2}\cdot k_{3}\) represent the Mandelstam variables, while \(N_{s}\) and \(N_{t}\) correspond to the Bern-Carrasco-Johansson (BCJ) numerators [53]. Note that we utilized the definition of the \(q\)-analogue such that \((x-0)_{q}^{a}=x^{a}\), which reverts to a non-deformed expression. Now, let us map from the kinematic space to the \(q\)-space introduced in the previous section by setting \(\alpha(s)=\alpha^{\prime}s\) and \(\alpha(t)=\alpha^{\prime}t\).
In the \(q\)-coordinates, for the part involving \(N_{s}\), we have: \[N_{s}(2\alpha^{\prime})\int_{0}^{1}\mathrm{d}_{q}x\ x^{-\alpha_ {q}(s)-1}(1-q^{\alpha_{q}(t)-1}qx)_{q}^{-\alpha_{q}(t)} \tag{15}\] \[= q^{\alpha_{q}(s)\alpha_{q}(t)-\alpha_{q}(s)}\frac{N_{s}}{s}([ \alpha_{q}(s)]_{q})\int_{0}^{1}\mathrm{d}_{q}x\ x^{-\alpha_{q}(s)-1}(1-qx)_{q }^{-\alpha_{q}(t)}\] \[= q^{\alpha_{q}(s)\alpha_{q}(t)-\alpha_{q}(s)}\frac{N_{s}}{s}(-q^{ \alpha_{q}(s)}[-\alpha_{q}(s)]_{q})\int_{0}^{1}\mathrm{d}_{q}x\ x^{-\alpha_{q}( s)-1}(1-qx)_{q}^{-\alpha_{q}(t)}\] \[= -q^{\alpha_{q}(s)\alpha_{q}(t)}\frac{\Gamma_{q}(1-\alpha_{q}(s)) \Gamma_{q}(1-\alpha_{q}(t))}{\Gamma_{q}(1-\alpha_{q}(s)-\alpha_{q}(t))}\left[ \frac{N_{s}}{s}\right].\] Here, we used \(\Gamma_{q}\left(1-\alpha_{q}(s)\right)=\left[-\alpha_{q}(s)\right]_{q}\Gamma_{q}( -\alpha_{q}(s))\) and \([-a]_{q}=-q^{-a}[a]_{q}\). The calculation of the part corresponding to \(N_{t}\) is similar. After summing up the two parts and setting \(A_{4}=A_{SYM}(1,2,3,4):=\frac{N_{s}}{s}-\frac{N_{t}}{t}\) and \(q(s,t)=q^{\alpha_{q}(s)\alpha_{q}(t)}\), we obtain the \(q\)-deformed four-point amplitude: \[\mathcal{A}_{q}^{4}(s,t)=q(s,t)A_{4}\frac{\Gamma_{q}(1-\alpha_{q}(s))\Gamma_{ q}(1-\alpha_{q}(t))}{\Gamma_{q}(1-\alpha_{q}(s)-\alpha_{q}(t))}. \tag{16}\] If we apply the property of the \(q\)-gamma function backwards and set \(\tilde{q}(s,t)=q^{\alpha_{q}(s)\alpha_{q}(t)-\alpha_{q}(s)-\alpha_{q}(t)}\), we get the other form of the amplitude, with the kinematic factor \(K_{4}\): \[\mathcal{A}_{q}^{4}=\tilde{q}(s,t)\left[\frac{K_{4}}{st}\right]\frac{\Gamma_{ q}(-\alpha_{q}(s))\Gamma_{q}(-\alpha_{q}(t))}{\Gamma_{q}(1-\alpha_{q}(s)- \alpha_{q}(t))}, \tag{17}\] which agrees with the result in [21]. Furthermore, our method can be straightforwardly extended to higher-point amplitudes by carefully handling the \(q\)-prefactors with respect to the kinematic factors. For 5- and 6-point amplitudes, we observe results similar to those in [25], and the \(q\)-prefactors emerge naturally. The \(q\)-deformed 5-point amplitude of tachyons in the kinematic space can be expressed as follows: \[\mathcal{V}_{q}^{5}(s_{12},s_{23},s_{34},s_{45},s_{51}) =q^{\alpha(s_{34})\alpha(s_{45})+\alpha(s_{12})\alpha(s_{23})} \int_{0}^{1}\mathrm{d}_{q}y\ y^{-\alpha(s_{45})-1}(1-qy)_{q}^{-\alpha(s_{34} )-1} \tag{18}\] \[\times\int_{0}^{1}\mathrm{d}_{q}x\ x^{-\alpha(s_{12})-1}(1-qx)_{ q}^{-\alpha(s_{23})-1}(1-q^{-\alpha(s_{23})-\alpha(s_{34})}xy)_{q}^{-\alpha(s_{5 1})+\alpha(s_{23})+\alpha(s_{34})}\] \[=q^{\alpha(s_{34})\alpha(s_{45})+\alpha(s_{12})\alpha(s_{23})}B _{q}\left(-\alpha\left(s_{34}\right),-\alpha\left(s_{45}\right)\right)B_{q} \left(-\alpha\left(s_{12}\right),-\alpha\left(s_{23}\right)\right)\] \[\times\ {}_{3}\phi_{2}\left(\begin{array}{c}q^{-\alpha(s_{12})},q^{- \alpha(s_{45})},q^{\alpha(s_{51})-\alpha(s_{23})-\alpha(s_{34})}\\ q^{-\alpha(s_{12})-\alpha(s_{23})},q^{-\alpha(s_{34})-\alpha(s_{45})}\end{array} \right.;q,q^{-\alpha(s_{51})}\right).\] Here, we consider 5 points located at \((z_{1},z_{2},z_{3},z_{4},z_{5})=(0,x,y,1,\infty)\) and define \(\alpha(s_{ij})\) as the linear Regge trajectory in terms of the Mandelstam variables \(s_{ij}\). The expression \({}_{3}\phi_{2}\) denotes the unilateral basic hypergeometric series [54], also known as the \(q\)-hypergeometric series [55; 56; 57]. Indeed, the last term in the integrand, derived from the properties of the Mandelstam variables, is valid only for linear Regge trajectories, and it is specific to the kinematic space.
This term is crucial for obtaining the correct \(q\)-deformed 5-point amplitude within the given framework. However, when transitioning to the \(q\)-space, where the Coon amplitude is defined, this term may require further modifications or adaptations to ensure the consistency of the Coon amplitude in the \(q\)-space. We leave the details of the calculation of the \(N\)-point amplitude to future work.

## IV The relationship between \(q\) and \(\alpha^{\prime}\) at low energy

As already shown in [16; 17; 19; 22; 23] and references therein, the Coon amplitude in \(q\)-space interpolates between the field-theoretic results and the stringy results at the endpoints \(q=0\) and \(q=1\), respectively. In the field-theoretic limit as \(q\to 0\), the non-contact part of the Coon amplitude has simple poles with an overall factor of the stringy parameter \(\alpha^{\prime}\). This can be represented as follows: \[\lim_{q\to 0}V_{q}^{4}(s,t)=-\frac{1}{\alpha^{\prime}(s-m^{2})}-\frac{1}{ \alpha^{\prime}(t-m^{2})}+1. \tag{19}\] In the kinematic space, the \(q\)-deformed Veneziano amplitude allows us to take a step further and explore whether, for fixed and finite arbitrary \(\alpha^{\prime}\), the dependence on \(\alpha^{\prime}\) can be absorbed into a dimensionful prefactor \(\Lambda\), so that the resulting four-point scattering in the \(q\to 0\) limit is independent of the stringy characteristic \(\alpha^{\prime}\). Note that, similar to the standard gamma function, the \(q\)-gamma function \(\Gamma_{q}(z)\) also has a pole at \(z=0\) [58; 59]: \[\Gamma_{q}(z)\approx\frac{q-1}{\ln q}\frac{1}{z}+\mathcal{O}(1).\] Thus, we can expand the \(q\)-deformed Veneziano amplitude around the poles and keep the non-contact leading order terms: \[V_{q}^{4}(s,t)_{\texttt{non}}\sim\left[\frac{1-q}{\ln q}\frac{1}{\alpha^{ \prime}}\right]\left(\frac{1}{s-m^{2}}+\frac{1}{t-m^{2}}\right). \tag{20}\] Next, let us absorb the overall factor into \(\Lambda\) and set \(\Lambda=\frac{q-1}{\ln q}\frac{1}{\alpha^{\prime}}\). Upon taking the small \(q\) limit, the leading contribution yields \(q\approx\exp{(-\frac{1}{\alpha^{\prime}\Lambda})}\). Before proceeding, it is essential to carry out some consistency checks in different limits: * \(\alpha^{\prime}\to 0\) **limit:** In standard string theory calculations, the \(\alpha^{\prime}\to 0\) limit of the string amplitude corresponds to the field theory result. In our case, \(\alpha^{\prime}\to 0\) implies \(\exp{(-\frac{1}{\alpha^{\prime}\Lambda})}\to 0\), which corresponds to the scalar field theory result. * \(\alpha^{\prime}\neq 0\) **and fixed:** Let us set \(\Lambda=\lambda m_{0}^{2}\), with \(\lambda\in\mathbb{R}^{+}\) and \(m_{0}^{2}=-m_{T}^{2}\), where \(m_{T}^{2}\) denotes the squared mass of the tachyon. In this case, the limit \(q\to 0\) implies \(\alpha^{\prime}m_{T}^{2}\to 0\). This is consistent, as \(\alpha^{\prime}m_{T}^{2}=\frac{D-2}{2}\zeta_{q}(-1)\) is linearly suppressed in the \(q\to 0\) limit. Here, \(\zeta_{q}(z)\) represents the \(q\)-deformed Riemann zeta function. Now, let us fix the energy scale of the scalar field theory to be \(\Lambda=\lambda m_{0}^{2}\) and define \[q_{0}=\exp\left(-\frac{1}{\alpha^{\prime}m_{0}^{2}}\frac{1}{\lambda}\right), \quad\lambda\in\mathbb{R}^{+}. \tag{21}\] All remaining terms can then be absorbed into the overall factor \(\mathcal{R}(\lambda,m_{0}^{2})\), such that \(q=q_{0}\mathcal{R}(\lambda,m_{0}^{2})\).
Here, a crucial observation is that \(\Gamma_{q}(x)\) is monotone in the deformation parameter: \(\Gamma_{q}(x)<\Gamma_{q^{\prime}}(x)\) if \(q<q^{\prime}\). Therefore, as \(q\) flows from \(1\) to \(0\), it uniquely defines a transition from a string theory characterized by a finite \(\alpha^{\prime}\) to a field theory with a fixed energy scale \(\Lambda\). It becomes evident that the \(q\)-deformation can be interpreted as a continuous family that demonstrates how string theory flows into field theory without the need for a perturbative expansion in \(\alpha^{\prime}\). Instead, this transition is facilitated through a complicated function that maps \(\alpha^{\prime}\) to \(q\).

## V Discussions and outlook

In summary, our calculation proposes a definition for the Coon amplitude, with the appropriate prefactor arising directly from the \(q\)-integral. We extend this approach to the open string amplitude through a \(q\)-deformation of the Lie algebra \(\mathfrak{su}(1,1)\), which yields a well-defined \(q\)-deformed open superstring amplitude. Our results indicate that the \(q\)-deformation gives rise to a continuous family of amplitudes, offering a novel perspective on the relationship between string theory and field theory. Before moving to the outlook on future directions, we discuss our findings in more detail. Noticing that \(\exp_{q}(a\ln b)\neq\exp_{q}(a\ln_{q}b)=b_{q}^{a}\), with \(b_{q}^{a}\) being the \(q\)-analogue of \(b^{a}\) as usual, we advance our understanding beyond the work of [29], proposing a more canonical definition of the \(q\)-deformation of the mode expansion of the string field, as given in (13). It is only with this refined definition that the correct form of the Coon amplitude emerges. When we associate the deformation parameter \(q\) with \(\alpha^{\prime}\), the curve in the \((q,D)\) space proposed by [16] can be reinterpreted as a diagram in the \((\alpha^{\prime},D)\) space, which illustrates the point at which the amplitude consistently maintains unitarity as \(\alpha^{\prime}\) approaches zero. It is intriguing that, after suitably defining our \(\lambda m_{0}^{2}\), we obtain a behavior similar to that observed in [24]: \(q\sim e^{-\frac{1}{\alpha^{\prime}m_{0}^{2}}}\). This resemblance is particularly striking considering that our results are derived in the low energy limit. Let us wrap up our exploration with an outlook on future directions. Our work has provided a potential new perspective on the \(q\)-Virasoro algebra. Based on previous studies [60; 61; 62; 28; 63; 64; 65], the \(q\)-deformation of the Virasoro algebra and the mass spectrum can be further explored. It would be very interesting to see how closed string theory can be formulated based on our findings in the open string scenario, including the potential for a \(q\)-analogue of double copy relations. One could also interpret our deformed algebra as a deformation of the universal enveloping algebra.

Acknowledgements. We express our gratitude to Piotr Tourkine for enlightening discussions and incisive comments on this work. We also thank Peng Zhao for meticulously reviewing the initial draft and providing useful suggestions. Special thanks go to Marcus Spradlin for bringing Romans' work to our attention. We thank Martin Rocek and Mykola Dedushenko for valuable discussions as well. Y.L. is supported, in part, by NSF grant PHY-22-15093. Y.L.
would like to express special thanks to Nima Arkani-Hamed, Lorenz Eberhardt, and Sebastian Mizera for their hospitality at the Institute for Advanced Study, and extends this gratitude to the workshop "String Amplitudes at Finite \(\alpha^{\prime}\)", where the initial ideas for this project were conceived. H.-Y.S. is supported, in part, by a grant from the Simons Foundation (Grant 651440, AK). H.-Y.S. thanks the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452.
2303.06953
Mapping cones of monomial ideals over exterior algebras
Let $K$ be a field, $V$ a finite dimensional $K$-vector space and $E$ the exterior algebra of $V$. We analyze iterated mapping cones over $E$. If $I$ is a monomial ideal of $E$ with linear quotients, we show that the mapping cone construction yields a minimal graded free resolution $F$ of $I$ via the Cartan complex. Moreover, we provide an explicit description of the differentials in $F$ when the ideal $I$ has a regular decomposition function. Finally, we get a formula for the graded Betti numbers of a new class of monomial ideals including the class of strongly stable ideals.
Marilena Crupi, Antonino Ficarra, Ernesto Lax
2023-03-13T09:46:29Z
http://arxiv.org/abs/2303.06953v2
# Mapping cones of monomial ideals over exterior algebras

###### Abstract

Let \(K\) be a field, \(V\) a finite dimensional \(K\)-vector space and \(E\) the exterior algebra of \(V\). We analyze iterated mapping cones over \(E\). If \(I\) is a monomial ideal of \(E\) with linear quotients, we show that the mapping cone construction yields a minimal graded free resolution \(F\) of \(I\) via the Cartan complex. Moreover, we provide an explicit description of the differentials in \(F\) when the ideal \(I\) has a regular decomposition function. Finally, we get a formula for the graded Betti numbers of a new class of monomial ideals including the class of strongly stable ideals.

_Keywords_: Exterior algebra, free resolutions, monomial ideals, mapping cones

_2020 Mathematics Subject Classification_: 13D02, 13C13, 15A75

(*) _Corresponding author_

## 1 Introduction

A popular topic in combinatorial commutative algebra is to detect minimal graded free resolutions of classes of monomial ideals (see, for instance, [7, 18] and the references therein). In [15], Herzog and Takayama have shown that one can iteratively construct a free resolution of any monomial ideal in a polynomial ring by considering the mapping cone of the map between certain complexes, adding one generator at a time. Especially when the ideal has linear quotients, the complexes and the maps are well-behaved, and the procedure determines a minimal resolution [15] by means of the Koszul complex. Some meaningful examples of free resolutions which arise as iterated mapping cones are the Eliahou-Kervaire resolution of stable ideals [8] (see, also, [10]) and the Taylor resolution. Let \(K\) be a field and let \(E=K\left\langle e_{1},\ldots,e_{n}\right\rangle\) be the exterior algebra of a \(K\)-vector space \(V\) with basis \(e_{1},\ldots,e_{n}\). A fundamental role in the study of minimal graded free resolutions of graded \(E\)-modules is played by the Cartan complex [3]. It is an infinite complex which has the structure of a divided power algebra; furthermore, as in the case of the ordinary Koszul complex over the polynomial ring, one may compute the Cartan homology of an \(E\)-module using long exact sequences attached to the partial sequences \(e_{1},\ldots,e_{i}\). The aim of this article is to determine a minimal graded free resolution \(F\) of a monomial ideal in \(E\) with linear quotients by iterated mapping cones and, in particular, to describe explicitly the free \(E\)-modules which appear in \(F\), and thus to get information on the graded Betti numbers of the given ideal. Our main tool is the Cartan complex. As in the polynomial case [15], we are able to provide an explicit description of the differentials in \(F\) for a special class of ideals with linear quotients which satisfy an additional condition. A significant example of monomial ideals of \(E\) with linear quotients is the class of stable ideals, whose minimal graded free resolution has been completely described by Aramova, Herzog and Hibi in [3] (see, also, [2]). Recently, minimal graded resolutions in the context of skew commutative algebras have been studied in [11, 17, 20]. The plan of the article is as follows. Section 2 contains some notions that will be used throughout the article. In Section 3, we discuss the mapping cone over the exterior algebra \(E\).
We prove that if \(I\) is a monomial ideal with linear quotients, the mapping cone construction determines a minimal graded free resolution of \(I\), and thus we describe explicitly a basis for each free module in the resolution (Theorem 3.1). A crucial fact is the exterior version of the "Rearrangement lemma" of Björner and Wachs [4]. Hence, if \(G(I)\) is the minimal set of monomial generators of \(I\), analyzing some special sets of positive integers that can be associated to \(G(I)\), we get a formula for the graded Betti numbers of \(I\) (Corollary 3.3) _via_ the notion of weak composition (Definition 3.2). Then, we obtain some formulas for the graded Poincaré series, the complexity and the depth of \(I\) (Corollaries 3.5, 3.6). In Section 4, we introduce the notion of regular decomposition function (Definition 4.2) of an ideal of \(E\) with linear quotients. Our main result is Theorem 4.5, which gives an explicit description of the differentials of the resolutions of all ideals of \(E\) with linear quotients which admit a regular decomposition function. Finally, in Section 5, we first introduce the class of \(\mathbf{t}\)-spread strongly stable ideals in \(E\) (Definition 5.2), then we discuss their main properties and, as a consequence, we compute their graded Betti numbers (Corollary 5.5) _via_ the results in Section 3. Such a formula generalizes the one stated in [3] for ordinary strongly stable ideals in \(E\). All the examples in the article have been verified by [1, 13].

## 2 Preliminaries and notations

Let \(K\) be a field. We denote by \(E=K\left\langle e_{1},\ldots,e_{n}\right\rangle\) the exterior algebra of a \(K\)-vector space \(V\) with basis \(e_{1},\ldots,e_{n}\). As a \(K\)-algebra, \(E\) is standard graded with defining relations \(v\wedge v=0\), for \(v\in E\), and \(e_{i}\wedge e_{j}=-e_{j}\wedge e_{i}\). For any subset \(\mu=\{i_{1},\ldots,i_{d}\}\) of \(\{1,\ldots,n\}\) with \(i_{1}<i_{2}<\cdots<i_{d}\) we write \(e_{\mu}=e_{i_{1}}\wedge\ldots\wedge e_{i_{d}}\), and call \(e_{\mu}\) a _monomial_ of degree \(d\). We set \(e_{\mu}=1\), if \(\mu=\emptyset\). Moreover, setting \(\sigma(\tau,\mu)=|\{(i,j):i\in\tau,j\in\mu,i>j\}|\), for any \(\tau,\mu\subseteq\{1,\ldots,n\}\), we have \(e_{\tau}\wedge e_{\mu}=(-1)^{\sigma(\tau,\mu)}e_{\tau\cup\mu}\), if \(\tau\cap\mu=\emptyset\), and \(e_{\tau}\wedge e_{\mu}=0\), otherwise. The set of monomials in \(E\) forms a \(K\)-basis of \(E\) of cardinality \(2^{n}\). An element \(f\in E\) is called _homogeneous_ of degree \(j\) if \(f\in E_{j}\), where \(E_{j}=\bigwedge^{j}V\). From the previous relations, one has that \(f\wedge g=(-1)^{\deg(f)\deg(g)}g\wedge f\) for any two homogeneous elements \(f\) and \(g\) in \(E\) (see, for instance, [14]). In order to simplify the notation, we put \(fg=f\wedge g\) for any two elements \(f\) and \(g\) in \(E\). We denote by \(\mathcal{G}\) the category of graded \(E\)-modules whose morphisms are the homogeneous \(E\)-module homomorphisms of degree \(0\) [14]. Every \(E\)-module \(M\in\mathcal{G}\) has a minimal graded free resolution \(F\) over \(E\): \[F:\ldots\to F_{2}\,\stackrel{{ d_{2}}}{{\rightarrow}}\,F_{1}\, \stackrel{{ d_{1}}}{{\rightarrow}}\,F_{0}\to M\to 0,\] where \(F_{i}=\oplus_{j}E(-j)^{\beta_{i,j}(M)}\). The integers \(\beta_{i,j}(M)=\dim_{K}\mbox{Tor}_{i}^{E}(M,K)_{j}\) are called the _graded Betti numbers_ of \(M\), whereas the numbers \(\beta_{i}(M)=\sum_{j}\beta_{i,j}(M)\) are called the _ith total Betti numbers_ of \(M\).
Moreover, the _graded Poincaré series_ of \(M\) over \(E\) is defined as \({\cal P}_{M}(s,t)=\sum_{i,j\geq 0}\beta_{i,j}(M)t^{i}s^{j}\) [3]. A graded ideal \(I\) of \(E\) is a monomial ideal if \(I\) is generated by monomials. Let \(e_{\mu}=e_{i_{1}}\cdots e_{i_{d}}\neq 1\) be a monomial in \(E\). We define \[\operatorname{supp}(e_{\mu})=\{i:e_{i}\text{ divides }e_{\mu}\}=\mu\] and we write \[\operatorname{m}(e_{\mu})=\max\{i:i\in\operatorname{supp}(e_{\mu})\}=\max\{i:i\in\mu\}.\] We set \(\operatorname{m}(e_{\mu})=0\), if \(e_{\mu}=1\). **Definition 2.1**: Let \(I\) be a monomial ideal of \(E\). \(I\) is called _stable_ if for each monomial \(e_{\mu}\in I\) and each \(j<\operatorname{m}(e_{\mu})\) one has \(e_{j}e_{\mu\setminus\{\operatorname{m}(e_{\mu})\}}\in I\). \(I\) is called _strongly stable_ if for each monomial \(e_{\mu}\in I\) and each \(j\in\mu\) one has \(e_{i}e_{\mu\setminus\{j\}}\in I\) for all \(i<j\). If \(I\) is a monomial ideal in \(E\), we denote by \(G(I)\) the unique minimal set of monomial generators of \(I\), and by \(G(I)_{d}\) the set of all monomials \(u\in G(I)\) such that \(\deg(u)=d\), \(d>0\). We close the section with some comments. As in the polynomial ring, every element of \(E\) of the type \(ce_{\mu}\), with \(c\in K\) and \(e_{\mu}\) a monomial, is called a _term_. In what follows, to avoid ambiguity and with the aim of simplifying the computations, we refer to any term of the form \(ce_{\mu}\) with \(c\in\{-1,1\}\) as a monomial. In other words, we generalize the classical notion of monomial recalled at the beginning of the section. This choice does not affect the classical development of the exterior algebra theory.

### A glimpse to the Cartan Complex

The Cartan complex for the exterior algebra \(E=K\left\langle e_{1},\ldots,e_{n}\right\rangle\) plays the role of the Koszul complex for the symmetric algebra. We refer to [3, 14] for more details on this subject. Let \(\mathbb{N}\) be the set of all nonnegative integers and, for a positive integer \(n\), let \([n]=\{1,\ldots,n\}\). Let \(v=v_{1},\ldots,v_{m}\) be a sequence of elements of degree \(1\) in \(E\). The Cartan complex \(C.(v;E)\) of the sequence \(v\) with values in \(E\) is defined as the complex whose \(i\)-chains \(C_{i}(v;E)\) are the elements of degree \(i\) of the free divided power algebra \(E\langle x_{1},\ldots,x_{m}\rangle\), where \(E\langle x_{1},\ldots,x_{m}\rangle\) is the polynomial ring over \(E\) in the set of variables \[x_{i}^{(j)},\quad i=1,\ldots,m,\quad j=1,2,\ldots\] modulo the relations \[x_{i}^{(j)}x_{i}^{(k)}=\binom{j+k}{j}x_{i}^{(j+k)}.\] We set \(x_{i}^{(0)}=1\), \(x_{i}^{(1)}=x_{i}\) for \(i=1,\ldots,m\), and \(x_{i}^{(a)}=0\) for \(a<0\). The algebra \(E\langle x_{1},\ldots,x_{m}\rangle\) is a free \(E\)-module with basis \[x^{(a)}=x_{1}^{(a_{1})}x_{2}^{(a_{2})}\cdots x_{m}^{(a_{m})},\quad a=(a_{1}, \ldots,a_{m})\in\mathbb{N}^{m}.\] We say that \(x^{(a)}\) has degree \(i\) if \(|a|=i\), where \(|a|=a_{1}+\cdots+a_{m}\). If \(x^{(a)}\neq 1\), we define \(\operatorname{supp}(x^{(a)})=\{i\in[m]:a_{i}\neq 0\}\) and set \(\operatorname{supp}(x^{(a)})=\emptyset\), for \(x^{(a)}=1\). One has \(C_{i}(v;E)=\oplus_{|a|=i}Ex^{(a)}\).
The \(E\)-linear differential \(\partial\) on \(C_{.}(v;E)\) is defined as follows: for \(x^{(a)}=x_{1}^{(a_{1})}x_{2}^{(a_{2})}\cdots x_{m}^{(a_{m})}\), we set \[\partial(x^{(a)})=\sum_{i=1}^{m}v_{i}x_{1}^{(a_{1})}\cdots x_{i}^{(a_{i}-1)} \cdots x_{m}^{(a_{m})}.\] If \(\mathcal{G}\) is the category of graded \(E\)-modules defined above and \(M\in\mathcal{G}\), one can define the complex \(C_{.}(v;M)=M\otimes_{E}C_{.}(v;E)\) and set \(H_{i}(v;M)=H_{i}(C_{.}(v;M))\). We call \(H_{i}(v;M)\) the \(i\)th Cartan homology module of \(v\) with respect to \(M\). One can observe that each \(H_{i}(v;M)\) is a graded \(E\)-module. In [3, Theorem 2.2], the authors showed that the Cartan complex \(C_{.}(v;E)\) is a minimal free resolution of \(E/(v)\) and, as a consequence, they proved that for any graded \(E\)-module \(M\) and each \(i\geq 0\), there is a natural isomorphism \(\operatorname{Tor}_{i}^{E}(M,K)\cong H_{i}(e_{1},\ldots,e_{n};M)\) of graded \(E\)-modules. Such a result allows one to compute the graded Betti numbers of \(M\). The next important formula for determining the graded Betti numbers of a stable ideal \(I\) of \(E\) has been stated by Aramova, Herzog and Hibi _via_ the Cartan complex [14, Corollary 3.3]: \[\beta_{i,i+j}(I)=\sum_{u\in G(I)_{j}}\binom{\operatorname{m}(u)+i-1}{ \operatorname{m}(u)-1},\quad\text{for all }i\geq 0. \tag{1}\] Indeed, for every \(i>0\), a basis of the modules \(H_{i}(e_{1},\ldots,e_{n};E/I)\) is given by the homology classes of the cycles \[u_{\operatorname{m}(u)}x^{(a)},\ u\in G(I),\ |a|=i,\ \max(a)=\operatorname{m}(u),\] where \(\max(a)=\max\operatorname{supp}(x^{(a)})=\max\{i\in[n]:a_{i}\neq 0\}\) (\(a\in\mathbb{N}^{n}\)) and \(u_{\operatorname{m}(u)}=e_{\mu\setminus\{\operatorname{m}(u)\}}\), for \(u=e_{\mu}\).

### A glimpse to ideals with linear quotients

Monomial ideals with linear quotients have been introduced by Herzog and Takayama [15] to study resolutions that arise as iterated mapping cones. **Definition 2.2**: A monomial ideal \(I\) of \(E\) is said to have _linear quotients_ if for some order \(u_{1},\ldots,u_{r}\) of the elements of \(G(I)\) the ideals \((u_{1},\ldots,u_{i-1}):(u_{i})\) are generated by a subset of the set \(\{e_{1},\ldots,e_{n}\}\) for \(i=1,\ldots,r\). The ideal \(I=(e_{2},e_{3}e_{4})\) of \(E=K\langle e_{1},\ldots,e_{4}\rangle\) has linear quotients with respect to this order of the generators. In fact, \(0:_{E}(e_{2})=(e_{2})\), and \((e_{2}):(e_{3}e_{4})=(e_{2},e_{3},e_{4})\). But \(I\) does not have linear quotients with respect to the reversed order of the generators. Indeed, for the order \(e_{3}e_{4},e_{2}\), we have \(0:(e_{3}e_{4})=(e_{3},e_{4})\) and \((e_{3}e_{4}):(e_{2})=(e_{2},e_{3}e_{4})\). **Remark 2.3**: Differently from the definition of linear quotients over the polynomial ring, one starts with \(i=1\), _i.e._, \(0:(u_{1})\) has to be generated by a subset of \(\{e_{1},\ldots,e_{n}\}\). This is always the case, since \(0:(u_{1})=(\mathrm{supp}(u_{1}))\). Following [15], for a monomial ideal \(I\) of \(E\) with linear quotients with respect to some order of the elements \(u_{1},\ldots,u_{r}\) of \(G(I)\), we define \[\mathrm{set}(u_{j})=\{k\in[n]:e_{k}\in(u_{1},\ldots,u_{j-1}):(u_{j})\},\quad \text{for }j=1,\ldots,r.\] One can observe that, in contrast to the polynomial ring context, \(\mathrm{set}(u)\) contains \(\mathrm{supp}(u)\) for every \(u\in G(I)\). **Remark 2.4**: A stable ideal \(I\) of \(E\) has linear quotients with respect to the reverse degree lexicographical order on the generators.
Moreover, \(\mathrm{set}(u)=\{i:i\leq\mathrm{m}(u)\}\), _i.e._, \(\mathrm{set}(u)=[\mathrm{m}(u)]\), for every \(u\in G(I)\). For instance, let us consider the stable ideal \(I=(e_{1}e_{2},e_{1}e_{3},e_{2}e_{3},e_{3}e_{4}e_{5})\) of \(E=K\langle e_{1},\ldots,e_{5}\rangle\). Then \(I\) has linear quotients with respect to the reverse lexicographic order \(e_{1}e_{2},e_{1}e_{3},e_{2}e_{3},e_{3}e_{4}e_{5}\). Indeed, \(0:(e_{1}e_{2})=(e_{1},e_{2})\), \((e_{1}e_{2}):(e_{1}e_{3})=(e_{1},e_{2},e_{3})\), \((e_{1}e_{2},e_{1}e_{3}):(e_{2}e_{3})=(e_{1},e_{2},e_{3})\) and \((e_{1}e_{2},e_{1}e_{3},e_{2}e_{3}):(e_{3}e_{4}e_{5})=(e_{1},e_{2},e_{3},e_{4}, e_{5})\). Therefore, \(\mathrm{set}(e_{1}e_{2})=\{1,2\}\), \(\mathrm{set}(e_{1}e_{3})=\{1,2,3\}\), \(\mathrm{set}(e_{2}e_{3})=\{1,2,3\}\), \(\mathrm{set}(e_{3}e_{4}e_{5})=\{1,2,3,4,5\}\), and so \(|\mathrm{set}(u)|=\mathrm{m}(u)\), for all \(u\in G(I)\). Finally, we have the following hierarchy of monomial ideals in \(E\): \[\text{strongly stable ideals}\ \ \Rightarrow\ \text{stable ideals}\ \ \Rightarrow\ \text{ideals with linear quotients}\] Let \(I\) be a monomial ideal of \(E\) with linear quotients with respect to a minimal set of monomial generators \(\{u_{1},\ldots,u_{r}\}\). We observe that not necessarily \(\deg u_{1}\leq\cdots\leq\deg u_{r}\). Let us consider the ideal \(I=(e_{1}e_{2},e_{2}e_{3}e_{4},e_{1}e_{3})\) of \(E=K\langle e_{1},\ldots,e_{4}\rangle\). \(I\) has linear quotients for the given order of the generators. In fact, \(0:(e_{1}e_{2})=(e_{1},e_{2})\), \((e_{1}e_{2}):(e_{2}e_{3}e_{4})=(e_{1},e_{2},e_{3},e_{4})\) and \((e_{1}e_{2},e_{2}e_{3}e_{4}):(e_{1}e_{3})=(e_{1},e_{2},e_{3})\). Now, let \(I\subset E\) be a monomial ideal with linear quotients with respect to homogeneous generators \(u_{1},\ldots,u_{r}\). We say that the order \(u_{1},\ldots,u_{r}\) is _degree increasing_ if \(\deg(u_{1})\leq\cdots\leq\deg(u_{r})\). By using exterior algebra methods [19, Lemma 4.2.1], one can prove that if \(I\subset E\) is a monomial ideal with linear quotients with respect to a minimal system of monomial generators, then \(I\) also has linear quotients with respect to a degree increasing order of this system. Such a statement can be seen as the exterior version of the "Rearrangement lemma" of Björner and Wachs [4]. Hence, from now on, when we say that a monomial ideal \(I\subset E\) has linear quotients with respect to some order of \(G(I)=\{u_{1},\ldots,u_{r}\}\), we tacitly assume that such an order is degree increasing, _i.e._, \(\deg(u_{1})\leq\cdots\leq\deg(u_{r})\).

## 3 Resolutions by mapping cones

In this section, we determine a minimal graded free resolution of monomial ideals with linear quotients by _mapping cones_. Let \(A\) and \(B\) be two complexes of graded \(E\)-modules of the category \(\mathcal{G}\), and let \(\psi:A\to B\) be a complex homomorphism. The mapping cone of \(\psi\) is the complex \(\mathrm{cone}(\psi)\) with \(\mathrm{cone}(\psi)_{i}=A_{i-1}\oplus B_{i}\) for all \(i\), and the differential \(d:\mathrm{cone}(\psi)_{i}\rightarrow\mathrm{cone}(\psi)_{i-1}\) given by the formula [21] \[d(a,b)=(-d(a),d(b)-\psi(a)),\quad a\in A_{i-1},b\in B_{i}.\] The mapping cone of a chain map between two resolutions gives again a free resolution. We refer to [21] for more details on this topic. We apply this concept in a special situation. Let \(I\) be a monomial ideal of \(E\) and assume that \(I\) has linear quotients with respect to the order \(u_{1},\ldots,u_{r}\) of the generators of \(I\).
The basic idea is to exploit the short exact sequences that arise from adding one generator of \(I\) at a time, and to iteratively construct the resolution as a mapping cone of an appropriate map between previously constructed complexes. To this aim, set \(I_{j}=(u_{1},\ldots,u_{j})\) and \(L_{j}=(u_{1},\ldots,u_{j}):(u_{j+1})\); then we get the following exact sequences of \(E\)-modules \[0\longrightarrow E/L_{j}\longrightarrow E/I_{j}\longrightarrow E/I_{j+1} \longrightarrow 0,\] where \(E/L_{j}\longrightarrow E/I_{j}\) is the multiplication by \(u_{j+1}\). Let \(F^{(j)}\) be a graded free resolution of \(E/I_{j}\), \(C^{(j)}=C.(e_{k_{1}},\ldots,e_{k_{\ell}};E)\) the Cartan complex for the sequence \(e_{k_{1}},\ldots,e_{k_{\ell}}\) with \(\mathrm{set}(u_{j+1})=\{k_{1},\ldots,k_{\ell}\}\), and \(\psi^{(j)}:C^{(j)}\to F^{(j)}\) a graded complex homomorphism which lifts \(E/L_{j}\longrightarrow E/I_{j}\). The map \(\psi^{(j)}\) exists because the modules in \(C^{(j)}\) are free, thus projective. Then the mapping cone, \(\mathrm{cone}(\psi^{(j)})\), of \(\psi^{(j)}\) produces a free resolution of \(E/I_{j+1}\). Thus, by iterated mapping cones one obtains step by step a graded free resolution of \(E/I\). The next statement follows the arguments of the corresponding result in the polynomial ring context [15, Lemma 1.5]. We elaborate on this argument and stress the changes needed due to the different context. For a nonzero \(n\)-tuple \(a=(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}\), let \(\mathrm{supp}(a)=\{j\in[n]:a_{j}\neq 0\}\) and \(|a|=a_{1}+\cdots+a_{n}\). **Theorem 3.1**: _Let \(I\) be a monomial ideal of \(E\) with linear quotients with respect to the order \(u_{1},\ldots,u_{r}\) of the minimal generators of \(I\). Then the iterated mapping cone \(F\), derived from the sequence \(u_{1},\ldots,u_{r}\), is a minimal graded free resolution of \(E/I\), and for all \(i>0\) the symbols_ \[f(a;u),\mbox{ with }u\in G(I)\mbox{, }a\in\mathbb{N}^{n}\mbox{, }\mathrm{supp}(a)\subset\mathrm{set}(u)\mbox{, }|a|=i-1\mbox{,}\] _form a homogeneous basis of the \(E\)-module \(F_{i}\). Moreover, \(\deg f(a;u)=|a|+\deg u\)._ _Proof._ We prove by induction on \(j\) that \(F^{(j)}\) is a minimal graded free resolution of \(E/I_{j}\), and that \(F^{(j)}\) has a basis as claimed. For \(j=1\), the assertion is clear. With the same notations as before (see also Subsection 2.1), in homological degree \(i-1\) the Cartan complex \(C^{(j)}\) has the \(E\)-basis \(x^{(a)}=x_{k_{1}}^{(a_{k_{1}})}x_{k_{2}}^{(a_{k_{2}})}\cdots x_{k_{\ell}} ^{(a_{k_{\ell}})}\), \(a=(a_{k_{1}},\ldots,a_{k_{\ell}})\in\mathbb{N}^{\ell}\), with \(|a|=\sum_{q=1}^{\ell}a_{k_{q}}=i-1\) and \(\mathrm{supp}(x^{(a)})=\mathrm{supp}(a)\subset\mathrm{set}(u_{j+1})\). Since \(F_{i}^{(j+1)}=C_{i-1}^{(j)}\oplus F_{i}^{(j)}\), we obtain the desired basis from the inductive hypothesis by identifying each element \(x^{(a)}\) with \(f(a;u_{j+1})\). In order to prove that \(F^{(j+1)}\) is a minimal free resolution, it is sufficient to verify that \(\mathrm{Im}(\psi^{(j)})\subset(e_{1},\ldots,e_{n})F^{(j)}\). Let \(f(a;u_{j+1})\in C_{i-1}^{(j)}\) and \(\psi^{(j)}(f(a;u_{j+1}))=\sum_{i=1}^{j}\sum_{b}c_{b,i}f(b;u_{i})\). Since we only consider degree increasing orders, \(\deg u_{j+1}\geq\deg u_{i}\) for all \(i=1,\ldots,j\). Furthermore, \(|b|=|a|-1\). Thus \(\deg f(a;u_{j+1})=|a|+\deg u_{j+1}>|b|+\deg u_{i}=\deg f(b;u_{i})\), for all \(b\) and \(i\). Hence, \(\deg c_{b,i}>0\) for all \(b\) and \(i\). The assertion follows.
\(\Box\)

As a consequence of Theorem 3.1, we obtain a formula for computing the graded Betti numbers of a monomial ideal with linear quotients. The next notion [5] will be crucial for our aim. **Definition 3.2**: A sequence \((a_{1},a_{2},\ldots,a_{k})\) of integers fulfilling \(a_{i}\geq 0\) for all \(i\), and \(a_{1}+a_{2}+\cdots+a_{k}=n\), is called a _weak composition_ of \(n\). The number of weak compositions of an integer \(n\) in \(k\) parts is given by \(\binom{n+k-1}{k-1}=\binom{n+k-1}{n}\). **Corollary 3.3**: _Let \(I\) be a monomial ideal of \(E\) with linear quotients with respect to the order \(u_{1},\ldots,u_{r}\) of the minimal generators of \(I\). Then_ \[\beta_{i,i+j}(I)=\sum_{u\in G(I)_{j}}\binom{i+|\mathrm{set}(u)|-1}{|\mathrm{ set}(u)|-1},\] _for all \(i,j\). Hence,_ \[\beta_{i}(I)=\sum_{u\in G(I)}\binom{i+|\mathrm{set}(u)|-1}{|\mathrm{set}(u)|-1},\] _for all \(i\geq 0\)._ _Proof._ From Theorem 3.1, \[\beta_{i,i+j}(I)=|\{a=(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}:\mathrm{supp}(a) \subset\mathrm{set}(u),u\in G(I)_{j}\text{ and }|a|=i\}|.\] On the other hand, one can observe that \[|\{a=(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}:\mathrm{supp}(a)\subset\mathrm{set} (u),|a|=i\}|\] is the number of weak compositions of the positive integer \(i\) in \(|\mathrm{set}(u)|\) parts. The assertions follow. \(\Box\) **Example 3.4**: Let \(I=(e_{1}e_{3},e_{1}e_{4},e_{2}e_{4}e_{6})\) be a monomial ideal of \(E=K\langle e_{1},\ldots,e_{6}\rangle\). \(I\) has linear quotients with respect to the order \(e_{1}e_{3},e_{1}e_{4},e_{2}e_{4}e_{6}\). In fact, \(0:(e_{1}e_{3})=(e_{1},e_{3})\), \((e_{1}e_{3}):(e_{1}e_{4})=(e_{1},e_{3},e_{4})\), \((e_{1}e_{3},e_{1}e_{4}):(e_{2}e_{4}e_{6})=(e_{1},e_{2},e_{4},e_{6}).\) Therefore, setting \(u_{1}=e_{1}e_{3}\), \(u_{2}=e_{1}e_{4}\), \(u_{3}=e_{2}e_{4}e_{6}\), one has \[\mathrm{set}(u_{1})=\{1,3\},\ \ \mathrm{set}(u_{2})=\{1,3,4\},\ \ \mathrm{set}(u_{3})=\{1,2,4,6\}.\] Clearly, \(\beta_{0}(I)=3\). In fact, \(\binom{0+|\mathrm{set}(u)|-1}{|\mathrm{set}(u)|-1}=1\), for all \(u\in G(I)\). Moreover, \[\beta_{1}(I) = |\{a=(a_{1},\ldots,a_{6})\in\mathbb{N}^{6}:\mathrm{supp}(a) \subset\mathrm{set}(u),u\in G(I)\ \mathrm{and}\ |a|=1\}|\] \[= \sum_{i=1}^{3}\binom{1+|\mathrm{set}(u_{i})|-1}{|\mathrm{set}(u _{i})|-1}=\binom{2}{1}+\binom{3}{2}+\binom{4}{3}=9.\] Indeed, * \((1,0,0,0,0,0),(0,0,1,0,0,0)\) are the only \(6\)-tuples \(a\in\mathbb{N}^{6}\) such that \(|a|=1\) and \(\mathrm{supp}(a)\subset\mathrm{set}(u_{1})\); * \((1,0,0,0,0,0),(0,0,1,0,0,0),(0,0,0,1,0,0)\) are the only \(6\)-tuples \(a\in\mathbb{N}^{6}\) such that \(|a|=1\) and \(\mathrm{supp}(a)\subset\mathrm{set}(u_{2})\), and * \((1,0,0,0,0,0),(0,1,0,0,0,0),(0,0,0,1,0,0),(0,0,0,0,0,1)\) are the only \(6\)-tuples \(a\in\mathbb{N}^{6}\) such that \(|a|=1\) and \(\mathrm{supp}(a)\subset\mathrm{set}(u_{3})\). Let us compute \(\beta_{2}(I)\). It is \[\beta_{2}(I) = |\{a=(a_{1},\ldots,a_{6})\in\mathbb{N}^{6}:\mathrm{supp}(a) \subset\mathrm{set}(u),u\in G(I)\ \mathrm{and}\ |a|=2\}|\] \[= \sum_{i=1}^{3}\binom{2+|\mathrm{set}(u_{i})|-1}{|\mathrm{set}(u_{ i})|-1}=\binom{3}{1}+\binom{4}{2}+\binom{5}{3}=19.\] And so on. The Betti table of \(I\) displayed by _Macaulay2_ [13] is the following one: \[\begin{array}{cccccccc}&0&1&2&3&4&5&6\\ \mathrm{total:}&3&9&19&34&55&83&119\\ 2:&2&5&9&14&20&27&35\\ 3:&1&4&10&20&35&56&84\end{array}\] Corollary 3.3 yields the next results.
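Before stating them, the counts in Example 3.4 can be double-checked with a few lines of code. The following Python snippet (our own illustration, not part of the article or of _Macaulay2_) implements the weak-composition formula of Corollary 3.3, taking only the degrees \(\deg u_{i}\) and the cardinalities \(|\mathrm{set}(u_{i})|\) as input; it reproduces the Betti table above.

```python
from math import comb

# Minimal generators of I = (e1 e3, e1 e4, e2 e4 e6), as in Example 3.4.
gens = [
    {"deg": 2, "set_size": 2},  # u1 = e1 e3,     set(u1) = {1,3}
    {"deg": 2, "set_size": 3},  # u2 = e1 e4,     set(u2) = {1,3,4}
    {"deg": 3, "set_size": 4},  # u3 = e2 e4 e6,  set(u3) = {1,2,4,6}
]

def graded_betti(gens, i, j):
    """beta_{i,i+j}(I) = sum over u in G(I)_j of C(i + |set(u)| - 1, |set(u)| - 1),
    the number of weak compositions of i into |set(u)| parts (Corollary 3.3)."""
    return sum(comb(i + g["set_size"] - 1, g["set_size"] - 1)
               for g in gens if g["deg"] == j)

for j in (2, 3):
    print(j, [graded_betti(gens, i, j) for i in range(7)])
# 2 [2, 5, 9, 14, 20, 27, 35]
# 3 [1, 4, 10, 20, 35, 56, 84]
print([sum(graded_betti(gens, i, j) for j in (2, 3)) for i in range(7)])
# [3, 9, 19, 34, 55, 83, 119]  -- the "total" row of the Betti table
```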
**Corollary 3.5**: _Let \(I\) be a monomial ideal of \(E\) with linear quotients with respect to the order \(u_{1},\ldots,u_{r}\) of the minimal generators of \(I\). Then the graded Poincaré series of \(I\) over \(E\) is_ \[\mathcal{P}_{I}(s,t)=\sum_{i}\sum_{j}\sum_{u\in G(I)_{j}}\binom{i+|\mathrm{set}(u)|-1}{i}t^{i}s^{i+j}.\] Proof.: Since \(\mathcal{P}_{I}(s,t)=\sum_{i,j}\beta_{i,j}(I)t^{i}s^{j}\), the desired formula follows from Corollary 3.3. \(\Box\) Recall that if \(\mathcal{M}\) is the category of finitely generated \(\mathbb{Z}\)-graded left and right \(E\)-modules \(M\) satisfying \(am=(-1)^{\deg a\deg m}ma\) for all homogeneous elements \(a\in E\) and \(m\in M\), then the _complexity_ of a module \(M\in\mathcal{M}\), which measures the growth rate of the Betti numbers of \(M\), is defined as follows [2] (see also [16]): \[\mathrm{cx}_{E}(M)=\inf\{c\in\mathbb{Z}:\beta_{i}(M)\leq\alpha i^{c-1}\mbox{ for some }\alpha\in\mathbb{R}\mbox{ and for all }i\geq 1\}.\] In [2], the authors introduced an important invariant of an \(E\)-module, the depth, in a similar way as the depth of a module over a polynomial ring. In more detail, let \(M\in\mathcal{M}\); a linear form \(v\in E_{1}\) is called \(M\)-regular if \(0:_{M}(v)=vM\). A sequence of linear forms \(v_{1},\ldots,v_{s}\) is called an \(M\)-regular sequence if \(v_{i}\) is \(M/(v_{1},\ldots,v_{i-1})M\)-regular for \(i=1,\ldots,s\) and \(M/(v_{1},\ldots,v_{s})M\neq 0\). It is shown in [2] that all maximal \(M\)-regular sequences have the same length. This length is called the _depth_ of \(M\) over \(E\) and is denoted by \(\mathrm{depth}_{E}(M)\). This invariant is closely related to the complexity, at least when the ground field is infinite. Indeed, if \(K\) is an infinite field, then the following formula of Aramova, Avramov and Herzog [2, Theorem 3.2] holds: \[n=\mathrm{cx}_{E}(E/I)+\mathrm{depth}_{E}(E/I). \tag{2}\] **Corollary 3.6**: _Let \(I\) be a monomial ideal of \(E\) with linear quotients. Then,_ \[\mathrm{cx}_{E}(E/I)\ =\ \max\big{\{}|\mathrm{set}(u)|:u\in G(I)\big{\}}.\] _Moreover, if \(K\) is an infinite field, then_ \[\mathrm{depth}_{E}(E/I)\ =\ \min\big{\{}n-|\mathrm{set}(u)|:u\in G(I)\big{\}}.\] Proof.: We have \(\mathrm{cx}_{E}(E/I)=\mathrm{cx}_{E}(I)\). By Corollary 3.3 we have \[\beta_{i}(I) =\ \sum_{u\in G(I)}\binom{i+|\mathrm{set}(u)|-1}{|\mathrm{set}(u)|-1}\] \[=\ \sum_{k=1}^{n}\Big{[}\sum_{u\in G(I):|\mathrm{set}(u)|=k}\binom{i+k-1}{k-1}\Big{]}\] \[=\ \sum_{k=1}^{n}\big{|}\big{\{}u\in G(I):|\mathrm{set}(u)|=k\big{\}}\big{|}\binom{i+k-1}{i}.\] The \(k\)th binomial coefficient in the previous sum is a polynomial in \(i\) of degree \(k-1\), and the number \(\max\big{\{}|\mathrm{set}(u)|:u\in G(I)\big{\}}\) is the maximal \(k\) which appears in the sum. It follows that \(\mathrm{cx}_{E}(E/I)=\max\big{\{}|\mathrm{set}(u)|:u\in G(I)\big{\}}\). If \(K\) is an infinite field, then by (2), \(\mathrm{depth}_{E}(E/I)=n-\mathrm{cx}_{E}(E/I)\) and the assertion follows. \(\Box\) **Remark 3.7**: If \(I\) is a stable ideal of \(E\), then the formula for the graded Betti numbers stated in Corollary 3.3 becomes formula (1), whereas the formula for the complexity in Corollary 3.6 becomes the formula in [16, Lemma 3.1.4]. In fact, \(I\) has linear quotients with respect to the reverse lexicographic order and, moreover, in such a case \(|\mathrm{set}(u)|=\mathrm{m}(u)\), for all \(u\in G(I)\) (Remark 2.4).

## 4 Regular decomposition functions

In this section we describe an explicit resolution of all ideals in \(E\) with linear quotients which satisfy an extra condition.
For this aim we introduce the notion of a regular decomposition function. Let \(I\) be a monomial ideal with linear quotients with respect to the sequence of generators \(u_{1},\ldots,u_{r}\), and set as before \(I_{j}=(u_{1},\ldots,u_{j})\) for \(j=1,\ldots,r\). Let \(M(I)\) be the set of all monomials in \(I\). Define the map \(g:M(I)\to G(I)\) as follows: \(g(u)=u_{j}\), if \(j\) is the smallest number such that \(u\in I_{j}\). The next statement holds. **Lemma 4.1**: (a) _For all \(u\in M(I)\) one has \(u=g(u)c(u)\) with some complementary factor \(c(u)\), and \(\operatorname{set}(g(u))\cap\operatorname{supp}(c(u))=\emptyset\)._ (b) _Let \(u\in M(I)\), \(u=vw\) with \(v\in G(I)\) and \(\operatorname{set}(v)\cap\operatorname{supp}(w)=\emptyset\). Then \(v=g(u)\)._ (c) _Let \(u,v\in M(I)\) be such that \(\operatorname{supp}(u)\cap\operatorname{supp}(v)=\emptyset\). Then \(g(uv)=g(u)\) if and only if \(\operatorname{set}(g(u))\cap\operatorname{supp}(v)=\emptyset\)._ _Proof._ The proofs of Herzog and Takayama for ideals over the polynomial ring [14, Lemma 1.7, Lemma 1.8] carry over. \(\square\) One can observe that any function \(M(I)\to G(I)\) satisfying Lemma 4.1(a) is uniquely determined because of Lemma 4.1(b). We call \(g\) the _decomposition function_ of \(I\). **Definition 4.2**: The decomposition function \(g:M(I)\to G(I)\) is _regular_ if \(\operatorname{set}(g(e_{s}u))\subseteq\operatorname{set}(u)\) for all \(u\in G(I)\) and all \(s\in\operatorname{set}(u)\) such that \(s\notin\operatorname{supp}(u)\). If \(I\) is a stable ideal, then its decomposition function is regular with respect to the reverse lexicographic order. In general, the decomposition function of an ideal with linear quotients is not regular, as the next example shows. **Example 4.3**: Let \(E=K\langle e_{1},\ldots,e_{4}\rangle\) and \(I=(e_{2}e_{4},e_{1}e_{2},e_{1}e_{3})\). By using _Macaulay2_, one can verify that \(I\) has linear quotients. In particular, \(0:(e_{2}e_{4})=(e_{2},e_{4})\), \((e_{2}e_{4}):(e_{1}e_{2})=(e_{1},e_{2},e_{4})\) and \((e_{2}e_{4},e_{1}e_{2}):(e_{1}e_{3})=(e_{1},e_{2},e_{3})\). On the other hand, \(g(e_{2}(e_{1}e_{3}))=e_{1}e_{2}\) but \(\operatorname{set}(e_{1}e_{2})=\{1,2,4\}\not\subseteq\operatorname{set}(e_{1}e_{3})=\{1,2,3\}\). The next statement can be proved using the same techniques as the corresponding result in the polynomial case [14, Lemma 1.11]. **Lemma 4.4**: _If the decomposition function \(g:M(I)\to G(I)\) is regular, then_ \[g(e_{s}(g(e_{t}u)))=g(e_{t}(g(e_{s}u))),\] _for all \(u\in M(I)\) and all \(s,t\in\operatorname{set}(u)\) such that \(s,t\notin\operatorname{supp}(u)\)._ Let \(\varepsilon_{1},\ldots,\varepsilon_{n}\) denote the canonical basis of \(\mathbb{N}^{n}\). From Subsection 2.1, one knows that the differential of the Cartan complex \(C(e_{1},\ldots,e_{n};E/I)\) is given by the formula: \[\partial(cx^{(a)})=\sum_{i=1}^{n}ce_{i}x_{1}^{(a_{1})}\cdots x_{i}^{(a_{i}-1)}\cdots x_{n}^{(a_{n})}=\sum_{j\in\operatorname{supp}(a)}ce_{j}x^{(a-\varepsilon_{j})},\quad c\in E/I,\quad a\in\mathbb{N}^{n}.\] For convenience, we extend the notation introduced in Theorem 3.1 by setting \(f(a;u)=0\) if \(\operatorname{supp}(a)\not\subseteq\operatorname{set}(u)\), for \(u\in M(I)\). Suppose \(u\) and \(v\) are monomials of \(E\) with \(\operatorname{supp}(v)\subseteq\operatorname{supp}(u)\). Then there exists a unique monomial \(u^{\prime}\in E\) such that \(u^{\prime}v=u\).
We set \[u^{\prime}=uv^{-1}=\frac{u}{v}.\] With this convention, the following identities hold: \[u\frac{w}{v}=\frac{uw}{v}\quad\text{ and }\quad\frac{u}{v}\frac{v}{z}=\frac{u}{z}.\] For instance, let \(u=e_{1}e_{2}e_{3}e_{4}e_{5}\) and \(v=e_{2}e_{1}e_{4}\). Then \(\frac{u}{v}=u^{\prime}=e_{3}e_{5}\). Indeed, \[(e_{3}e_{5})v =(e_{3}e_{5})(e_{2}e_{1}e_{4})=(-1)^{\deg(e_{3}e_{5})\deg(e_{2}e_{1}e_{4})}(e_{2}e_{1}e_{4})(e_{3}e_{5})\] \[=-e_{1}e_{2}e_{4}e_{3}e_{5}=-(-e_{1}e_{2}e_{3}e_{4}e_{5})\] \[=e_{1}e_{2}e_{3}e_{4}e_{5}=u.\] Whereas, if \(v=e_{1}e_{2}e_{3}\), then \(\frac{u}{v}=u^{\prime}=e_{4}e_{5}\). Indeed, \[(e_{4}e_{5})(e_{1}e_{2}e_{3})=(-1)^{\deg(e_{4}e_{5})\deg(e_{1}e_{2}e_{3})}(e_{1}e_{2}e_{3})(e_{4}e_{5})=u.\] In the following theorem, for our convenience, we use a modified version of the Cartan complex \(C.(v;E)\), where \(v\) is a sequence of linear forms of \(E\). We substitute the differential \(\partial\) with \((-1)^{\deg(u)}\partial\) and denote the new differential again by \(\partial\). We denote this modified Cartan complex by \((-1)^{\deg(u)}C.(v;E)\). It is clear that this complex provides again a minimal free resolution of \(E/(v)\). **Theorem 4.5**: _Let \(I\) be a monomial ideal of \(E\) with linear quotients, and \(F\) the graded minimal free resolution of \(E/I\). Suppose that the decomposition function \(g:M(I)\to G(I)\) is regular. Then the maps in the resolution \(F\) are given by \(d(f(0;u))=(-1)^{\deg(u)}u\), and for \(a\in\mathbb{N}^{n}\), \(a\neq 0\), \(u\in G(I)\) and \(\operatorname{supp}(a)\subset\operatorname{set}(u)\),_ \[d(f(a;u)) = -\sum_{t\in\operatorname{supp}(a)}(-1)^{\deg(u)}e_{t}f(a-\varepsilon_{t};u)\] \[+ \sum_{t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)}(-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t};g(e_{t}u)).\] _Proof._ We proceed by induction on \(r=|G(I)|\). For \(r=1\), \(I=(u_{1})=(u)\) is a principal ideal and the Cartan complex \((-1)^{\deg(u)}C.(e_{k_{1}},\ldots,e_{k_{\ell}};E)\) with \(\operatorname{set}(u)=\operatorname{supp}(u)=\{k_{1},\ldots,k_{\ell}\}\) provides a minimal free resolution of \(E/I\). In this case, the maps are as required. Now, suppose that \(r>1\) and that our statement holds for \(r-1\). As before, we set \(I_{r-1}=(u_{1},\ldots,u_{r-1})\) and \(L_{r-1}=(u_{1},\ldots,u_{r-1}):(u_{r})\); then we get the following exact sequence of \(E\)-modules \[0\to E/L_{r-1}\longrightarrow E/I_{r-1}\longrightarrow E/I\to 0,\] where \(E/L_{r-1}\longrightarrow E/I_{r-1}\) is the multiplication by \(u_{r}\). Let \(F^{(r-1)}\) be the graded minimal free resolution of \(E/I_{r-1}\), and \(C^{(r-1)}=(-1)^{\deg(u)}C.(e_{k_{1}},\ldots,e_{k_{\ell}};E)\) the Cartan complex for the sequence \(e_{k_{1}},\ldots,e_{k_{\ell}}\) with \(\operatorname{set}(u_{r})=\{k_{1},\ldots,k_{\ell}\}\). Let \(F\) be the mapping cone of \(\psi^{(r-1)}:C^{(r-1)}\to F^{(r-1)}\), and for simplicity set \(\psi^{(r-1)}=\psi\) and \(u_{r}=u\). Since \(F^{(r-1)}\) is a subcomplex of \(F\), it follows from our inductive hypothesis that the formula for the map given in the statement holds for all \(f(a;u_{j})\) with \(j<r\). Hence, it remains to check it for \(f(a;u_{r})=f(a;u)\). By the definition of the mapping cone of \(\psi\), \(d(f(a;u))=-\partial(f(a;u))+\psi(f(a;u))\).
Thus, to prove the asserted formula it is enough to show that we can define \(\psi\) as \[\psi(f(a;u))=\sum_{t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)}(-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t};g(e_{t}u))\] if \(a\neq 0\), and \(\psi(f(0;u))=(-1)^{\deg(u)}u\) otherwise. That is, the following diagram is commutative: \[\begin{array}{ccccc}\cdots\longrightarrow&C_{i}^{(r-1)}&\stackrel{\partial}{\longrightarrow}&C_{i-1}^{(r-1)}&\longrightarrow\cdots\\ &\Big\downarrow{\scriptstyle\psi}&&\Big\downarrow{\scriptstyle\psi}&\\ \cdots\longrightarrow&F_{i}^{(r-1)}&\stackrel{d}{\longrightarrow}&F_{i-1}^{(r-1)}&\longrightarrow\cdots\end{array}\] Here, with abuse of notation, we denote also by \(d\) the chain map of \(F^{(r-1)}\). Consider \(f(\varepsilon_{t};u)\) with \(t\in\operatorname{set}(u)\). Then \[(\psi\circ\partial)(f(\varepsilon_{t};u))\ =\ \psi((-1)^{\deg(u)}e_{t}f(0;u))\ =\ (-1)^{\deg(u)}(-1)^{\deg(u)}e_{t}u=e_{t}u.\] On the other hand, \(d(f(0;g(e_{t}u)))=(-1)^{\deg(g(e_{t}u))}g(e_{t}u)\). Hence, if \(t\notin\operatorname{supp}(u)\), we have that \[(d\circ\psi)(f(\varepsilon_{t};u)) =\ (-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}d(f(0;g(e_{t}u)))\] \[=\ \frac{e_{t}u}{g(e_{t}u)}g(e_{t}u)=e_{t}u.\] Otherwise, if \(t\in\operatorname{supp}(u)\), then \(\psi(f(\varepsilon_{t};u))=0\). In this case, \((d\circ\psi)(f(\varepsilon_{t};u))=0\) but also \((\psi\circ\partial)(f(\varepsilon_{t};u))=e_{t}u=0\). So the first square in the diagram commutes. For the other squares, let \(f(a;u)\) with \(\operatorname{supp}(a)\subseteq\operatorname{set}(u)\) and \(|a|>1\). Then \[(\psi\circ\partial)(f(a;u)) =\sum_{t\in\operatorname{supp}(a)}(-1)^{\deg(u)}e_{t}\psi(f(a-\varepsilon_{t};u))\] \[=\sum_{\begin{subarray}{c}t\in\operatorname{supp}(a)\\ t\notin\operatorname{supp}(u)\end{subarray}}\sum_{\begin{subarray}{c}s\in\operatorname{supp}(a)\setminus\{t\}\\ s\notin\operatorname{supp}(u)\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}u))}\frac{e_{t}e_{s}u}{g(e_{s}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}u))\] \[=\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}u))}\frac{e_{t}e_{s}u}{g(e_{s}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}u))\] \[-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s>t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}u))}\frac{e_{s}e_{t}u}{g(e_{s}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}u)).\] Here, to get the second equality, we have noted that if \(s=t\) or \(t\in\operatorname{supp}(u)\) or \(s\in\operatorname{supp}(u)\), the corresponding term in the sum is zero. Exchanging the role of \(t\) and \(s\) in the second sum, we obtain \[\begin{split}(\psi\circ\partial)(f(a;u))&=\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}u))}\frac{e_{t}e_{s}u}{g(e_{s}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}u))\\ \\ &-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{t}e_{s}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u)).\end{split} \tag{3}\] As for the composition counterclockwise, we have \[(d\circ\psi)(f(a;u))=\sum_{t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)}(-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}d(f(a-\varepsilon_{t};g(e_{t}u))).\] Let \(t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)\) and assume for a moment that \(\operatorname{supp}(a-\varepsilon_{t})\subseteq\operatorname{set}(g(e_{t}u))\).
Then, \[\begin{split} d(f(a-\varepsilon_{t};g(e_{t}u)))=-\sum_{s\in\operatorname{supp}(a-\varepsilon_{t})}(-1)^{\deg(g(e_{t}u))}e_{s}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\\ +\sum_{s\in\operatorname{supp}(a-\varepsilon_{t})\setminus\operatorname{supp}(g(e_{t}u))}(-1)^{\deg(g(e_{s}g(e_{t}u)))}\frac{e_{s}g(e_{t}u)}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u))).\end{split} \tag{4}\] We claim that this expression is also true if \(\operatorname{supp}(a-\varepsilon_{t})\not\subseteq\operatorname{set}(g(e_{t}u))\). In this case, the symbol \(f(a-\varepsilon_{t};g(e_{t}u))\) is zero. Hence, the right-hand side of the previous equation should be zero, too. Indeed, pick \(s\in\operatorname{supp}(a-\varepsilon_{t})\). If \(\operatorname{supp}(a-\varepsilon_{t}-\varepsilon_{s})\not\subseteq\operatorname{set}(g(e_{t}u))\), then by the regularity of \(g\) we have \(\operatorname{set}(g(e_{s}g(e_{t}u)))\subseteq\operatorname{set}(g(e_{t}u))\). Therefore \(\operatorname{supp}(a-\varepsilon_{t}-\varepsilon_{s})\not\subseteq\operatorname{set}(g(e_{s}g(e_{t}u)))\) too, and so the corresponding summands are zero. Otherwise, \(\operatorname{supp}(a-\varepsilon_{t}-\varepsilon_{s})\subseteq\operatorname{set}(g(e_{t}u))\). In this case \(s\notin\operatorname{set}(g(e_{t}u))\), otherwise we would have \(\operatorname{supp}(a-\varepsilon_{t})\subseteq\operatorname{set}(g(e_{t}u))\), against our assumption. Hence, by Lemma 4.1(c) we have that \(g(e_{s}g(e_{t}u))=g(e_{t}u)\). Furthermore, in such a case \(s\in\operatorname{supp}(a-\varepsilon_{t})\setminus\operatorname{supp}(g(e_{t}u))\) and so \[(-1)^{\deg(g(e_{s}g(e_{t}u)))}\frac{e_{s}g(e_{t}u)}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u)))=(-1)^{\deg(g(e_{t}u))}e_{s}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u)).\] Therefore, we see that the right hand side of (4) is zero, as all summands are either zero or cancel against each other. Now, observe that \[-(-1)^{\deg(g(e_{t}u))}(-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}e_{s} =-\frac{e_{t}u}{g(e_{t}u)}e_{s}\] \[=-(-1)^{\deg(e_{t}u)-\deg(g(e_{t}u))}e_{s}\frac{e_{t}u}{g(e_{t}u)}\] \[=(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{s}e_{t}u}{g(e_{t}u)}\] and \[(-1)^{\deg(g(e_{t}u))+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}u}{g(e_{t}u)}\frac{e_{s}g(e_{t}u)}{g(e_{s}g(e_{t}u))}\] \[=(-1)^{\deg(g(e_{t}u))+\deg(g(e_{s}g(e_{t}u)))}(-1)^{\deg(g(e_{t}u))}\frac{e_{t}u}{g(e_{t}u)}\frac{g(e_{t}u)e_{s}}{g(e_{s}g(e_{t}u))}\] \[=(-1)^{\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}ue_{s}}{g(e_{s}g(e_{t}u))}\ =\ (-1)^{\deg(g(e_{s}g(e_{t}u)))+\deg(u)}\frac{e_{t}e_{s}u}{g(e_{s}g(e_{t}u))}.\] Consequently, by (4) and the previous calculations, we have \[(d\circ\psi)(f(a;u))=\sum_{\begin{subarray}{c}t\in\operatorname{supp}(a)\\ s\in\operatorname{supp}(a)\setminus(\operatorname{supp}(u)\cup\{t\})\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{s}e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\] \[+\sum_{\begin{subarray}{c}t\in\operatorname{supp}(a)\\ s\in\operatorname{supp}(a-\varepsilon_{t})\setminus\operatorname{supp}(g(e_{t}u))\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}e_{s}u}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u))).\] Note that in the first sum we can write \(t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)\), otherwise the corresponding term in the sum is zero.
Likewise, in the second sum, we can write \(t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)\) and moreover \(s\in\operatorname{supp}(a)\setminus(\operatorname{supp}(u)\cup\{t\})\). Indeed, if \(t\in\operatorname{set}(u)\setminus\operatorname{supp}(u)\), it is clear that \(\operatorname{supp}(g(e_{t}u))\subset\operatorname{supp}(u)\cup\{t\}\). Therefore, \[\operatorname{supp}(a)\setminus(\operatorname{supp}(u)\cup\{t\})\subseteq\operatorname{supp}(a-\varepsilon_{t})\setminus\operatorname{supp}(g(e_{t}u)).\] If \(s\) belongs to the second set and does not belong to the first one, then \(s\in\operatorname{supp}(u)\) and the corresponding summand is zero. Hence, we can rewrite the previous equation as \[(d\circ\psi)(f(a;u))=\sum_{\begin{subarray}{c}t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)\\ s\in\operatorname{supp}(a)\setminus(\operatorname{supp}(u)\cup\{t\})\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{s}e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\] \[+\sum_{\begin{subarray}{c}t\in\operatorname{supp}(a)\setminus\operatorname{supp}(u)\\ s\in\operatorname{supp}(a)\setminus(\operatorname{supp}(u)\cup\{t\})\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}e_{s}u}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u))).\] Consequently, splitting each of these two sums according to the ordering of \(s\) and \(t\), we get \[(d\circ\psi)(f(a;u))=\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{s}e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\] \[-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s>t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{t}e_{s}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\] \[+\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}e_{s}u}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u)))\] \[-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s>t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{s}e_{t}u}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u))).\] Exchanging the role of \(t\) and \(s\) in the second and fourth sums, we obtain \[(d\circ\psi)(f(a;u))=\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}u))}\frac{e_{s}e_{t}u}{g(e_{t}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}u))\] \[-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}u))}\frac{e_{s}e_{t}u}{g(e_{s}u)}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}u))\] \[+\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{s}g(e_{t}u)))}\frac{e_{t}e_{s}u}{g(e_{s}g(e_{t}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{s}g(e_{t}u)))\] \[-\sum_{\begin{subarray}{c}t,s\in\operatorname{supp}(a)\\ t,s\notin\operatorname{supp}(u)\\ s<t\end{subarray}}(-1)^{\deg(u)+\deg(g(e_{t}g(e_{s}u)))}\frac{e_{t}e_{s}u}{g(e_{t}g(e_{s}u))}f(a-\varepsilon_{t}-\varepsilon_{s};g(e_{t}g(e_{s}u))).\] By Lemma 4.4, since the map \(g\) is regular, we have that \(g(e_{t}g(e_{s}u))=g(e_{s}g(e_{t}u))\) for all \(s,t\in\operatorname{set}(u)\) with \(t,s\notin\operatorname{supp}(u)\). Therefore, the last two double sums in the above expression cancel against each other. Thus, comparing the previous expression with equation (3), we see that \((d\circ\psi)(f(a;u))=(\psi\circ\partial)(f(a;u))\). The conclusion follows. \(\square\)
## 5 An application

In this section we introduce the notion of a strongly stable vector-spread ideal in the exterior algebra \(E=K\langle e_{1},\ldots,e_{n}\rangle\). This notion has been recently introduced in [12] in the polynomial ring as a generalization of the concept of \(t\)-spread strongly stable ideal given by Ene, Herzog and Qureshi in [9]. **Definition 5.1**: Given \(n\geq 1\), \({\bf t}=(t_{1},\ldots,t_{d-1})\in\mathbb{N}^{d-1}\), \(d\geq 2\), and \(e_{\mu}=e_{i_{1}}\cdots e_{i_{\ell}}\neq 1\) a monomial of \(E\), with \(1\leq i_{1}<i_{2}<\cdots<i_{\ell}\leq n\), \(\ell\leq d\), we say that \(e_{\mu}\) is \({\bf t}\)_-spread_ if \(i_{j+1}-i_{j}\geq t_{j}\), for all \(j=1,\ldots,\ell-1\). We say that a monomial ideal \(I\) of \(E\) is \({\bf t}\)-spread if it is generated by \({\bf t}\)-spread monomials. For instance, the monomial ideal \(I=(e_{1}e_{3},e_{2}e_{6},e_{2}e_{4}e_{8})\) of the exterior algebra \(E=K\langle e_{1},\ldots,e_{8}\rangle\) is a \((2,2)\)-spread monomial ideal, but not a \((3,2)\)-spread monomial ideal. Note that any monomial ideal of \(E\) is \({\bf 1}\)-spread, where \({\bf 1}=(1,\ldots,1)\in\mathbb{N}^{n-1}\). The set of all monomials of \(E\) will be denoted by \({\rm Mon}(E)\), whereas the set of all \({\bf t}\)-spread monomials of \(E\) will be denoted by \({\rm Mon}_{\bf t}(E)\). For \({\bf t}={\bf 1}\), \({\rm Mon}_{\bf 1}(E)={\rm Mon}(E)\). One can quickly observe that if \(\ell\) is a positive integer, then there exists at least one \({\bf t}\)-spread monomial of degree \(\ell\) in \(E\) if and only if \(n\geq 1+\sum_{j=1}^{\ell-1}t_{j}\). **Definition 5.2**: A \({\bf t}\)-spread monomial ideal \(I\) is called \({\bf t}\)_-spread strongly stable_ if for all \({\bf t}\)-spread monomials \(e_{\mu}\in I\), all \(j\in{\rm supp}(e_{\mu})\) and all \(i<j\) such that \(e_{i}e_{\mu\setminus\{j\}}\) is \({\bf t}\)-spread, it follows that \(e_{i}e_{\mu\setminus\{j\}}\in I\). For instance, the ideal \(I=(e_{1}e_{3},e_{1}e_{4},e_{2}e_{4}e_{6})\) is a \((2,2)\)-spread strongly stable monomial ideal of \(E=K\langle e_{1},\ldots,e_{6}\rangle\). Note that a \({\bf 1}\)-spread strongly stable ideal is an ordinary strongly stable ideal as introduced in [3]. The defining property of a \({\bf t}\)-spread strongly stable ideal needs to be checked only for the set of \({\bf t}\)-spread monomial generators. **Proposition 5.3**: _Let \(I\) be a \({\bf t}\)-spread ideal and suppose that for all \(e_{\mu}\in G(I)\) and for all integers \(1\leq i<j\leq n\) such that \(j\in\mu\) and \(e_{i}e_{\mu\setminus\{j\}}\) is \({\bf t}\)-spread, one has \(e_{i}e_{\mu\setminus\{j\}}\in I\). Then \(I\) is \({\bf t}\)-spread strongly stable._ _Proof._ Let \(e_{\nu}\in I\) be a \({\bf t}\)-spread monomial and \(1\leq i<j\leq n\) integers such that \(j\in\nu\) and \(e_{i}e_{\nu\setminus\{j\}}\) is \({\bf t}\)-spread. There exist \(e_{\mu}\in G(I)\) and a monomial \(e_{\rho}\in E\) such that \(e_{\nu}=e_{\mu}e_{\rho}\) in \(E\). We distinguish two cases: \(j\in\mu\), \(j\in\rho\). If \(j\in\mu\), then \(e_{i}e_{\mu\setminus\{j\}}\in I\) by assumption, and so \(e_{i}e_{\nu\setminus\{j\}}=e_{i}e_{\mu\setminus\{j\}}e_{\rho}\in I\). If \(j\in\rho\), then \(e_{i}e_{\nu\setminus\{j\}}=e_{i}e_{\mu}e_{\rho\setminus\{j\}}\in I\). \(\Box\) The proof of the next statement is verbatim the same as for the corresponding result in the polynomial ring, see [6, Theorem 2.2] and [12, Proposition 5.1]. Thus we omit it. **Theorem 5.4**: _Let \(I\) be a \({\bf t}\)-spread strongly stable ideal.
Then \(I\) has linear quotients with \(G(I)\) ordered with respect to the pure lexicographic order. In particular, \(I\) is componentwise linear._ Let \(E=K\langle e_{1},\ldots,e_{n}\rangle\) be the exterior algebra on \(n\) alternating variables and let \(S=K[x_{1},\ldots,x_{n}]\) be the standard graded polynomial ring on \(n\) variables. By abuse of notation, and in order to simplify the notation, if \(u\) is a monomial in \(E\), we denote again by \(u\) the corresponding squarefree monomial in \(S\). For instance, if \(u=e_{1}e_{2}e_{3}\in{\rm Mon}(E)\), then we denote by \(u\) the squarefree monomial \(x_{1}x_{2}x_{3}\) of \(S\), too. Moreover, if \(I\) is a monomial ideal in \(E\), we will denote again by \(I\) the corresponding squarefree ideal in \(S\). Let \(I\) be a \(\mathbf{t}\)-spread strongly stable ideal of \(E\) with \(G(I)=\{u_{1},\ldots,u_{r}\}\) ordered with respect to the pure lexicographic order [14]. Using the notation discussed above, \(I\) can be considered as a \(\mathbf{t}\)-spread strongly stable ideal of \(S\) with \(G(I)=\{u_{1},\ldots,u_{r}\}\) ordered with respect to the pure lexicographic order. Hence, we set \[\mathrm{set}_{S}(u_{j})=\{i:x_{i}\in(u_{1},\ldots,u_{j-1}):_{S}(u_{j})\},\quad\mathrm{set}_{E}(u_{j})=\{i:e_{i}\in(u_{1},\ldots,u_{j-1}):_{E}(u_{j})\},\] for \(j=1,\ldots,r\). It is clear that \[\mathrm{set}_{E}(u_{j})=\mathrm{set}_{S}(u_{j})\cup\mathrm{supp}(u_{j}),\quad j=1,\ldots,r.\] Now, by [6, Corollary 2.3], if \(u_{k}=e_{j_{1}}\cdots e_{j_{\ell}}\), then \[\mathrm{set}_{S}(u_{k})=[\mathrm{m}(u_{k})-1]\setminus\bigcup_{h=1}^{\ell-1}\ \big{[}j_{h},j_{h}+(t_{h}-1)\big{]},\] where \([a,b]=\{c\in\mathbb{N}:a\leq c\leq b\}\) and \(a,b\in\mathbb{N}\). Therefore, \[\mathrm{set}_{E}(u_{k})=\mathrm{set}_{S}(u_{k})\cup\mathrm{supp}(u_{k})=[\mathrm{m}(u_{k})]\setminus\bigcup_{h=1}^{\ell-1}\ \big{[}j_{h}+1,j_{h}+(t_{h}-1)\big{]} \tag{5}\] and consequently \(|\mathrm{set}_{E}(u_{k})|=\mathrm{m}(u_{k})-\sum_{h=1}^{\ell-1}(t_{h}-1)\). Theorem 5.4 yields the following corollary, which allows us to state a formula for computing the graded Betti numbers of a \(\mathbf{t}\)-spread strongly stable ideal. **Corollary 5.5**: _Let \(I\) be a \(\mathbf{t}\)-spread strongly stable ideal. Then_ \[\beta_{i,i+j}(I)=\sum_{u\in G(I)_{j}}\binom{\mathrm{m}(u)-\sum_{h=1}^{j-1}(t_{h}-1)+i-1}{i}.\] _Proof._ From Theorem 5.4, \(I\) has linear quotients. Hence, the assertion follows from Corollary 3.3 and (5). \(\Box\) **Remark 5.6**: For \(\mathbf{t}=\mathbf{1}\), the formula in Corollary 5.5 becomes the well-known formula (1).
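To close this section, we note that the combinatorial formulas above are easy to test by machine. The following Python sketch (ours, purely illustrative) computes \(|\mathrm{set}_{E}(u)|\) via formula (5) for the \((2,2)\)-spread strongly stable ideal \(I=(e_{1}e_{3},e_{1}e_{4},e_{2}e_{4}e_{6})\) of Example 3.4, and then recovers its total Betti numbers (Corollary 5.5, equivalently Corollary 3.3), complexity and depth (Corollary 3.6):

```python
from math import comb

n, t = 6, (2, 2)                       # E = K<e_1,...,e_6>, t = (2,2)
gens = [(1, 3), (1, 4), (2, 4, 6)]     # supports of e1e3, e1e4, e2e4e6

def set_size(supp):
    # |set_E(u)| = m(u) - sum_{h=1}^{l-1} (t_h - 1), by formula (5)
    return supp[-1] - sum(t[h] - 1 for h in range(len(supp) - 1))

sizes = [set_size(s) for s in gens]
print(sizes)                                                       # [2, 3, 4]
print([sum(comb(i + k - 1, i) for k in sizes) for i in range(7)])
# [3, 9, 19, 34, 55, 83, 119], as in Example 3.4
print(max(sizes), n - max(sizes))      # cx_E(E/I) = 4, depth_E(E/I) = 2
```

The computed values of \(|\mathrm{set}_{E}(u)|\) agree with the sets listed in Example 3.4, and the Betti numbers agree with the _Macaulay2_ table there.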
2304.13447
Automorphisms of Chevalley groups over commutative rings
In this paper we prove that every automorphism of a Chevalley group (or its elementary subgroup) with root system of rank >1 over a commutative ring (with 1/2 for the systems A_2, F_4, B_l, C_l; with 1/2 and 1/3 for the system G_2) is standard, i.e., it is a composition of ring, inner, central and graph automorphisms. This result finalizes the description of automorphisms of Chevalley groups. However, the restrictions on invertible elements can be a topic of further consideration. We also provide some model-theoretic applications of this description.
Elena Bunina
2023-04-26T11:05:27Z
http://arxiv.org/abs/2304.13447v2
**Automorphisms of Chevalley groups over commutative rings**

**E. I. Bunina**

**Bar Ilan University**

**Abstract.** In this paper we prove that every automorphism of a Chevalley group (or its elementary subgroup) with root system of rank \(>1\) over a commutative ring (with \(1/2\) for the systems \({\bf A}_{2}\), \({\bf F}_{4}\), \({\bf B}_{l}\), \({\bf C}_{l}\); with \(1/2\) and \(1/3\) for the system \({\bf G}_{2}\)) is standard, i. e., it is a composition of ring, inner, central and graph automorphisms. This result finalizes the description of automorphisms of Chevalley groups. However, the restrictions on invertible elements can be a topic of further consideration. We also provide some model-theoretic applications of this description.

## 1 Introduction

### Automorphisms and isomorphisms of classical linear groups

Automorphisms and isomorphisms of linear groups have been studied by mathematicians since the beginning of the 20th century. In particular, Schreier and van der Waerden [74] described all automorphisms of the group \(\,{\rm PSL}\,_{n}\) (\(n\geqslant 3\)) over an arbitrary field. Later on, Hua [49] generalized this method and applied it to the description of automorphisms of symplectic groups over a field of characteristic \(\neq 2\). Dieudonné [38] (1951) and Rickart [72] (1950) introduced the involution method and described automorphisms of the group \(\,{\rm GL}\,_{n}\) (\(n\geqslant 3\)) over a skew field, and then also of unitary and symplectic groups over skew fields of characteristic \(\neq 2\) [73]. The first step towards the description of automorphisms of classical groups over rings was made by Hua and Reiner [48]. They dealt with the case \(\,{\rm GL}\,_{n}(\mathbb{Z})\). This result was extended to non-commutative principal ideal domains by Landin and Reiner in [57] and by Yan Shi-jian in [77]. The methods of the papers mentioned above were based mostly on studying involutions in the corresponding linear groups. O'Meara in 1976 invented a very different (geometrical) method, which did not use involutions. By its aid, O'Meara described automorphisms of the group \(\,{\rm GL}\,_{n}\) (\(n\geqslant 3\)) over domains [64] and automorphisms of symplectic groups of a special form over fields (the so-called _groups rich in transvections_) [65]. Independently, Yan Shi-jian in [77] described automorphisms of the group \(E_{n}(R)\), \(n\geqslant 3\), where \(R\) is a domain of characteristic \(\neq 2\), using the involution method. In the paper [62] Pomfret and McDonald studied automorphisms of the groups \(\,{\rm GL}\,_{n}\), \(n\geqslant 3\), over a commutative local ring with \(1/2\). Further on, Waterhouse in [96] obtained a description of automorphisms of the group \(\,{\rm GL}\,_{n}\), \(n\geqslant 3\), over arbitrary commutative rings with \(1/2\). In 1982 Petechuk [66] described automorphisms of the groups \(\,{\rm GL}\,_{n},\,\,{\rm SL}\,_{n}\) (\(n\geqslant 4\)) over arbitrary commutative rings. If \(n=3\), then automorphisms of linear groups are not always standard [68]. They are standard either if \(2\) is invertible in the ring, or if the ring is a domain, or if it is a semisimple ring. McQueen and McDonald in [63] obtained the description of automorphisms of the groups \(\,{\rm Sp}\,_{n}\), \(n\geqslant 6\), over commutative local rings with \(1/2\).
Continuing research in this direction, in 1980 Petechuk in [69] studied automorphisms of symplectic groups over arbitrary commutative local rings. In 1982 he extended the description of automorphisms to the case \(\,{\rm Sp}\,_{n}(R)\), \(n\geqslant 6\), over an arbitrary commutative ring \(R\), using the localization method, see [70]. Isomorphisms of the groups \(\,{\rm GL}\,_{n}(R)\) and \(\,{\rm GL}\,_{m}(S)\) over arbitrary associative rings with \(1/2\) for \(n,m\geqslant 3\) were described in 1981 by Golubchik and Mikhalev [43] and independently by Zelmanov [101]. In 1997 Golubchik described isomorphisms between these groups for \(n,m\geqslant 4\) over arbitrary associative rings with \(1\) [44]. In 1983 Golubchik and Mikhalev in [42] studied isomorphisms of unitary linear groups over arbitrary associative rings with \(1/2\), with some conditions on the dimension of the group and the rank of the form. For the case when \(n=2k\) and the hyperbolic rank of the form \(Q\) is maximal, the automorphisms of \(U_{n}(R,Q)\), \(k\geqslant 3\), were independently classified in 1985 by Zelmanov, see [101].

### Automorphisms and isomorphisms of Chevalley groups

In the 1950s Chevalley, Steinberg and others introduced the concept of Chevalley groups over commutative rings. The foundations of the theory of Chevalley groups were laid in the papers of Chevalley, Tits, Borel, Weil, Grothendieck, Demazure, Steinberg, etc. In 1956–1958 Chevalley obtained a classification of semisimple algebraic groups over algebraically closed fields. Later on, Chevalley showed that all semisimple groups over an algebraically closed field are actually defined over \(\mathbb{Z}\), or, in other words, are obtained by extending to an arbitrary ring some group scheme defined over \(\mathbb{Z}\). These group schemes are called _Chevalley–Demazure schemes_. The groups of points of Chevalley–Demazure schemes over commutative rings are called _Chevalley groups_. Chevalley groups include the classical linear groups (special linear \(\,{\rm SL}\,\), special orthogonal \(\,{\rm SO}\,\), symplectic \(\,{\rm Sp}\,\), spinor \(\,{\rm Spin}\,\), and also the projective groups connected with them) over commutative rings. Finite simple groups of Lie type are the central quotients of Chevalley groups. Isomorphisms and automorphisms of Chevalley groups over different classes of rings have been intensively studied. The description of isomorphisms of Chevalley groups over fields was obtained by Steinberg [82] for the finite case and by Humphreys [50] for the infinite one. Many papers are devoted to the description of automorphisms of Chevalley groups over commutative rings. We can mention here the papers of Borel–Tits [12], Carter–Chen Yu [27], Chen Yu [28]–[32], Abe [1], and Klyachko [56]. Usually a complete description of automorphisms of Chevalley groups means the standardity of all these automorphisms, that is, all automorphisms are compositions of some simple and well-described types of automorphisms: inner automorphisms, automorphisms induced by ring automorphisms, etc. Abe in [1] proved the standardity of automorphisms for Noetherian rings with \(1/2\), which could help to close the question of automorphisms of Chevalley groups over arbitrary commutative rings with \(1/2\). However, the consideration of the case of adjoint elementary groups there has a gap, which cannot be eliminated by the methods of that article.
The cases when the ring contains a lot of invertible integers (in some sense) are completely clarified in the paper of Klyachko [56]. In the paper [15] Bunina proved that automorphisms of adjoint elementary Chevalley groups with root systems \({\bf A}_{l},{\bf D}_{l},{\bf E}_{l}\), \(l\geqslant 2\), over local rings with invertible \(2\) can be represented as a composition of a ring automorphism and an _automorphism-conjugation_ (by an automorphism-conjugation we mean conjugation of the elements of a Chevalley group in the adjoint representation by some matrix from the normalizer of this group in \(\,{\rm GL}\,(V)\)). By a similar token it was proved in [17] that every automorphism of an arbitrary Chevalley group (or of its elementary subgroup) is standard, i. e., it is a composition of ring, inner, central and graph automorphisms. The same paper contains a theorem describing the normalizer of Chevalley groups in their adjoint representation, which also holds for local rings without \(1/2\). In the series of papers [19], [16], [18], [20], [25] similar methods made it possible to obtain the standardity of all automorphisms of Chevalley groups \(G(\Phi,R)\), where \(\Phi={\bf F}_{4}\), \({\bf B}_{l}\), \(l\geqslant 3\), \(R\) is a local ring and \(1/2\in R\), or \(\Phi={\bf G}_{2}\) and \(1/2\), \(1/3\in R\). The same is true for \(\Phi={\bf A}_{l}\), \({\bf D}_{l}\), \({\bf E}_{l}\), \({\bf G}_{2}\), \(l\geqslant 2\), where \(R\) is a local ring and \(1/2\notin R\). As we already mentioned, the case \({\bf C}_{l}\) (symplectic linear groups and projective symplectic linear groups) was considered in the papers of Petechuk and Golubchik–Mikhalev (even for non-commutative rings). The non-standard automorphisms were described by Steinberg in [81] for the cases of Chevalley groups of types \({\bf B}_{2}\) and \({\bf F}_{4}\) over fields of characteristic \(2\) and of type \({\bf G}_{2}\) over fields of characteristic \(3\). For fields of characteristic \(2\) there also exists an isomorphism between Chevalley groups of types \({\bf B}_{l}\) and \({\bf C}_{l}\), \(l\geqslant 3\). In [68] Petechuk described (non-standard) automorphisms of Chevalley groups of the type \({\bf A}_{2}\) over local rings without \(1/2\). Therefore the cases of Chevalley groups of the types \({\bf A}_{2},{\bf B}_{l},{\bf C}_{l},{\bf F}_{4}\) over rings without \(1/2\) and of the type \({\bf G}_{2}\) over rings without \(1/3\) require separate consideration. In the paper [21] Bunina used the localization method and ideas of Petechuk and generalized the description of automorphisms of Chevalley groups over local rings to adjoint Chevalley groups over arbitrary commutative rings. In the paper [22] the isomorphisms between these Chevalley groups were described. In this paper we extend the result of [21] to arbitrary Chevalley groups over rings. The paper is organized as follows. Section 2 deals with definitions and the formulation of the Main Theorem. The proof of the Main Theorem in the elementary case is given in Section 3. Section 4 is devoted to the proof of the Main Theorem in the general case.

## 2 Definitions and main theorem

### Root systems and semisimple Lie algebras

We fix an indecomposable root system \(\Phi\) of rank \(\ell>1\), with the system of simple roots \(\Delta\), the set of positive (negative) roots \(\Phi^{+}\) (\(\Phi^{-}\)), and the Weyl group \(W\). Recall that any two roots of the same length are conjugate under the action of the Weyl group. Let \(|\Phi^{+}|=m\).
More detailed texts about root systems and their properties can be found in the books [51], [13]. Recall also that for \(\alpha,\beta\in\Phi\) \[\langle\alpha,\beta\rangle=2\frac{(\alpha,\beta)}{(\beta,\beta)}.\] Suppose now that we have a semisimple complex Lie algebra \(\mathcal{L}\) with the Cartan subalgebra \(\mathcal{H}\) (more details about semisimple Lie algebras can be found, for instance, in the book [51]). The Lie algebra \(\mathcal{L}\) has a decomposition \(\mathcal{L}=\mathcal{H}\oplus\sum\limits_{\alpha\neq 0}\mathcal{L}_{\alpha}\), \[\mathcal{L}_{\alpha}:=\{x\in\mathcal{L}\mid[h,x]=\alpha(h)x\text{ for every }h\in\mathcal{H}\},\] and if \(\mathcal{L}_{\alpha}\neq 0\), then \(\dim\mathcal{L}_{\alpha}=1\); all nonzero \(\alpha\in\mathcal{H}^{*}\) such that \(\mathcal{L}_{\alpha}\neq 0\) form some root system \(\Phi\). The root system \(\Phi\) and the semisimple Lie algebra \(\mathcal{L}\) over \(\mathbb{C}\) uniquely (up to automorphism) define each other. On the Lie algebra \(\mathcal{L}\) we can introduce the bilinear _Killing form_ \(\varkappa(x,y)=\,\mathrm{tr}\,(\,\mathrm{ad}\,x\,\mathrm{ad}\,y)\), which is non-degenerate on \(\mathcal{H}\). Therefore we can identify the spaces \(\mathcal{H}\) and \(\mathcal{H}^{*}\). We can choose a basis \(\{h_{1},\ldots,h_{l}\}\) in \(\mathcal{H}\) and for every \(\alpha\in\Phi\) elements \(x_{\alpha}\in\mathcal{L}_{\alpha}\) so that \(\{h_{i};x_{\alpha}\}\) is a basis in \(\mathcal{L}\) and the commutator of every two elements of this basis is an integral linear combination of the elements of the same basis. This basis is called a _Chevalley basis_.

### Elementary Chevalley groups

Let us now introduce elementary Chevalley groups (see [81]). Let \(\mathcal{L}\) be a semisimple Lie algebra (over \(\mathbb{C}\)) with a root system \(\Phi\), and let \(\pi:\mathcal{L}\to\mathfrak{gl}(V)\) be its faithful finite-dimensional representation (of dimension \(n\)). If \(\mathcal{H}\) is a Cartan subalgebra of \(\mathcal{L}\), then a functional \(\lambda\in\mathcal{H}^{*}\) is called a _weight_ of the given representation if there exists a nonzero vector \(v\in V\) (called a _weight vector_) such that \(\pi(h)v=\lambda(h)v\) for any \(h\in\mathcal{H}\). In the space \(V\), in the Chevalley basis, all operators \(\pi(x_{\alpha})^{k}/k!\) for \(k\in\mathbb{N}\) are written as integral (nilpotent) matrices. An integral matrix can also be considered as a matrix over an arbitrary commutative ring with \(1\). Let \(R\) be such a ring. Consider \(n\times n\) matrices over \(R\); the matrices \(\pi(x_{\alpha})^{k}/k!\) for \(\alpha\in\Phi\), \(k\in\mathbb{N}\), lie in \(M_{n}(R)\). Now consider automorphisms of the free module \(R^{n}\) of the form \[\exp(tx_{\alpha})=x_{\alpha}(t)=1+t\pi(x_{\alpha})+t^{2}\pi(x_{\alpha})^{2}/2+\cdots+t^{k}\pi(x_{\alpha})^{k}/k!+\ldots\] Since all matrices \(\pi(x_{\alpha})\) are nilpotent, this series is finite. The automorphisms \(x_{\alpha}(t)\) are called _elementary root elements_. The subgroup in \(\,\mathrm{Aut}\,(R^{n})\) generated by all \(x_{\alpha}(t)\), \(\alpha\in\Phi\), \(t\in R\), is called an _elementary Chevalley group_ (notation: \(E_{\pi}(\Phi,R)\)).
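The construction is easy to experiment with. The following Python/SymPy sketch is our own illustration (it assumes the natural \(3\)-dimensional representation of type \(\mathbf{A}_{2}\), where \(\pi(x_{\alpha})\) are the matrix units \(e_{ij}\), \(i\neq j\)): it builds \(x_{\alpha}(t)\) from the terminating exponential series, then checks the additivity \(x_{\alpha}(t)x_{\alpha}(u)=x_{\alpha}(t+u)\) and the simplest commutator identity \([x_{\alpha}(t),x_{\beta}(u)]=x_{\alpha+\beta}(tu)\) for two simple roots of \(\mathbf{A}_{2}\).

```python
import sympy as sp

t, u = sp.symbols("t u")

def E(i, j):
    # Matrix unit e_{ij}: the nilpotent operator pi(x_alpha) for a root of A_2
    return sp.Matrix(3, 3, lambda r, c: 1 if (r, c) == (i, j) else 0)

def x(e, s):
    # x_alpha(s) = exp(s * e); the loop terminates because e is nilpotent
    m, term, k = sp.eye(3), sp.eye(3), 1
    while True:
        term = (term * (s * e) / k).applyfunc(sp.expand)
        if term.is_zero_matrix:
            return m
        m, k = m + term, k + 1

xa, xb = x(E(0, 1), t), x(E(1, 2), u)                 # x_alpha(t), x_beta(u)
print(x(E(0, 1), t) * x(E(0, 1), u) == x(E(0, 1), t + u))   # True: additivity
comm = (xa * xb * xa.inv() * xb.inv()).applyfunc(sp.expand)
print(comm == x(E(0, 2), t * u))   # True: [x_alpha(t), x_beta(u)] = x_{alpha+beta}(tu)
```

For \(e=e_{12}\) the series stops after the linear term, so \(x_{\alpha}(t)=1+te_{12}\), and the two printed checks are instances of the relations (R1) and (R2) stated below.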
In an elementary Chevalley group we can introduce the following important elements and subgroups:

* \(w_{\alpha}(t)=x_{\alpha}(t)x_{-\alpha}(-t^{-1})x_{\alpha}(t)\), \(\alpha\in\Phi\), \(t\in R^{*}\);
* \(h_{\alpha}(t)=w_{\alpha}(t)w_{\alpha}(1)^{-1}\);
* \(N\) is generated by all \(w_{\alpha}(t)\), \(\alpha\in\Phi\), \(t\in R^{*}\);
* \(H\) is generated by all \(h_{\alpha}(t)\), \(\alpha\in\Phi\), \(t\in R^{*}\);
* the subgroup \(U=U(\Phi,R)\) of the Chevalley group \(G(\Phi,R)\) (resp. \(E(\Phi,R)\)) is generated by the elements \(x_{\alpha}(t)\), \(\alpha\in\Phi^{+}\), \(t\in R\); the subgroup \(V=V(\Phi,R)\) is generated by the elements \(x_{-\alpha}(t)\), \(\alpha\in\Phi^{+}\), \(t\in R\).

The action of \(x_{\alpha}(t)\) on the Chevalley basis is described in [26], [90]. It is known that the group \(N\) is the normalizer of \(H\) in the elementary Chevalley group, and the quotient group \(N/H\) is isomorphic to the Weyl group \(W(\Phi)\). All weights of a given representation generate (by addition) a lattice (a free Abelian group in which every \(\mathbb{Z}\)-basis is also a \(\mathbb{C}\)-basis of \(\mathcal{H}^{*}\)), which is called the _weight lattice_ \(\Lambda_{\pi}\). Elementary Chevalley groups are determined not by a representation of the Chevalley group itself, but just by its weight lattice. More precisely, up to an abstract isomorphism an elementary Chevalley group is completely defined by a root system \(\Phi\), a commutative ring \(R\) with \(1\) and a weight lattice \(\Lambda_{\pi}\). Among all lattices we can mark two: the lattice corresponding to the adjoint representation, generated by all roots (the _root lattice_ \(\Lambda_{ad}\)), and the lattice generated by all weights of all representations (the _lattice of weights_ \(\Lambda_{sc}\)). For every faithful representation \(\pi\) we have the inclusions \(\Lambda_{ad}\subseteq\Lambda_{\pi}\subseteq\Lambda_{sc}\). Respectively, we have the _adjoint_ and _simply connected_ elementary Chevalley groups. Every elementary Chevalley group satisfies the following relations:

(R1) \(\forall\alpha\in\Phi\) \(\forall t,u\in R\quad x_{\alpha}(t)x_{\alpha}(u)=x_{\alpha}(t+u)\);

(R2) \(\forall\alpha,\beta\in\Phi\) \(\forall t,u\in R\quad\alpha+\beta\neq 0\Rightarrow\) \[[x_{\alpha}(t),x_{\beta}(u)]=x_{\alpha}(t)x_{\beta}(u)x_{\alpha}(-t)x_{\beta}(-u)=\prod x_{i\alpha+j\beta}(c_{ij}t^{i}u^{j}),\] where \(i,j\) are positive integers, the product is taken over all roots \(i\alpha+j\beta\), taken in some fixed order; the \(c_{ij}\) are integer numbers not depending on \(t\) and \(u\), but depending on \(\alpha\) and \(\beta\) and on the order of roots in the product;

(R3) \(\forall\alpha\in\Phi\quad w_{\alpha}=w_{\alpha}(1)\);

(R4) \(\forall\alpha,\beta\in\Phi\) \(\forall t\in R^{*}\quad w_{\alpha}h_{\beta}(t)w_{\alpha}^{-1}=h_{w_{\alpha}(\beta)}(t)\);

(R5) \(\forall\alpha,\beta\in\Phi\) \(\forall t\in R\quad w_{\alpha}x_{\beta}(t)w_{\alpha}^{-1}=x_{w_{\alpha}(\beta)}(ct)\), where \(c=c(\alpha,\beta)=\pm 1\);

(R6) \(\forall\alpha,\beta\in\Phi\) \(\forall t\in R^{*}\) \(\forall u\in R\quad h_{\alpha}(t)x_{\beta}(u)h_{\alpha}(t)^{-1}=x_{\beta}(t^{\langle\beta,\alpha\rangle}u)\).

For a given \(\alpha\in\Phi\) by \(X_{\alpha}\) we denote the subgroup \(\{x_{\alpha}(t)\mid t\in R\}\).

### Chevalley groups

Let us now introduce Chevalley groups (see [81], [33], [11], [26], [36], [88], [90], and references therein). Consider semisimple linear algebraic groups over algebraically closed fields. These are precisely the elementary Chevalley groups \(E_{\pi}(\Phi,K)\) (see [81], § 5).
All these groups are defined in \(\mathrm{SL}_{n}(K)\) as the common set of zeros of polynomials in the matrix entries \(a_{ij}\) with integer coefficients (for example, in the case of the root system \(\mathbf{C}_{\ell}\) and the universal representation we have \(n=2l\) and the polynomials coming from the condition \((a_{ij})Q(a_{ji})-Q=0\), where \(Q\) is the matrix of the symplectic form). It is clear that multiplication and taking the inverse element are also defined by polynomials with integer coefficients. Therefore, these polynomials can be considered as polynomials over an arbitrary commutative ring with a unit. Let some elementary Chevalley group \(E\) over \(\mathbb{C}\) be defined in \(\mathrm{SL}_{n}(\mathbb{C})\) by polynomials \(p_{1}(a_{ij}),\ldots,p_{m}(a_{ij})\). For a commutative ring \(R\) with a unit let us consider the group \[G(R)=\{(a_{ij})\in\mathrm{SL}_{n}(R)\mid\widetilde{p}_{1}(a_{ij})=0,\ldots,\widetilde{p}_{m}(a_{ij})=0\},\] where \(\widetilde{p}_{1}(\ldots),\ldots,\widetilde{p}_{m}(\ldots)\) are the polynomials having the same coefficients as \(p_{1}(\ldots),\ldots,p_{m}(\ldots)\), but considered over \(R\). This group is called the _Chevalley group_ \(G_{\pi}(\Phi,R)\) of the type \(\Phi\) over the ring \(R\); for every algebraically closed field \(K\) it coincides with the elementary Chevalley group. In more advanced terms, a Chevalley group \(G(\Phi,R)\) is the value of the _Chevalley–Demazure group scheme_, see [10]. The subgroup of diagonal (in the standard basis of weight vectors) matrices of the Chevalley group \(G_{\pi}(\Phi,R)\) is called the _standard maximal torus_ of \(G_{\pi}(\Phi,R)\) and is denoted by \(T_{\pi}(\Phi,R)\). This group is isomorphic to \(\mathrm{Hom}(\Lambda_{\pi},R^{*})\). Let us denote by \(h(\chi)\) the elements of the torus \(T_{\pi}(\Phi,R)\) corresponding to the homomorphisms \(\chi\in\mathrm{Hom}(\Lambda(\pi),R^{*})\). In particular, \(h_{\alpha}(u)=h(\chi_{\alpha,u})\) (\(u\in R^{*}\), \(\alpha\in\Phi\)), where \[\chi_{\alpha,u}:\lambda\mapsto u^{\langle\lambda,\alpha\rangle}\quad(\lambda\in\Lambda_{\pi}).\]

### Connection between Chevalley groups and their elementary subgroups

The connection between Chevalley groups and the corresponding elementary subgroups is an important problem in the structure theory of Chevalley groups over rings. For elementary Chevalley groups there exists a convenient system of generators \(x_{\alpha}(\xi)\), \(\alpha\in\Phi\), \(\xi\in R\), and all relations between these generators are well known. For general Chevalley groups this is not always the case. If \(R\) is an algebraically closed field, then \[G_{\pi}(\Phi,R)=E_{\pi}(\Phi,R)\] for any representation \(\pi\). This equality is no longer true in general for fields that are not algebraically closed. However, if \(G\) is a simply connected Chevalley group and the ring \(R\) is _semilocal_ (i.e., contains only a finite number of maximal ideals), then \[G_{sc}(\Phi,R)=E_{sc}(\Phi,R)\] (see [60], [2], [79], [6]). If, however, \(\pi\) is arbitrary and \(R\) is semilocal, then \(G_{\pi}(\Phi,R)=E_{\pi}(\Phi,R)T_{\pi}(\Phi,R)\) (see [2], [6], [60]), and the elements \(h(\chi)\) are connected with the elementary generators by the formula \[h(\chi)x_{\beta}(\xi)h(\chi)^{-1}=x_{\beta}(\chi(\beta)\xi).
\tag{1}\]

**Remark 1**.: _Since \(\chi\in\,\mathrm{Hom}\,(\Lambda(\pi),R^{*})\), if we know the values of \(\chi\) on some set of roots which generates all roots (for example, on some basis of \(\Phi\)), then we know \(\chi(\beta)\) for all \(\beta\in\Phi\) and, respectively, all \(x_{\beta}(\xi)^{h(\chi)}\) for all \(\beta\in\Phi\) and \(\xi\in R\)._

_Therefore (in particular), if for all roots \(\beta\) from some generating set of \(\Phi\) we have \([x_{\beta}(1),h(\chi)]=1\), then \(h(\chi)\in Z(E_{\pi}(\Phi,R))\) and hence \(h(\chi)\in Z(G_{\pi}(\Phi,R))\)._

_We will use this observation many times in the next section._

If \(\Phi\) is an irreducible root system of rank \(\ell\geqslant 2\), then \(E(\Phi,R)\) is always normal and even **characteristic** in \(G(\Phi,R)\) (see [86], [47]). In the case of semilocal rings it is easy to show that \[[G(\Phi,R),G(\Phi,R)]=E(\Phi,R),\] except for the cases \(\Phi=\mathbf{B}_{2},\mathbf{G}_{2}\), \(R=\mathbb{F}_{2}\). In the case \(\ell=1\) the subgroup of elementary matrices \(E_{2}(R)=E_{sc}(\mathbf{A}_{1},R)\) is not necessarily normal in the special linear group \(\,\mathrm{SL}\,_{2}(R)=G_{sc}(\mathbf{A}_{1},R)\) (see [35], [85], [83]). In the general case the difference between \(G_{\pi}(\Phi,R)\) and \(E_{\pi}(\Phi,R)\) is measured by the \(K_{1}\)-functor.

### Standard automorphisms of Chevalley groups

We define four types of automorphisms of a Chevalley group \(G_{\pi}(\Phi,R)\), which we call _standard_.

**Central automorphisms.** Let \(C_{G}(R)\) be the center of \(G_{\pi}(\Phi,R)\), and let \(\tau:G_{\pi}(\Phi,R)\to C_{G}(R)\) be some homomorphism of groups. Then the mapping \(x\mapsto\tau(x)x\) from \(G_{\pi}(\Phi,R)\) onto itself is an automorphism of \(G_{\pi}(\Phi,R)\), denoted by the same letter \(\tau\). It is called a _central automorphism_ of the group \(G_{\pi}(\Phi,R)\).

**Ring automorphisms.** Let \(\rho:R\to R\) be an automorphism of the ring \(R\). The mapping \((a_{i,j})\mapsto(\rho(a_{i,j}))\) from \(G_{\pi}(\Phi,R)\) onto itself is an automorphism of the group \(G_{\pi}(\Phi,R)\), denoted by the same letter \(\rho\). It is called a _ring automorphism_ of the group \(G_{\pi}(\Phi,R)\). Note that for all \(\alpha\in\Phi\) and \(t\in R\) the element \(x_{\alpha}(t)\) is mapped to \(x_{\alpha}(\rho(t))\).

**Inner automorphisms.** Let \(S\) be some ring containing \(R\), and let \(g\) be an element of \(G_{\pi}(\Phi,S)\) that normalizes the subgroup \(G_{\pi}(\Phi,R)\). Then the mapping \(x\mapsto gxg^{-1}\) is an automorphism of the group \(G_{\pi}(\Phi,R)\), denoted by \(i_{g}\). It is called an _inner automorphism induced by the element_ \(g\in G_{\pi}(\Phi,S)\). If \(g\in G_{\pi}(\Phi,R)\), then we call \(i_{g}\) a _strictly inner_ automorphism.

**Graph automorphisms.** Let \(\delta\) be an automorphism of the root system \(\Phi\) such that \(\delta\Delta=\Delta\). Then there exists a unique automorphism of \(G_{\pi}(\Phi,R)\) (we denote it by the same letter \(\delta\)) such that for every \(\alpha\in\Phi\) and \(t\in R\) the element \(x_{\alpha}(t)\) is mapped to \(x_{\delta(\alpha)}(\varepsilon(\alpha)t)\), where \(\varepsilon(\alpha)=\pm 1\) for all \(\alpha\in\Phi\) and \(\varepsilon(\alpha)=1\) for all \(\alpha\in\Delta\).
Now suppose that \(\delta_{1},\ldots,\delta_{k}\) are all the different graph automorphisms for the given root system (for the systems \(\mathbf{E}_{7},\mathbf{E}_{8},\mathbf{B}_{l},\mathbf{C}_{l},\mathbf{F}_{4},\mathbf{G}_{2}\) there is just the identity automorphism, for the systems \(\mathbf{A}_{l},\mathbf{D}_{l},l\neq 4,\mathbf{E}_{6}\) there are two such automorphisms, and for the system \(\mathbf{D}_{4}\) there are six automorphisms). Suppose that we have a system of orthogonal idempotents of the ring \(R\): \[\{\varepsilon_{1},\ldots,\varepsilon_{k}\mid\varepsilon_{1}+\cdots+\varepsilon_{k}=1,\forall i\neq j\ \varepsilon_{i}\varepsilon_{j}=0\}.\] Then the mapping \[\Lambda_{\varepsilon_{1},\ldots,\varepsilon_{k}}:=\varepsilon_{1}\delta_{1}+\cdots+\varepsilon_{k}\delta_{k}\] of the Chevalley group onto itself is an automorphism, called a _graph automorphism_ of the Chevalley group \(G_{\pi}(\Phi,R)\). Similarly we can define four types of automorphisms of the elementary subgroup \(E_{\pi}(\Phi,R)\). An automorphism \(\sigma\) of the group \(G_{\pi}(\Phi,R)\) (or \(E_{\pi}(\Phi,R)\)) is called _standard_ if it is a composition of automorphisms of these four types. In [21] the following theorem was proved:

**Theorem 1**.: _Let \(G=G_{\,\mathrm{ad}}\,(\Phi,R)\) be an adjoint Chevalley group (or its elementary subgroup \(E_{\,\mathrm{ad}}\,(\Phi,R)\)) of rank \(>1\), \(R\) be a commutative ring with \(1\). Suppose that for \(\Phi=\mathbf{A}_{2},\mathbf{B}_{l},\mathbf{C}_{l}\) or \(\mathbf{F}_{4}\) we have \(1/2\in R\), and for \(\Phi=\mathbf{G}_{2}\) we have \(1/2,1/3\in R\). Then every automorphism of the group \(G\) is standard and the inner automorphism in the composition is strictly inner._

Our goal is to prove the following theorem:

**Theorem 2**.: _Let \(G=G_{\pi}(\Phi,R)\) be a Chevalley group \((\)or its elementary subgroup \(E_{\pi}(\Phi,R))\) of rank \(>1\), \(R\) be a commutative ring with \(1\). Suppose that for \(\Phi=\mathbf{A}_{2},\mathbf{B}_{l},\mathbf{C}_{l}\) or \(\mathbf{F}_{4}\) we have \(1/2\in R\), and for \(\Phi=\mathbf{G}_{2}\) we have \(1/2,1/3\in R\). Then every automorphism of the group \(G\) is standard._

## 3 Proof of the main theorem for elementary Chevalley groups and subgroups

### Localization of rings and modules; injection of a ring into the product of its localizations

Definition 1.: Let \(R\) be a commutative ring. A subset \(Y\subset R\) is called _multiplicatively closed_ in \(R\) if \(1\in Y\) and \(Y\) is closed under multiplication. Introduce an equivalence relation \(\sim\) on the set of pairs \(R\times Y\) as follows: \[\frac{a}{s}\sim\frac{b}{t}\Longleftrightarrow\exists u\in Y:\ (at-bs)u=0.\] By \(\frac{a}{s}\) we denote the whole equivalence class of the pair \((a,s)\), and by \(Y^{-1}R\) we denote the set of all equivalence classes. On the set \(Y^{-1}R\) we can introduce a ring structure by \[\frac{a}{s}+\frac{b}{t}=\frac{at+bs}{st},\quad\frac{a}{s}\cdot\frac{b}{t}=\frac{ab}{st}.\] Definition 2.: The ring \(Y^{-1}R\) is called the _ring of fractions of \(R\) with respect to \(Y\)_. Let \(\mathfrak{p}\) be a prime ideal of \(R\). Then the set \(Y=R\setminus\mathfrak{p}\) is multiplicatively closed (this is equivalent to the definition of a prime ideal). We will denote the ring of fractions \(Y^{-1}R\) in this case by \(R_{\mathfrak{p}}\). The elements \(\frac{a}{s}\), \(a\in\mathfrak{p}\), form an ideal \(\mathfrak{M}\) in \(R_{\mathfrak{p}}\).
If \(\frac{b}{t}\notin\mathfrak{M}\), then \(b\in Y\), and therefore \(\frac{b}{t}\) is invertible in \(R_{\mathfrak{p}}\). Consequently the ideal \(\mathfrak{M}\) consists of all non-invertible elements of the ring \(R_{\mathfrak{p}}\), i. e., \(\mathfrak{M}\) is the greatest proper ideal of this ring, so \(R_{\mathfrak{p}}\) is a local ring. The process of passing from \(R\) to \(R_{\mathfrak{p}}\) is called _localization at \(\mathfrak{p}\)_.

**Proposition 1**.: _Every commutative ring \(R\) with \(1\) can be naturally embedded into the Cartesian product of all its localizations by maximal ideals_ \[S=\prod_{\mathfrak{m}\text{ is a maximal ideal of }R}R_{\mathfrak{m}}\] _via the diagonal mapping, which sends every \(a\in R\) to the element \(\prod\limits_{\mathfrak{m}}\left(\frac{a}{1}\right)_{\mathfrak{m}}\in S\)._

### Proof for \(E_{\pi}(\Phi,R)\)

Suppose that \(G=G_{\pi}(\Phi,R)\) or \(G=E_{\pi}(\Phi,R)\) is a Chevalley group (or its elementary subgroup), where \(\Phi\) is an indecomposable root system of rank \(>1\) and \(R\) is an arbitrary commutative ring (with \(1/2\) in the case \(\Phi=\mathbf{A}_{2},\mathbf{F}_{4},\mathbf{B}_{l},\mathbf{C}_{l}\), and with \(1/2\) and \(1/3\) in the case \(\Phi=\mathbf{G}_{2}\)). Suppose that \(\varphi\in\,\mathrm{Aut}\,(G)\). Since the subgroup \(E_{\pi}(\Phi,R)\) is characteristic in \(G_{\pi}(\Phi,R)\), the automorphism \(\varphi\) induces an automorphism \(\varphi\in\,\mathrm{Aut}\,(E_{\pi}(\Phi,R))\) (we denote it by the same letter). The elementary adjoint Chevalley group \(E_{\,\mathrm{ad}}\,(\Phi,R)\) is the quotient group of our initial elementary Chevalley group \(E_{\pi}(\Phi,R)\) by its center \(Z=Z(E_{\pi}(\Phi,R))\). Therefore the automorphism \(\varphi\) induces an automorphism \(\overline{\varphi}\) of the adjoint Chevalley group \(E_{\,\mathrm{ad}}\,(\Phi,R)\). By Theorem 1, \(\overline{\varphi}\) is the composition of a graph automorphism \(\overline{\Lambda}_{\varepsilon_{1},\ldots,\varepsilon_{k}}\), where \(\varepsilon_{1},\ldots,\varepsilon_{k}\in R\), a ring automorphism \(\overline{\rho}\), induced by \(\rho\in\,\mathrm{Aut}\,R\), and a strictly inner automorphism \(i_{\overline{g}}\), induced by some \(\overline{g}\in G_{\,\mathrm{ad}}\,(\Phi,R)\). The central automorphism in the decomposition of \(\overline{\varphi}\) is trivial, since the center of any adjoint Chevalley group is trivial. Since \(\varepsilon_{1},\ldots,\varepsilon_{k}\in R\), and for any \(\delta_{i}\in\,\mathrm{Aut}\,\Delta\) and any representation \(\pi\) of the corresponding Lie algebra there exists the corresponding graph automorphism \(\delta_{i}\in\,\mathrm{Aut}\,(G_{\pi}(\Phi,R))\), there exists a graph automorphism \(\Lambda_{\varepsilon_{1},\ldots,\varepsilon_{k}}\in\,\mathrm{Aut}\,(E_{\pi}(\Phi,R))\) such that the induced automorphism of the group \(E_{\,\mathrm{ad}}\,(\Phi,R)\) is precisely \(\overline{\Lambda}_{\varepsilon_{1},\ldots,\varepsilon_{k}}\). Also, taking the ring automorphism \(\rho\in\,\mathrm{Aut}\,(G_{\pi}(\Phi,R))\), we see that the induced automorphism of \(E_{\,\mathrm{ad}}\,(\Phi,R)\) is precisely \(\overline{\rho}\). Therefore, if we take \(\varphi_{1}=\Lambda^{-1}\circ\rho^{-1}\circ\varphi\), then we obtain an automorphism of the group \(G\) (and in any case of the group/subgroup \(E_{\pi}(\Phi,R)\)) which induces the strictly inner automorphism \(i_{\overline{g}}\) on \(E_{\,\mathrm{ad}}\,(\Phi,R)\).
We always assume that \(R\) is a subring of the ring \(S=\prod\limits_{\mathfrak{m}}R_{\mathfrak{m}}=\prod\limits_{i\in\varkappa}R_{i}\), where every \(R_{i}\) is a local ring; therefore \[G_{\pi}(\Phi,R)\subseteq G_{\pi}(\Phi,S)=\prod\limits_{i\in\varkappa}G_{\pi}(\Phi,R_{i})\text{ and }E_{\pi}(\Phi,R)\subseteq\prod\limits_{i\in\varkappa}E_{\pi}(\Phi,R_{i}).\] Note that since every \(R_{i}\) is local, we have \(G_{\pi}(\Phi,R_{i})=T_{\pi}(\Phi,R_{i})E_{\pi}(\Phi,R_{i})\), and therefore \[\prod\limits_{i\in\varkappa}G_{\pi}(\Phi,R_{i})=\prod\limits_{i\in\varkappa}T_{\pi}(\Phi,R_{i})E_{\pi}(\Phi,R_{i}).\] Suppose now that \(\overline{g}=\prod\limits_{i\in\varkappa}\overline{g}_{i}\), where \(\overline{g}_{i}\in G_{\,\mathrm{ad}}\,(\Phi,R_{i})\). Let us consider one \(i\in\varkappa\); here \(\overline{g}_{i}\in T_{\,\mathrm{ad}}\,(\Phi,R_{i})E_{\,\mathrm{ad}}\,(\Phi,R_{i})\), i.e., \(\overline{g}_{i}=\overline{t}_{i}\cdot\overline{x}_{i}\), where \(\overline{t}_{i}\in T_{\,\mathrm{ad}}\,(\Phi,R_{i})\), \(\overline{x}_{i}\in E_{\,\mathrm{ad}}\,(\Phi,R_{i})\). Since \(\overline{x}_{i}\) is a product of elementary unipotents over the ring \(R_{i}\), we can take \(x_{i}\in E_{\pi}(\Phi,R_{i})\) to be the same product of the same elementary unipotents; its image under the factorization of \(E_{\pi}(\Phi,R_{i})\) by its center is precisely \(\overline{x}_{i}\). Now let us consider the element \(\overline{t}_{i}\in T_{\,\mathrm{ad}}\,(\Phi,R_{i})\). This element corresponds to some homomorphism \(\chi_{i}\in\,\mathrm{Hom}\,(\Lambda(\,\mathrm{ad}\,),R_{i}^{*})\) and acts on any \(x_{\alpha}(s)\in E_{\,\mathrm{ad}}\,(\Phi,R_{i})\) as \[\overline{t}_{i}x_{\alpha}(s)\overline{t}_{i}^{-1}=x_{\alpha}(\chi_{i}(\alpha)\cdot s).\] If \(\overline{t}_{i}\notin H_{\,\mathrm{ad}}\,(\Phi,R_{i})\), then we can extend the ring \(R_{i}\) to a ring \(S_{i}\) such that there exists \(\overline{h}_{i}\in H_{\,\mathrm{ad}}\,(\Phi,S_{i})\) with the same action on all elementary unipotents \(x_{\alpha}(s)\) as our \(\overline{t}_{i}\). The ring \(S_{i}\) is an algebraic extension of \(R_{i}\) in which there exist several new roots \(\sqrt[k]{\lambda}\) for a finite number of \(\lambda\in R_{i}^{*}\). This \(S_{i}\) can be obtained from \(R_{i}\) by the standard procedure \[S_{i}\cong R_{i}[y]/(y^{k}-\lambda).\] Note that \(S_{i}\) is not necessarily local. Now since \(R_{i}\subseteq S_{i}\), we have \(S\subseteq\prod\limits_{i\in\varkappa}S_{i}=\widetilde{S}\) and \(R\subseteq S\subseteq\widetilde{S}\). We see that for every \(i\in\varkappa\) the torus element \(\overline{t}_{i}\) acts on all \(x_{\alpha}(s)\), \(s\in S_{i}\), as \(\overline{h}_{i}\in H_{\,\mathrm{ad}}\,(\Phi,S_{i})\); therefore the element \(\overline{y}_{i}=\overline{h}_{i}\cdot\overline{x}_{i}\) acts on all \(x_{\alpha}(s)\), \(s\in S_{i}\), as the initial \(\overline{g}_{i}\). Consequently the element \(\overline{y}:=\prod\limits_{i\in\varkappa}\overline{y}_{i}\in E_{\,\mathrm{ad}}\,(\Phi,\widetilde{S})\) acts on all \(x_{\alpha}(s)\), \(s\in\widetilde{S}\), as the initial \(\overline{g}\).
Therefore we have \(\overline{y}\in E_{\,\mathrm{ad}}\,(\Phi,\widetilde{S})\) such that \[i_{\overline{y}}|_{E_{\,\mathrm{ad}}\,(\Phi,\widetilde{S})}=i_{\overline{g}}|_{E_{\,\mathrm{ad}}\,(\Phi,\widetilde{S})}.\] In particular, \[i_{\overline{y}}|_{E_{\,\mathrm{ad}}\,(\Phi,R)}=i_{\overline{g}}|_{E_{\,\mathrm{ad}}\,(\Phi,R)}.\] Let us take \(y\in E_{\pi}(\Phi,\widetilde{S})\) such that its image under the factorization of \(E_{\pi}(\Phi,\widetilde{S})\) by its center is precisely \(\overline{y}\). Now we can take \(\varphi_{2}=i_{y^{-1}}\circ\varphi_{1}\); it is an isomorphism between \(E_{\pi}(\Phi,R)\) and a subgroup of \(E_{\pi}(\Phi,\widetilde{S})\) such that under the factorization by the center of \(E_{\pi}(\Phi,\widetilde{S})\) we obtain the identity automorphism \(\overline{\varphi}_{2}\) of the group \(E_{\,\mathrm{ad}}\,(\Phi,R)\). Now let us analyze the mapping \(\varphi_{2}\). Since \(\overline{\varphi}_{2}\) is the identity, we have \[\forall\alpha\in\Phi\ \forall s\in R\quad\varphi_{2}(x_{\alpha}(s))=z_{\alpha,s}x_{\alpha}(s),\text{ where }z_{\alpha,s}\in Z(E_{\pi}(\Phi,\widetilde{S})).\] If \(\alpha\) is any root of the systems \(\mathbf{A}_{l}\), \(l\geqslant 2\), \(\mathbf{D}_{l}\), \(l\geqslant 4\), \(\mathbf{E}_{l}\), \(l=6,7,8\), \(\mathbf{F}_{4}\), any long root of the systems \(\mathbf{G}_{2}\), \(\mathbf{B}_{l}\), \(l\geqslant 3\), or any short root of the systems \(\mathbf{C}_{l}\), \(l\geqslant 3\), then \(\alpha\) can be represented as \(\alpha=\beta+\gamma\), where \(\{\pm\beta,\pm\gamma,\pm\alpha\}\cong\mathbf{A}_{2}\). In this case \[x_{\alpha}(s)=[x_{\beta}(s),x_{\gamma}(1)],\] therefore \[z_{\alpha,s}x_{\alpha}(s)=\varphi_{2}(x_{\alpha}(s))=[\varphi_{2}(x_{\beta}(s)),\varphi_{2}(x_{\gamma}(1))]=[z_{\beta,s}x_{\beta}(s),z_{\gamma,1}x_{\gamma}(1)]=[x_{\beta}(s),x_{\gamma}(1)]=x_{\alpha}(s).\] Consequently, \(z_{\alpha,s}=1\) for all \(s\in R\). For the root system \(\mathbf{G}_{2}\) all Chevalley groups are adjoint, so for this system the claim already follows from Theorem 1. For the root system \(\mathbf{B}_{2}\), if \(\alpha\) is a long simple root and \(\beta\) is a short simple root, then \(\Phi^{+}=\{\alpha,\beta,\alpha+\beta,\alpha+2\beta\}\), where \(\alpha+\beta\) is short and \(\alpha+2\beta\) is long, and \[[x_{\alpha}(t),x_{\beta}(u)]=x_{\alpha+\beta}(\pm tu)x_{\alpha+2\beta}(\pm tu^{2}),\qquad[x_{\alpha+\beta}(t),x_{\beta}(u)]=x_{\alpha+2\beta}(\pm 2tu)\] (see [81], Lemma 33). Since for the root system \(\mathbf{B}_{2}\) we require \(1/2\in R\), we have \[[x_{\alpha+\beta}(s),x_{\beta}(1/2)]=x_{\alpha+2\beta}(\pm s),\] and by the same arguments as above \(z_{\gamma,s}=1\) for all long roots \(\gamma\) and all \(s\in R\). Then \[x_{\alpha+\beta}(\pm s)x_{\alpha+2\beta}(\pm s)=[x_{\alpha}(s),x_{\beta}(1)]=\varphi_{2}([x_{\alpha}(s),x_{\beta}(1)])=\\ =\varphi_{2}(x_{\alpha+\beta}(\pm s)x_{\alpha+2\beta}(\pm s))=z_{\alpha+\beta,\pm s}x_{\alpha+\beta}(\pm s)x_{\alpha+2\beta}(\pm s),\] thus \(z_{\gamma,s}=1\) also for all short roots \(\gamma\in\mathbf{B}_{2}\). Therefore for \(\mathbf{B}_{2}\) we have \(z_{\alpha,s}=1\) for all \(\alpha\in\Phi\) and all \(s\in R\), and the mapping \(\varphi_{2}\) is the identity automorphism of \(E_{\pi}(\Phi,R)\). Since any root \(\gamma\) of the root system \(\mathbf{B}_{l}\) or \(\mathbf{C}_{l}\), \(l\geqslant 3\), can be embedded into a subsystem isomorphic to \(\mathbf{B}_{2}\), and in this case we also require \(1/2\in R\), for these root systems we likewise obtain \(z_{\gamma,s}=1\) for all \(s\in R\), so \(\varphi_{2}\) is the identity automorphism of \(E_{\pi}(\Phi,R)\).
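As a quick numerical sanity check of the \(\mathbf{A}_{2}\) commutator identity \(x_{\alpha}(s)=[x_{\beta}(s),x_{\gamma}(1)]\) used above (an illustration, not part of the proof), one can verify it for elementary matrices in \(\mathrm{SL}_{3}\), with the convention \([a,b]=aba^{-1}b^{-1}\):

```python
import numpy as np

def x(i, j, t):
    """Elementary unipotent x_{e_i - e_j}(t) = E + t*E_ij in SL_3 (0-based indices)."""
    m = np.eye(3)
    m[i, j] = t
    return m

s = 2.5
beta = x(0, 1, s)     # x_beta(s)  with beta  = e_1 - e_2
gamma = x(1, 2, 1.0)  # x_gamma(1) with gamma = e_2 - e_3

# [x_beta(s), x_gamma(1)] should equal x_{beta+gamma}(s) = E + s*E_13
comm = beta @ gamma @ np.linalg.inv(beta) @ np.linalg.inv(gamma)
print(np.allclose(comm, x(0, 2, s)))  # True
```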
Therefore in all cases under consideration \[\varphi_{2}|_{E_{\pi}(\Phi,R)}=i_{y^{-1}}\circ\Lambda^{-1}\circ\rho^{-1}\circ\varphi|_{E_{\pi}(\Phi,R)}=id_{E_{\pi}(\Phi,R)},\] so \[\varphi|_{E_{\pi}(\Phi,R)}=\rho\circ\Lambda\circ i_{y}|_{E_{\pi}(\Phi,R)},\] where \(y\in E_{\pi}(\Phi,\widetilde{S})\cap N(E_{\pi}(\Phi,R))\), \(\Lambda\) is a graph automorphism of the groups \(G_{\pi}(\Phi,R)\) and \(E_{\pi}(\Phi,R)\), and \(\rho\) is a ring automorphism of the groups \(G_{\pi}(\Phi,R)\) and \(E_{\pi}(\Phi,R)\). Thus, for \(G=E_{\pi}(\Phi,R)\) the main theorem (Theorem 2) is proved. ## 4 Proof of the main theorem for the groups \(G_{\pi}(\Phi,R)\) Let now \(G=G_{\pi}(\Phi,R)\). Initially the mapping \(\varphi\) was an automorphism of the group \(G\). The mapping \(\varphi_{1}\) from the previous section was the composition of \(\varphi\) with graph and ring automorphisms of the group \(G\), i.e., also an automorphism of \(G\). After that, \(\varphi_{2}\) (from the previous section) is the composition of \(\varphi_{1}\) and the conjugation of \(G\) by some element \(y\in E_{\pi}(\Phi,\widetilde{S})\), where \(R\subset\widetilde{S}\). We know that \(y\) normalizes \(E_{\pi}(\Phi,R)\), and we want to show that in our case \(y\) also normalizes our full Chevalley group \(G\). Note that for the simply connected Chevalley group of the type \(\mathbf{E}_{6}\) Luzgarev and Vavilov proved in [58] that the normalizers of the Chevalley group and of its elementary subgroup coincide. Then in [59] they proved the same theorem for the root system \(\mathbf{E}_{7}\). Since all other exceptional Chevalley groups are adjoint, we only need to show the coincidence of the normalizers for non-adjoint classical Chevalley groups, but our method covers all the cases. **Lemma 1**.: _Under the assumptions of Theorem 2, the elements \(x_{\alpha}(1)\), \(\alpha\in\Phi\), generate, under addition, multiplication, and multiplication by elements of \(R\), the Lie algebra \(\pi(\mathcal{L}_{R}(\Phi))\subset M_{N}(R)\), where \(N\) is the dimension of the representation \(\pi\)._ Proof.: For the adjoint Chevalley groups this lemma was proved in [21]. Therefore we do not repeat the proof for the root system \(\mathbf{G}_{2}\) (since it is always adjoint). If the root system differs from \(\mathbf{G}_{2}\) and \(1/2\in R\), then \(x_{\alpha}(1)=E+\pi(X_{\alpha})+\pi(X_{\alpha})^{2}/2\), therefore \[\pi(X_{\alpha})=x_{\alpha}(1)-E-(x_{\alpha}(1)-E)^{2}/2,\] and \[\pi(\mathcal{L}_{R}(\Phi))=\langle\pi(X_{\alpha})\mid\alpha\in\Phi\rangle_{R}.\] Suppose now that we deal with the systems \(\mathbf{A}_{l}\) \((l\geqslant 3)\), \(\mathbf{D}_{l},\mathbf{E}_{l}\), and \(1/2\notin R\). For all these systems and non-adjoint representations \(\pi\) we have \(\pi(X_{\alpha})^{2}=0\) for all \(\alpha\in\Phi\), therefore \[\pi(X_{\alpha})=x_{\alpha}(1)-E.\] The lemma is proved. From Lemma 1 we see that conjugation by \(y\) maps the Lie algebra \(\pi(\mathcal{L}_{R}(\Phi))\) onto itself. **Lemma 2**.: _Under the assumptions of Theorem 2, the Lie algebra \(\pi(\mathcal{L}_{R}(\Phi))\) together with the identity matrix \(E\) generates, under addition, multiplication, and multiplication by elements of \(R\), the matrix ring \(M_{N}(R)\), where \(N\) is the dimension of the representation \(\pi\)._ Proof.: For all adjoint Lie algebras under consideration this fact was proved in the papers [15], [16], [18], [19], [20].
For classical representations of classical Lie algebras the proof is clear and direct: **1.** If we have the root system \(\mathbf{A}_{l}\) and the standard representation, then \[\pi(X_{e_{i}-e_{j}})=E_{ij},\quad\pi(X_{e_{i}-e_{j}})\pi(X_{e_{j}-e_{i}})=E_{ii},\quad M_{l+1}(R)=\langle E_{ij}\mid 1\leqslant i,j\leqslant l+1\rangle_{R}.\] **2.** The Lie algebra of the type \({\bf C}_{l}\) in its universal representation acts on a \(2l\)-dimensional linear space and has the basis \[\{E_{ii}-E_{l+i,l+i};E_{ij}-E_{l+j,l+i};E_{i,l+i};E_{l+i,i};E_{i,l+j}+E_{j,l+i};E_{l+i,j}+E_{l+j,i}\mid 1\leqslant i\neq j\leqslant l\}.\] Multiplying \(E_{ij}-E_{l+j,l+i}\) by \(E_{j,l+j}\), we get all \(E_{i,l+j}\) for all \(1\leqslant i,j\leqslant l\). Multiplying \(E_{l+i,i}\) by \(E_{ij}-E_{l+j,l+i}\), we obtain \(E_{l+i,j}\) for all \(1\leqslant i,j\leqslant l\). It is clear that after that we have all \(E_{ij}\), \(1\leqslant i,j\leqslant l\), and therefore the whole matrix ring \(M_{2l}(R)\). **3.** For the root system \({\bf D}_{l}\) the standard representation gives the algebra \(\mathfrak{so}_{2l}\), where in the \(2l\)-dimensional space the basis is \[\{E_{ii}-E_{l+i,l+i};E_{ij}-E_{l+j,l+i};E_{i,l+j}-E_{j,l+i};E_{i+l,j}-E_{j+l,i}\mid 1\leqslant i\neq j\leqslant l\}.\] Since for \(i\neq j\) we have \((E_{ii}-E_{l+i,l+i})\cdot(E_{ij}-E_{l+j,l+i})=E_{ij}\), the whole matrix ring \(M_{2l}(R)\) is generated by this Lie algebra. All other representations are described by Plotkin, Semenov and Vavilov in [71] as _microweight_ representations with the help of so-called _weight diagrams_. A weight diagram is a labeled graph whose vertices correspond bijectively to the weights \(\lambda\in\Lambda(\pi)\). The vertices corresponding to \(\lambda,\mu\in\Lambda(\pi)\) are joined by a bond marked \(\alpha_{i}\in\Delta\) (or simply \(i\)) if and only if their difference \(\lambda-\mu=\alpha_{i}\) is a simple root. The diagrams are usually drawn in such a way that the marks on the opposite (parallel) sides of a parallelogram are equal, and at least one of them is usually omitted. All weights are numbered in some order and give the basis of our representation \(\pi\). If we want to find \(\pi(X_{\alpha_{i}})\), \(i=1,\ldots,l\), we need to find all bonds marked by \(i\), and if they join the vertices \((\gamma_{1},\gamma_{1}+\alpha_{i}),\ldots,(\gamma_{k},\gamma_{k}+\alpha_{i})\), then \[\pi(X_{\alpha_{i}})=\pm E_{\gamma_{1},\gamma_{1}+\alpha_{i}}\pm\cdots\pm E_{\gamma_{k},\gamma_{k}+\alpha_{i}},\quad\pi(X_{-\alpha_{i}})=\pm E_{\gamma_{1}+\alpha_{i},\gamma_{1}}\pm\cdots\pm E_{\gamma_{k}+\alpha_{i},\gamma_{k}}.\] It is clear that if we take an element \(\pi(X_{\alpha_{i}})\cdot\pi(X_{\alpha_{j}})\), then it is a sum of \(\pm E_{\gamma,\gamma^{\prime}}\), where there exists a path from the weight \(\gamma\) to \(\gamma^{\prime}\) of length \(2\) marked by the sequence \((i,j)\). Similarly, if we take an element \(\pi(X_{\alpha_{i_{1}}})\cdots\pi(X_{\alpha_{i_{k}}})\), then it is a sum of \(\pm E_{\gamma,\gamma^{\prime}}\), where there exists a path from the weight \(\gamma\) to \(\gamma^{\prime}\) of length \(k\) marked by the sequence \((i_{1},\ldots,i_{k})\). Our goal is to generate all matrix units \(E_{\gamma_{1},\gamma_{2}}\), where \(\gamma_{1},\gamma_{2}\in\Lambda(\pi)\).
Since all weight diagrams are connected, it is sufficient to generate all matrix units \(E_{\gamma,\gamma+\alpha_{i}}\) and \(E_{\gamma+\alpha_{i},\gamma}\), where \(\alpha_{i}\in\Delta\), \(\gamma,\gamma+\alpha_{i}\in\Lambda(\pi)\). The general idea is the following: for any \(\gamma\in\Lambda(\pi)\) and any \(\alpha_{i_{0}}\in\Delta\) such that \(\gamma+\alpha_{i_{0}}\in\Lambda(\pi)\), we find \(\gamma^{\prime}\in\Lambda(\pi)\) such that: (1) there exists a path \((i_{0},i_{1},\ldots,i_{k})\) from \(\gamma\) to \(\gamma^{\prime}\); (2) in our weight diagram there is no other path marked \((i_{0},i_{1},\ldots,i_{k})\); (3) the path \((i_{1},\ldots,i_{k})\) exists only from \(\gamma+\alpha_{i_{0}}\) to \(\gamma^{\prime}\). Then \[\pi(X_{\alpha_{i_{0}}})\pi(X_{\alpha_{i_{1}}})\ldots\pi(X_{\alpha_{i_{k}}})=\pm E_{\gamma,\gamma^{\prime}}\] and \[\pi(X_{-\alpha_{i_{k}}})\ldots\pi(X_{-\alpha_{i_{1}}})=\pm E_{\gamma^{\prime},\gamma+\alpha_{i_{0}}},\] and therefore \(E_{\gamma,\gamma+\alpha_{i_{0}}}=E_{\gamma,\gamma^{\prime}}E_{\gamma^{\prime},\gamma+\alpha_{i_{0}}}\). It is almost clear that such \(\gamma^{\prime}\) and unique paths always exist; we show just one diagram as an example. If we take the case \({\bf A}_{7}\) with the weight \(\omega_{2}\), the representation is \(28\)-dimensional. Let us find a path which gives \(E_{\gamma_{1},\gamma_{2}}\). Since the path \((1,3)\) is unique in the diagram, the path \((2,1,3)\) is also unique and we have \[E_{\gamma_{1},\gamma_{2}}=(\pi(X_{\alpha_{2}})\pi(X_{\alpha_{1}})\pi(X_{\alpha_{3}}))\cdot(\pi(X_{-\alpha_{3}})\pi(X_{-\alpha_{1}})).\] If we want to generate, for example, \(E_{\gamma_{4},\gamma_{6}}\), then the suitable path is \((4,1,5)\), since the path \((1,5)\) is unique in the diagram. Looking at the diagram, it is easy to find a suitable path for any pair of neighboring vertices. Therefore the lemma is proved in all cases. Since \(y\pi(\mathcal{L}_{R}(\Phi))y^{-1}=\pi(\mathcal{L}_{R}(\Phi))\) and \(\pi(\mathcal{L}_{R}(\Phi))\) generates the whole matrix ring \(M_{N}(R)\), we have \(yM_{N}(R)y^{-1}=M_{N}(R)\). Therefore \(yG_{\pi}(\Phi,R)y^{-1}\subseteq\,{\rm SL}\,_{N}(R)\). On the other hand, since \(y\in G_{\pi}(\Phi,\widetilde{S})\), we have \(yG_{\pi}(\Phi,R)y^{-1}\subseteq G_{\pi}(\Phi,\widetilde{S})\). Since \(G_{\pi}(\Phi,\widetilde{S})\cap\,{\rm SL}\,_{N}(R)\) is (by definition) the Chevalley group \(G_{\pi}(\Phi,R)\), the element \(y\) normalizes \(G\). Now we know that \(\varphi_{2}\) is an automorphism of \(G=G_{\pi}(\Phi,R)\) which is identical on the elementary subgroup \(E=E_{\pi}(\Phi,R)\). Let us take some \(g\in G\) and \(x_{1}\in E\), and let \(gx_{1}g^{-1}=x_{2}\in E\). Then \[\varphi_{2}(g)\varphi_{2}(x_{1})\varphi_{2}(g)^{-1}=\varphi_{2}(x_{2})\Longrightarrow\varphi_{2}(g)x_{1}\varphi_{2}(g)^{-1}=x_{2},\] therefore \[\varphi_{2}(g)x_{1}\varphi_{2}(g)^{-1}=gx_{1}g^{-1}\Longrightarrow(g^{-1}\varphi_{2}(g))x_{1}(g^{-1}\varphi_{2}(g))^{-1}=x_{1},\] so \[g^{-1}\varphi_{2}(g)\in C_{G}(E).\] By the main theorem of [5], \(C_{G}(E)=Z(G)\), therefore \[\varphi_{2}(g)=c_{g}\cdot g,\quad c_{g}\in Z(G)\mbox{ for all }g\in G.\] Hence \(\varphi_{2}\) is a central automorphism of \(G\), and the initial \(\varphi\) is the composition of graph, ring, inner and central automorphisms, i.e., \(\varphi\) is standard. The theorem is proved.
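To make the matrix-unit computations from the proof of Lemma 2 tangible, the following NumPy sketch (illustrative only) checks, for \(l=2\), the two products used in the \(\mathbf{C}_{l}\) case: \((E_{ij}-E_{l+j,l+i})E_{j,l+j}=E_{i,l+j}\) and \(E_{l+i,i}(E_{ij}-E_{l+j,l+i})=E_{l+i,j}\).

```python
import numpy as np

l = 2
n = 2 * l

def E(i, j):
    """Matrix unit E_ij in M_{2l}(R), with 1-based indices as in the text."""
    m = np.zeros((n, n))
    m[i - 1, j - 1] = 1.0
    return m

i, j = 1, 2
A = E(i, j) - E(l + j, l + i)  # a basis element of the C_l Lie algebra

print(np.array_equal(A @ E(j, l + j), E(i, l + j)))  # True
print(np.array_equal(E(l + i, i) @ A, E(l + i, j)))  # True
```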
## 5 Some applications: isomorphisms and model theory of Chevalley groups The standard description of automorphisms of Chevalley groups allows one to describe and classify Chevalley groups up to various types of equivalence, and also to study their model-theoretic properties. **Theorem 3**.: _Let \(G_{1}=G_{\pi_{1}}(\Phi_{1},R_{1})\) and \(G_{2}=G_{\pi_{2}}(\Phi_{2},R_{2})\) be two Chevalley groups of ranks \(>1\), and let \(R_{1}\), \(R_{2}\) be commutative rings with \(1\). Suppose that for \(\Phi_{1}=\mathbf{A}_{2},\mathbf{B}_{l},\mathbf{C}_{l}\) or \(\mathbf{F}_{4}\) we have \(1/2\in R_{1}\), and for \(\Phi_{1}=\mathbf{G}_{2}\) we have \(1/2,1/3\in R_{1}\). Then every isomorphism between the groups \(G_{1}\) and \(G_{2}\) is standard: it is a composition of inner, graph and central automorphisms of \(G_{1}\) and a ring isomorphism between \(G_{1}\) and \(G_{2}\)._ Proof.: The proof is identical to the proof of Theorem 9 from [22]. One only needs to replace the references to Theorem 1 in that proof by references to Theorem 2. **Remark 2**.: _The result of Theorem 3 holds for the elementary Chevalley groups \(E_{\pi_{1}}(\Phi_{1},R_{1})\) and \(E_{\pi_{2}}(\Phi_{2},R_{2})\) as well._ **Corollary 1** (classification of Chevalley groups up to isomorphism).: _Under the conditions of Theorem 3, two Chevalley groups \(G_{1}\) and \(G_{2}\) (elementary Chevalley groups, respectively) are isomorphic if and only if they have the same root systems \(\Phi_{1}\) and \(\Phi_{2}\), the same weight lattices \(\Lambda_{\pi_{1}}\) and \(\Lambda_{\pi_{2}}\), and isomorphic rings \(R_{1}\) and \(R_{2}\)._ Proof.: If \(G_{1}\cong G_{2}\), then there exists an isomorphism \(\varphi:G_{1}\to G_{2}\), which, according to Theorem 3, is a composition of a ring isomorphism \(\rho:G_{1}\to G_{2}\) and some automorphism \(\psi\in\,\mathrm{Aut}\,G_{1}\). Therefore there exists a ring isomorphism between \(G_{1}\) and \(G_{2}\), i.e., \(G_{1}\) and \(G_{2}\) have the same root systems and weight lattices, and isomorphic rings. The converse implication is clear. Another application of Theorem 3 is the classification of Chevalley groups up to elementary equivalence (for adjoint Chevalley groups this was done in [22]). Definition 3.: Two algebraic systems \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) of the same language \(\mathcal{L}\) are called _elementarily equivalent_ if their first order theories coincide. **Theorem 4** (Keisler-Shelah Isomorphism theorem, [76], [54]).: _Two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) of the same language are elementarily equivalent if and only if there exists an ultrafilter \(\mathcal{F}\) such that_ \[\prod_{\mathcal{F}}\mathcal{M}_{1}\cong\prod_{\mathcal{F}}\mathcal{M}_{2}.\] **Corollary 2** (classification of Chevalley groups up to elementary equivalence).: _Under the conditions of Theorem 3, two Chevalley groups \(G_{1}\) and \(G_{2}\) (elementary Chevalley groups, respectively) are elementarily equivalent if and only if they have the same root systems \(\Phi_{1}\) and \(\Phi_{2}\), the same weight lattices \(\Lambda_{\pi_{1}}\) and \(\Lambda_{\pi_{2}}\), and elementarily equivalent rings \(R_{1}\) and \(R_{2}\)._ Proof.: By Theorem 4 the groups \(G_{1}\) and \(G_{2}\) are elementarily equivalent if and only if for some ultrafilter \(\mathcal{F}\) their ultrapowers are isomorphic.
Since \[\prod_{\mathcal{F}}G_{\pi}(\Phi,R)\cong G_{\pi}(\Phi,\prod_{\mathcal{F}}R),\] the latter is equivalent to \[G_{\pi_{1}}(\Phi_{1},\prod_{\mathcal{F}}R_{1})\cong G_{\pi_{2}}(\Phi_{2},\prod_{\mathcal{F}}R_{2})\Longleftrightarrow\begin{cases}\Lambda_{\pi_{1}}=\Lambda_{\pi_{2}},\\ \Phi_{1}=\Phi_{2},\\ \prod_{\mathcal{F}}R_{1}\cong\prod_{\mathcal{F}}R_{2},\end{cases}\Longleftrightarrow\begin{cases}\Lambda_{\pi_{1}}=\Lambda_{\pi_{2}},\\ \Phi_{1}=\Phi_{2},\\ R_{1}\equiv R_{2},\end{cases}\] as required. The last two corollaries almost complete the classification of Chevalley groups over commutative rings up to isomorphism and elementary equivalence. However, there are still open questions concerning the relations of Chevalley groups with model theory. In the recent work of D. Segal and K. Tent [75] the question of bi-interpretability of Chevalley groups over integral domains was considered (see [75] and [55] for the definition of _bi-interpretability_): **Theorem 5** ([75]).: _Let \(G(R)=G_{\pi}(\Phi,R)\) be a Chevalley group of rank at least two, and let \(R\) be an integral domain. Then \(R\) and \(G(R)\) are bi-interpretable provided either_ (1)_\(G\) is adjoint, or_ (2)_\(G(R)\) has finite elementary width,_ _assuming in the case \(\Phi=\mathbf{E}_{6},\mathbf{E}_{7},\mathbf{E}_{8}\), or \(\mathbf{F}_{4}\) that \(R\) has at least two units._ In the paper [23] _regular_ bi-interpretability of Chevalley groups over local rings was obtained. This result used the ideas from [75] along with the description of isomorphisms between Chevalley groups over local rings. It has also been proved that the class of Chevalley groups over local rings is _elementarily definable_: _any group that is elementarily equivalent to some Chevalley group over a local ring is also a Chevalley group (of the same type) over a local ring_ (see [23]). Theorems 2 and 3 of the current paper allow us to prove regular bi-interpretability and elementary definability of adjoint Chevalley groups and of Chevalley groups of finite elementary width over arbitrary commutative rings. **Acknowledgements.** Our sincere thanks go to Eugene Plotkin for very useful discussions regarding various aspects of this work and his permanent attention to it.
2301.01851
Quantum Machine Learning: from physics to software engineering
Quantum machine learning is a rapidly growing field at the intersection of quantum technology and artificial intelligence. This review provides a two-fold overview of several key approaches that can offer advancements in both the development of quantum technologies and the power of artificial intelligence. Among these approaches are quantum-enhanced algorithms, which apply quantum software engineering to classical information processing to improve keystone machine learning solutions. In this context, we explore the capability of hybrid quantum-classical neural networks to improve model generalization and increase accuracy while reducing computational resources. We also illustrate how machine learning can be used both to mitigate the effects of errors on presently available noisy intermediate-scale quantum devices, and to understand quantum advantage via an automatic study of quantum walk processes on graphs. In addition, we review how quantum hardware can be enhanced by applying machine learning to fundamental and applied physics problems as well as quantum tomography and photonics. We aim to demonstrate how concepts in physics can be translated into practical engineering of machine learning solutions using quantum software.
Alexey Melnikov, Mohammad Kordzanganeh, Alexander Alodjants, Ray-Kuang Lee
2023-01-04T23:37:45Z
http://arxiv.org/abs/2301.01851v2
# Quantum Machine Learning: from physics to software engineering ###### Abstract Quantum machine learning is a rapidly growing field at the intersection of quantum technology and artificial intelligence. This review provides a two-fold overview of several key approaches that can offer advancements in both the development of quantum technologies and the power of artificial intelligence. Among these approaches are quantum-enhanced algorithms, which apply quantum software engineering to classical information processing to improve keystone machine learning solutions. In this context, we explore the capability of hybrid quantum-classical neural networks to improve model generalization and increase accuracy while reducing computational resources. We also illustrate how machine learning can be used both to mitigate the effects of errors on presently available noisy intermediate-scale quantum devices, and to understand quantum advantage via an automatic study of quantum walk processes on graphs. In addition, we review how quantum hardware can be enhanced by applying machine learning to fundamental and applied physics problems as well as quantum tomography and photonics. We aim to demonstrate how concepts in physics can be translated into practical engineering of machine learning solutions using quantum software. CONTACT: Alexey Melnikov. Email: [email protected]. ## 1 Introduction Nowadays, due to the exponential growth of information and the demands for computational speedup and faster transmission and recognition of information, many key global interdisciplinary problems emerge for modern societies [1]. In everyday life we face the problem of big data everywhere. Classical information science and the corresponding technological achievements in communication and computing have moved our society from the Internet of computers to the Internet of Things (IoT), in which humans interact with spatially distributed smart systems, including high-precision sensors and various recommendation systems based on the processing and recognition of huge amounts of online information [2]. Artificial intelligence (AI) and machine learning (ML) drive the progress of this movement of our society. These tasks and facilities require online information recognition, which is possible only on the basis of parallel information processing. Today, a number of areas have formed in information science, physics, mathematics and engineering which propose to solve these problems by means of various approaches to the parallel processing of information by spatially distributed systems. Our vision of the problem is shown schematically in Fig. 1, which reflects the content of this paper. Nowadays AI predominantly focuses on the ML approach, which provides solutions for the big data problem, the data mining problem, explainable AI, and knowledge discovery. As a result, in our everyday life we can find distributed intelligent systems (DIS), which represent networks of natural intelligent agents (humans) interacting with artificial intelligence agents (chatbots, digital avatars, recommendation systems, etc.), see e.g. [3] and references therein. Such systems require new approaches to data processing, which may be described by means of cognitive computing possessing human-like cognitive capabilities, cf. [4]. At the same time, such systems operate under many uncertainties, which may introduce new complexities. But what about the quantum approach and quantum technologies: how can they help us along this way?
Certainly, the quantum approach and the relevant quantum technologies are among the drivers of the current progress in information science and artificial intelligence, which share the common goals of designing efficient computing, fast and secure communication, and a smart IoT. The mutual overlap of these three seminal disciplines is bearing meaningful fruit today. In the left half of the ellipse in Fig. 1 we list some crucial topics of quantum technology studies which are interdisciplinary by nature. In particular, quantum computing opens new horizons for classical software engineering, see e.g. [5]. It is especially necessary to mention quantum-inspired algorithms and approaches, which utilize quantum probability and quantum measurement theory for classical computing [6]. Quantum computers as physical systems, biological neurons, and the human brain are naturally capable of parallel information processing. Figure 1: Interdisciplinary paradigm of quantum machine learning that is based on current classical information, quantum technologies, and artificial intelligence, respectively (the details are given in the text). However, a sufficient criterion for information-processing speedup is still unknown in many cases. A qubit, the minimal tool of quantum information science, is a superposition of two well-distinguished quantum states defined in a Hilbert space and represents an indispensable ingredient for parallel information processing [7]. Quantum algorithms (software) proposed many years ago utilize the power of qubit superposition and entanglement to achieve so-called quantum supremacy and speedup in the solution of NP-hard problems that are unattainable with classical algorithms [8]. Quantum computers (hardware), first proposed as physical devices by Richard Feynman, deal with simple two-level systems as physical qubits performing quantum computation [9, 10]. Despite the fact that a lot of time has passed since the successful demonstration of the first quantum gates and the simplest operations with them (see e.g. [11]), there still exists a large gap between quantum information theory, quantum algorithms, and the quantum computers designed to execute them. Existing quantum computers and simulators are still very far from demonstrating quantum supremacy in solving real problems related to our daily life. This can be partly explained by the modern noisy intermediate-scale quantum (NISQ) era of the development of quantum technologies [12]. Currently, quantum computers are restricted to small numbers of qubits and relatively high levels of various noises, including decoherence processes that completely (or partially) destroy the effects of interference and entanglement. In this regard, the problem of quantum supremacy for specific tasks is the subject of heated debate [13, 14, 15]. Surface codes and the creation of logical qubits are intended to significantly reduce computation errors [16, 17]. In particular, such codes presume a mapping of some graph of physical qubits onto a logical qubit. Typically, special network-like circuits are designed for a quantum processor consisting of logical qubits. However, it is unclear whether this mapping is unique and whether such networks are optimal and universal for the various physical computation platforms. As an example, a minor embedding procedure is used for quantum annealing computers, which are based on superconducting quantum hardware [18].
Obviously, the various physical platforms currently examined for quantum computation can use different mapping procedures and rely on the design of specific networks of qubits accounting for their specific noises and decoherence. Thus, the choice of an appropriate network architecture represents a keystone problem for current quantum computing and relates directly to the demonstration of quantum supremacy. Clearly, the solution of this problem is connected not only with the properties of quantum systems, but also with the capability of networks for parallel and robust information processing. An important example that we refer to here is the human brain, a complex network composed of biologically active networks which exhibit fast information processing. Notably, the architecture of such computations is non-von Neumann. In particular, the human brain is capable of retrieving patterns by means of association. Long ago, Hopfield introduced a simple neural network model of associative memory [19]. Over time, neural networks have become an indispensable tool for parallel classical computing. The artificial intelligence and machine learning paradigm, together with cognitive and neuromorphic computing, uses specific peculiarities of neural networks and represents a vital approach to exploring the full power of the parallel character of computation [1, 20]. Quantum machine learning (QML) is a new paradigm of parallel computation in which quantum computing meets network theory to improve computational speedup and current NISQ-era quantum technology facilities by means of quantum or classical computational systems and algorithms [21, 22, 23, 24]. In Fig. 2 we present a timeline of the appearance and development of some important algorithms [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45] which are able to improve computational complexity, accuracy and efficiency within the various types of hardware available now. In this work we are going to discuss most of them in detail. More generally, the QML discipline nowadays sits at the border of current quantum technologies and artificial intelligence and includes all of their methods and approaches to information processing, see Fig. 1. The rapidly growing number of publications and reviews in this discipline indicates an increasing interest in it from the scientific community, see e.g. [46, 47, 48, 49, 50, 51, 52, 53]. In particular, seminal problems of algorithm theory concerning the enhancement of quantum computing by means of the ML approach can be found in [21, 46, 50, 53]. Applications of the ML approach to timely problems in materials science, photonics, and physical chemistry can be found in [47, 48, 49, 51, 52], respectively. It is important to note that the ML approach is closely connected with knowledge discovery in modern fundamental physics, which in turn is closely connected with the problem of big data and its recognition. In particular, we speak here about automated scientific discovery, which can significantly expand our knowledge of Nature, cf. [54, 55]. It is worth mentioning the research at the Large Hadron Collider (LHC), where data mining can contribute to new discoveries in the field of fundamental physics [56]. Another important example is the network research on the registration of gravitational waves and extremely weak signals in astronomy, see e.g. [57].
Clearly, further discoveries in this area require improving the sensitivity of the network detectors (which are interferometers) and mining the obtained data, to which the ML approach can contribute significantly, cf. [58]. Although the previous review papers [21, 22, 46, 47, 49, 50, 51, 52, 53] theoretically substantiate and discuss the effectiveness of quantum approaches and quantum algorithms in ML problems, in practice there are many obstacles that prevent quantum supremacy from being seen in experiment. Within the NISQ era, the capabilities of modern quantum computers and simulators are not yet sufficient to achieve quantum supremacy, cf. [12]. In this regard, hybrid information-processing algorithms that share the work between quantum and classical computers come to the fore. Quantum-classical variational and quantum approximate optimization (QAOA) algorithms are very useful and effective in this case, see e.g. [59, 60, 61, 62]. In this review we discuss various approaches that are used within the QML paradigm under current NISQ-era realities. Figure 2: Timeline with milestones of quantum machine learning achievements. Unlike the previous works [21, 22, 46, 47, 49, 50, 51, 52, 53], below we will focus on methods and approaches of ML that can be effective especially for hybrid (quantum-classical) algorithms, see Fig. 2. In its most general form, the current work is divided into two large parts, which we present as Sections 2 and 3, respectively. In particular, in Sec. 2 we consider a variety of problems where the ML approach may be enhanced by means of quantum technologies, as presented in Fig. 1. In general, we speak here about the speedup of data processing by quantum computers and/or quantum simulators which we can use for classical ML purposes, see Fig. 3. An important part of these studies is devoted to the optimal encoding, or embedding, of a classical data set into a quantum device [63], and to the recognition of the data set from the quantum state readout. We give a comprehensive analysis of the features of quantum neural networks (QNNs), novel QML models whose parameters are updated classically. We discuss how such models may be used in timely hybrid quantum-classical variational algorithms. Conversely, in Sec. 3 we present currently developing QML directions in which the classical ML approach can help to solve NISQ-era quantum computing and quantum technology tasks, cf. Fig. 1. In particular, it is necessary to mention the automation of quantum experiments, quantum state tomography, quantum error correction, etc., where classical ML techniques may be applied. Especially, we highlight ML algorithms that can be useful for recognizing the quantum speedup of random (classical or quantum) walks performed on various graphs. Our proposed solution of this problem plays an essential role in the development of both current quantum computing hardware and software. ## 2 Machine learning enhanced by quantum technologies In this section, the impact that quantum technologies make on machine learning is discussed. The outline of the topics is given in Fig. 3. ### 2.1 Machine learning models At its roots, machine learning is a procedural algorithm that is augmented by the provision of external data to model a specific probability distribution.
The data could consist of only environment variables (features), \(\mathbf{x}\in\mathcal{X}\) - unsupervised learning - features and their associated outcomes (labels), \(\mathbf{y}\in\mathcal{Y}\) - supervised learning - or environment variables and a reward for specific actions, \(R(a)\) - reinforcement learning. Figure 3: Quantum technologies help in improving machine learning. Sections that discuss a particular topic are labeled. #### 2.1.1 Unsupervised learning The point of unsupervised learning is to infer attributes about a series of data points, usually to find the affinity of data points to a clustering regime. A popular method of unsupervised learning is known as K-means clustering [64], where data points are assigned to a chosen number of clusters and the positions of the centres of these clusters can be trained. Unsupervised learning is applied to many real-world problems, from customer segmentation in different industries [65] to criminal activity detection [66]. #### 2.1.2 Supervised learning In contrast, supervised learning endeavours to infer patterns in the provided data. The goal of such models is to generalise this inference to previously-unseen data points. In a linear regression setting, this is often done by linear interpolation [67], but for an a priori problem where some degree of non-linearity is plausible, supervised learning can train non-linear regression models and provide better alternatives. Supervised learning is also used for logistic learning, where instead of a regression model, a categorical probability distribution is to be learned. Supervised learning has seen considerable success in many areas, from credit-rating models [68] to scientific fields [69]. #### 2.1.3 Reinforcement learning Finally, reinforcement learning is the optimisation of a set of actions (a policy) in an environment. The environment allows actions and provides rewards if certain conditions are met. An agent is made to explore this environment by investigating the outcomes of certain actions given its current state, and accordingly optimises its model variables. Reinforcement learning often attracts significant attention from the gaming industry [70], but it has also contributed to real-life scenarios such as portfolio management [71]. #### 2.1.4 Exponential growth of practical machine learning models A recurring theme in all three modes of ML is the high complexity of their models. This could be caused by a high-dimensional input size, as in classifying a high-resolution image database [72], or by a complex problem, as in image segmentation [73]. A commonly-used - but known to be inaccurate [74, 75] - measure of complexity is the parameter-count of an ML model. It is simply the number of trainable parameters of a model. Most familiarly, in neural networks the parameters are the weights and biases associated with each neural layer. For smaller problems, the parameter-count could be as small as hundreds [76, 77], but cutting-edge AI models such as DALL-E 2 [78], Gopher [79], and GPT-3 [80] have tens or hundreds of billions of parameters, and this count keeps increasing [81, 82]. This level of high dimensionality comes at a great financial and environmental cost. Ref. [83] assessed the carbon emissions and the financial cost of fine-tuning and training several large ML models in 2019. It found that in some cases these models emitted more CO2 than an average American car over its entire lifetime, and could cost over $3m to train.
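As a quick illustration of the parameter-count measure, the following sketch counts the weights and biases of a small fully connected network (the layer sizes are illustrative, not those of the models cited above):

```python
def mlp_parameter_count(layer_sizes):
    """Weights + biases of a fully connected network: sum over layers of (n_in + 1) * n_out."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# A small model: 784 inputs (a 28x28 image), two hidden layers, 10 outputs.
print(mlp_parameter_count([784, 128, 64, 10]))  # 109386 parameters
```

Scaling the same count to the billions of parameters of the cited large models makes the cost figures above easy to appreciate.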
In addition to this great cost, there are concerns regarding scalability, namely that the exponential growth in computing power - known as Moore's law [84], revisited in [85] - proceeds at a slower rate than the growth of ML research [86, 87]. #### 2.1.5 Quantum-enhanced machine learning The idea behind the field of quantum machine learning is to use the capabilities of quantum computers to build scalable machine learning models that can deliver capabilities beyond what classical models can be expected to provide, at a healthier cost. Quantum computers offer an exponential computational space advantage over classical computers by representing information in quantum binary digits, or qubits. Where classical computers work in the Boolean space, \(\mathbb{B}^{\otimes n}\), qubits form an exponentially growing, special unitary space, \(\mathbb{SU}(2^{n})\). This means that while a classical register with \(n\) bits can hold an \(n\)-digit binary string, a quantum register of the same size holds all possible strings of that size, providing an exponential number of terms in comparison to its classical counterpart [7]. In addition to addressing the scalability concerns, classical machine learning models operate within the realm of classical statistical theory, which in some cases seems to diverge from human behavioural surveys. Ref. [88] introduced the _sure thing principle_, which shows how uncertainties that a classical statistical model would deem unrelated (and remove) can affect a human's decision. In Refs. [89, 90] it is shown that in some cases people tend to give higher credence to two events occurring in conjunction than to either happening individually, which is contrary to the classical statistical picture. In Refs. [6, 91] it is argued that these problems could be addressed by using a quantum statistical formulation. In addition, other similar issues, like the problem of negation [92] and others listed in Ref. [93], are also shown to have a resolution in quantum theory. The distributional compositional categorical (DisCoCat) model of language [94] could be regarded as the first theoretically successful attempt at harnessing this advantage of quantum machine learning. ### 2.2 Quantum neural networks For a given data provision method, e.g. supervised learning, a host of different machine learning architectures could be considered. A machine learning architecture has a set of trainable parameters \(\theta\) that can be realised based on an initial probability distribution. Any specific realisation of the parameters of a machine learning architecture is a model. The quest of machine learning is to train these parameters so that the model's probability distribution approaches that of the problem in question. The fully trained version of each architecture yields a different model with different performance, and generally, the architectures that can spot and infer existent patterns in the data are said to be of superior performance. It is also important to avoid models that find non-existent patterns: models that are said to over-fit to the provided data and, when evaluated on previously-unseen data, fail to perform as well. A model that can spot existent patterns without over-fitting to the provided data is said to have a high _generalisation ability_. This metric establishes a platform for model selection1 [96]. Footnote 1: Sometimes referred to as Occam’s factor [95].
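The over-fitting and generalisation trade-off just described can be seen in a toy polynomial-regression experiment (a sketch under our own illustrative choices, not from the cited references): a degree-9 fit to 10 noisy samples typically drives the training error to nearly zero while the test error deteriorates.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train, x_test = rng.uniform(-1, 1, 10), rng.uniform(-1, 1, 100)
f = lambda x: np.sin(3 * x)                        # the "true" pattern
y_train = f(x_train) + 0.1 * rng.normal(size=10)   # noisy training observations

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))
# Degree 1 under-fits, degree 9 typically over-fits (tiny training error,
# large test error), and an intermediate degree generalises best.
```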
For any given problem, there are a variety of architecture classes to choose from. Some of the most commonly-used architectures are multi-layered perceptrons (neural networks), convolutional networks for image processing, and graph neural networks for graphically-structured data. QML contributes to this list by introducing quantum models such as QNNs [97]. Quantum neural networks are models in QML whose parameters are updated classically. The training pipeline includes providing data to the quantum model, calculating an objective (loss) value, and then adapting the QNN parameters in such a way as to reduce this objective function. The specific approach to providing the data to the quantum model is known as the data encoding strategy, and it can have a drastic effect on the functional representation of the model. Sec 2.2.1 covers the various approaches to data encoding, and Sec 2.2.4 offers a review of the theoretical advances in exploring the analytical form of this representation. In QNNs, the objective function is (or includes) the expectation value of a parametrised quantum circuit (PQC) [98]. PQCs are quantum circuits that make use of continuous-variable group rotations. Fine-tuning the architecture of the PQC can have a direct effect on the performance of the resultant QNN model. Sec 2.2.2 reviews the various PQC parametrisations suggested in the literature. The consequences of the choice of the loss function are outlined in Sec 2.3.1. After making this choice, one can evaluate the PQC and pass the result to the loss function to obtain a loss value. To minimise the loss value, it is important to tune the trainable parameters in such a way as to maximally reduce this value. This is achieved - in both classical and quantum ML - by calculating the gradient of the loss function with respect to the model parameters2. The gradient vector of a function points in the direction of maximal increase of that function, so to maximally reduce the loss function one can find the gradient and step in the opposite direction. Sec 2.2.3 reviews the literature concerning QNN gradient computation. Footnote 2: Some alternative approaches exist, known as gradient-free optimisation methods [99]. #### 2.2.1 Data encoding strategies There are three over-arching data encoding strategies [100]: * **State embedding:** the features are mapped to the computational basis states. This is often used for categorical data, and as the number of basis states grows, the number of data points needs to follow the same trend; otherwise, the encoding will be sparse [97]. * **Amplitude embedding:** the features of the dataset are mapped to amplitudes of the qubit states. This embedding could be repeated to increase the complexity of the encoding. For \(n\) qubits, this method allows us to encode up to \(2^{n+1}\) features onto the quantum system. * **Hamiltonian embedding:** the features are mapped to the Hamiltonian governing the evolution of the quantum system. This embedding is used in quantum simulation - see Sec 2.5.2 - namely variational quantum eigensolvers (VQE) [30] and quantum differential equation solvers [101, 102]. It is important to recognise that state embedding is the only discrete-variable encoding with a strong resemblance to classical ML, whereas the other two are continuous-variable methods and can be considered analogue machine learning3. Footnote 3: This is subject to the input methodology. Normally, a digital computer is used to set up the quantum circuit, in which case the learning is still fully digital. Amplitude embedding can be sub-divided into sub-categories: angle embedding, state amplitude embedding, squeezing embedding, and displacement embedding [103, 104].
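A minimal PennyLane sketch of the first two continuous-variable flavours just listed, as exposed by PennyLane's templates (the circuit, observable, and feature values are our own illustrative choices, not from the cited references):

```python
import numpy as np
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def angle_encoded(features):
    # Angle embedding: one feature per qubit, used as an RX rotation angle.
    qml.AngleEmbedding(features, wires=range(n_qubits), rotation="X")
    return qml.expval(qml.PauliZ(0))

@qml.qnode(dev)
def amplitude_encoded(features):
    # Amplitude embedding: up to 2**n features mapped to state amplitudes.
    qml.AmplitudeEmbedding(features, wires=range(n_qubits), normalize=True)
    return qml.expval(qml.PauliZ(0))

print(angle_encoded(np.array([0.1, 0.4])))
print(amplitude_encoded(np.array([0.5, 0.5, 0.5, 0.5])))
```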
Ref. [100] provides an expressivity comparison between these encoding methods. Effective encoding strategies were analysed in [101, 105, 106, 107]. #### 2.2.2 Parametrised architecture The specific parametrisation of the network can dramatically change the output of a circuit. In classical neural networks, adding parameters to a network improves the model expressivity, whereas in a quantum circuit, parameters can become redundant in over-parametrised circuits [108]. Additionally, the architecture must be trainable, whereas it was shown that this cannot be assumed in an a priori setting [109] - see Sec 2.2.5. Many architectures have been suggested in the literature, and many templates are readily available to choose from in QML packages [110, 111, 112]. Ref. [113] introduced a family of hardware-efficient architectures and used them as variational eigensolvers - see Sec 2.5.2. These architectures repeat variational gates and use CNOT gates to create highly-entangled systems. Based on the discrete model in Ref. [114], made continuous in Ref. [115], a model was devised using RZZ quantum gates, named the instantaneous polynomial-time quantum ansatz (IQP), which was shown to be computationally expensive to simulate classically [116, 117]. Another approach to creating quantum circuits is to take inspiration from tensor networks [118]. Famous architectures in this class are the tensor-tree network (TTN), the matrix product state (MPS), and the quantum convolutional neural network (QCNN) [34, 119, 120, 121]. #### 2.2.3 Gradient calculation Despite its excessive memory usage [122], the most prominent gradient calculation method in classical ML is the back-propagation algorithm [123]. This method computes, alongside its output, the gradient of every function that the trainable parameters are passed through, and employs the chain rule to create an automatically differentiable ML routine. The back-propagation method can be (and has been [124]) implemented for QML, but as it requires access to the quantum state-vector, it can only be used on a simulator and not on a real quantum processing unit (QPU). As quantum advantage can only occur in the latter setting, it is important to seek alternatives that can operate on QPUs. The first proposed algorithm is known as the finite-difference differentiation method [125]. As its name suggests, it calculates the gradient using the first principles of taking derivatives, i.e. adding a finite difference to the trainable parameters one at a time and observing the change that this action makes. This method is prone to error in the NISQ era. As an alternative, a discovery known as the parameter-shift rule was made in [126], which suggested that an exact, analytic derivative can be calculated by evaluating the circuit twice for each trainable parameter. The rule states that the derivative of a circuit with respect to a trainable parameter \(\theta\) is half the difference between the circuit evaluated with \(\theta\) shifted by \(\frac{\pi}{2}\) and the circuit evaluated with \(\theta\) shifted by \(-\frac{\pi}{2}\). This initially worked only on trainable parameters applied to Pauli rotations, but later works [127, 128, 129, 130, 131, 132, 133, 134] expanded it to its current form, applicable to any parameterisation. The parameter-shift rule is the state-of-the-art gradient computation method and is compatible with QPUs, but one of its major problems is its scalability.
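The parameter-shift rule can be checked in a few lines for a single-qubit circuit whose expectation value is known analytically: for \(f(\theta)=\langle 0|R_{X}(\theta)^{\dagger}ZR_{X}(\theta)|0\rangle=\cos\theta\), the rule should return exactly \(-\sin\theta\) (pure NumPy, illustrative):

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def rx(theta):
    """Single-qubit rotation RX(theta) = exp(-i * theta * X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def f(theta):
    """Expectation <0| RX(theta)^dag Z RX(theta) |0> = cos(theta)."""
    psi = rx(theta) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

theta = 0.7
shift_grad = 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))
print(shift_grad, -np.sin(theta))  # both approx -0.6442: the rule is exact here
```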
As mentioned, the number of circuit evaluations for this method increases linearly with the number of trainable parameters, and this poses a challenge to how complex the quantum models can get. A notable effort to mitigate this effect was to parallelise the gradient computation, which is now natively provided when using PennyLane on AWS Braket [135]. As a transitional gradient computation method for QNNs, Ref. [136] introduced the adjoint algorithm. Similar to the back-propagation method, the adjoint method can only be run on a simulator and calculates the entire gradient vector using a single evaluation of the circuit; however, its memory usage is superior to that of the former. It works by holding a copy of the quantum state and its adjoint in memory and, in turn, applying the gates in reverse order, calculating the gradients wherever possible. This means that two overall evaluations of the circuit are made: first to evaluate the output, and second to compute the gradient. Alternative suggestions have also been made to optimise QML models following the geometry of their group space. Ref. [137] suggested a Riemannian gradient flow over the Hilbert space, which, through a hardware implementation, showed favourable optimisation performance. #### 2.2.4 Quantum neural networks as universal Fourier estimators Ref. [107] explored the effects of data encoding on the expressivity of the model. It proved that the data re-uploading technique suggested by Ref. [138] creates a truncated Fourier series limited by the number of repetitions of the encoding gates. Ref. [139] also showed that QNNs can be universal Fourier estimators - an analogue of the universality theorem for classical multi-layered perceptrons [140]. Another point proven by Refs. [107, 138] was that by repeating the encoding strategy (in amplitude embedding, and more specifically in angle embedding), more Fourier bases are added to the final functional representation of the circuit. This holds whether the repetitions are added on parallel qubits or in series. This sparked a question about the accessibility of these Fourier bases, i.e. whether their coefficients can be independently altered, which remains an open question at the time of this publication. #### 2.2.5 Barren plateaus and trainability issues QNNs can suffer from the problem of vanishing gradients. This is when, during training, the gradient of the model tends to zero in all directions. This can severely affect the efficiency of the training or even bring it to a halt, and is known as the barren plateau (BP) problem. BPs are not usually at the centre of attention in classical ML, but their dominance in quantum architectures makes them one of the most important factors in choosing a circuit. Ref. [109] showed that the expectation value of the derivative of a well-parametrised4 quantum circuit is equal to zero, and that its variance decays exponentially with the number of qubits. Ref. [142] confirmed that barren plateaus also exist in gradient-free optimisation methods. In addition, Ref. [143] showed that in the NISQ era, using deep circuits flattens the overall loss landscape, resulting in noise-induced BPs. These are mathematically different kinds of barren plateaus that flatten the landscape as a whole. The illustrations in Fig 4 summarise these phenomena. Footnote 4: Formally, when the variational architecture of a quantum circuit nears a 1-design [141].
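A small numerical probe of the variance decay described above (a sketch assuming PennyLane's StronglyEntanglingLayers template; the qubit counts and sample sizes are kept tiny for runtime, so the trend is only indicative):

```python
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

def grad_variance(n_qubits, n_layers=5, n_samples=50, seed=0):
    """Variance of d<Z_0>/d(first angle) over random parameter initialisations."""
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights):
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(n_samples):
        w = pnp.array(rng.uniform(0, 2 * np.pi, size=shape), requires_grad=True)
        grads.append(qml.grad(circuit)(w)[0, 0, 0])
    return float(np.var(grads))

for n in (2, 4, 6):  # the variance is expected to shrink as n grows
    print(n, grad_variance(n))
```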
These two findings painted a sobering picture for the future of QNNs, namely that they need to be shallow and low on qubit-count to be trainable, which contradicts the vision of high-dimensional, million-qubit QML models. Many remedies have been proposed: Ref. [144] suggested that instead of training all parameters at once, the training could be done layer-wise, and Ref. [145] showed that if the depth of the variational layers of the QNN is of order O(\(\log(n)\)), \(n\) being the number of qubits, and only local measurements are made, the QNN remains trainable. This was tested on circuits with up to 100 qubits, and no BPs were detected. Other remedies included introducing correlations in the trainable parameters [146, 147] and specific initialisations of the parameters of the circuit [148] by applying adjoint operators of certain variational gates. More analysis was done on specific architectures: Ref. [149] showed that, under well-defined weak conditions, the TTN architecture is free from the BP problem, and Ref. [150] showed that the quantum convolutional neural network architecture introduced in Ref. [34] is also free from BPs. Ref. [151] developed a platform based on ZX calculus5 to analyse whether a given QNN is prone to suffering from BPs. In addition to confirming the results of the two earlier contributions, it also proved that the matrix product state [155] and the hardware-efficient, strongly-entangling QNNs suffer from BPs. Furthermore, Ref. [156] related the barren plateau phenomenon to the degree of entanglement present in the system. Footnote 5: ZX calculus is a graphical language in quantum information theory [152, 153, 154]. ### 2.3 Quantum learning theory #### 2.3.1 Supervised QML methods are kernel methods In Refs. [157, 100] the similarities between QNNs and kernel models were brought into focus. First introduced in Ref. [158], kernel methods are well-established ML techniques with many applications. In conjunction with support vector machines (SVM), they work by mapping the features of a dataset into a high-dimensional space through a function \(\phi(\mathbf{x})\) and then using a kernel function, \(\mathcal{K}(x_{1},x_{2})\), as a distance measure between any two data points in this high-dimensional space. This is exactly the behaviour observed in QNNs: the features are first embedded into a high-dimensional quantum state-vector, and by overlapping one encoded state with another we can find the level of similarity between two points in this space. In this high-dimensional space, one hopes to find better insight into the data - usually expressed as a decision boundary in the form of a hyperplane in classification tasks. Ref. [159] used this link and developed a framework for searching for possible quantum advantages over classical models. It also showed that large models could scatter the data so far apart that the distance measure becomes too large for optimisation purposes, and proposed that an intermediate step be added to map the high-dimensional space into a lower-dimensional hyperplane to improve training performance. Figure 4: Visualisation of the barren plateau phenomenon in (a) noise-free and (b) noise-induced settings. #### 2.3.2 Bayesian inference Bayesian inference is an alternative approach to statistical learning theory, where Bayes' theorem [160] is used to adapt an initial assumption about the problem (the prior distribution) based on newly-found data (the evidence) to obtain a posterior distribution.
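A short worked example of this prior-to-posterior update, using the conjugate beta-binomial pair (an illustration of Bayes' theorem itself, not taken from the cited references): starting from a uniform prior over a coin's bias and observing 7 heads in 10 flips.

```python
# Beta(1, 1) prior (uniform) on a coin's bias p; observe 7 heads, 3 tails.
# By conjugacy, the posterior is Beta(1 + heads, 1 + tails) = Beta(8, 4).
alpha, beta = 1, 1
heads, tails = 7, 3
alpha_post, beta_post = alpha + heads, beta + tails
posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)  # 0.666...: the evidence pulls the estimate from 0.5 toward 0.7
```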
Bayesian learning is when this logic is applied to ML. This is done by placing a distribution over every parameter in the network and updating these distributions during training. Calculating the posterior distribution is generally computationally expensive, but it is possible to approximate it using a technique known as variational inference [161, 162], which was successfully demonstrated through an approximate back-propagation algorithm on a Bayesian neural network (BNN), referred to as Bayes-by-backprop. The first implementations of Bayesian QNNs were in Refs. [163, 164, 165], which attempted to turn quantum circuits into exact Bayesian inference machines. Ref. [166] introduced two efficient but approximate methods - one from kernel theory and another using a classical adversary - to use QNNs to perform variational inference. That work uses a quantum circuit that can be trained to produce the probability distribution of a phenomenon by exploiting the probabilistic nature of quantum mechanics - known as a Born machine [167, 168] or a quantum generative model [169, 170, 171, 172, 173, 174]. This could also be used to quantify the prediction error for a single data point, as has been done classically in Ref. [175]. #### 2.3.3 Model complexity and generalisation error bounds Intuitively, complex phenomena require complex modelling, but quantifying the complexity of a given model is non-trivial. There are multiple ways of defining model complexity: the Vapnik-Chervonenkis (VC) dimension [176], the Rademacher complexity [177], and the effective dimension [178]6. These complexity measures are also connected to the generalisation error, because when the model becomes too complex for the problem, the generalisation is expected to worsen. Footnote 6: For an exhaustive list of complexity measures see [75] Much work has been done to quantify the complexity and the generalisation error of quantum neural networks: Ref. [179] explored a generalisation error bound through the Rademacher complexity that explicitly accounts for the encoding strategy; and Ref. [180] used the effective dimension - a measure dependent on the sample size - to bound the generalisation error of QNNs as well as to prove their higher expressivity given the same number of trainable parameters. Other attempts to quantify the complexity (also referred to as the expressivity) of QNNs were made in Refs. [181, 182, 183, 184, 185]7. Notably, Ref. [187] theoretically proved that the generalisation error of a QNN grows as \(\mathcal{O}(\sqrt{T/N})\), where \(T\) is the number of parametrised quantum gates in the QNN and \(N\) is the number of data samples in the dataset. The latter work implies that QML models can generalise well from few data points. Footnote 7: It is noteworthy that Ref. [186] showed that some models have a higher generalisation ability despite over-fitting. ### Hybrid quantum neural networks Just as there are classical and quantum models, one could also combine the two to create hybrid models - see Fig. 5. It is conceivable that in the NISQ era, one could use the understanding of QML described in Sec. 2.3 to find a regime where quantum models cover some bases that classical models do not. Ref. [41] developed a platform for a hybrid quantum high-performance computing (HQC) cloud, deployed on the QMware hardware [188, 189]. It showed that for high-dimensional data, a combination of classical and quantum networks could offer two advantages: computational speed and the quality of the solution.
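Hybrid stacks of this kind are straightforward to express in current QML software. The sketch below is a generic classical-quantum-classical model built with PennyLane's `TorchLayer`; the dimensions, embedding and entangling template are our own illustrative choices, not the exact networks of Refs. [41, 44].

```python
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))          # encode classical features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))   # trainable variational block
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(qnode, weight_shapes={"weights": (3, n_qubits)})

model = torch.nn.Sequential(
    torch.nn.Linear(8, n_qubits),    # classical pre-processing
    qlayer,                          # quantum circuit in the middle
    torch.nn.Linear(n_qubits, 2),    # classical post-processing
)

out = model(torch.randn(5, 8))       # the whole stack is differentiable end-to-end
```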
In the experiment of Ref. [41], the data points were first fed to a shallow8 quantum circuit composed of 4 qubits; two of the qubits were measured and the corresponding values were passed on to a classical neural network. Two classical datasets were chosen to explore the effectiveness of the hybrid solution and to compare it against the same model with the quantum part removed, leaving only a classical network: the scikit-learn circles dataset [190] and the Boston housing dataset [191]. The former is a synthetic geometrical dataset that consists of two concentric circles in a 2-dimensional square of side \(x=2\pi\), and the latter concerns the distribution of property values given population status and the number of rooms. In both cases, it was shown that the hybrid network generalises better than the purely classical one, and this difference is most visible at the extreme of very small training sample sizes; it becomes smaller as the number of samples grows. Footnote 8: Apart from performance considerations, this also provided an effective way to avoid the barren plateaus described in Sec. 2.2.5. In continuation, Ref. [44] suggested a hyperparameter optimisation scheme aimed at architecture selection for hybrid networks. This work also implemented a hybrid network for training, but in two new ways: 1) it used a real-world image-recognition dataset [192], and 2) the quantum part of the hybrid network was inserted in the middle of the classical implementation. Additionally, the architecture of the quantum part was itself subject to hyperparameter optimisation: the number of qubits used and the number of repetition layers included were optimised. Training this network showed that the hybrid network was able to achieve better-quality solutions, albeit by a small margin. Notably, this architecture optimisation also yielded a highly improved quantum circuit, whose performance was theoretically assessed by applying analysis methods such as ZX reducibility [152], Fourier analysis [107], Fisher information [108, 193, 194], and the effective dimension [178, 180]. ### Applications and realisations QML automatically inherits all classical ML problems and implementations, as it is simply a different model to apply to data science challenges. In addition to this inheritance, QML research has also provided novel, quantum-native solutions. In both cases, QML has so far been unable to provide a definite, practical advantage over classical alternatives, and all the suggested advantages are purely theoretical. Figure 5: An example of a hybrid quantum-classical ML model. In this case, the inputs are passed into a fully connected classical multi-layered perceptron, and its outputs are fed into the embedding of a quantum circuit. Depending on the setting, some measurements of this quantum circuit are taken and then passed into another fully-connected layer, the output of which can be compared with the label. #### 2.5.1 Solving classical problems QML is employed in many classical applications. Some notable contributions are in the sciences [195; 196; 197; 198; 199; 200; 201], in finance [42; 202; 203; 204], and in the pharmaceutical [43; 45] and automotive [44] industries. In many cases, these models replaced a previously-known classical setting [205; 206; 207; 208]. Quantum generative adversarial networks were suggested in Ref. [209] and followed by Refs. [210; 211; 212; 213; 214; 215; 216; 217].
Similarly, quantum recurrent neural networks were investigated in Refs. [218; 219], and two approaches to image recognition were proposed in Refs. [34; 220]. Ref. [221] looked at a classical-style approach to quantum natural language processing. The applications of QML in reinforcement learning were also explored in Refs. [38; 222; 223; 224; 225]. Finally, a celebrated application is the quantum auto-encoder, where data is compressed and then re-constructed from less information; a notable suggestion was made in Ref. [226]. #### 2.5.2 Quantum-native problems Native problems are novel, quantum-inspired ML problems that are specifically designed to be solved by a QML algorithm. Perhaps the best-known QML algorithm is the variational quantum eigensolver (VQE). The problem formulation is that the input data is a Hamiltonian, and we are required to find its ground state and ground-state energy. The VQE solution consists of preparing a PQC with trainable parameters and taking the expectation value of the Hamiltonian. This yields the energy expectation of the prepared state, and the idea is that by minimising this expectation value we can approach the ground-state energy, at which point the prepared state represents the ground state of the problem. This was first implemented to find the ground-state energy of \(\mathrm{He-H^{+}}\)[30] and was then substantially extended in Ref. [59]. VQE remains one of the most promising areas of QML. Ref. [25] showed that quantum circuits can be used to solve a linear system of equations (LSE); the proposed algorithm is commonly known as the Harrow, Hassidim, and Lloyd (HHL) algorithm. Refs. [227; 228; 229; 230; 231; 232; 233; 234] improved this algorithm, and Ref. [235] extended it to also include non-linear differential equations. Ref. [102] showed that it is possible to use a quantum feature map to solve non-linear differential equations with QNNs. This is also an exciting and promising area of QML. An important QML formulation is known as quadratic unconstrained binary optimisation (QUBO) [236; 237]. This is generally a classical problem, but using the Ising model - see [238] - it can be solved on a quantum computer [239; 240]. A common demonstration of the latter is the max cut problem [241] - see Fig. 6. There are solutions for the QUBO problem on both gate-based quantum computers and quantum annealers [242; 243; 244], and this general concept has seen use in many sectors [245]. Lastly, another quantum-native formulation is in natural language processing. Ref. [94] developed a platform for turning grammatical sentences into quantum categories using J. Lambek's work [246]. Refs. [247; 248; 249] tested this algorithm on quantum hardware, and later a full QNLP package was developed [250].
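Before returning to QNLP, the VQE loop described above is compact enough to sketch in full. The two-qubit Hamiltonian, ansatz and optimiser settings below are illustrative choices of our own, not the He-H\(^{+}\) setup of Ref. [30].

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)
H = qml.Hamiltonian([1.0, 0.5, 0.5],
                    [qml.PauliZ(0), qml.PauliZ(1), qml.PauliX(0) @ qml.PauliX(1)])

@qml.qnode(dev)
def energy(params):
    qml.RY(params[0], wires=0)     # trainable state preparation (the PQC)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(H)           # <psi(theta)| H |psi(theta)>

opt = qml.GradientDescentOptimizer(stepsize=0.2)
params = np.array([0.1, 0.1], requires_grad=True)
for _ in range(100):
    params = opt.step(energy, params)

print(energy(params))              # approaches the ground-state energy of H
```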
Returning to QNLP: its initial value proposition is that the algorithm is natively grammar-aware, but given that large classical language models have been shown to infer grammar [80], the real advantage of this approach could lie in other avenues, such as a potential Grover-style [251] speed-up in text classification. ### Open questions #### 2.6.1 Quantum advantage Despite the theoretical findings in Sec. 2.3.3, there is limited demonstrable success in using QML in real-life problems, and this is not purely due to hardware shortcomings. Ref. [252] showed that there exists a class of datasets that could showcase quantum advantage, and Ref. [159] found a mathematical formulation for where we can expect to find such an advantage. In Refs. [253, 254, 255] attempts were also made to devise a set of rules for potential quantum advantage. However, Ref. [256] argued that a shift of perspective from quantum advantage to alternative research questions could unlock a better understanding of QML. The suggested research questions were: finding an efficient building block for QML, finding bridges between statistical learning theory and quantum computing, and making QML software ready for scalable ML10. Footnote 10: Other examples include quantum federated (distributed) learning (QFL) [257]. Notably for the latter, Ref. [258] proposed the first model of distributed secure quantum machine learning. #### 2.6.2 Optimal parametrisation In Sec. 2.2.2, we encountered various QNN parametrisations with specific properties. An open question is how to optimally parametrise a circuit so that it avoids barren plateaus, is as expressive as possible, and is free of redundancy. A potential characteristic of such a parametrisation is a high level of Fourier accessibility, as mentioned in Sec. 2.2.4, which would in turn require a quantifiable measure of this accessibility. #### 2.6.3 Theory for hybrid models Despite the successes outlined in Sec. 2.4, the theoretical grounding for such models is limited. We saw that hybrid networks performed well when the quantum section was introduced at the beginning of the model architecture [41] or in the middle [44]. From an information-theoretic perspective, this needs to be investigated in more detail to shed light on the effect of hybridisation. Such an investigation could identify whether there exist areas where a quantum part could complement a classical circuit, either by introducing an information bottleneck to prevent over-fitting or by creating high-dimensional models. Figure 6: The max cut problem. The abstract manifestation of this problem is a general graph, and we are interested in finding a partition of its vertices such that the number of edges connecting the two resulting parts is maximal. #### 2.6.4 An efficient optimisation method The current gradient calculation methods are either only available on simulators or require a linearly-growing number of circuit evaluations - see Sec. 2.2.3. Neither of these can accommodate a billion-parameter, million-qubit setting. This poses a barrier to the future of QML, and thus an efficient optimisation method is needed for the long term. ## 3 Quantum technologies enhanced by machine learning In this section, the impact that machine learning makes on quantum technologies is discussed. The outline of the topics is given in Fig. 7.
Today machine learning is used to realize algorithms and protocols in quantum devices, by autonomously learning how to control [35, 259, 260, 261, 262], error-correct [263, 264, 265, 36], and measure [266] quantum devices. Given experimental data, ML can reconstruct quantum states of physical systems [267, 268, 269, 270], learn compact representations of these states [271, 272], and validate the experiment [273]. In this section we discuss the impact of machine learning on fundamental and applied physics, and give specific examples from quantum computing and quantum communication. ### Machine learning in fundamental and applied quantum physics A century after its full development in the mid-1920s, quantum mechanics is still considered the most powerful theory, modeling a wide range of physical phenomena from subatomic to cosmological scales with the most precise accuracy. Even though the measurement problem and quantum gravity have led many physicists to conclude that quantum mechanics cannot be a complete theory, the spooky action of entanglement in the Einstein-Podolsky-Rosen pair [274] has provided the resources for quantum information processing tasks. With machine learning, one may be able to model different physical systems (e.g., quantum, statistical, gravitational) using artificial neural networks, which might lead to the development of a new framework for fundamental physics. Even without a precise description of a physical apparatus, and solely based on measurement data, one can prove the quantumness of some observed correlations by the device-independent test of Bell nonlocality [275]. In particular, by using generative algorithms to automatically blend many multilayer perceptrons (MLPs), a machine learning approach may allow the detection and quantification of nonlocality as well as its quantum (or postquantum) nature [276, 277, 278]. Figure 7: Machine learning helps in solving problems in fundamental and applied quantum physics. Sections that discuss a particular problem are labeled. #### 3.1.1 Machine learning in quantum computing Machine learning has also become an essential element in applied quantum information science and quantum technologies. ML, inspired by the success of automated designs [31], was demonstrated to be capable of designing new quantum experiments [32]. Quantum experiments represent an essential step towards creating a quantum computer. More specifically, for example, three-particle photonic quantum states represent a building block for a photonic quantum computing architecture. In Ref. [32], the ML algorithm used is a reinforcement learning algorithm based on the projective simulation model [279, 280, 281, 282, 283, 284]. An agent, the reinforcement learning algorithm, puts optical elements on a (simulated) optical table. Each action adds an optical element to an existing setup. In case the resulting setup achieves the goal, e.g., creates a desired multiphoton entangled state, the agent receives a reward. The described learning scheme is depicted in Fig. 8(a). The initial photonic setup is an output of a double spontaneous parametric down-conversion (SPDC) process in two nonlinear crystals.
Neglecting higher-order terms in the down-conversion, the initial state \(\ket{\psi_{0}}\) can be written as a tensor product of two orbital-angular-momentum entangled photon pairs, \[\ket{\psi_{0}}=\frac{1}{3}\left(\sum_{m=-1}^{1}\ket{m}_{a}\ket{-m}_{b}\right) \otimes\left(\sum_{m=-1}^{1}\ket{m}_{c}\ket{-m}_{d}\right), \tag{1}\] where the indices \(a,b,c\) and \(d\) specify four arms in the optical setup. The actions available to the agent consist of beam splitters (BS), mirrors (Refl), shift-parametrized holograms (Holo), and Dove prisms (DP). The final photonic state \(\ket{\psi_{f}}\) is obtained by measuring the arm \(a\), and post-selecting the state in the other arms based on the measurement outcome in \(a\). The reinforcement learning algorithm that achieves the experimental designs is shown in Fig. 8(b). It is the projective simulation agent represented by a two-layered network of clips. The first layer corresponds to the current state of the optical setup, whereas the second layer is the layer of actions. The connections between the layers define the memory of the agent, which changes during the learning process. The connectivities correspond to the probabilities of reaching a certain action in a given state of a quantum optical setup. During the learning process the agent automatically adjusts the connectivities, thereby prioritizing some actions over others. As shown in Ref. [32], this leads to a variety of entangled states and improved efficiency of their realization. Figure 8: A reinforcement learning algorithm that designs a quantum experiment. An experiment on an optical table is shown as an example. (a) The learning scheme depicts how an agent, the reinforcement learning algorithm, learns to design quantum experiments. (b) Representation of the reinforcement learning algorithm, projective simulation, as a two-layered network of clips. #### 3.1.2 Machine learning in quantum communication In addition to designing new experiments, ML helps in designing new quantum algorithms [285] and protocols [286]. Designing new algorithms and protocols has similarities to experiment design. In particular, similar to experiment design, every protocol can be broken down into individual actions. In the case of a quantum communication protocol, these actions are, e.g.: apply a \(T\)-gate to the second qubit, apply an \(H\)-gate to the first qubit, send the third qubit to the location \(B\), and measure the first qubit in the \(Z\)-basis. Because of the combinatorial nature of the design problem, the number of possible protocols grows exponentially with the number of available actions. For that reason, a brute-force search for a solution is impossible: the number of possible states of a quantum communication environment was estimated at \(0.6\times 10^{12}\) [286]. A reinforcement learning approach to quantum protocol design, first proposed in Ref. [286], is shown to be applicable to a variety of quantum communication tasks: quantum teleportation, entanglement purification, and a quantum repeater. The scheme of the learning setting is shown in Fig. 9. The agent perceives the quantum environment state, and chooses an action based on the projective simulation deliberation process. The projective simulation network used in this work is similar to the one in Fig. 8(b), with the addition of hierarchical skill acquisition. This skill is of particular importance in the long-distance quantum communication setting, which has to include multiple repeater schemes.
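The deliberation and update dynamics of such a two-layer PS network can be sketched in a few lines of Python. This is a bare-bones illustration of the model of Refs. [279, 280, 281, 282, 283, 284], omitting glow, clip composition, and the hierarchical skills mentioned above; the damping constant and the toy task are our own choices.

```python
import numpy as np

class TwoLayerPSAgent:
    def __init__(self, n_percepts, n_actions, damping=0.01):
        self.h = np.ones((n_percepts, n_actions))  # edge strengths (the agent's memory)
        self.damping = damping

    def act(self, percept):
        probs = self.h[percept] / self.h[percept].sum()
        return np.random.choice(len(probs), p=probs)

    def learn(self, percept, action, reward):
        # forgetting pulls all h-values back toward 1; reward strengthens the used edge
        self.h += -self.damping * (self.h - 1.0)
        self.h[percept, action] += reward

# toy usage: reward action 2 whenever percept 0 is seen
agent = TwoLayerPSAgent(n_percepts=3, n_actions=4)
for _ in range(500):
    a = agent.act(0)
    agent.learn(0, a, reward=1.0 if a == 2 else 0.0)
print(agent.h[0])   # the h-value of the rewarded edge dominates
```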
With the help of projective simulation, it was demonstrated that reinforcement learning can play an assisting role in designing quantum communication protocols. It was also shown that the use of ML in protocol design is not limited to rediscovering existing protocols: the agent finds new protocols that are better than existing ones in cases where the optimal situation lacks certain symmetries assumed by the known basic protocols11. Footnote 11: Ref. [287] uses reinforcement learning (DPPO) to design the optimal path of quantum imaginary time evolution, which can always find an efficient quantum circuit. ### Machine learning in random walks problems The random walk paradigm plays an important role in many scientific fields related to the transport of charge, energy, or information [288, 289, 290, 291, 292]. Random (classical) walks on graphs represent an indispensable tool for many subroutines in computational algorithms [293, 294, 295]. Quantum walks (QWs) generalize classical walks to the quantum domain by using a quantum particle instead of a classical one [296, 297]. The resulting quantum interference pattern, which governs QW physics, fundamentally differs from the classical one [298]. Crucially for quantum information science, the quantum particle exhibits quantum parallelism, which appears as a result of the interference of various paths and of entanglement. It was shown that a quantum particle propagates quadratically faster than a classical one on certain graphs, namely the line [299], cycle [300, 301], hypercube [302, 303], and glued trees graphs [304]. It is thus expected that algorithms based on QWs can demonstrate a quadratic speedup, that is, \(O(\sqrt{N})\). Such parallelism may be useful for quantum information processing and quantum algorithm purposes [304, 305, 306]. It is especially important to note that QWs are explored in quantum search algorithms, which represent important tools for the speedup of QML algorithms [21, 307, 308]. Notably, the demonstration of QW speedup on arbitrary graphs represents an open problem [310]. The standard approach would be to simulate the quantum and classical dynamics on a given graph, which provides an answer as to which particle arrives at the target vertex faster. However, this approach may be difficult (and costly) to use in computations for graphs possessing a large number of vertices: the propagation time scales polynomially in the size of the graph. Second, we are usually interested in a whole set of graphs, for which individual simulation results cannot reveal general features of the quantum advantage. In a number of works we attacked this problem by means of an ML approach [39, 40, 311]. We explored a supervised learning approach to predict a quantum speedup just by looking at a graph. In particular, we designed a classical-quantum convolutional neural network (CQCNN) that learns from graph samples to recognize the quantum speedup of random walks. The basic concept of the CQCNN used in Refs. [39, 40, 311] is shown in Figs. 10 and 11. In particular, we examined in Refs.
[39, 40, 311] quantum and classical random walks on various families of graphs; two representative examples are shown in Fig. 10. The quantum walk with decoherence is governed by the master equation \[\frac{\mathrm{d}\rho(t)}{\mathrm{d}t}=-\frac{i}{\hbar}(1-p)\left[\mathcal{H},\rho(t)\right]+p\sum_{mk}\left(L_{mk}\rho(t)L_{mk}^{\dagger}-\frac{1}{2}\left\{L_{mk}^{\dagger}L_{mk},\rho(t)\right\}\right)\] \[+\gamma\left(L_{s}\rho(t)L_{s}^{\dagger}-\frac{1}{2}\left\{L_{s}^{\dagger}L_{s},\rho(t)\right\}\right), \tag{2}\] where \(\rho(t)\) is the time-dependent density operator,
\(L_{mk}=T_{mk}\left|m\right\rangle\left\langle k\right|\) and \(L_{s}=\left|s\right\rangle\left\langle t\right|\) are operators characterizing transitions from vertex \(k\) to vertex \(m\) and from the target \(t\) to the sink \(s\), respectively; \(\gamma\) is the coupling parameter between the target and sink vertices. The parameter \(p\) satisfies \(0\leq p\leq 1\) and determines the degree of decoherence; the value \(p=0\) corresponds to purely quantum transport, while \(p=1\) corresponds to a completely classical random walk. The solution of Eq. (2) specifies the quantum probability \(\mathrm{P}^{\mathrm{q}}(t)\equiv\rho_{(n+1)(n+1)}(t)\) (\(n\) is the total number of vertices), which is relevant to the QW on a chosen graph. The classical random walk is described by the probability distribution \[\mathrm{P}(t)=e^{-It}e^{Tt}\,\mathrm{P}(0), \tag{3}\] where \(\mathrm{P}(t)\) is a vector of probabilities \(\mathrm{P}_{v}(t)\) of detecting a classical particle in the vertices \(v\in\mathcal{V}\) of the graph, and \(I\) is the identity matrix of size \(n\times n\). The transition matrix \(T\) is a matrix of probabilities \(T_{vu}\) for a particle to jump from \(u\) to \(v\). In this case the sink vertex is not needed and we can assume \(\gamma=0\). We are interested in the probability of finding a particle in the target (or, correspondingly, in the sink) vertex, which is described by the solutions of Eqs. (2) and (3). One can then compare \(\mathrm{P}^{\mathrm{q}}(t)\equiv\rho_{(n+1)(n+1)}(t)\) and \(\mathrm{P}^{\mathrm{c}}(t)\equiv\mathrm{P}_{n}(t)\) against \(\mathrm{P}_{th}=1/\log n\), the threshold value of probability for a given graph. If this probability is larger than \(\mathrm{P}_{th}\), we can conclude that the particle has arrived at the target. The time at which one of the inequalities \(\mathrm{P}^{\mathrm{q}}(t)>\mathrm{P}_{th}\), \(\mathrm{P}^{\mathrm{c}}(t)>\mathrm{P}_{th}\) is first fulfilled is called the hitting time of the quantum or classical particle, respectively. Hence, by comparing the solutions of Eqs. (2) and (3), we can define the particle transfer efficiency: it is 1 if the quantum particle reached the target first, and 0 otherwise. Figure 10: A schematic representation of the random walks considered in Refs. [39, 40] on (a) a connected random graph and (b) a cycle graph. The labels \((i)\) and \((t)\) specify the initial and target vertices, respectively; \(s\) is a sink vertex, which is required to localize and detect the quantum particle; \(\gamma\) is the coupling parameter between the target and sink vertices. In Fig. 11 we schematically summarize the proposed CQCNN approach for the detection of QW speedup. The architecture of the CQCNN is shown in Fig. 11(a). It consists of a two-dimensional input layer that takes one graph represented by an adjacency matrix \(A\). This layer is connected to several convolutional layers, the number of which depends on the number of vertices \(n\) of the input graph. The number of layers is the same for all graph sizes. The CQCNN has a layout with convolutional and fully connected layers, and two output neurons that specify the two possible output classes. The convolutional layers are used to extract features from graphs and to decrease the dimensionality of the input. Empirically, we found that the relevant features lie in the rows and columns of the adjacency matrices. The first convolutional layer comprises six filters (or feature detectors), which define three different ways of processing the input graph. These three ways are marked in green, red, and blue in Fig. 11(a), respectively.
The constructed filters form 'crosses', which are shown in Fig. 11(a) and capture a weighted sum of column and row elements. These filters act as functions of a weighted total number of neighboring vertices of each vertex. Thus, the cross 'edge-to-edge' and 'edge-to-vertex' filters are crucially important in the designed CQCNN; they are capable of predicting the quantum advantage of QWs. Fig. 11(b) schematically shows the training procedure, which uses graph samples given by adjacency matrices \(A\) as input. The CQCNN makes a prediction at the output, determining the classical or quantum class depending on the values of the two output neurons. The predicted class is determined by the index of the neuron with the largest output value, \(\text{class}=\text{argmax}_{m}y(m)\). Given the correct label, the loss value is computed. The filters that we constructed in the CQCNN play an essential role in the success of learning. The CQCNN learns by a stochastic gradient descent algorithm with a cross-entropy loss function. The loss on a test example \(i\) is defined relative to the correct class class\({}_{i}\) (classical or quantum, 0 or 1) of this example: \[\text{loss}_{i}=-\kappa(\text{class}_{i})\log\left(\frac{\text{e}^{x(\text{class}_{i})}}{\text{e}^{x(0)}+\text{e}^{x(1)}}\right), \tag{4}\] where the values of the output neurons are denoted \(x(0)\) and \(x(1)\), and \(\kappa(\text{class}_{i})\) is the total fraction of examples from this class in the dataset. As we showed in Ref. [39], the CQCNN constructs a function that generalizes from seen graphs to unseen graphs, as the classification accuracy (which may be defined as the fraction of correct predictions) goes up. Figure 11: Schematic representation of the CQCNN approach used for predicting the quantum speedup on the graphs represented in Fig. 10. (a) Scheme of the CQCNN architecture. The neural network takes a labeled graph in the form of an adjacency matrix \(A\) as input. The matrix \(A\) is then processed by convolutional layers with graph-specific 'edge-to-edge' and 'edge-to-vertex' filters, respectively. These filters act as functions of a weighted total number of neighboring vertices of each vertex. The convolutional layers are connected with fully-connected layers, which classify the input graph. Data and error propagation are shown with arrows. (b) and (c) demonstrate the processes of CQCNN training and testing, respectively. The CQCNN testing procedure is not principally different from the training process, as seen in Fig. 11(c), cf. Fig. 11(b), except that the CQCNN does not receive any feedback on its prediction and the network is not modified. We apply the described ML approach to different sets of graphs. In particular, to understand how our approach works in a systematic way, we first analyze the CQCNN on line graphs with up to 10 vertices. The CQCNN was trained over 2000 epochs with a single batch of 3 examples per epoch. Then, we simulated the CQCNN's learning process for random graphs, each sampled uniformly from the set of all possible graphs with \(n\) vertices and \(m\) edges. The learning performance results in the absence of decoherence (\(p=0\)) are shown in Fig. 12 for \(n=15,20,25\); here \(m\) is chosen uniformly from \(n-1\) to \((n^{2}-n)/2\). Our simulations show that the loss after training is vanishing; it is below \(3\times 10^{-3}\) for all these random graphs. In Fig.
12(a) we see that both recall and precision are about 90% for the "classical" part of the set, and in the range of \(25-35\%\) for the "quantum" one12. Thus, one can see that the CQCNN helps to classify random graphs correctly, much better than a random guess, without performing any QW dynamics simulations. In Figs. 12(b)-(c) we show samples of correctly classified graphs. Footnote 12: Recall quantifies the fraction of correct predictions in a particular class, whereas precision identifies the fraction of a particular class predictions that turned out to be correct. #### 3.2.1 Quantum walks with decoherence In the presence of decoherence, i.e. for \(p>0\), the physical picture becomes richer. In Fig. 13 we demonstrate the results of a QW dynamics simulation on a cycle graph consisting of 6 vertices; the efficiency of transport is measured between opposite vertices of the graph, as shown in Fig. 10(b). Figure 12: (a) CQCNN learning performance. The dataset consists of random graphs with \(n=15,20\) and \(25\) vertices, 1000 examples for each \(n\), and the corresponding classical and quantum labels. The CQCNN was simulated during 3000 epochs, 100 mini-batches each with a batch size of 3 examples. The neural network was tested on 1000 random graphs for each \(n\). (b), (c) show random graph examples taken from the test set which were correctly classified by the CQCNN (initial and target vertices are marked in yellow and red, respectively). The classical particle is faster on graph (b), whereas the quantum one is faster on graph (c). Simulations were performed for 1000 randomly sampled values of the decoherence parameter \(p\) and used to train the CQCNN. After the training procedure, the CQCNN is asked to predict whether the QW can lead to an advantage for a newly given parameter \(p\). In Fig. 13 we represent the results of the transfer efficiency predictions as a violet line. From Fig. 13 it is clearly seen that at the decoherence parameter value \(p\simeq 0.34\) an abrupt crossover from the quantum regime (transfer efficiency \(\simeq 1\)) to the classical regime (transfer efficiency \(\simeq 0\)) occurs. Thus, a QW transport advantage can be expected in the domain \(p<0.34\). Physically, such a crossover may be related to quantum tunneling features in the presence of dissipation, cf. [315, 316]. Notice that the parameter \(p\) is temperature-dependent in general, cf. [317]. In this case we can recognize the established crossover as a (second-order) phase transition from the quantum to the classical (thermal activation) regime that happens for a graph at some finite temperature. Fig. 13 also demonstrates the predictions of the CQCNN based on the learned values of the output neurons; they are shown in Fig. 13 as the classical (blue) and quantum (green) classes, respectively. The CQCNN decides on the class by using the maximum value of the output neuron activations. From Fig. 13 it is clearly seen that the "vote" for the quantum class grows up to its maximum at \(p\simeq 0.2\), which corresponds to the highest confidence in the quantum class. Simultaneously, the confidence in the classical class grows with increasing decoherence parameter \(p\). The separation between the classes becomes more evident after the crossover point \(p\simeq 0.34\). Thus, the obtained results are significant for the creation of software and hardware systems that have the graph approach at their basis. The CQCNN that we proposed here allows one to find out which graphs, and under which conditions on decoherence, can provide a quantum advantage.
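The comparison behind Fig. 13 can be reproduced in outline with QuTiP: the sketch below integrates Eq. (2) (with \(\hbar=1\)) for a 6-cycle, using the adjacency matrix as the Hamiltonian, column-normalized transition rates \(T_{mk}\), and the target opposite the initial vertex, and reads off the hitting time of the sink population. The time grid and rates are illustrative choices of our own, not the exact settings of Refs. [39, 40].

```python
import numpy as np
import networkx as nx
from qutip import Qobj, basis, mesolve

n, gamma = 6, 1.0
A = nx.to_numpy_array(nx.cycle_graph(n))
T = A / A.sum(axis=0)                       # classical transition probabilities T_mk
dim = n + 1                                 # n vertices plus one sink
ket = lambda i: basis(dim, i)

def hitting_time(p, t_max=60.0):
    """First time the sink population exceeds P_th = 1/log(n), cf. Eq. (2)."""
    H = Qobj(np.pad(A, ((0, 1), (0, 1))))   # Hamiltonian; sink is uncoupled coherently
    c_ops = [np.sqrt(p) * T[m, k] * ket(m) * ket(k).dag()      # L_mk = T_mk |m><k|
             for m in range(n) for k in range(n) if A[m, k] > 0]
    c_ops.append(np.sqrt(gamma) * ket(n) * ket(n // 2).dag())  # L_s: target -> sink
    tlist = np.linspace(0, t_max, 600)
    pops = mesolve((1 - p) * H, ket(0) * ket(0).dag(), tlist,
                   c_ops, [ket(n) * ket(n).dag()]).expect[0]
    crossed = np.where(pops > 1 / np.log(n))[0]
    return tlist[crossed[0]] if len(crossed) else np.inf

print(hitting_time(0.05), hitting_time(0.95))   # near-quantum vs near-classical walk
```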
Such tools are especially relevant to the development of NISQ-era quantum devices. ### Machine learning in quantum tomography With the capability to find the best fit to arbitrarily complicated data patterns with a limited number of parameters, machine learning has provided a powerful approach for quantum tomography. Here, quantum tomography, or quantum state tomography (QST), refers to the reconstruction of comprehensive information about a quantum state by using measurements on an ensemble of identical quantum states [318, 319, 320, 321, 322, 323, 324]. However, the exponential growth in the number of bases for a Hilbert space of \(N\)-qubit states implies that exact tomography techniques require exponentially many measurements and/or calculations. Figure 13: Prediction of the transfer efficiency (violet curve) for a 6-cycle graph versus the decoherence parameter \(p\). The activation values of the output neurons are shown in blue and green. The results are obtained by averaging over 5 CQCNN networks. Standard deviations are marked by shaded regions. In order to leverage the full power of quantum states and related quantum processes, a good characterization and validation of large quantum systems is demanded and remains an important challenge [325]. Traditionally, by estimating the closest probability distribution to the data for arbitrary quantum states, the maximum likelihood estimation (MLE) method is used in quantum tomography [326, 327]. However, the MLE method requires an exponential-in-\(N\) amount of data as well as exponential-in-\(N\) processing time. Even when dealing with Gaussian quantum states, unavoidable coupling to the noisy environment makes a precise characterization of the quantum features in a large Hilbert space almost intractable. Moreover, MLE also suffers from the over-fitting problem when the number of bases grows. To make QST more accessible, several alternative algorithms have been proposed that assume some physical restrictions imposed upon the state in question, such as adaptive quantum tomography [328], permutationally invariant tomography [329], quantum compressed sensing [330, 331, 332, 333], tensor networks [334, 335], generative models [336], feed-forward neural networks [337], and variational autoencoders [338]. To reduce the over-fitting problem in MLE, the restricted Boltzmann machine (RBM) [339, 340, 341, 342] has provided a powerful solution in QST. With the help of two layers of stochastic binary units, a visible layer and a hidden layer, the RBM acts as a universal function approximator. For qubits on an IBM Q quantum computer, quantum state reconstruction via ML was demonstrated with four qubits [343]. For continuous variables, the convolutional neural network (CNN) has been experimentally implemented with quantum homodyne tomography [269, 344, 345]. As illustrated in Fig. 14, the time-sequence data obtained in optical homodyne measurements share a similarity with voice (sound) pattern recognition [346, 347]. Here, the noisy quadrature-sequence data are fed into a CNN composed of 30 convolutional layers in total. In applying the CNN, we take advantage of its good generalizability to extract the resulting density matrix from the time-series data [346]. In our deep CNN, four convolution blocks are used, each containing 1 to 9 convolution layers (filters) of different sizes. Five shortcuts are also introduced among the convolution blocks, in order to tackle the gradient vanishing problem.
Instead of max-pooling, average pooling is applied to produce higher-fidelity results, as all the tomography data should be equally weighted. Finally, after flattening, two fully connected layers, and normalization, the predicted matrices are inverted to reconstruct the density matrices in truncation. Here, the loss function we want to minimize is the mean squared error (MSE), while the optimizer used for training is Adam. We take the batch size as 32 in the training process. With this setting, the network is trained for 70 epochs to decrease the loss (MSE) down to \(5\times 10^{-6}\). Figure 14: Schematic of machine learning enhanced quantum state tomography with a convolutional neural network (CNN). Here, the noisy quadrature-sequence data obtained by quantum homodyne tomography in a single scan are fed to the convolutional layers, with shortcuts and average pooling in the architecture. Then, after flattening and normalization, the predicted matrices are inverted to reconstruct the density matrices in truncation. Practically, instead of an infinite sum on the photon number basis, we keep the sum of the probabilities up to 0.9999 by truncating the photon number. Here, the resulting density matrix is represented in the photon-number basis, which is truncated to \(35\times 35\) by considering maximum anti-squeezing levels up to 20 dB. To avoid non-physical states, we impose the positive semi-definiteness constraint on the predicted density matrix. Here, an auxiliary (lower triangular) matrix is introduced before generating the predicted factorized density matrix through the Cholesky decomposition. During the training process, the normalization also ensures that the trace of the output density matrix is kept equal to 1. More than a million data sets are fed into our CNN machine with a variety of squeezed (\(\rho^{sq}\)), squeezed thermal (\(\rho_{th}^{sq}\)), and thermal states (\(\rho_{th}\)) at different squeezing levels, quadrature angles, and reservoir temperatures, i.e., \[\rho^{sq} =\hat{S}\rho_{0}\hat{S}^{\dagger}; \tag{5}\] \[\rho_{th}^{sq} =\hat{S}\rho_{th}\hat{S}^{\dagger}. \tag{6}\] Here, \(\rho_{0}=|0\rangle\langle 0|\) with the vacuum state \(|0\rangle\), \(\rho_{th}=\sum_{n}P(n)\,|n\rangle\langle n|\) with the probability distribution function \(P(n)=\frac{1}{\bar{n}+1}(\frac{\bar{n}}{\bar{n}+1})^{n}\), defined with the mean photon number \(\bar{n}=\frac{1}{\exp[\hbar\omega/k_{B}T]-1}\) at a fitting temperature \(T\), and \(\hat{S}(\xi)=\exp[\frac{1}{2}\xi^{*}\hat{a}^{2}-\frac{1}{2}\xi\hat{a}^{\dagger 2}]\) denotes the squeezing operator, with the squeezing parameter \(\xi\equiv r\exp(i\phi)\) characterized by the squeezing factor \(r\) and the squeezing angle \(\phi\). All the training is carried out with the Python package _tensorflow.keras_ on a GPU (Nvidia Titan RTX). The ML-enhanced QST is validated with a simulated data set, through the average fidelity obtained by MLE and CNN, and by calculating the purity of the quantum state, i.e., purity \(\equiv\text{tr}(\rho^{2})\). Compared with the time-consuming MLE method, the ML-enhanced QST keeps the fidelity up to 0.99 even when taking anti-squeezing levels up to 20 dB into consideration. With prior knowledge of squeezed states, such a supervised CNN machine can be trained in a very short time (typically less than one hour), enabling us to build a specific machine-learning model for a certain kind of problem. Once well-trained, a single reconstruction takes an average time of about 38.1 milliseconds (averaged over 100 runs) on a standard GPU server.
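The physicality trick just described is generic and worth spelling out: any real parameter vector can be mapped through a lower-triangular matrix \(L\) to \(\rho=LL^{\dagger}/\mathrm{tr}(LL^{\dagger})\), which is positive semi-definite with unit trace by construction. The sketch below is a minimal stand-alone illustration of this Cholesky-style parametrization, not the exact output head of Ref. [269].

```python
import numpy as np

def cholesky_to_density(params, dim):
    """Map 2 * dim(dim+1)/2 real parameters to a physical density matrix."""
    L = np.zeros((dim, dim), dtype=complex)
    rows, cols = np.tril_indices(dim)
    k = len(rows)
    L[rows, cols] = params[:k] + 1j * params[k:2 * k]  # fill the lower triangle
    rho = L @ L.conj().T                               # positive semi-definite
    return rho / np.trace(rho)                         # unit trace

rho = cholesky_to_density(np.random.randn(12), dim=3)
print(np.trace(rho).real, np.linalg.eigvalsh(rho).min())  # trace 1, eigenvalues >= 0
```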
One unique advantage of ML-enhanced QST is that we can precisely identify the pure squeezed and noisy parts when extracting the degradation information. By directly applying the singular value decomposition to the predicted density matrix, i.e., \(\rho=\sigma_{1}\,\rho^{sq}+c_{1}\,\rho_{th}^{sq}+d_{1}\,\rho_{th}\), all the weighting ratios of the ideal (pure) squeezed state, the squeezed thermal state, and the thermal state can be obtained. With this identification, one should be able to suppress and/or control the degradation at higher squeezing levels, which can be immediately applied in gravitational wave detectors and quantum photonic computing. Toward a real-time QST giving physical descriptions of every feature observed in the quantum noise, a characteristic model that directly predicts physical parameters in such a CNN configuration has also been demonstrated [348]. Without dealing with a density matrix in a higher-dimensional Hilbert space, the physical parameters predicted by the characteristic model are as good as those generated by a reconstruction model. One of the most promising advantages of ML in QST is that fewer measurement settings are needed [349]. Even with incomplete projective measurements, the resilience of ML-based quantum state estimation techniques was demonstrated from partial measurement results [350]. Furthermore, such a high-performance, lightweight, and easy-to-install supervised characteristic-model-based ML-QST can be installed on edge devices such as FPGAs as an in-line diagnostic toolbox for all possible applications with squeezed states. In addition to the squeezed states illustrated here, similar machine learning concepts can be readily applied to specific families of continuous-variable states, such as non-Gaussian states. Of course, different learning (adaptation) processes should be applied when dealing with single-photon states, Schrodinger's cat states [351, 352], and Gottesman-Kitaev-Preskill states for quantum error correction codes [353]. Alternatively, it is possible to use less training data with a better kernel developed in machine learning, such as reinforcement learning, generative adversarial networks, and the deep reinforcement learning used in optimization problems [268, 354, 355, 356, 357, 358, 359, 360]. Even without any prior information, an informational completeness certification net (ICCNet), along with a fidelity prediction net (FidNet), has also been developed to uniquely reconstruct any given quantum state [361]. Applications of these data-driven learning and/or adaptation ML methods are not limited to quantum state tomography: identification and estimation in quantum process tomography, Hamiltonian tomography, and quantum channel tomography, as well as quantum phase estimation, are also in progress [362, 363, 364, 365]. Moreover, ML in quantum tomography can be used for quantum state preparation, as in the general single-preparation quantum information processing (SIPQIP) framework [366]. ### Photonic quantum computing In addition to classical information processing, photonic quantum computing is also one of the possible technologies to demonstrate quantum advantage [14, 367, 368], i.e., a setting in which a quantum system has been shown to outperform a classical one on some well-defined information processing task.
Even though the computational task implemented by photonic boson sampling is non-universal [369], meaning that it cannot perform arbitrary quantum operations, whether any useful applications exist within the heavily restricted space of non-universal photonic systems is an open question. Nevertheless, the advantages of photonics as a quantum technology platform compared to other platforms are a high degree of integration with mature classical photonic technologies, and the fact that the photonic circuits involved can be operated at room temperature [370, 371]. Reviews of the recent advances of machine learning, in particular deep learning, for photonic structure design and optical data analysis, as well as the challenges and perspectives, can be found in Refs. [48, 372, 373, 374]. Inverse design and optimization of photonic crystals, plasmonic nanostructures, programmable metamaterials, and metasurfaces have been actively explored for high-speed optical communication and computing, ultrasensitive biochemical detection, efficient solar energy harvesting, and super-resolution imaging [375]. By utilizing adaptive linear optics, quantum machine learning has been proposed for performing quantum variational classification and quantum kernel estimation [376]. Towards the goal of realizing on-chip quantum photonics, silicon-based materials have been actively explored due to their compatibility with conventional CMOS fabrication processes. Instead of using photons or entangled photon pairs from an optical parametric process, which suffers from low probabilities and rare success events in post-selection, a quantum optical microresonator on a chip employing the Kerr nonlinearity has an advantage over the photonic qubit-based approach [377, 378, 379, 380, 381]. Unlike gate-based quantum computing, which is based on single-qubit and entangling gates, photonic quantum computing does not rely on any physical gates during the computation, but on the preparation of quantum modes, e.g., cluster states, with time-domain and/or frequency-domain multiplexing [382, 383]. Regarding integrated quantum photonics, as well as the progress of hybrid quantum devices, machine-learning methodology has also been widely applied in order to offer an efficient means to design photonic structures, as well as to tailor light-matter interaction [384]. For the emerging field of machine learning quantum photonics, we need not only ML for the automated design of quantum optical experiments [385], such as the quantum walk with graph states illustrated in Sec. 3.2, but also quantum technologies enhanced by machine learning, such as the ML in quantum tomography illustrated in Sec. 3.3, in order to have ML-assisted quantum measurements. Along this direction, the quantum optical neural network (QONN) was introduced to leverage advances in integrated quantum photonics [386]. With newly developed protocols such as quantum optical state compression for quantum networking and black-box quantum simulation, many thousands of optoelectronic components will need to be integrated into monolithically integrated circuits. The combination of quantum measurements, quantum metrology, and the optimization of on-chip and free-space meta-devices for photonic quantum computing is expected to be a promising route for the automatization of quantum experiments. ## 4 Conclusions Artificial intelligence and machine learning are currently key factors in the progress of modern society.
It is now quite difficult to find an area of our lives where achievements in the field of artificial intelligence are not used. However, this success is largely ensured by the development of classical information technologies in terms of hardware, which possess natural limitations. The creation of quantum computers and quantum networks can bypass these limitations in different fields. Quantum machine learning is a rapidly developing new field of research at the border of artificial intelligence, quantum information science and quantum technology. The ecosystem of QML, which has developed to date and will be in demand for the next decade, is depicted in Fig. 15. Figure 15: Quantum machine learning ecosystem for the next decade. Please check the references in the published paper version: Alexey Melnikov, Mohammad Kordzanganeh, Alexander Alodjants, Ray-Kuang Lee (2023) Quantum machine learning: from physics to software engineering, Advances in Physics: X, 8:1, DOI: 10.1080/23746149.2023.2165452. Here we proceed from the fact that in the near future, the main role in our daily life will be played by various (tensor) network systems for the transmission, processing, and intelligent recognition of large amounts of information. In this regard, we are confident that practically significant quantum computers will be embedded into the large distributed intelligent-systems environment that will surround us everywhere. In this review, we outlined the hot topics of the interplay between promising artificial intelligence methods and modern quantum computing. These topics are predominantly associated with the limited capabilities of NISQ-era quantum computers and use a variety of variational algorithms, such as the variational quantum eigensolver and the quantum approximate optimisation algorithm. A special place in our review is given to so-called quantum neural networks, which represent new QML models whose parameters are updated classically and may be used within quantum-classical training algorithms for variational circuits. In this sense we discussed promising hybrid information processing methods that use both classical ML algorithms and quantum devices. The training procedure consists of providing data to the quantum model, calculating an objective (loss) value, and then adapting the QNN parameters. Thus, the whole procedure represents a hybrid quantum-classical algorithm. In this work we also discussed another possible application of hybrid computation, which uses a classical-quantum convolutional neural network designed for resolving the speedup of random walks on chosen graphs. Our approach is based on training the CQCNN, which learns to extract feature vectors from graph adjacency matrices combined with a decoherence parameter. We have shown that, even without any decoherence, the speedup of random walks generally depends essentially on topological properties of the graph, i.e. on the peculiarities of its adjacency matrix. Our findings open new perspectives for quantum-classical algorithms that explore random walks as subroutines. ## 5 Acknowledgements A.P.A acknowledges support from the Goszadanie No. 2019-1339 project of the Ministry of Science and Higher Education of the Russian Federation. R.-K.L. is partially supported by the National Science and Technology Council of Taiwan (No. 110-2123-M-007-002).
2308.13786
Kink solutions in generalized 2D dilaton gravity
We study static kink solutions in a generalized two-dimensional dilaton gravity model, where the kinetic term of the dilaton is generalized to be an arbitrary function of the canonical one $\mathcal X= -\frac12 (\nabla \varphi)^2$, say $\mathcal F(\mathcal X)$, and the kink is generated by a canonical scalar matter field $\phi$. It is found that for arbitrary $\mathcal F(\mathcal X)$, the background field equations have a simple first-order formalism, and the linear perturbation equation can always be written as a Schr\"odinger-like equation with a factorizable Hamiltonian operator. After choosing appropriate $\mathcal F(\mathcal X)$ and superpotential, we obtain a sine-Gordon type kink solution with a pure AdS$_2$ metric. The linear perturbation problem of this solution becomes an exactly solvable conformal quantum mechanics problem if one of the model parameters takes a critical value.
Yuan Zhong, Heng Guo, Yu-Xiao Liu
2023-08-26T06:53:35Z
http://arxiv.org/abs/2308.13786v2
# Kink solutions in generalized 2D dilaton gravity ###### Abstract We study static kink solutions in a generalized two-dimensional dilaton gravity model, where the kinetic term of the dilaton is generalized to be an arbitrary function of the canonical one \(\mathcal{X}=-\frac{1}{2}(\nabla\varphi)^{2}\), say \(\mathcal{F}(\mathcal{X})\), and the kink is generated by a canonical scalar matter field \(\phi\). It is found that for arbitrary \(\mathcal{F}(\mathcal{X})\), the background field equations have a simple first-order formalism, and the linear perturbation equation can always be written as a Schrodinger-like equation with a factorizable Hamiltonian operator. After choosing appropriate \(\mathcal{F}(\mathcal{X})\) and superpotential, we obtain a sine-Gordon type kink solution with a pure AdS\({}_{2}\) metric. The linear perturbation equation of this solution becomes exactly solvable if one of the model parameters takes a critical value. keywords: Generalized 2D dilaton gravity, Kink, Thick branes ## 1 Introduction Two-dimensional (2D) gravity models allow physicists to study difficult issues like quantum gravity [1; 2; 3; 4], gravitational collapse [5; 6], black hole evaporation [7; 8; 9; 10; 11], and gauge/gravity duality [12; 13; 14; 15; 16; 17], while avoiding technical complexity. For this reason, 2D gravity has received much attention recently; see Refs. [18; 19; 20; 21; 22; 23; 24; 25] for comprehensive reviews. Because the Einstein tensor vanishes for an arbitrary 2D metric, one usually adopts the so-called dilaton gravity to describe 2D gravity, for example, the Jackiw-Teitelboim (JT) gravity [1; 2]: \[S_{\rm JT}=\frac{1}{\kappa}\int d^{2}x\sqrt{-g}\varphi(R+\Lambda), \tag{1}\] where \(\kappa\) and \(\Lambda\) are the gravitational coupling constant and the cosmological constant, respectively. Obviously, in the JT gravity the dilaton field \(\varphi\) plays the role of a Lagrange multiplier, and the Ricci scalar \(R\) is constrained to be a constant \(-\Lambda\). Another interesting 2D dilaton gravity is the one proposed by Mann, Morsink, Sikkema and Steele (MMSS), who added a kinetic term \(\mathcal{X}\equiv-\frac{1}{2}g^{\mu\nu}\nabla_{\mu}\varphi\nabla_{\nu}\varphi\) to the action [26]: \[S_{\rm MMSS}=\frac{1}{\kappa}\int d^{2}x\sqrt{-g}(\varphi R+\mathcal{X}). \tag{2}\] In this case, the dilaton equation is \[\nabla_{\lambda}\nabla^{\lambda}\varphi+R=0, \tag{3}\] which obviously allows solutions with variable scalar curvature. Since two-dimensional gravity has only one degree of freedom, it is always possible to express the metric in the following form [27]: \[ds^{2}=-e^{2A}dt^{2}+dx^{2}. \tag{4}\] One may notice that when the warp factor \(A=A(x)\) is static, which is assumed from now on, the above metric can be regarded as a 2D version of the Randall-Sundrum braneworld metric [28; 29]. A remarkable property of the MMSS gravity is that for the metric (4) the dilaton equation (3) reduces to a simple algebraic relation [27; 30]: \[\varphi=2A. \tag{5}\] This relation enables us to eliminate \(\varphi(x)\) in terms of \(A(x)\), and therefore largely reduces the complexity of the field equations. In particular, in some models with additional scalar matter fields, it was found that the field equations have very simple first-order formalisms, from which an important class of topological soliton solutions, namely kinks, can be easily constructed [27; 30; 31; 32; 33; 34; 35].
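The reduction from (3) to (5) underlying this simplification is quick to verify symbolically. The sketch below is a consistency check of our own (not from Ref. [27]), using the standard expressions \(R=-2(A''+A'^{2})\) and \(\nabla_{\lambda}\nabla^{\lambda}\varphi=\varphi''+A'\varphi'\) for the static metric (4):

```python
import sympy as sp

x = sp.symbols("x")
A = sp.Function("A")(x)
phi = 2 * A                                       # the claimed relation (5)

R = -2 * (sp.diff(A, x, 2) + sp.diff(A, x) ** 2)  # Ricci scalar of the metric (4)
box_phi = sp.diff(phi, x, 2) + sp.diff(A, x) * sp.diff(phi, x)  # Laplacian of phi

print(sp.simplify(box_phi + R))                   # -> 0, so Eq. (3) holds
```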
Some of these kink solutions have asymptotic AdS\({}_{2}\) metrics, and can be interpreted as 2D thick branes. Besides, the linear perturbation equations of these solutions can always be rewritten as Schrodinger-like equations with factorizable Hamiltonians [30; 36], which take forms similar to those of the scalar perturbations of 5D Einstein thick branes [37; 38; 39]. The factorization of the perturbation Hamiltonian usually ensures the stability of the kink solutions [40]. If noncanonical scalar matter fields are allowed, it is even possible to construct 2D gravitating kink solutions with exactly solvable perturbation equations [33]. As is well known, the linear spectrum plays a key role in understanding the quantum [41; 42; 43; 44; 45] and dynamic [46; 47; 48; 49] properties of kinks. Since the MMSS gravity is just a special theory of 2D gravity, one may ask if there are other 2D dilaton gravity theories which share the properties of the MMSS gravity in the modeling of gravitating kinks, namely: 1. The field equations have a simple first-order formalism, from which exact kink solutions can be easily constructed. 2. The Hamiltonian operator of the linear perturbation equation is factorizable. In this work, we report a 2D dilaton gravity model which extends the MMSS gravity but still preserves the above properties. The model and its general properties, including the first-order formalism and linear perturbation equations, are discussed in the next section. An explicit kink solution with a pure AdS\({}_{2}\) metric will be derived in Sec. 3. The main results are summarized in Sec. 4. ## 2 The model and its general properties We consider a generalized 2D dilaton gravity model with the following action \[S=\frac{1}{\kappa}\int d^{2}x\sqrt{-g}\left[\varphi R+\mathcal{F}(\mathcal{X})+\kappa\mathcal{L}_{m}\right], \tag{6}\] where \(\mathcal{L}_{m}=-\frac{1}{2}(\nabla\phi)^{2}-V(\phi)\) is the Lagrangian density of the scalar matter field that generates the kink. What makes the present model different from the MMSS gravity is the term \(\mathcal{F}(\mathcal{X})\), which is an arbitrary function of the standard dilaton kinetic term \(\mathcal{X}=-\frac{1}{2}(\nabla\varphi)^{2}\). The MMSS gravity model corresponds to the special case with \(\mathcal{F}(\mathcal{X})=\mathcal{X}\). The action (6) leads to three field equations, namely, the dilaton equation \[\nabla^{\lambda}(\mathcal{F}_{\mathcal{X}}\nabla_{\lambda}\varphi)+R=0, \tag{7}\] the scalar equation \[\nabla_{\lambda}\nabla^{\lambda}\phi=\frac{dV}{d\phi}, \tag{8}\] and the Einstein equation \[\mathcal{F}_{\mathcal{X}}\nabla_{\mu}\varphi\nabla_{\nu}\varphi-\frac{1}{2}g_{\mu\nu}\left(-2\mathcal{F}+4\nabla_{\lambda}\nabla^{\lambda}\varphi\right)+2\nabla_{\mu}\nabla_{\nu}\varphi+\kappa T_{\mu\nu}=0, \tag{9}\] where \(\mathcal{F}_{\mathcal{X}}\) denotes the derivative of \(\mathcal{F}\) with respect to \(\mathcal{X}\), and \(T_{\mu\nu}=g_{\mu\nu}\mathcal{L}_{m}+\nabla_{\mu}\phi\nabla_{\nu}\phi\) is the energy-momentum tensor. For the static metric \[ds^{2}=-e^{2A(x)}dt^{2}+dx^{2}, \tag{10}\] the dilaton and the scalar equations (7) and (8) become \[(\mathcal{F}_{\mathcal{X}}\partial_{x}\varphi-2\partial_{x}A)\,\partial_{x}A+\partial_{x}\left(\mathcal{F}_{\mathcal{X}}\partial_{x}\varphi-2\partial_{x}A\right)=0, \tag{11}\] and \[\partial_{x}A\,\partial_{x}\phi+\partial_{x}^{2}\phi=\frac{dV}{d\phi}, \tag{12}\] respectively.
The nontrivial components of the Einstein equation are \[-2\partial_{x}^{2}\varphi-\partial_{x}\varphi(\mathcal{F}_{\mathcal{X}}\partial_{x}\varphi-2\partial_{x}A)=\kappa(\partial_{x}\phi)^{2}, \tag{13}\] \[-2\partial_{x}^{2}\varphi+\mathcal{F}=\frac{1}{2}\kappa(\partial_{x}\phi)^{2}+\kappa V. \tag{14}\] Only three of the above four equations are independent. For example, one can derive the scalar equation by using the dilaton equation and the Einstein equation. Thus, we will neglect the scalar equation (12) and try to find solutions for the other three. ### The first-order formalism A remarkable feature of the present model is that the dynamical equations (11)-(14) can be rewritten as a group of first-order equations. To see this, we start by noticing that the dilaton equation (11) is satisfied if \[\partial_{x}A=\frac{1}{2}\mathcal{F}_{\mathcal{X}}\partial_{x}\varphi, \tag{15}\] with which Eq. (13) reduces to \[-2\partial_{x}^{2}\varphi=\kappa(\partial_{x}\phi)^{2}. \tag{16}\] To proceed, we introduce the so-called superpotential function \(W(\phi)\) such that \[\partial_{x}\phi=\frac{dW}{d\phi}. \tag{17}\] Then Eq. (16) becomes \[\partial_{x}\varphi=-\frac{\kappa}{2}W, \tag{18}\] or equivalently, \(\mathcal{X}(W)=-\frac{\kappa^{2}}{8}W^{2}\). After substituting Eqs. (17) and (18) into Eqs. (14) and (15), we obtain \[V = \frac{\mathcal{F}}{\kappa}+\frac{1}{2}\bigg{(}\frac{dW}{d\phi}\bigg{)}^{2}, \tag{19}\] \[\partial_{x}A = -\frac{\kappa}{4}\mathcal{F}_{\mathcal{X}}W. \tag{20}\] As will be shown in Sec. 3, exact kink solutions can be easily constructed by inserting appropriate functions \(W(\phi)\) and \(\mathcal{F}(\mathcal{X})\) into the first-order equations (17)-(20). We see that \(\mathcal{F}(\mathcal{X})\) only affects the solutions of \(V\) and \(A\). Therefore, by modifying the function \(\mathcal{F}(\mathcal{X})\), one can tune the form of the warp factor while keeping \(\phi\) and \(\varphi\) unchanged. ### Linear stability issue It is convenient to discuss the linear perturbation in the conformally flat coordinates: \[ds^{2}=e^{2A(r)}(-dt^{2}+dr^{2}), \tag{21}\] where \[r\equiv\int e^{-A(x)}dx. \tag{22}\] For simplicity, we denote the derivatives with respect to \(t\) and \(r\) by overdots and primes, respectively. In the \((t,r)\)-coordinates the dynamical equations (11)-(14) become \[A^{\prime} = \frac{1}{2}{\cal F}_{\cal X}\varphi^{\prime}, \tag{23}\] \[\phi^{\prime\prime} = e^{2A}V_{\phi},\] (24) \[V = \frac{1}{2}e^{-2A}\phi^{\prime 2}+\frac{{\cal F}}{\kappa},\] (25) \[\varphi^{\prime\prime} = \frac{1}{2}{\cal F}_{\cal X}\varphi^{\prime 2}-\frac{1}{2}\kappa\phi^{\prime 2}, \tag{26}\] respectively. Following Refs. [30; 31; 36], we consider small perturbations \(\{\delta\varphi(r,t),\delta\phi(r,t),\delta g_{\mu\nu}(r,t)\}\) around an arbitrary static solution of Eqs. (23)-(26), say, \(\{\varphi(r),\phi(r),g_{\mu\nu}(r)\}\). It is convenient to define the metric perturbation as \[\delta g_{\mu\nu}(r,t) \equiv e^{2A(r)}h_{\mu\nu}(r,t) \tag{27}\] \[= e^{2A(r)}\left(\begin{array}{cc}h_{00}(r,t)&\Phi(r,t)\\ \Phi(r,t)&h_{rr}(r,t)\end{array}\right).\] To the first order, the perturbation of the metric inverse reads \[\delta g^{\mu\nu}=-e^{-2A}h^{\mu\nu}, \tag{28}\] where \[h^{\mu\nu}\equiv\eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}=\left(\begin{array}{cc}h_{00}&-\Phi\\ -\Phi&h_{rr}\end{array}\right). \tag{29}\] As in Refs. [30; 31; 36], we define a variable \(\Xi\equiv 2\dot{\Phi}-h^{\prime}_{00}\) and take the dilaton gauge \(\delta\varphi=0\).
Independent perturbation equations can be obtained by linearizing the Einstein equation (9) and the scalar field equation (8). The linearization of the Einstein equation leads to two independent perturbation equations, namely, the \((0,1)\) component: \[h_{rr}=\kappa\frac{\phi^{\prime}}{\varphi^{\prime}}\delta\phi, \tag{30}\] and the \((1,1)\) component: \[\Xi=\kappa\frac{\phi^{\prime}}{\varphi^{\prime}}\bigg{[}\delta\phi^{\prime}+\delta\phi\left(\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}-\frac{\phi^{\prime\prime}}{\phi^{\prime}}-{\cal F}_{{\cal X}{\cal X}}{\cal X}\varphi^{\prime}\right)\bigg{]}. \tag{31}\] The \((0,0)\) component is also nontrivial, but after substituting the background field equations it reduces to Eq. (30). Another independent perturbation equation comes from the linearization of the scalar equation (8), which, after eliminating \(h_{rr}\) and \(\Xi\) by using Eqs. (30) and (31), takes the following form: \[\ddot{\delta\phi}-\delta\phi^{\prime\prime}-\bigg{[}{\cal F}_{{\cal X}{\cal X}}{\cal X}\left(\varphi^{\prime\prime}-\frac{1}{2}{\cal F}_{{\cal X}}\varphi^{\prime 2}\right)+{\cal F}_{{\cal X}}\left(\varphi^{\prime\prime}-\frac{\phi^{\prime\prime}}{\phi^{\prime}}\varphi^{\prime}\right)-2\left(\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\right)^{2}-\frac{\phi^{\prime\prime\prime}}{\phi^{\prime}}+4\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\frac{\phi^{\prime\prime}}{\phi^{\prime}}\bigg{]}\delta\phi=0. \tag{32}\] The terms that contain \({\cal F}_{{\cal X}}\) and \({\cal F}_{{\cal X}{\cal X}}\) can be eliminated by applying the following identity: \[{\cal F}_{{\cal X}{\cal X}}{\cal X}\left(\varphi^{\prime\prime}-\frac{1}{2}{\cal F}_{{\cal X}}\varphi^{\prime 2}\right)+{\cal F}_{{\cal X}}\left(\varphi^{\prime\prime}-\frac{\phi^{\prime\prime}}{\phi^{\prime}}\varphi^{\prime}\right)=\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}}-2\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\frac{\phi^{\prime\prime}}{\phi^{\prime}}, \tag{33}\] which is derived by using the background equations (23) and (26). Finally, the equation for \(\delta\phi\) takes the same form as the one derived in the MMSS gravity [30]: \[\ddot{\delta\phi}-\delta\phi^{\prime\prime}+\left[\frac{\phi^{\prime\prime\prime}}{\phi^{\prime}}-2\frac{\varphi^{\prime\prime}\phi^{\prime\prime}}{\varphi^{\prime}\phi^{\prime}}+2\left(\frac{\varphi^{\prime\prime}}{\varphi^{\prime}}\right)^{2}-\frac{\varphi^{\prime\prime\prime}}{\varphi^{\prime}}\right]\delta\phi=0, \tag{34}\] which can also be written as \[\ddot{\delta\phi}-\delta\phi^{\prime\prime}+\frac{f^{\prime\prime}}{f}\delta\phi=0,\quad f=\frac{\phi^{\prime}}{\varphi^{\prime}}. \tag{35}\] By performing the mode expansion \[\delta\phi=\sum_{n}\psi_{n}(r)e^{i\omega_{n}t}, \tag{36}\] we can rewrite the perturbation equation as a Schrodinger-like equation: \[\hat{H}\psi_{n}\equiv\left[-\frac{d^{2}}{dr^{2}}+V_{\rm eff}\right]\psi_{n}=\omega_{n}^{2}\psi_{n}, \tag{37}\] where the effective potential \(V_{\rm eff}\equiv\frac{f^{\prime\prime}}{f}\). The particular form of the effective potential enables us to factorize the Hamiltonian operator into the product of an operator \(\hat{\cal A}\) and its Hermitian conjugate: \[\hat{H}=\hat{\cal A}^{\dagger}\hat{\cal A}, \tag{38}\] where \[\hat{\cal A}=-\frac{d}{dr}+\frac{f^{\prime}}{f},\quad\hat{\cal A}^{\dagger}=\frac{d}{dr}+\frac{f^{\prime}}{f}.
\tag{39}\] According to the theory of supersymmetric quantum mechanics [40], the eigenvalues of such a factorizable Hamiltonian operator are positive semidefinite, namely, \(\omega_{n}^{2}\geq 0\). Therefore, any static solution is stable against small linear perturbations. The ground state has vanishing eigenvalue \(\omega_{0}=0\), and the corresponding wave function is \(\psi_{0}(r)\propto f\). Now, let us consider an explicit kink solution. ## 3 Kink with AdS\({}_{2}\) metric To construct an explicit solution, one must specify the functions \({\cal F}({\cal X})\) and \(W(\phi)\). As can be seen from Eq. (15), the freedom in choosing \({\cal F}({\cal X})\) allows us to construct kink solutions with a very simple warp factor. For example, if we take \[{\cal F}=-2\sqrt{-2{\cal X}}/l, \tag{40}\] such that \({\cal F}_{\cal X}=\frac{2}{l|\partial_{x}\varphi|}\), then Eq. (15) leads to a metric solution of the following form: \[A(x)={\rm sgn}(\partial_{x}\varphi)\cdot x/l, \tag{41}\] where \(l>0\) is a parameter with the dimension of length, and \({\rm sgn}(x)\) is the sign function. Obviously, if we can construct solutions with monotonically increasing dilatons, such that \({\rm sgn}(\partial_{x}\varphi)=1\), then the warp factor \[A(x)=x/l \tag{42}\] describes a pure AdS\({}_{2}\) space with negative constant curvature \(R=-2l^{-2}\). As can be seen from Eqs. (17) and (18), the solutions of \(\varphi\) and \(\phi\) are completely determined by the superpotential \(W(\phi)\). Therefore, by choosing suitable superpotentials we can obtain monotonically increasing dilatons; one such example is [50] \[W(\phi)=kv^{2}\left[\sin\left(\frac{\phi}{v}\right)-c\right], \tag{43}\] for which Eqs. (17)-(19) have the following solution: \[\phi(x)=v\arcsin(\tanh(kx)), \tag{44}\] \[\varphi(x)=\frac{1}{2}\kappa v^{2}[ckx-\ln(\cosh(kx))], \tag{45}\] \[V(\phi)=\frac{1}{2}\left[\cos^{2}\phi-\frac{2}{l}(c-\sin\phi)\right], \tag{46}\] where \(k,v,c\) are real parameters. The scalar field configuration in Eq. (44) corresponds to a sine-Gordon kink, whose width and asymptotic behavior are controlled by the parameters \(k\) and \(v\), respectively. For simplicity, we fix \(k=v=1\), so that \(\lim_{x\rightarrow\pm\infty}\phi(x)=\pm\pi/2\). For the dilaton field, the asymptotic behavior is \[\lim_{x\rightarrow\pm\infty}\varphi(x)=\frac{1}{2}\kappa[(c\mp 1)x+\ln 2]. \tag{47}\] Obviously, as the dimensionless parameter \(c\) is turned on, the dilaton becomes asymmetric. Especially, for \(c\geq 1\), \(\partial_{x}\varphi=\frac{1}{2}\kappa[c-\tanh(x)]\geq 0\), and the dilaton becomes a monotonically increasing function, see Fig. 1 (a). In the critical case \(c=1\), the dilaton approaches a constant \(\frac{1}{2}\kappa\ln 2\) as \(x\rightarrow+\infty\). In what follows, we assume \(c\geq 1\) so that the metric solution is the one given in (42), and the integral (22) gives \[r=(1-e^{-x/l})l. \tag{48}\] It is convenient to introduce two dimensionless variables \(\alpha\equiv k\,l>0\) and \(u\equiv e^{-x/l}\in[0,+\infty)\)1, in terms of which the scalar and dilaton fields read Footnote 1: Note that \(x=-\infty,0,+\infty\) is mapped to \(u=+\infty,1,0\), respectively. \[\phi(u)=-v\arcsin[\tanh(\alpha\ln u)], \tag{49}\] \[\varphi(u)=-\frac{1}{2}\kappa v^{2}[\alpha c\ln u+\ln[\cosh(\alpha\ln u)]].
\tag{50}\] Using the relation \(\frac{d}{dr}=\frac{du}{dr}\frac{d}{du}=-l^{-1}\frac{d}{du}\), the Schrodinger-like equation (37) can be rewritten as \[-\frac{d^{2}}{du^{2}}\psi_{n}+\tilde{V}_{\rm eff}\,\psi_{n}=\tilde{\omega}_{n}^{2}\psi_{n}, \tag{51}\] where \(\tilde{V}_{\rm eff}=\partial_{u}^{2}f/f\) and \(\tilde{\omega}_{n}=\omega_{n}l\). A direct calculation gives \[\tilde{V}_{\rm eff}(u)=\alpha u^{-2}\left[(c+1)u^{2\alpha}+c-1\right]^{-2}\times\big{[}-6\alpha\left(c^{2}-1\right)u^{2\alpha}+(\alpha-1)(c-1)^{2}+(\alpha+1)(c+1)^{2}u^{4\alpha}\big{]}, \tag{52}\] and \[\psi_{0}(u)\propto f(u)=\frac{4u^{\alpha}}{\kappa v\left[(c+1)u^{2\alpha}+c-1\right]}. \tag{53}\] The shape of the effective potential, and therefore the linear spectrum, varies with the values of the parameters, and there are four cases: _Case 1: \(c=1\)._ In this critical case \[\tilde{V}_{\rm eff}(u)=\frac{\alpha(\alpha+1)}{u^{2}}, \tag{54}\] and Eq. (51) has a regular solution of the following form: \[\psi_{n}(u)\propto\sqrt{u}\,J_{\alpha+\frac{1}{2}}(\tilde{\omega}_{n}u), \tag{55}\] where \(J_{m}(x)\) is the Bessel function of the first kind of order \(m\). The zero mode wave function \[\psi_{0}\propto f\propto u^{-\alpha} \tag{56}\] is divergent at \(u=0\), and therefore cannot be normalized. _Case 2: \(c>1\), \(\alpha=1\)._ In this case, \[\tilde{V}_{\rm eff}(u)=\frac{2(c+1)\left[(c+1)\,u^{2}-3(c-1)\right]}{\left[(c+1)u^{2}+c-1\right]^{2}}. \tag{57}\] Unlike case 1, where \(\tilde{V}_{\rm eff}\) is divergent and positive at \(u=0\), here \[\tilde{V}_{\rm eff}(0)=-\frac{6(c+1)}{c-1} \tag{58}\] is finite and negative, see Fig. 2. The zero mode \[\psi_{0}=\mathcal{N}\frac{u}{(c+1)u^{2}+c-1}\propto f, \tag{59}\] is nodeless and normalizable, and the normalization constant \(\mathcal{N}=\left[\frac{4(c+1)\sqrt{c^{2}-1}}{\pi}\right]^{1/2}\) is finite. For \(c>1,\alpha\neq 1\), the effective potential behaves as \(\frac{(\alpha-1)\alpha}{u^{2}}\) when \(u\to 0\). Therefore, there are two other cases with singular effective potentials. _Case 3: \(c>1\), \(0<\alpha<1\)._ In this case, we get an attractive volcano-like potential, and the zero mode is normalizable if \(\alpha>\frac{1}{2}\). _Case 4: \(c>1\), \(\alpha>1\)._ In this case, the effective potential has a repulsive core at \(u=0\), and a finite well within \(u\in(0,1)\). The zero mode is always nodeless and normalizable. As can be seen from Eq. (53), the peak of \(f(u)\) is located at \(u=u_{\max}=\left(\frac{c-1}{c+1}\right)^{1/2\alpha}\), where \(f(u_{\max})=\frac{2}{\sqrt{c^{2}-1}}\) is independent of \(\alpha\). For \(c\geq 1\), we always have \(u_{\max}<1\), and if \(c\gg 1\), we have \(u_{\max}\approx 1\). ## 4 Conclusion In this work, we studied a 2D dilaton gravity where the dilaton has a generalized kinetic term \(\mathcal{F}(\mathcal{X})\). We found that the static field equations have a simple first-order formalism, from which exact kink solutions can be easily constructed after choosing a suitable superpotential \(W(\phi)\) and function \(\mathcal{F}(\mathcal{X})\). The solutions of the dilaton and the scalar fields are determined only by the form of \(W(\phi)\), while the metric depends on both \(W(\phi)\) and \(\mathcal{F}(\mathcal{X})\). Therefore, by tuning \(\mathcal{F}(\mathcal{X})\), one may obtain kink solutions with different metrics. The example we give here is a sine-Gordon kink with a pure AdS\({}_{2}\) metric, but it is not difficult to construct many other solutions.
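Constructions of this type are also easy to check by machine. The following SymPy sketch (our own illustration; the variable names are ours, and we fix \(k=v=1\) as in the text, with \(c\geq 1\) so that \({\rm sgn}(\partial_{x}\varphi)=1\)) verifies that the closed-form fields (44)-(45) satisfy the first-order equations (17) and (18), that Eq. (20) indeed yields the pure AdS\({}_{2}\) warp factor \(A^{\prime}=1/l\), and that the zero mode (53) reproduces the critical-case potential (54).

```python
# Symbolic cross-check of the kink solution; assumes k = v = 1 and c >= 1
# (so that d_x varphi > 0 and |d_x varphi| = d_x varphi). Requires SymPy.
import sympy as sp

x, kappa, c, l, u, alpha = sp.symbols('x kappa c l u alpha', positive=True)

phi    = sp.asin(sp.tanh(x))                                    # Eq. (44)
varphi = sp.Rational(1, 2)*kappa*(c*x - sp.log(sp.cosh(x)))     # Eq. (45)
W      = sp.sin(phi) - c                                        # Eq. (43)

# Eq. (17): d_x phi = dW/dphi = cos(phi)
print(sp.simplify(sp.diff(phi, x) - sp.cos(phi)))               # -> 0

# Eq. (18): d_x varphi = -(kappa/2) W
print(sp.simplify(sp.diff(varphi, x) + kappa*W/2))              # -> 0

# Eq. (20) with F_X = 2/(l * d_x varphi): the warp factor is pure AdS_2
FX = 2/(l*sp.diff(varphi, x))
print(sp.simplify(-(kappa/4)*FX*W))                             # -> 1/l

# Eq. (53) => Eq. (54): V_eff = f''/f reduces to alpha(alpha+1)/u^2 at c = 1
f = 4*u**alpha/(kappa*((c + 1)*u**(2*alpha) + c - 1))
Veff = sp.diff(f, u, 2)/f
print(sp.simplify(Veff.subs(c, 1)))                             # -> alpha*(alpha+1)/u**2
```

Swapping a different superpotential or \(\mathcal{F}(\mathcal{X})\) into the first three checks is a quick way to prototype other solutions of the first-order system before working them out by hand.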
Figure 1: Plots of (a) the dilaton field \(\varphi(x)\), and (b) the scalar potential \(V(\phi)\). For \(c\geq 1\), the dilaton field is monotonically increasing.
Figure 2: Plots of (a) the effective potential \(\tilde{V}_{\rm eff}(u)\), and (b) the zero mode \(\psi_{0}(u)\propto f(u)\).
We also found that for arbitrary static solutions of our model, the linear perturbation equation can always be written as a Schrodinger-like equation with a factorizable Hamiltonian operator, and the zero mode wave function can be given analytically. The zero mode is nodeless and normalizable if \(c>1\) and \(\alpha>1/2\). Therefore, the kink solution is stable against linear perturbations in this case. An interesting critical case is \(c=1\), for which the linear perturbation equation is exactly solvable. It would be interesting to explore other kink solutions by choosing different \(W(\phi)\) and \(\mathcal{F}(\mathcal{X})\), or to investigate the quasi-normal modes of gravitating kinks [51; 52]. We leave these issues to future work. ## Acknowledgments This work was supported by the National Natural Science Foundation of China (Grant numbers 12175169, 12247101 and 11875151), and the Natural Science Basic Research Plan in Shaanxi Province of China (Program No. 2020JM-198). ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2301.00597
Fairness Guaranteed and Auction-based x-haul and Cloud Resource Allocation in Multi-tenant O-RANs
The open-radio access network (O-RAN) embraces cloudification and network function virtualization for base-band function processing by dis-aggregated radio units (RUs), distributed units (DUs), and centralized units (CUs). These enable the cloud-RAN vision in full, where multiple mobile network operators (MNOs) can install their proprietary or open RUs, but lease on-demand computational resources for DU-CU functions from commonly available open-clouds via open x-haul interfaces. In this paper, we propose and compare the performances of min-max fairness and Vickrey-Clarke-Groves (VCG) auction-based x-haul and DU-CU resource allocation mechanisms to create a multi-tenant O-RAN ecosystem that is sustainable for small, medium, and large MNOs. The min-max fair approach minimizes the maximum OPEX of RUs through cost-sharing proportional to their demands, whereas the VCG auction-based approach minimizes the total OPEX for all resources utilized while extracting truthful demands from RUs. We consider time-wavelength division multiplexed (TWDM) passive optical network (PON)-based x-haul interfaces where PON virtualization technique is used to flexibly provide optical connections among RUs and edge-clouds at macro-cell RU locations as well as open-clouds at the central office locations. Moreover, we design efficient heuristics that yield significantly better economic efficiency and network resource utilization than conventional greedy resource allocation algorithms and reinforcement learning-based algorithms.
Sourav Mondal, Marco Ruffini
2023-01-02T11:03:50Z
http://arxiv.org/abs/2301.00597v3
# Fairness Guaranteed and Auction-based x-haul and Cloud Resource Allocation in Multi-tenant O-RANs ###### Abstract The open-radio access network (O-RAN) embraces cloudification and network function virtualization for base-band function processing by dis-aggregated radio units (RUs), distributed units (DUs), and centralized units (CUs). These enable the cloud-RAN vision in full, where multiple mobile network operators (MNOs) can install their proprietary or open RUs, but lease on-demand computational resources for DU-CU functions from commonly available open-clouds via open x-haul interfaces. In this paper, we propose and compare the performances of _min-max fairness_ and _Vickrey-Clarke-Groves_ (VCG) _auction_-based x-haul and DU-CU resource allocation mechanisms to create a multi-tenant O-RAN ecosystem that is sustainable for small, medium, and large MNOs. The min-max fair approach _minimizes the maximum OPEX of RUs_ through cost-sharing proportional to their demands, whereas the VCG auction-based approach _minimizes the total OPEX for all resources utilized while extracting truthful demands from RUs_. We consider time-wavelength division multiplexed (TWDM) passive optical network (PON)-based x-haul interfaces where PON virtualization technique is used to flexibly provide optical connections among RUs and edge-clouds at macro-cell RU locations as well as open-clouds at the central office locations. Moreover, we design efficient heuristics that yield significantly better economic efficiency and network resource utilization than conventional greedy resource allocation algorithms and reinforcement learning-based algorithms. Min-max fairness, multi-tenant Open-RAN, reinforcement learning, resource allocation, VCG auction. ## I Introduction The fifth-generation (5G) radio access networks (RANs) are standardized to meet a diverse set of QoS requirements to support broadband, low-latency, and machine-type communications. Applications like mixed reality, telesurgery, high-definition video streaming, and Industrial Internet-of-Things, to name a few, will be free from the spectrum crunch and network resource scarcity issues of the legacy RANs. However, the existing mobile networks with their "one size fits all" architecture lack sufficient flexibility and intelligence for efficient catering of such requirements [1]. Therefore, the necessity for a major architectural revolution is envisaged for beyond 5G and sixth-generation (6G) RANs. Over the past few years, major mobile network operators (MNOs) across the globe are collaborating within the _Open-RAN (O-RAN) Alliance_ to standardize an open and smart RAN architecture that can perform complex RAN management with the aid of software-defined networking (SDN), network function virtualization (NFV), and edge computing (EC) technologies [2]. This architecture typically follows 3GPP recommendations where the RUs perform low-PHY functions (typically split 7.2 and 7.3), while high-PHY, MAC, RLC, RRC, and PDCP functions are processed by the DU-CUs that can be hosted on OLT-Clouds with commercial off-the-shelf (COTS) hardware, as shown in Fig. 1. Recently, the IEEE P1914.1 standardization working group was created to specify the next-generation front-haul interface (NGFI). The RU-DU interface is known as the _NGFI-I_, or the _front-haul_ (maximum one-way latency bound = 100 \(\mu\)sec), and the DU-CU interface is known as the _NGFI-II_ or the _mid-haul_ (maximum one-way latency bound = 1 msec) [3]. 
The interface beyond the CU to the 5G core is known as the _back-haul_; hence, the general term _x-haul_ is used. The incorporation of open clouds for DU-CU function processing over the open front/mid-haul interfaces in the O-RAN architecture creates new business opportunities for small, medium, and large MNOs as well as network service providers (NSPs) [4]. In turn, this creates a _multi-tenant O-RAN ecosystem_ where several MNOs deploy their RUs with macro and small-cell coverage over a certain geographic area but procure front/mid-haul and DU-CU function processing resources from the open and shared resource pool provided by various NSPs [5]. The primary benefit of this multi-tenant O-RAN architecture is the minimization of CAPEX and OPEX for the MNOs. The techno-economic analysis in [6] shows that \(\sim\)40% of CAPEX and \(\sim\)15% of OPEX can be saved over 5 years by adopting SDN-based architectures for mobile network virtualization. In practice, a government, a municipality, or an alliance of MNOs can be the NSP that owns the open x-haul and cloud resources and distributes them among the MNOs. On the other hand, a competitive market model can also be created where the MNOs compete against each other or form opportunistic coalitions for procuring their required x-haul and cloud resources. These observations motivate us to propose efficient resource allocation mechanisms that create a multi-tenant O-RAN ecosystem that is sustainable for small, medium, and large MNOs. The cloud servers installed at central office (CO) or optical line terminal (OLT) locations are referred to as OLT-Clouds, but their significant intermediate distance may become disadvantageous for supporting low-latency applications and front-haul interfaces.
Fig. 1: A schematic diagram showing the O-RAN architecture with functions of RU, O-DU, and O-CU and their corresponding interfaces.
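To make the auction-based idea from the abstract concrete, the toy sketch below (our own illustration, not the paper's actual mechanism, notation, or system model) computes VCG payments for a single pool of identical capacity units: each bidder pays the welfare loss its presence imposes on the others, which is what makes truthful reporting of demands a dominant strategy.

```python
# Toy VCG auction: `capacity` identical units are split among bidders who
# each report a valuation for every possible number of units received.
from itertools import product

def best_welfare(valuations, capacity, exclude=None):
    """Max total value over all splits of the units, optionally ignoring
    one bidder. Brute force -- fine for toy problem sizes."""
    names = [b for b in valuations if b != exclude]
    best = 0.0
    for alloc in product(range(capacity + 1), repeat=len(names)):
        if sum(alloc) <= capacity:
            best = max(best, sum(valuations[b][q] for b, q in zip(names, alloc)))
    return best

def vcg(valuations, capacity):
    names = list(valuations)
    best, best_alloc = -1.0, None
    for alloc in product(range(capacity + 1), repeat=len(names)):
        if sum(alloc) <= capacity:
            w = sum(valuations[b][q] for b, q in zip(names, alloc))
            if w > best:
                best, best_alloc = w, dict(zip(names, alloc))
    payments = {}
    for b in names:
        # Clarke pivot: (welfare of others without b) - (welfare of others with b)
        welfare_of_others = best - valuations[b][best_alloc[b]]
        payments[b] = best_welfare(valuations, capacity, exclude=b) - welfare_of_others
    return best_alloc, payments

# valuations[bidder][q] = reported value of receiving q units (q = 0..capacity)
vals = {"MNO-A": [0, 5, 8, 9], "MNO-B": [0, 4, 7, 8], "MNO-C": [0, 3, 5, 6]}
print(vcg(vals, capacity=3))
```

In the paper's setting the "items" would be the x-haul and DU-CU cloud resources and the bidders the RUs of competing MNOs, with the brute-force allocation step replaced by the corresponding optimization problem.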
2305.12464
Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces
Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information of unseen speakers.
Oli Liu, Hao Tang, Sharon Goldwater
2023-05-21T14:03:54Z
http://arxiv.org/abs/2305.12464v3
Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces ###### Abstract Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information of unseen speakers. Oli Danyi Liu, Hao Tang, Sharon Goldwater+School of Informatics, University of Edinburgh, United Kingdom [email protected], [email protected], [email protected] Footnote †: This work was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S02248/1/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences. **Index Terms**: self-supervised learning, unsupervised speech processing, speaker normalization ## 1 Introduction Self-supervised learning (SSL) models of speech are trained on large quantities of unlabeled data to learn useful representations of speech audio. Empirically, SSL pre-training has been shown to improve performance on downstream tasks while reducing reliance on annotated data [1, 2]. Researchers have also started to evaluate SSL models as computational models for phonetic acquisition and speech perception in humans [3, 4]. It is natural to ask what makes these learned representations so effective. So far, most work in the area has done so by analyzing _what_ type of information is encoded, finding that SSL models encode information ranging from acoustic [5] and contextual [6] cues, to gender [7] and speaker identity [8]. Some researchers have also analyzed _where_ in these models (at which layer) such information is encoded [9]. In this paper, we ask a question that has received little attention so far, namely _how_ different types of information are distributed across the dimensions of the representation space. Analysis targeted at this question can potentially improve the interpretability of the learned representations, inspire new methods for disentangling linguistic and non-linguistic information, and shed light on properties of the representation space that can be compared against neural representations or behaviour data of humans to evaluate the scientific value of the model. We know of no previous work analyzing this question in detail, although some studies have shown through two-dimensional visualizations that representations may cluster according to their corresponding language, gender, or phone class [7]; word-level context [6]; or speaker identity [8]. In this work, we explicitly investigate how speaker and phonetic information are distributed in the representation space learned by SSL models. 
We hypothesize that a good representation (one that is efficient and works well for predicting speech) should implicitly disentangle these two sources of information, since they vary independently in the processes that generate the speech signal. If so, then the two types of information would be represented in orthogonal subspaces of the representation space, and it would be possible to perform speaker normalization by identifying the speaker subspace and collapsing it. We test our hypothesis in experiments with two SSL models (APC [10] and CPC [11]) trained on English LibriSpeech. We use principal component analysis (PCA) to identify the subspaces that capture most of the variance for speakers and for phones, and confirm that these subspaces are nearly orthogonal. We then test whether the speaker subspace generalizes to unseen speakers by using it to perform speaker normalization on a test set: we project the representations in the test set onto the subspace orthogonal to the speaker subspace learned from a training set. We use probing classifiers and an ABX phone discrimination task to show that with the resulting representations, (a) little speaker information remains; and (b) within- and across-speaker phone discrimination improves, outperforming a baseline of utterance-level standardization [8]. Our results suggest that these two SSL models implicitly disentangle speaker and phone information into orthogonal subspaces. ## 2 Overview of the approach We demonstrate the orthogonal speaker and phone subspaces in the SSL representations using a simple approach, where we first aggregate the representations by speaker and phone, then perform PCA on the resulting matrices1 and compare the principal directions. We then show that collapsing the speaker subspace is an effective speaker normalization technique, for both seen and unseen speakers. These steps are explained in more detail below. Footnote 1: Performing PCA on the raw representations is far more compute-intensive and did not give the clear patterns we see in Figure 1; the aggregation step seems necessary to overcome noise. **Aggregating representations by speaker and phone** Suppose that the dataset for analysis contains a set of speakers \(S\) and has time-aligned phone transcriptions (which are used for analysis only, and are not needed for speaker normalization). We denote \(Z_{s}\) to be the set of frame-level representation vectors (each of dimension \(D\)) from an SSL model of a speaker \(s\in S\). Similarly, we define \(Z_{p}\) to be the set of hidden vectors labeled as phone \(p\in P\), where \(P\) is a phone set; we define \(Z_{s,p}\) to be the set of hidden vectors labeled as phone \(p\in P\) and of a speaker \(s\in S\). For our analysis, we aggregate the representations in three ways: (1) by speaker (2) by phone, and (3) by each combination of speaker and phone. This results in three matrices: The first, \(M_{\text{spk}}\), is a \(|S|\times D\) matrix of speakers, where the \(s\)-th row \(M_{\text{spk}}[s]\) is the average of all frames of a speaker \(s\in S\), i.e., \(M_{\text{spk}}[s]=\text{avg}(Z_{s})\). Analogously, \(M_{\text{phn}}\) is a \(|P|\times D\) matrix of phones, where the \(p\)-th row \(M_{\text{phn}}[p]\) is the average of all frames labeled as \(p\in P\), i.e., \(M_{\text{phn}}[p]=\text{avg}(Z_{p})\). Finally, \(M_{\text{joint}}\) is a \(|S||P|\times D\) matrix, where each row corresponds to \(\text{avg}(Z_{s,p})\) for a particular speaker \(s\) and a particular phone \(p\). 
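The aggregation just described, together with the PCA and subspace-collapsing steps introduced in the next two paragraphs, amounts to a few lines of linear algebra. The following sketch (our own code; the variable names and the random stand-in data are ours, not from the paper) shows one way to implement the whole pipeline with NumPy.

```python
# Minimal sketch of the aggregate -> PCA -> collapse pipeline; real
# frame-level representations and labels would replace the random stand-ins.
import numpy as np

def aggregate(Z, labels):
    """Average the D-dim frames in Z (N x D) over each distinct label."""
    labels = np.asarray(labels)
    return np.stack([Z[labels == u].mean(axis=0) for u in sorted(set(labels))])

def pca_directions(M, n_components):
    """Top principal directions (rows, orthonormal) of the aggregated matrix M."""
    Mc = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(Mc, full_matrices=False)  # rows of Vt = directions
    return Vt[:n_components]

def collapse(Z, directions):
    """Project frames onto the orthogonal complement of the given subspace:
    z' = z - sum_v (z . v) v for each orthonormal principal direction v."""
    return Z - (Z @ directions.T) @ directions

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 512))        # frame-level representations (stand-in)
spk = rng.integers(0, 40, size=1000)    # speaker label per frame
phn = rng.integers(0, 39, size=1000)    # phone label per frame

V_spk = pca_directions(aggregate(Z, spk), n_components=20)
V_phn = pca_directions(aggregate(Z, phn), n_components=20)

# near-orthogonality check: |dot product| between the two sets of directions
print(np.abs(V_spk @ V_phn.T).max())

Z_norm = collapse(Z, V_spk)             # speaker-normalized frames
```

Because the directions returned by the SVD are orthonormal, `collapse` is an exact orthogonal projection; applying it with directions learned on one speaker set to frames from unseen speakers is precisely the generalization test used later in the paper.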
**Identifying speaker and phone dimensions** Applying PCA to \(M_{\text{spk}}\) gives us \(|S|\) principal components that describe the directions that account for the largest variance among the speakers. A _speaker subspace_ is the subspace spanned by the eigenvectors corresponding to the top eigenvalues. Similarly, PCA on \(M_{\text{phn}}\) and \(M_{\text{joint}}\) gives us a phone subspace and a joint speaker-phone subspace, respectively. In section 4, we analyze the relationships between these matrices by measuring the similarity between their principal directions. We also look at the projection of \(M_{\text{joint}}\) on each of its principal directions to explore what specific information is encoded in each dimension. **Collapsing the speaker subspace** If speaker and phonetic information are represented in orthogonal subspaces, projecting frames to the subspace orthogonal to the speaker subspace would remove speaker information without affecting how well phonetic information can be extracted. Specifically, for a hidden vector \(z\) and a principal direction \(v\), the vector \(z^{\prime}=z-(z^{\top}v)v\) is orthogonal to \(v\). This step can be performed on multiple principal directions, and we refer to the projection as collapsing the speaker space when \(v\) is a principal component of \(M_{\text{spk}}\). Before doing this, we need to choose the number of principal components to use, which can be done based on the cumulative variance explained by the principal components for the training set, or by tuning development set performance (e.g., on phone discrimination). To test whether a speaker subspace generalizes across different sets of speakers, we use a speaker subspace learned from a different set of speakers for collapsing. ## 3 Experimental setup The two models used in this work are the autoregressive predictive coding (APC) model [10] and the contrastive predictive coding (CPC) model [11]. Both CPC and APC are trained to predict one or several future frame(s) given past context in the same utterance, with the main difference being that CPC is optimized with a contrastive learning objective, _i.e._ to distinguish the true future frame from a set of negative samples, whereas APC is directly optimized to minimize the distance between the predicted and the target frames. These models, which learn representations by trying to predict unobserved continuous frames in the future, stand in contrast to some other SSL models such as HuBERT [12] and wav2vec 2.0 [13], which perform masked prediction of quantized units given context on both sides, and which we intend to explore in future work.2 Footnote 2: We follow recent work in speech technology and machine learning in using the term _predictive coding_ to refer to error-driven learning based on forward prediction, contrasting with masked prediction or other objectives. In the more general sense used in information theory [14, 15] and computational neuroscience [16, 17, 18], masked prediction can also be viewed as a type of predictive coding. We use two CPC models of different sizes and one APC model. Both CPC-big and CPC-small are taken from the Zero Speech 2021 baseline3. We use our own implementation of APC, following [15]. The CPC and APC models differ in several ways. CPC has a 5-layer convolution that extracts 10-ms frames from wave samples for further processing, while APC operates on 10-ms Log Mel features. We extract the representations from the LSTM component of the models (also called the context network in CPC).
The prediction horizon, as parameterized by \(K\), is 12 for CPC and 3 for APC. The specific hyperparameters are chosen based on prior work [19, 10] and are listed in Table 1. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Model & LSTM layers & Extracted layer & Hidden units & \(K\) & Training data \\ \hline CPC-big & 4 & 2 & 512 & 12 & LL 6k hrs \\ CPC-small & 2 & 2 & 256 & 12 & LS 100 hrs \\ APC & 3 & 3 & 512 & 3 & LS 360 hrs \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of model specifications. Models are trained on LibriLight (LL) or LibriSpeech (LS) train-clean. Footnote 3: [https://github.com/zerospeech/zerospeech2021_baseline](https://github.com/zerospeech/zerospeech2021_baseline) For both CPC models, the negative samples are drawn from the same speaker but not necessarily from the same utterance as the frame to be predicted. It is possible that a CPC model that draws negative samples without speaker restrictions would be qualitatively different in terms of speaker and phone encoding, but we limit the current study to the within-speaker sampling case since it gives better phone classification results [19] and is thus the more common setup. ### Dataset We use data from LibriSpeech [20], a corpus of English read speech. We perform our initial analyses on the dev-clean portion, which contains 8 minutes of speech for each of 19 male and 21 female speakers.4 For testing generalization, we extract a speaker subspace from the train-clean-100 portion, which contains 25 minutes of speech for each of 126 male and 125 female speakers. We then evaluate on dev-clean and test-clean by collapsing the speaker subspace learned from train-clean-100. The test-clean portion consists of 8 minutes of speech per speaker for 20 male and 20 female speakers. The speaker sets of the three portions are mutually disjoint. Footnote 4: The documentation claims that dev-clean contains 20 males and 20 females, but through our analysis, we found that one of the "male" speakers (ID: 7976, name: JenniferRutters) appears to be female. The phone labels required for our analyses were obtained by performing forced alignment with an acoustic model created according to the official Kaldi recipe for LibriSpeech data5. We ignore frames aligned to "silence" and "spoken noise" labels in our analysis, leaving 39 phone categories (i.e. \(|P|=39\)). Footnote 5: [https://github.com/kaldi-asr/kaldi/blob/master/egs/](https://github.com/kaldi-asr/kaldi/blob/master/egs/) ### Evaluation To evaluate how much speaker information is removed and its effect on the phonetic information in the representations, we perform two types of tests. First, we train speaker and phone **probing classifiers**. We use linear classifiers, which are trained to predict either speaker or phone labels based on a single representation frame. We train each classifier on a random half of each speaker's utterances, using the other half for testing. We also use a machine ABX **phone discrimination** test [21]. This test asks whether triphone \(x\) is more similar to triphone \(a\) than \(b\), where \(a\) and \(x\) are tokens of the same type (_e.g._ _'aba'_) and \(b\) is of a different type ('_apa'_). The final ABX error rate is computed by aggregating the error rate with (\(a\), \(b\), \(x\)) iterating over all token combinations and triphone contrasts. In the _within-speaker_ ABX test, each triplet consists of tokens produced by the same speaker.
In the _across-speaker_ setting, \(a\) and \(b\) are drawn from the same speaker and \(x\) from a different speaker. We follow the Zero Speech challenge splits [19] for within- and across-speaker ABX tests. ### Baselines for speaker normalization experiments Previous work [8] has shown that utterance-level standardization (centering plus rescaling) effectively removes speaker information from CPC representations, improving performance on ABX tests and other tasks benchmarked in the Zero Speech 2021 challenge. We therefore use utterance-level standardization as a baseline, but also report centering alone, which we found to work better. For analyses where data from all speakers is available in advance, we apply these approaches at the speaker level rather than the utterance level for a fair comparison to our methods. ## 4 Analysis of subspaces To examine the relationship between speaker and phone subspaces, we compute the similarity between the top principal directions of \(M_{\text{spk}}\), \(M_{\text{phn}}\), and \(M_{\text{joint}}\), as extracted from dev-clean. Since we only care about measuring orthogonality, we use the absolute value of the dot product as our similarity measure; this ranges from 0 to 1. When comparing the principal directions of \(M_{\text{phn}}\) and \(M_{\text{spk}}\), we found very low similarity: e.g., for the CPC-big model, amongst the top 20 speaker directions, their similarity with the most aligned phone direction is on average only \(0.13\) (variance: \(0.002\), maximum: \(0.26\)), indicating that the speaker and phone subspaces are nearly orthogonal. We then compared the principal directions of \(M_{\text{phn}}\) and \(M_{\text{spk}}\) to those of \(M_{\text{joint}}\), as shown in Figure 1. We see that, of the top 13 directions of \(M_{\text{joint}}\), only two (directions 1 and 2) are similar to principal directions of both \(M_{\text{phn}}\) and \(M_{\text{spk}}\). Directions 0 and 3-11 of \(M_{\text{joint}}\) align only to phone directions, while direction 12 aligns only to a speaker direction. Moreover, while directions 1-2 are somewhat aligned with both speaker direction 0 and phone direction 1, this does not mean that speaker direction 0 is aligned with phone direction 1--in fact, their cosine similarity is only \(0.169\). Taken together, these observations further support the orthogonality of speaker and phone directions. The preceding analysis also informs us of the relative variance of phone and speaker encoding. While \(M_{\text{joint}}\) consists of combinations of 40 speakers and 39 phones, _i.e._ roughly the same number of speaker and phone categories, most of the top principal directions are used to encode phones. That is, the SSL representations encode much less inter-speaker variance and use more directions to discriminate phones than speakers. This is not a surprise since the training objective of both models entails learning to discriminate between phones and not speakers. What remains to be explained is how the models come to encode speaker information despite only being trained to predict or discriminate phones within each speaker. To better understand what is encoded in some of the top principal directions, we visualize the projection of \(M_{\text{joint}}\) onto its dimensions 0, 1, and 12 (Figure 2).
In the projection plot for dimension 0, we can see vowels lying on the positive end and fricatives and affricates lying on the negative end, indicating that this dimension discriminates phonetic categories along a sonority gradient while maintaining no information about speaker differences. Dimension 1, as well as dimension 2 which is not shown here, differentiates between genders, while the consistent ordering of the coloured dots across all columns implies that they also capture some phonetic information. In contrast, dimension 12 contains relatively greater inter-speaker variance, albeit still much smaller than the variance between phones captured in dimensions 0 and 1. The projections on these dimensions are consistent with our observations from the similarity analysis.
Figure 1: Similarity (absolute value of dot product) between the principal directions of \(M_{\text{phn}}\) & \(M_{\text{joint}}\) (top) and \(M_{\text{spk}}\) & \(M_{\text{joint}}\) (bottom), computed from dev-clean using representations from CPC-big. The patterns are similar for APC or CPC-small.
Figure 2: Projection of \(M_{\text{joint}}\) (extracted from dev-clean using CPC-big) onto its principal dimension 0 (left), 1 (middle), and 12 (right). Each dot represents a row in \(M_{\text{joint}}\), with each speaker plotted in a single column and phone identities colour-coded. The first 19 columns represent male speakers and the rest females. The legend on top is ordered by each phone's average projection on dimension 0.
## 5 Application to speaker normalization **Collapsing speaker subspace of seen speakers** We first explore speaker normalization for a set of known speakers, by learning the speaker subspace from dev-clean, and collapsing the speaker directions on the same dataset. Since the number of speakers is small (40) relative to the number of dimensions (256 or 512), we collapse all 40 of the speaker directions. (See below for results with fewer directions collapsed.) Table 2 reports results in comparison to speaker-level centering and standardization, because here all speakers are known in advance. Utterance-level centering and standardization show similar results but are slightly less effective in removing speaker information. Our method shows the best ABX results in nearly all cases, as well as close to 100% speaker error rates and little change in phone error rates, indicating that this method removes nearly all the speaker information while improving the accessibility of phone information for unsupervised tasks. We also see from the baselines that removing speaker information need not _always_ improve phone discrimination: in particular, applying rescaling on top of centering increases both the speaker error rate _and_ the phone classification and ABX error rates. **Collapsing speaker subspace of unseen speakers** We first explore how results vary depending on the number of dimensions (and % of variance) that are removed. Figure 3 illustrates the results on CPC-big, comparing the case where the speaker subspace is learned and applied to the same speakers and the case where we apply it to new speakers. When we have relatively few speakers, as in dev-clean, and we collapse the speaker subspace of the same speakers, including all of the speaker dimensions is helpful.
However, we see that when generalizing from the large number of speakers in train-clean-100, many of the lower principal components appear to be overfitting: after about 50 dimensions, speaker error rate on the dev set increases only slowly while ABX error also begins to rise. Finally, after learning the speaker directions from the training set, we choose the number of directions to collapse based on the best across-speaker ABX scores on the development set, and present results for both the development and test sets in Table 3. For CPC-big, CPC-small, and APC, we collapse speaker subspaces that explain 98%, 95%, and 95% of training variance (57, 36, and 30 dimensions, respectively). We compare to utterance-level centering, the baseline that gives the best results for the ABX task (of those applicable to previously unseen speakers). Both methods remove a similar amount of speaker information, but our method is better at the ultimate goal of improving phone discrimination in nearly all cases (the exception being the APC model on the test set). Moreover, it can be applied in a fully streaming setting since the speaker subspaces to collapse are computed in advance. ## 6 Conclusions In analyses of SSL speech representations based on predictive coding models, we showed that speaker information and phonetic information are encoded in orthogonal dimensions of the representation space, indicating that these models are implicitly disentangling the two sources of information. This insight led to a simple but effective speaker normalization technique that requires only speaker-labeled training data, and leads to improved phone discrimination on the test set. Important avenues for future work include investigating the extent to which the speaker dimensions generalize to out of domain data (e.g., other languages or genres of speech), and whether the orthogonality we found extends to other SSL models based on different principles (e.g., masked prediction [12, 13]). \begin{table} \begin{tabular}{l l l l l l} \hline \hline Model & Error rate (\%) & Original & \begin{tabular}{l} Speaker-level \\ centered \\ \end{tabular} & \begin{tabular}{l} Speaker space \\ collapsed \\ \end{tabular} \\ \hline \multirow{4}{*}{CPC-big} & Speaker & 0.46 & 76.39 & +12.00 & **96.55** \\ & Phone & 24.32 & **24.29** & +2.14 & 24.49 \\ & ABX Within & 3.38 & 3.39 & +1.35 & **3.20** \\ & ABX Across & 4.11 & 3.98 & +1.34 & **3.75** \\ \hline \multirow{4}{*}{CPC-small} & Speaker & 11.04 & 67.95 & +11.21 & **95.54** \\ & Phone & 35.48 & **35.10** & +1.07 & 36.34 \\ & ABX Across & 6.22 & 6.14 & +1.97 & **5.11** \\ & ABX Across & 8.10 & 7.47 & +1.69 & **6.78** \\ \hline \multirow{4}{*}{APC} & Speaker & 15.87 & 83.42 & +4.95 & **97.23** \\ & Phone & 35.93 & **35.78** & +1.20 & 36.44 \\ \cline{1-1} & ABX Within & 6.77 & 6.50 & +0.50 & **6.34** \\ \cline{1-1} & ABX Across & 9.83 & **8.96** & +0.36 & 9.04 \\ \hline \hline \end{tabular} \end{table} Table 2: Probing (speaker & phone) and ABX (within- across-speaker) error rates on dev-clean. \begin{table} \begin{tabular}{l l l l l l} \hline \hline \multicolumn{2}{l}{Model} & Error (\%) & Original & Unt. 
centered & Collapsed \\ \hline \multirow{4}{*}{CPC-big} & Speaker & 0.46 & **69.88** & 64.51 \\ & \multirow{2}{*}{CPC-big} & ABX Within & 3.38 & 3.43 & **3.20** \\ & & ABX Across & 4.11 & 4.06 & **3.76** \\ \cline{2-6} & \multirow{2}{*}{CPC-small} & Speaker & 11.04 & 69.20 & **78.12** \\ & & ABX Within & 6.22 & 5.86 & **5.29** \\ & & ABX Across & 8.10 & 7.22 & **6.92** \\ \cline{2-6} & \multirow{2}{*}{APC} & Speaker & 15.87 & 77.3 & **79.74** \\ & & ABX Within & 6.77 & 6.42 & **6.38** \\ & & ABX Across & 9.83 & **9.02** & 9.14 \\ \hline \multirow{4}{*}{CPC-big} & \multirow{2}{*}{CPC-big} & ABX Within & 3.29 & 3.27 & **3.10** \\ & & ABX Across & 4.22 & 4.11 & **4.01** \\ \cline{1-1} \cline{2-6} & \multirow{2}{*}{CPC-small} & ABX Within & 5.86 & 5.54 & **4.85** \\ \cline{1-1} & & ABX Across & 7.48 & 6.91 & **6.37** \\ \cline{1-1} \cline{2-6} & \multirow{2}{*}{APC} & ABX Within & 6.62 & **6.13** & 6.19 \\ \cline{1-1} & & ABX Across & 9.47 & **8.72** & 8.98 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of speaker normalization on dev-clean (top) and test-clean (bottom) by collapsing speaker subspaces learnt from train-clean-100. Figure 3: Speaker classification (blue) and across-speaker ABX (orange) on dev-clean after collapsing speaker subspaces learned from either train-clean-100 (solid lines) or dev-clean (dashed lines). The large dots are where the number of dimensions covers 95% of the variance.
2304.05231
Design of a low-velocity impact framework for evaluating space-grade materials
Material deformation and failure under impact loading is a subject of active investigation in space science and often requires very specialized equipment for testing. In this work, we present the design, operational analysis and application of a low-velocity ($\sim 100$ m/s) projectile impact framework for evaluating the deformation and failure of space-grade materials. The system is designed to be modular and easily adaptable to various test geometries, while enabling accurate quantitative evaluation of plastic flow. Using coupled numerical methods and experimental techniques, we first establish an operating procedure for the system. Following this, its performance in two complementary impact configurations is demonstrated using numerical and experimental analysis. In the first, a Taylor impact test is performed for predicting the deformed shape of a cylindrical projectile impinging on a rigid substrate. In the second, deformation of a plate struck by a rigid projectile is evaluated. In both cases, physics-based models are used to interpret the resulting fields. We present a discussion of how the system may be used both for material property estimation (e.g., dynamic yield strength) as well as for failure evaluation (e.g., perforation and fracture) in the same projectile impact configuration.
Vineet Dawara, Ashok Bajantri, Harish Singh Dhami, SVS Narayana Murty, Koushik Viswanathan
2023-04-11T14:04:08Z
http://arxiv.org/abs/2304.05231v1
# Design of a low-velocity impact framework for evaluating space-grade materials ###### Abstract Material deformation and failure under impact loading is a subject of active investigation in space science and often requires very specialized equipment for testing. In this work, we present the design, operational analysis and application of a low-velocity (\(\sim 100\) m/s) projectile impact framework for evaluating the deformation and failure of space-grade materials. The system is designed to be modular and easily adaptable to various test geometries, while enabling accurate quantitative evaluation of plastic flow. Using coupled numerical methods and experimental techniques, we first establish an operating procedure for the system. Following this, its performance in two complementary impact configurations is demonstrated using numerical and experimental analysis. In the first, a Taylor impact test is performed for predicting the deformed shape of a cylindrical projectile impinging on a rigid substrate. In the second, deformation of a plate struck by a rigid projectile is evaluated. In both cases, physics-based models are used to interpret the resulting fields. We present a discussion of how the system may be used both for material property estimation (e.g., dynamic yield strength) as well as for failure evaluation (e.g., perforation and fracture) in the same projectile impact configuration. Keywords -Gas gun, Large strain deformation, projectile impact, low-velocity impact Introduction Failure of metallic materials under projectile impact is a subject of active investigation in several sub-domains of space science. For instance, mitigating satellite failure due to space debris requires materials and structures that are resilient enough to withstand high strain-rate deformation of \(\sim 10^{3}-10^{5}\) s\({}^{-1}\)[1, 2, 3]. While extensive work has been carried out on hypervelocity (\(>1\) km/s) impact events in the context of space debris in low-earth orbit [4, 5], relatively lesser work has focused on low-velocity (\(10-100\) m/s) impact events. However, these events are equally important to understand--as an example, Whipple shields can be significantly less effective at low velocity impact due to complete projectile penetration and lack of fragmentation [6]. While hypervelocity impact is common in low earth orbit applications, low-velocity events are expected to be more likely for deep-space missions. The performance of a complex structure is commonly obtained by piecing together behaviour of representative samples under individual unit impact or deformation events [7, 8]. These events can help drive the design of new materials and/or structures with engineered internal features [9, 10]. Quantifying impact-induced failure is also vital for evaluating the performance of more commonplace materials such as structural steels and glass sandwich panels [11, 12] that are used for spacecraft and launch vehicle applications. There is hence a fundamental need for a well-controlled impact testing system for materials performance evaluation, especially in the context of space applications. Unfortunately, while many impact test systems exist in research laboratories across the world (e.g., see Ref. [6]), with several research results being continuously published, such a system is difficult to reproduce from scratch, due to the paucity of available design information, operating procedures and calibration data. 
This manuscript attempts to fill that gap by outlining the design and performance of a projectile impact test-bed. We provide the basic design, detailed performance evaluation, as well as demonstrations of the use of this setup in two common impact configurations [13]. In the first--the so-called Taylor impact test--a deformable projectile is struck against a comparatively rigid target plate. This configuration is most often used for material property evaluation--the yield strength of the projectile material may be evaluated from its deformed shape [14]. The second test, termed plate impact, is a complementary configuration, in which a comparatively rigid projectile is impacted against a deforming target plate. Here, the focus is on evaluating failure mechanisms in the target material. In either case, specific microstructural mechanisms such as adiabatic shear banding and crack bifurcation can be correlated with impact parameters and kinematically measured deformation fields [15, 16, 17, 18, 19]. These mechanisms are, in general, not amenable to direct real-time measurements, necessitating the use of techniques such as 'quick stop' projectiles to infer temporal evolution [20]. Results of these experiments often suggest the use of structural and geometrical changes to either accentuate or mitigate the occurrence of plastic failure in the projectile, the target, or both [21, 22, 23]. The primary kinematic/geometric parameters in a typical projectile impact test in either configuration described above are the projectile velocity, size and shape. The desired projectile velocity is achieved using a gas gun, wherein an inert gas (such as He, Ar or N\({}_{2}\)) is compressed to a suitable initial pressure and allowed to expand rapidly in order to accelerate the projectile. The result is projectile exit velocities that can range from 10-250 m/s [24]. The accelerated projectile travels along a barrel and strikes a target plate placed a fixed distance away. One of several impact models may then be employed to evaluate failure in either the projectile or the target, as a function of impact parameters [25, 26, 27]. While the setup described and analyzed in this manuscript is not unlike common frameworks described before, we present a complete design-to-performance description using a combination of quantitative mechanics models and experimental observations for both impact configurations. The primary purpose of the framework is to evaluate failure mechanisms in space-grade materials that are of specific interest to the Indian Space Research Organization. The design does not focus on maximizing projectile velocity; rather, the emphasis is on achieving versatility in conducting different projectile-target impact experiments. The experimental configuration is also easily adaptable for realizing larger target sizes and a wider variety of target shapes to simulate actual parts. We envisage this manuscript as being the first in a series of works that explore a systematic method for designing lightweight structural materials for space applications. The manuscript is organized as follows. The basic design and realization details of the impact framework are discussed in Sec. 2. The main results are presented in three parts. Firstly, coupled numerical simulations and experimental velocity measurements are used to evaluate performance envelopes in Sec. 3. Following this, we present investigations of the two impact configurations described above--Taylor impact (Sec. 4) and plate deformation (Sec. 5).
In each of these tests, experimental results are accompanied by a presentation of mechanics-based models to quantitatively understand the final deformation. In all three parts, the findings are followed by a short exposition on how the corresponding results may be used in a practical setting. Finally, a broad overall discussion is presented in Sec. 6, along with some concluding remarks.

## 2 Experimental configuration

The primary components of the impact framework are a gas gun, a barrel of fixed length and a target chamber housing the target, see Fig. 1(a). These are mounted on dovetail guides fixed to a supporting structure. The overall dimension of the setup is 2.5 m \(\times\) 1 m \(\times\) 1 m. Dovetail mounting enables easy projectile loading, while also allowing changes to individual components to be made quickly. The detailed description of individual components, as well as the triggering mechanism and velocity measurement system, is given below.

### Gas chambers and triggering mechanism

The gas gun has a total volume of 1.8 litres and is designed to withstand a maximum pressure of 50 bar, see Fig. 1(b). The overall design and triggering mechanism is based on that commonly used for split-Hopkinson pressure bars (see, for instance, Ref. [28]). The gun has two internal chambers, termed the valve and main chambers. A two-valve arrangement regulates gas flow from an external compressed air source (here a portable air compressor) to the gas chamber. The projectile is typically supported by coaxial Teflon rings, called sabots. One or two sabots are used, depending on the projectile length. Their use is necessary because they minimize friction between the projectile and the interior barrel surface, while also helping align projectiles of arbitrary diameter centrally with the barrel axis. The projectile is first loaded from the target side, through the barrel, to a position close to the end connected to the gun (termed the breech end). This is done by moving the target plate on the dovetail and using a long enough tube to push the projectile through the length of the barrel. The actuation mechanism to accelerate the projectile is then operated as follows, see Fig. 1(b) and (c). When valve 1 is opened and valve 2 closed, pressurized gas from the external source flows into the valve chamber. This gas then enters the main chamber via the clearance provided between the sliding disc and the interior surface of the valve chamber. O-rings are provided on the sliding disc and support plate to prevent gas from flowing directly to the barrel. Once the main chamber is filled to the required pressure (measured using a pressure gauge), valve 1 is closed.

Figure 1: Operational drawings of the gas gun system. (a) Schematic of the complete system showing the gun, barrel and target chamber mounted on guideways for ease of disassembly. Panels (b) and (c) show the operation mechanism using the two-valve system in sequence. The projectile is at the breech end of the gun, and is fired by air escaping from the main chamber.

The projectile placed at the breech end of the barrel is now triggered by opening valve 2. When this is done, pressure inside the valve chamber drops and high-pressure gas from the main chamber flows back, pushing the sliding disc. It enters the barrel immediately through the ports, see Fig. 1(c), providing the required driving force for projectile motion. This simple triggering system is inherently advantageous as it does not require any operating skill, and can be easily reconfigured for subsequent tests.
Further, experiments have been performed to demonstrate that a projectile velocity of \(>100\) m/s can be easily achieved by simply using compressed air, with higher velocities attainable using a light gas such as He [29].

### Barrel and velocity measurement system

The barrel housing the projectile has an inner diameter of 21 mm and total length of 1.4 m. Five sets of four vent holes (10 mm diameter, 50 mm apart, arranged at 90\({}^{\circ}\) from each other) are provided along the length of the barrel to allow gas to escape, see Fig. 2. If this is not done, residual high gas pressure can cause secondary impacts that obscure the results. The location of the first hole, at a distance of 740 mm from the barrel breech end, is chosen to maximize projectile exit velocity, while preventing secondary impacts. The velocity measurement system consists of IR sensors and a microcontroller (Arduino Mega 2560), as shown in Fig. 2. An oscilloscope is also used in parallel with the microcontroller for verifying the measurements. A camera and light source arrangement may also be employed across opposite-lying vent holes for this purpose.

Figure 2: Measurement methods for determining projectile exit velocities from the barrel. (Left) Section near the barrel end showing vent holes and the three independent measurement systems--a camera/light source combination, IR sensors with a microcontroller and an oscilloscope for interrupt measurements. (Right) Target chamber cross section showing the plate mounting details and momentum trap.

Projectile velocity within the barrel, and prior to entering the target chamber, is measured using two IR sensors that read signals from the third and fifth holes, spaced 100 mm apart, see Fig. 2. Each IR sensor consists of an emitter and receiver placed at opposite-facing vent holes on the barrel. The sensor's voltage output changes from low to high at the instant when the projectile is at the vent hole, and is recorded by the microcontroller. The projectile velocity is estimated as the distance between these holes divided by the time difference between the two crossing events. A second velocity measurement is performed using the voltage output of the IR sensor placed at the third hole, using a digital oscilloscope (GW Instek GDS 11028). When the projectile passes through the hole, it obstructs the receiver, and the voltage output changes from high to low and back. The width of this signal gives the time taken by the projectile to pass through the hole, and the velocity can be estimated by dividing the length of the projectile by the obtained time interval. Finally, a third velocity measurement, for comparison, is also performed by capturing a high-speed image sequence through the second vent hole using a high-speed camera (Photron AX100). The camera and light source are placed opposite each other, see Fig. 2. Initially, the camera records a bright image that changes to dark as the projectile passes. By counting the number of dark images \(n\) and noting the frame rate (FPS) set for the camera, the velocity is estimated as

\[V_{0}=\frac{L_{0}}{n/\text{FPS}} \tag{2.1}\]

where \(L_{0}\) is the length of the projectile. All three measurement methods gave identical values (within \(\pm 1\%\)) for the projectile velocity and are hence reported interchangeably in the rest of the manuscript.
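The three measurements above reduce to simple kinematic formulas. The following Python snippet is a minimal sketch of how the raw readings may be converted to exit velocities; the example values are illustrative and not taken from the actual data-acquisition code.

```python
def v_time_of_flight(t3_s, t5_s, hole_spacing_m=0.100):
    """Time-of-flight estimate: IR sensors at the third and fifth vent
    holes (100 mm apart) trigger at times t3_s and t5_s."""
    return hole_spacing_m / (t5_s - t3_s)

def v_pulse_width(pulse_s, proj_len_m):
    """Oscilloscope estimate: a projectile of length L0 blocks the IR
    beam at one hole for a duration pulse_s."""
    return proj_len_m / pulse_s

def v_camera(n_dark, fps, proj_len_m):
    """High-speed imaging estimate, Eq. (2.1): V0 = L0 / (n / FPS)."""
    return proj_len_m / (n_dark / fps)

# Illustrative readings for a 26.1 mm long projectile:
print(v_time_of_flight(1.20e-3, 2.18e-3))   # ~102 m/s
print(v_pulse_width(0.26e-3, 26.1e-3))      # ~100 m/s
print(v_camera(5, 20000, 26.1e-3))          # ~104 m/s
```

As the paper reports, the three estimates are expected to agree to within about \(\pm 1\%\) in practice.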
### Target chamber

The fully enclosed target chamber consists of a circular mounting fixture with holes for mounting the target plate, see Fig. 2. This circular fixture is easily replaceable for mounting different sizes of targets, varying from 50-120 mm (square) to 150-200 mm (circular) dimensions. Appropriate spacer lengths can be provided to change the distance between the barrel exit and the target plate, and to accommodate load cells for force measurement. Three viewing windows--two parallel to each other and one perpendicular--are provided for imaging purposes. A momentum trap, made either of wood or silicone rubber, is placed at the end to arrest projectile motion in case of complete target perforation.

## 3 Results I: Gas gun performance evaluation

The basic requirement of a single impact experiment is that a projectile of mass \(m\) be accelerated to a pre-determined velocity \(V_{0}\). Since the gas gun is pressure-controlled, we require a scheme for evaluating the projectile exit velocity as a function of the filled chamber pressure \(p_{0}\) and \(m\). This is obtained using numerical 1D gas dynamics calculations and results in so-called velocity-pressure curves. These numerical results are then compared with experimental measurements to provide an operation scheme for the gun.

### Numerical 1D gas dynamics calculations

As discussed in Sec. 2.1, as gas exits the main chamber, it causes the projectile to move forward. This is essentially effected via compression and rarefaction waves that develop inside the main chamber, causing pressure variations behind the projectile [29]. Since the pressure behind the projectile determines the force moving it forward, these waves influence its velocity during motion inside the barrel. Given the finite length of the main gas chamber, reflected waves are soon set up and travel forward towards the projectile, further lowering the pressure behind it. An analysis of these pressure fluctuations as well as simulations of projectile velocity are performed using the description in Ref. [29]. Only the final results are presented in this section. Figure 3(left) shows the gas dynamics calculation for one such situation when the initial gas pressure is set to 5 bar, the projectile mass is 10 g, \(A_{b}\) is the cross-sectional area of the barrel, and \(a_{0}=435\) m/s is the initial speed of sound. The gas is assumed to be ideal with a specific heat ratio of 1.4. The horizontal (time) and vertical (distance along barrel) axes are presented in non-dimensional form using \(p_{0},M,a_{0}\) and \(A_{b}\). The transition (line OD) from the main chamber to the barrel is assumed to be sudden in this analysis. The solid and dashed lines in this figure are the two characteristic lines corresponding to isentropic one-dimensional gas flow, given by

\[\frac{D(u\pm\sigma)}{Dt}=0\;\;\text{along}\;\;u\pm a\;\;\text{characteristic lines} \tag{3.1}\]

where \(u\) is the gas velocity, \(a\) is the speed of sound in the gas (in general, not constant), and \(\sigma=\int dp/(a\rho)\) is the Riemann function, with \(p\) and \(\rho\) the pressure and density of the gas, respectively. The \(u-a\) (\(u+a\)) lines, shown by dashed (solid) lines, correspond to disturbances moving away from (towards) the projectile. Only a few characteristic lines are drawn, along with the chamber and barrel schematic (middle). When the projectile moves, it sends corresponding disturbances (rarefaction waves) towards the main chamber, see Fig. 3(left).
As these waves arrive at the transition between the barrel and the main chamber, they are partly reflected towards the projectile (compression waves) and partly transmitted (rarefaction waves) to the main chamber, where they are further reflected (reflected waves) on arriving at the other end.

Figure 3: Gas dynamic calculations for dynamic gas pressure and exit velocity of projectile. (Left) A space-time diagram showing characteristic curves for rarefaction waves behind the projectile. Corresponding sections of the main gas chamber and breech end of the barrel are shown in the middle schematic. Solid (dashed) lines correspond to \(u+a\) (\(u-a\)) characteristics. (Right) Variation of pressure and velocity with distance along the barrel. Projectile mass \(M=10\) g at initial pressure \(p_{0}=5\) bar, \(A_{b}\) is the cross-sectional area of the barrel, and the initial speed of sound is \(a_{0}=435\) m/s.

In this figure, the region OAB consists of only rarefaction waves, with no effect of reflected waves, and is termed the simple wave region. When reflected waves return to the transition, they again undergo partial reflection/transmission away from/towards the projectile. The characteristic gas equation, Eq. 3.1, is solved using a step-by-step numerical calculation, simultaneously with Newton's second law for tracking the projectile's trajectory. The result is a gas velocity \(u\) and pressure \(p\) immediately behind the projectile as it traverses the length of the barrel, see Fig. 3(right). Here \(u\) and \(p\) are non-dimensionalized using the speed of sound in static air \(a_{0}\) and the initial gas pressure \(p_{0}\). The horizontal distance \(x\) along the barrel is again non-dimensionalized by a characteristic length scale \(Ma_{0}^{2}/p_{0}A_{b}\). It is clear from the \(u\) vs. \(x\) graph that the projectile velocity tends to saturate after about \(x\sim 0.1Ma_{0}^{2}/(A_{b}p_{0})\). At this length, the gauge pressure behind the projectile also drops significantly, resulting in negligible driving force. For a given \(p_{0}\), the projectile exit velocity \(V_{0}\) is hence \(u\) evaluated at \(x\) equal to the distance to the first vent hole.

### Experimental measurement: velocity-pressure curve

The results of the 1D calculations provide the projectile exit velocity \(V_{0}\) as a function of filling pressure \(p_{0}\) for various \(m\). To calibrate this prediction for the system described in Sec. 2, experimental data was obtained for various projectiles, see Table 3.1. Friction between the sabot and the (stainless steel) barrel was neglected since the corresponding friction coefficient is very small (0.05-0.08, see Ref. [30]). Two sabots, totalling one-third of the projectile length, were used with all projectiles, except for the smallest one (mass 10 g). For this case, owing to length constraints, a single sabot with one-third length was used instead. Before presenting the experimental measurements, the following distinctions between the theoretical model and the actual configuration in Sec. 2 must be considered. Firstly, since significant gas escape occurs when the projectile crosses the first set of vent holes, the 1D gas analysis computation is only performed until the first hole. Secondly, in the analysis, the main chamber is assumed to be connected to the barrel by a sudden transition region, while the actual design uses a more gradual transition. Hence, the gas chamber contains 10% less gas volume than considered in the model.
As a first approximation, this volume correction is incorporated by decreasing the predicted projectile kinetic energy by 10%. The final predicted relation between the projectile exit velocity \(V_{0}\) and the initial gas pressure \(p_{0}\) is summarized in Fig. 4 (left), for various values of (combined projectile + sabot) mass \(m\). This graph, termed a velocity-pressure curve, provides the initial filling pressure \(p_{0}\) required for accelerating a combined mass \(m\) to the required \(V_{0}\). Experimental validation of this theoretical curve is performed using a total of 8 to 10 velocity readings for a given \(m\), using simultaneous independent velocity measurements (_cf._ Sec. 2.2). The mean velocity (square marker) is plotted over the predicted curves in Fig. 4 (left). Error bars represent the standard deviation of the measurements. From the data in this figure, it is clear that projectiles can be propelled to \(V_{0}\sim 100\) m/s at \(p_{0}=5.5\) bar. While compressed air was used for these results, the velocity can be nearly doubled if a light gas, such as He, is used instead [31].

\begin{table} \begin{tabular}{c c c c c} **Combined (projectile+sabot) mass (g)** & \(10\) & \(24\) & \(35\) & \(50\) \\ \hline \hline **Material** & Al & Al & SS & SS \\ **Initial diameter** \(D_{0}\) (mm) & \(10.0\) & \(14.0\) & \(13.0\) & \(13.7\) \\ **Initial length** \(L_{0}\) (mm) & \(26.1\) & \(50.0\) & \(30.5\) & \(44.0\) \\ \hline \end{tabular} \end{table}

Table 3.1: Blunt cylindrical projectile dimensions for velocity measurements.

A systematic discrepancy is clear in Fig. 4(left) between the predicted and measured \(V_{0}\) curves, primarily because the numerical model assumes an idealized one-dimensional geometry, whereas flow from the main chamber to the barrel is actually two-dimensional, as described in Sec. 2.1. To account for this systematic deviation, we note the following primary observations from the plot of the kinetic energy (\(=1/2\,mV_{0}^{2}\)) of the projectile as a function of initial pressure \(p_{0}\), see Fig. 4(right). Firstly, all the theoretical predictions (solid lines) fall within a very narrow band (shaded grey region, labelled 'prediction') approximated by a single straight line (\(E_{p}\approx\lambda p_{0}\)) with slope \(\lambda=20.9\) J/bar. Secondly, the measured values are also found to lie on fitted straight lines (dashed lines) for each projectile mass. These best-fit lines have slopes smaller than \(\lambda\), indicating that the energy loss increases with \(p_{0}\). Except for the 10 g projectile (which, as mentioned earlier, uses a single sabot), the slopes of the experimental kinetic energy-pressure lines differ only marginally; thus, the theoretical band can be scaled by a factor \(\lambda_{1}\simeq 0.71\) (shaded light orange, labeled '2 sabots') to coincide with the measurements. It should be recognized that this factor \(\lambda_{1}\) accounts for the energy losses caused by two-dimensional flow from the main chamber to the barrel, and by backflow from the main chamber to the atmosphere via the valve chamber. Thirdly, as aforementioned, all the projectiles are supported by two sabot structures, except for the 10 g projectile, which has only one. Moreover, with a single sabot, larger energy loss occurs due to gas flow past the clearance between the sabot and the inner barrel surface. This gas leak was also observed when measuring velocity using high-speed imaging.
To account for this, the theoretical band is further scaled by a factor \(\lambda_{2}\simeq 0.64\) (shaded light cyan region, labeled '1 sabot') to match experimental values for the 10 g, single-sabot projectile.

Figure 4: An operating velocity-pressure curve (left) and kinetic energy-pressure curve (right) for the gas gun setup presented in this work. Theoretical lines (solid lines) and experimental measurements (square markers) are shown superimposed. The dashed lines are fits to the experimental data, and shaded regions indicate theoretical bands as described in the text.

### Implications

The operating regime for carrying out impact experiments using a given projectile can be selected based on Fig. 4. For a given projectile of mass \(m\) and required exit velocity \(V_{0}\), the required initial operating pressure is obtained from the abscissa of the corresponding curve in this figure. The pneumatic triggering mechanism used here allows projectile velocities of \(\sim\)100 m/s with just compressed air, resulting in much higher strain rates than in conventional configurations such as Izod-Charpy or drop-weight tests [13]. This velocity range is suitable for evaluating deformation at strain rates of \(10^{3}-10^{5}\)/s. The present design does not emphasize optimizing velocity, considering the complexity involved in achieving higher velocities, such as the use of multiple stages, but instead focuses on conducting versatile laboratory-scale low-velocity impact experiments. However, as evident from the 1D gas dynamics analysis in Sec. 3.1, the maximum projectile velocity in the current gas gun setup can be further increased by increasing the initial \(p_{0}\) or by using helium [31]. It is also worth mentioning that such analyses are usually integrated with the design phase of the gun in order to estimate the various dimensions of the gas chamber and barrel for the desired velocity range. In summary, for a given initial pressure \(p_{0}\) and projectile mass \(m\), the theoretical kinetic energy can be estimated as \(E_{p}=\lambda p_{0}\); the output kinetic energy of the projectile in our setup is then obtained as either \(E_{m}=\lambda_{1}E_{p}\) if two sabots are used, or \(E_{m}=\lambda_{2}\lambda_{1}E_{p}\) if a single sabot is used. This operating rule is illustrated numerically below.
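The following Python snippet is a minimal sketch of the operating rule just summarized; the constants \(\lambda,\lambda_{1},\lambda_{2}\) are taken directly from the calibration above, while the example masses and pressures are illustrative.

```python
import math

LAM = 20.9    # J/bar: slope of the theoretical band, E_p ~ LAM * p0 (Sec. 3.2)
LAM1 = 0.71   # empirical scaling of the band for two-sabot projectiles
LAM2 = 0.64   # additional scaling for single-sabot projectiles

def exit_velocity(p0_bar, mass_kg, sabots=2):
    """Estimated exit velocity V0 for filling pressure p0 and combined
    (projectile + sabot) mass, using E_m = lam1*E_p or lam2*lam1*E_p."""
    energy = LAM * p0_bar * LAM1 * (LAM2 if sabots == 1 else 1.0)
    return math.sqrt(2.0 * energy / mass_kg)

def required_pressure(v0, mass_kg, sabots=2):
    """Inverse relation: filling pressure needed for a target exit velocity."""
    scale = LAM1 * (LAM2 if sabots == 1 else 1.0)
    return 0.5 * mass_kg * v0**2 / (LAM * scale)

# The 10 g single-sabot projectile at 5.5 bar reaches roughly 100 m/s:
print(exit_velocity(5.5, 0.010, sabots=1))   # ~102 m/s
print(required_pressure(100.0, 0.050))       # ~16.8 bar for a 50 g projectile
```

Naturally, pressures beyond the 50 bar chamber rating, or far outside the calibrated range of Fig. 4, should not be extrapolated from this rule.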
## 4 Results II: Taylor impact test

Having established a method to determine the operating parameters, we now proceed to discuss two complementary impact tests. The first is the Taylor impact test, where the projectile is deformable and the target plate is comparatively rigid. The converse test is discussed in Sec. 5. The Taylor impact test uses a deformable cylindrical projectile to strike a stationary plate. Compressive elastic and plastic waves start propagating towards the rear end of the projectile immediately after impact. Elastic waves, being much faster, reach the rear end first and are reflected as tensile waves, which then undergo subsequent reflection from compressive waves arising from the elastic-plastic boundary. This cyclic process results in projectile deceleration until a finite plastic region is developed at the impact end, see the schematic in Fig. 5. We now present a theoretical analysis of this process, followed by experimental evaluation using our setup.

### Theoretical analysis

In his pioneering work, Taylor proposed a one-dimensional simplified model [14] to estimate the dynamic yield strength in terms of the density, impact velocity, and initial/final lengths of the projectile. Several modifications to this model have been proposed subsequently, to incorporate more complex constitutive material properties and to match experimental observations [32, 33, 34]. However, most of these models are one-dimensional and rely on measurements of the final dimensions of the impacted specimen [35]. To predict the final projectile geometry, we use an axisymmetric model in which the plastic zone geometry is approximated as the frustum of a cone [36]. This model considers incompressible plastic deformation that causes bulging at the impact end and incorporates spatio-temporal variations of strain and strain rate, thus also predicting the deformation history. Given the constitutive equation of the material, the deformation of the impacting projectile can be obtained by solving a set of ordinary differential equations (ODEs) in time, derived purely based on physical arguments. We use this formulation to predict the influence of the impact velocity \(V_{0}\) on the mechanics of projectile deformation. The operating parameters required to achieve a given impact velocity are estimated from the velocity-pressure curve presented earlier (_cf._ Fig. 4).

Figure 5: Schematic of Taylor impact showing the meaning of various symbols. The definitions of the quantities \(\beta\) and \(\gamma\) are also displayed.

We have made two minor modifications to the model presented in Ref. [36]. Firstly, the effective plastic strain and strain rates are determined by averaging the strain rate components along the elastic-plastic boundary, since the unyielded part of the projectile is subjected to stress waves generated at this boundary. The corresponding expressions for the strain rate components are (_cf._ Ref. [36]):

\[\dot{\epsilon}_{xx}=-\frac{\beta^{2}vx}{D_{0}(\gamma-1-\log\gamma)(D_{0}+\beta x)}\]
\[\dot{\epsilon}_{rr}=\dot{\epsilon}_{\theta\theta}=\frac{\beta^{2}vx}{2D_{0}(\gamma-1-\log\gamma)(D_{0}+\beta x)}\]
\[\dot{\epsilon}_{xr}=-\frac{\beta^{2}vr}{2(\gamma-1-\log\gamma)(D_{0}+\beta x)^{2}}\]

Here, the initial diameter, length, and impact velocity of the projectile are denoted by \(D_{0}\), \(L_{0}\), and \(V_{0}\), respectively. During impact, \(D\) and \(v\) represent the bulge end diameter and projectile velocity, see the schematic in Fig. 5. Also depicted there are the axial shortening \(h\), undeformed length \(l\), and plastic zone size \(s\) that develop during impact, as well as the dimensionless ratios \(\beta\) and \(\gamma\). The coordinate system is located at the center of the elastic-plastic boundary (see bottom row, Fig. 5). At this boundary (\(x=0\)), the average strain rates are obtained by integrating over the cross-sectional area,

\[\dot{\bar{\epsilon}}_{xr}=\frac{4}{\pi D_{0}^{2}}\int_{0}^{2\pi}\int_{0}^{D_{0}/2}\dot{\epsilon}_{xr}\Big{|}_{x=0}\,r\,dr\,d\theta=\frac{\beta^{2}v}{6D_{0}(\gamma-1-\log\gamma)}\]

and the other components are identically zero (\(\dot{\bar{\epsilon}}_{xx}=\dot{\bar{\epsilon}}_{rr}=\dot{\bar{\epsilon}}_{\theta\theta}=0\)). The effective plastic strain rate can now be obtained by using the von Mises criterion:

\[\dot{\bar{\epsilon}}_{p}=\sqrt{\frac{2}{3}\dot{\bar{\epsilon}}_{ij}\dot{\bar{\epsilon}}_{ij}}=\frac{\beta^{2}v}{3\sqrt{3}D_{0}(\gamma-1-\log\gamma)} \tag{4.1}\]

The effective plastic strain can be determined by time integration of this equation.
Additionally, the yield stress (\(\sigma_{y}\)) is determined assuming a Johnson-Cook constitutive relation, ignoring thermal effects,

\[\sigma_{y}=\left[A+B\bar{\epsilon}_{p}^{n}\right]\left[1+C\ln\left(\frac{\dot{\bar{\epsilon}}_{p}}{\dot{\epsilon}_{0}}\right)\right] \tag{4.2}\]

where \(A,B,n,C\) and \(\dot{\epsilon}_{0}\) are material constants. Combining these relations, the following set of coupled non-linear ODEs is obtained

\[\frac{dl}{dt}=-c_{p} \tag{4.3}\]
\[\frac{dh}{dt}=v \tag{4.4}\]
\[\frac{dv}{dt}=-\frac{c_{e}(c_{e}+c_{p})}{El} \tag{4.5}\]
\[\frac{dD}{dt}=\frac{\beta v(\gamma-1)}{2(\gamma-1-\ln\gamma)} \tag{4.6}\]

along with the expression (4.1) for the strain rate and the constitutive equation (4.2). The symbols \(E\), \(c_{e}\), and \(c_{p}\) denote the Young's modulus, the elastic wave speed \(c_{e}=\sqrt{E/\rho}\), and the plastic wave speed \(c_{p}=\sqrt{\sigma_{y}/\rho}\), respectively. As pointed out in Ref. [36], numerical instability prevails at the beginning of time integration due to the presence of \(s\) in the denominator of the expression for \(\beta=(D-D_{0})/s\). Time integration is hence performed by setting constant values of \(\beta\) and \(\gamma\). The simulation steps are as follows--we begin with an initial guess for \(\beta\) and \(\gamma\) for the first iteration and obtain the final deformed geometry. The diameter after bulging (\(D_{f}\)) is determined using the final undeformed length \(l_{f}\) and plastic zone size \(s_{f}\) from the previous iteration, using the incompressibility condition. Further, we set \(\beta\) and \(\gamma\) equal to the average values \(\beta_{av}(=0.5\beta_{f})\) and \(\gamma_{av}(=0.5(1+\gamma_{f}))\), respectively, for the subsequent iterations, until the tolerance for these quantities is below \(10^{-3}\). A minimal numerical sketch of this two-level procedure is given below.
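The following Python sketch illustrates the structure of this iteration for a specimen-'c'-like case, using the Johnson-Cook constants quoted in the next subsection. It is a schematic reconstruction under stated assumptions rather than the authors' code: we take the plastic zone size as \(s=L_{0}-h-l\) and \(\gamma=D/D_{0}\) (both our assumptions), use representative values for \(\rho\) and \(E\), and, since Eq. (4.5) as printed appears to omit a stress factor, we substitute the classical Taylor deceleration \(dv/dt=-\sigma_{y}/(\rho l)\) in its place.

```python
import numpy as np

# Johnson-Cook constants for Al 6061 (Ref. [37]); rho and E are assumed values.
A, B, n, C, eps0 = 334e6, 114e6, 0.42, 0.002, 1.0
rho, E = 2700.0, 69e9
D0, L0, V0 = 10e-3, 25e-3, 105.0              # geometry of specimen 'c'

def integrate(beta, gamma, dt=1e-8):
    """Euler time integration of Eqs. (4.3)-(4.6) at fixed beta, gamma."""
    l, h, v, D, ep = L0, 0.0, V0, D0, 0.0
    while v > 0.0 and l > 1e-4:
        ep_rate = beta**2*v/(3*np.sqrt(3)*D0*(gamma - 1 - np.log(gamma)))  # Eq. (4.1)
        sy = (A + B*ep**n)*(1 + C*np.log(max(ep_rate/eps0, 1.0)))          # Eq. (4.2)
        cp = np.sqrt(sy/rho)                                               # plastic wave speed
        l -= cp*dt                                                    # Eq. (4.3)
        h += v*dt                                                     # Eq. (4.4)
        v -= sy/(rho*l)*dt                     # Taylor deceleration (our substitute)
        D += beta*v*(gamma - 1)/(2*(gamma - 1 - np.log(gamma)))*dt    # Eq. (4.6)
        ep += ep_rate*dt
    return l, h, D

beta, gamma = 0.3, 1.3                         # initial guesses (assumed)
for _ in range(50):                            # outer fixed-point loop on beta, gamma
    l_f, h_f, D_f = integrate(beta, gamma)
    s_f = max(L0 - h_f - l_f, 1e-4)            # plastic zone size (assumed relation)
    beta_f, gamma_f = (D_f - D0)/s_f, D_f/D0
    beta_new, gamma_new = 0.5*beta_f, max(0.5*(1 + gamma_f), 1.01)
    if abs(beta_new - beta) < 1e-3 and abs(gamma_new - gamma) < 1e-3:
        break
    beta, gamma = beta_new, gamma_new

print(f"l_f = {l_f*1e3:.1f} mm, h_f = {h_f*1e3:.1f} mm, D_f = {D_f*1e3:.1f} mm")
```

Reproducing the quantitative results of Fig. 7 requires the full formulation of Ref. [36]; the sketch is only meant to show the inner Euler integration and the outer \(\beta,\gamma\) averaging loop described above.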
### Experimental measurements

We prepared cylindrical specimens of as-received aluminum 6061 with initial diameter \(D_{0}\) and length \(L_{0}\), impacting an EN19 plate (20 mm thickness) at different velocities \(V_{0}\), see Fig. 6. The specimens are labeled with lowercase letters ('a' to 'e'), and their initial parameters are tabulated in the figure.

Figure 6: Taylor specimens of Al 6061-T6 after impact. The initial parameters are tabulated beneath the image. The model-predicted deformed geometry (in orange) is overlaid on the deformed specimens.

Figure 7: Numerical results for specimen 'c' (\(D_{0}=10\) mm, \(L_{0}=25\) mm, and \(V_{0}=105\) m/s). The left panel shows the time history of the bulge diameter \(D\) and length \(L\) of the projectile. The right panel shows the spatial distribution of the effective plastic strain rate (\(\dot{\bar{\epsilon}}_{p}^{*}\)) for three distinct time instances.

The sabots used for the test consist of one Teflon ring of length 2/3 times the projectile length, placed at the free end. The combined sabot plus projectile mass for all projectiles is in the range of 12-15 g, and the corresponding operating pressure is 7 bar. Using the scaling factors for single sabots (see Sec. 3.2), the estimated velocities are again consistent with the values measured during the test. The deformed specimens are shown in Fig. 6, along with the predicted shapes overlaid (orange dashed lines) from the model. The close agreement between the final experimentally observed shapes and the prediction is clear from this figure. For the model computation, Johnson-Cook material constants for aluminum 6061 were taken from Ref. [37] as: \(A=334\) MPa, \(B=114\) MPa, \(n=0.42\), \(C=0.002\) and \(\dot{\epsilon}_{0}=1\). We also observed minor buckling in the deformed specimens, alongside the cone-shaped bulging of the plastic zone. The strain rate varies from a maximum at the beginning to zero at the end of the impact. The model additionally predicts the deformation history during impact, which is otherwise challenging to obtain experimentally, see Fig. 7. Temporal variation of the frustum diameter \(D\) and length \(L\) of the projectile (corresponding to specimen 'c') is shown in Fig. 7(left). The corresponding spatial variation of the effective plastic strain rate (\(\dot{\bar{\epsilon}}_{p}^{*}\)) for three different time instances is shown in Fig. 7(right). All quantities are reported here in non-dimensional form: time \(t^{*}=tV_{0}/L_{0}\), velocity \(v^{*}=v/V_{0}\), and the strain rate (\(\dot{\bar{\epsilon}}_{p}^{*}\)) is expressed in terms of \(t^{*}\). These results clearly predict the large instantaneous strain rate that the bulge region near the impact end is subjected to, making it particularly susceptible to premature damage. It is expected, especially given the close match in final shapes seen in Fig. 6, that the spatio-temporal evolution matches that in the experiments, as may perhaps be confirmed by careful high-speed _in situ_ imaging. This, however, is beyond the scope of the present manuscript. Another interesting prediction of the model is the deformation path traced over the yield surface, see Fig. 8(left). The effective plastic strain (\(\bar{\epsilon}_{p}\)), strain rate (\(\dot{\bar{\epsilon}}_{p}\)), and yield stress (\(\sigma_{y}^{*}\)) are all computed at the elastic-plastic boundary. Note that the non-dimensional yield stress is defined as \(\sigma_{y}^{*}=(\sigma_{y}-A)/A\) using the Johnson-Cook parameter \(A\). This deformation history is useful for semi-quantitatively predicting the final microstructure in the deformed projectile--given this information, one could easily use a multi-scale (e.g., crystal-plasticity) model to estimate grain shape/size evolution and potentially predict the onset of fracture. The varied paths of the two very similar specimens 'a' and 'b' are also noteworthy in this figure. As a first-attempt experimental validation, the microstructure of specimen 'c', along with a schematic showing the corresponding locations, is reproduced in Fig. 8(right). The microstructure is obtained by mechanical polishing with emery papers of various grades, followed by cloth polishing using diamond paste, and then etching by submerging the specimen for 1-3 minutes in Keller's reagent. The change in grain size is indistinguishable, as the specimen is only subjected to 15% plastic strain and 18% more stress than the static yield stress, as inferred from the deformation path of specimen 'c' in Fig. 8(left).

Figure 8: The deformation path (left) traced over the yield surface by all Taylor specimens 'a' to 'e'. The microstructure images (right) show a small section cut at two different locations, indicated by coloured squares in the schematic, on the longitudinal cross-section of specimen 'c'.

### Implications

The physics-based model described here provides a direct method to predict the deformation geometry given \(m,V_{0}\). It also suggests two inverse problems. The first one, as originally suggested by Taylor, is to deduce the material parameters \(A,B,n\) for an unknown material from measurements of the final shape and the impact velocity \(V_{0}\). In such a case, an inverse problem is posed in which the material parameters are perturbed in the model in such a way that the predicted deformation best fits the experimental measurements, see, for instance, Refs. [36, 38].
Since the Taylor impact configuration is simple to set up, a number of experiments can be easily carried out using the framework presented here. A second inverse problem is to determine the impact velocity \(V_{0}\), given the material properties and the final deformed shape. This is of critical importance in applications where determining the impact conditions is necessary as a _post mortem_ procedure, e.g., in the failure of spacecraft structures. One could even go a step further and envisage predicting \(V_{0}\) based on microstructural features in the final projectile. This will be necessary in practical situations where part of the projectile is perhaps lost due to subsequent impact events and the deformation history must be reconstructed from a very small portion of the final projectile. For this purpose, the relations presented in Fig. 8, showing the deformation history of the specimen, may be exploited to obtain the initial \(V_{0}\). This route must be taken with care, since it is quite likely that unique solutions cannot be guaranteed in general.

## 5 Results III: Plate deformation experiments

We now discuss a configuration complementary to that of the Taylor experiment--impact of a rigid projectile against a deformable plate. Upon impact, the projectile will induce plastic deformation that may lead to fracture and, potentially, perforation [27]. For a given projectile-target geometry, a range of impact velocities can result in anything from mere initiation of plastic flow to complete perforation [39, 40]. We deal here only with the non-perforating case to estimate plastic deformation in the plate.

### Theoretical analysis

Upon impact of the rigid projectile, elastic and plastic waves begin to propagate within the plate, causing transverse deflections in the direction of projectile motion. Two plastic deformation regions may be delineated--bulging, where the target conforms to the projectile shape, and bending-induced dishing, which extends over a considerable distance from the impact zone. To estimate the amount of deformation as a function of the impact parameters \(V_{0},m\), we adopt the model proposed by Beynet and Plunkett [39], assuming a rigid blunt projectile and a perfectly-plastic target material with no elastic effects. The plate bends as the bulge region moves with the projectile. The plate subsequently yields due to the developed radial stress, resulting in plastic strain in the transverse direction. Assuming that the deformation is dominated by radial stress, the transverse displacement \(w\) is governed by

\[\frac{\partial^{2}w}{\partial\bar{r}^{2}}+\frac{1}{\bar{r}}\frac{\partial w}{\partial\bar{r}}=\frac{\partial^{2}w}{\partial\bar{t}^{2}}\;\;\text{for}\;\;1<\bar{r}<\infty,\bar{t}\geq 0 \tag{5.1}\]

where \(\bar{r}=r/r_{0}\) and \(\bar{t}=c_{p}t/r_{0}\) are the nondimensionalized radial distance and time, respectively. Here, \(r_{0}\) is the projectile radius and \(c_{p}=\sqrt{\sigma_{y}/\rho}\) is the plastic wave speed. Equation 5.1 describes dishing as a radial propagation of plastic strains with speed \(c_{p}\). To determine the initial and boundary conditions for solving Eq. 5.1,
we note that the radial stress near the bulge region retards the velocity of an equivalent body consisting of the projectile and the bulge region; this serves as a boundary condition and can be written as

\[\alpha\frac{\partial w}{\partial\bar{r}}=\frac{\partial^{2}w}{\partial\bar{t}^{2}},\;\;\text{at}\;\;\bar{r}=1,\bar{t}\geq 0 \tag{5.2}\]

with \(\alpha=2\Delta M/(M+\Delta M)\), where \(M\) and \(\Delta M\) are the masses of the projectile and the bulge region, respectively. Initially, the transverse displacement is zero everywhere and only the bulge region moves, which gives the initial conditions:

\[\begin{split} w(\bar{r},0)&=0\\ \frac{\partial w(\bar{r},0)}{\partial t}&=\begin{cases} V_{0}\frac{M}{M+\Delta M}\frac{r_{0}}{c_{p}}&\bar{r}=1\\ 0&\bar{r}>1\end{cases}\end{split} \tag{5.3}\]

with \(V_{0}\) the projectile velocity at the onset of impact. Equation (5.1) is then numerically solved using a finite-difference scheme, subject to the initial (Eq. (5.3)) and boundary (Eq. (5.2)) conditions, to obtain the final shape of the plate. A minimal sketch of such a scheme is given below.
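The following Python sketch is one way such an explicit finite-difference solver may be assembled. The grid sizes, the bulge mass and the far-field truncation of the (formally infinite) domain are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Illustrative parameters (assumed): Al 6061-like target, Sec. 5.2 geometry.
sigma_y, rho = 334e6, 2700.0
r0, V0 = 5.1e-3, 65.0                  # projectile radius (m), impact velocity (m/s)
M, dM = 0.030, 0.002                   # projectile and bulge masses (kg), assumed
cp = np.sqrt(sigma_y / rho)            # plastic wave speed
alpha = 2*dM/(M + dM)

# Nondimensional grid: rbar in [1, R_far], truncating the infinite domain.
nr, R_far = 400, 20.0
rbar = np.linspace(1.0, R_far, nr)
dr = rbar[1] - rbar[0]
dt = 0.5*dr                            # satisfies the CFL condition dt <= dr

w_old = np.zeros(nr)                   # w(rbar, 0) = 0, Eq. (5.3)
w = w_old.copy()
w[0] = dt * V0*(M/(M + dM))*(r0/cp)    # initial bulge velocity, Eq. (5.3)

for _ in range(int(2*R_far/dt)):       # march until waves approach the far edge
    w_new = np.empty_like(w)
    # interior update of Eq. (5.1): w_rr + w_r / rbar = w_tt
    lap = (w[2:] - 2*w[1:-1] + w[:-2])/dr**2 + (w[2:] - w[:-2])/(2*rbar[1:-1]*dr)
    w_new[1:-1] = 2*w[1:-1] - w_old[1:-1] + dt**2*lap
    # boundary condition Eq. (5.2) at rbar = 1, with a one-sided derivative
    w_new[0] = 2*w[0] - w_old[0] + dt**2*alpha*(w[1] - w[0])/dr
    w_new[-1] = 0.0                    # far field held fixed (truncation)
    w_old, w = w, w_new

print("deflection at the bulge edge: %.2f mm" % (w[0]*1e3))
```

The radial profile \(w(\bar{r})\) obtained this way is the model curve compared against the measured deflection in Fig. 10(left).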
### Experimental measurements

For obtaining experimental measurements to evaluate the predicted target deformation, we impact a stainless steel projectile of diameter 10.2 mm and length 45.8 mm at \(V_{0}=65\) m/s against an aluminum 6061 (as-received) target plate of dimensions 50 mm \(\times\) 50 mm \(\times\) 3 mm. The resulting shape is shown in Fig. 9(left), with the bulging and dishing regions demarcated. The corresponding microstructure (Fig. 9(right)) shows clearly distinguishable grains in these two regions. The occurrence of locally compressed grains within the bulge region is evident in this figure, and this can be related to the projectile impact velocity \(V_{0}\) using grain-size analysis.

Figure 9: Deformed plate of Al 6061 (left) along with the microstructure (right) of the small section near the bulge region marked in red.

Since the model predicts transverse displacement, we obtain equivalent experimental data by measuring the deformed surface profile of the plate. While this profile can be accurately reconstructed using laser scanning, we use a more primitive, yet easily implementable, method instead. This is done by tracing grid points embossed _a priori_ onto the plate's surface during preparation, see supplementary material. Using the simple fact that a static liquid surface forms a perfectly horizontal plane, we recover the \(x,y,z\) coordinates of points on the surface of the deformed plate. A more comprehensive description of this method is provided in the supplementary material. The final transverse displacement and developed surface strain fields in the plate can be estimated from the deformed profile, and by correlating initial and final grid locations, respectively. A radial section of the obtained profile gives the deflection \(w\) of the points in the deformed plate, plotted against the predicted deflection from the model, see Fig. 10(left). For comparison, \(\sigma_{y}=334\) MPa and \(\rho=2700\) kg/m\({}^{3}\) were chosen as representative values for Al 6061. The model assumes the plate to be infinite, without accounting for edge effects from clamping. In contrast, the plate considered in the test is finite; hence, the predicted deformation is much smaller than that reconstructed from the final shape. We expect the discrepancy between the two results to reduce for larger in-plane plate dimensions. The in-plane strain field components for a finite deformation are computed by using the displacement mapping function from the original to the deformed profile. Only the radial component \(\epsilon_{rr}\) is shown here, see Fig. 10(right), estimated in the red region marked in Supplementary Figure 1. The other components \(\epsilon_{\theta\theta}\) and \(\epsilon_{r\theta}\) are found to be less than 10% of the maximum value of \(\epsilon_{rr}\), which _a posteriori_ justifies the radial symmetry assumption in the model.

Figure 10: Results of the measurement using the camera-ink setup for the deformed aluminum plate. The left panel shows the transverse deflection \(w\) as a function of radial distance \(r\) from the center of the projectile impact, as predicted by the model and compared with the experiment. The right panel shows the strain field component \(\epsilon_{rr}\) obtained using the camera-ink setup.

### Implications

Despite being based on simplified theoretical arguments, the analysis presented here provides predictions of the material response in typical non-perforating impact. It should be noted that any material can typically exhibit a wide variety of failure mechanisms in a plate impact test [27]. Some common possibilities include fracture due to initial compressive waves, spalling caused by the reflection of compressive waves from the distal boundary of the plate, plugging resulting from highly localized shear zone formation [18, 20, 41] and even fragmentation [42, 43]. These failure types predominantly depend on material properties, geometrical characteristics, and impact velocity. As a result, and given its practical consequences for space applications, plate deformation has attracted significant research effort for decades [44, 45, 46]. Considering the broad variability in failure response, a predictive model relating the impact velocity with the deformation mechanics is essential in setting the operating parameters for an experiment. Depending on the test configuration and material heterogeneities (for instance, porous solids, composite materials, _etc._), one may need to resort to more involved analyses to predict the deformation [47, 48].

## 6 Discussion and Summary

In this work, we have described a compact laboratory-scale gas gun setup for studying low-velocity impact events on both projectiles and targets. Some noteworthy features of the configuration are that it is easy to set up, adaptable to multiple test modalities (e.g., Taylor impact, perforation), and able to realize different final target geometries (e.g., multi-layer shields, curved structures). Our work here has focused on determining an operating scheme for the test setup, as well as calibrating theoretical velocity-pressure curves with those obtained experimentally. Using only compressed air, the system was demonstrated to reach maximum projectile velocities of \(\sim\)100 m/s, making it ideal for low-velocity impact studies in a laboratory setting. The capabilities of the presented design have been demonstrated for two standard impact test configurations, _viz._ Taylor and plate impact. The former has been performed by impacting an aluminum specimen against a rigid alloyed steel plate and using an axisymmetric model for predicting the deformation geometry. Close agreement between experiments and the model predictions was noted, in addition to the determination of the time history of the deformation fields, which is extremely challenging to obtain experimentally.
In the second impact configuration, an aluminum plate is deformed by a stainless steel projectile. A physics-based model is presented to relate the plate's transverse deflection to the impact velocity. The final displacement and strain fields after impact were measured using a visual grid mapping technique, and agree reasonably well with the numerical prediction. While accounting for the disparities between the simplified models and the actual experiments, the numerical models help estimate the effect of impact velocity on the material deformation behaviour. The derived impact velocity can then be used to estimate the operating parameters for the test by referring to the velocity-pressure curve produced here. We believe that the framework described in this work is easily replicable and will be useful for groups that are exploring the use of novel structural designs for space applications, such as metamaterials or microarchitected internal features [52, 53], for impact energy absorption applications. We also believe that it will help elucidate the impact performance of emerging classes of metallic materials such as high-entropy alloys [50, 51] that represent an area of active research interest. The question of how these multi-component alloys fail, and how such potentially catastrophic mechanisms can be mitigated either by diverse microarchitecting, additional alloying, or a combination of both, represents an interesting area for future research on space-grade materials. We are presently investigating some of these questions and hope to communicate our results in due course.
2306.04756
A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions
An important aspect of developing reliable deep learning systems is devising strategies that make these systems robust to adversarial attacks. There is a long line of work that focuses on developing defenses against these attacks, but recently, researchers have begun to study ways to reverse engineer the attack process. This allows us to not only defend against several attack models, but also classify the threat model. However, there is still a lack of theoretical guarantees for the reverse engineering process. Current approaches that give any guarantees are based on the assumption that the data lies in a union of linear subspaces, which is not a valid assumption for more complex datasets. In this paper, we build on prior work and propose a novel framework for reverse engineering of deceptions which supposes that the clean data lies in the range of a GAN. To classify the signal and attack, we jointly solve a GAN inversion problem and a block-sparse recovery problem. For the first time in the literature, we provide deterministic linear convergence guarantees for this problem. We also empirically demonstrate the merits of the proposed approach on several nonlinear datasets as compared to state-of-the-art methods.
Darshan Thaker, Paris Giampouras, René Vidal
2023-06-07T20:08:27Z
http://arxiv.org/abs/2306.04756v1
# A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions

###### Abstract

An important aspect of developing reliable deep learning systems is devising strategies that make these systems robust to adversarial attacks. There is a long line of work that focuses on developing defenses against these attacks, but recently, researchers have begun to study ways to _reverse engineer the attack process_. This allows us to not only defend against several attack models, but also classify the threat model. However, there is still a lack of theoretical guarantees for the reverse engineering process. Current approaches that give any guarantees are based on the assumption that the data lies in a union of linear subspaces, which is not a valid assumption for more complex datasets. In this paper, we build on prior work and propose a novel framework for reverse engineering of deceptions which supposes that the clean data lies in the range of a GAN. To classify the signal and attack, we jointly solve a GAN inversion problem and a block-sparse recovery problem. For the first time in the literature, we provide deterministic _linear convergence guarantees_ for this problem. We also empirically demonstrate the merits of the proposed approach on several nonlinear datasets as compared to state-of-the-art methods.

## 1 Introduction

Modern deep neural network classifiers have been shown to be vulnerable to imperceptible perturbations to the input that can drastically affect the prediction of the classifier. These adversarially attacked inputs can pose problems in safety-critical applications where correct classification is paramount. Adversarial attacks can be either universal perturbations, which remain fixed and can deceive a pretrained network on different images of the same dataset [29], or image-dependent perturbations [33]. For the latter approach, attack generation for a given classification network entails maximizing a classification loss function subject to various constraints [25]. For instance, we can assume that the additive perturbation \(\delta\) for a clean signal \(x\) lies in an \(\ell_{p}\) ball for some \(p\geq 1\), i.e., \(\delta\in\mathcal{S}_{p}\), where \(\mathcal{S}_{p}=\{\delta:\|\delta\|_{p}\leq 1\}\)[26]. Over the last few years, there has been significant interest in the topic of devising defenses to enhance the adversarial robustness of deep learning systems. Two popular defense strategies are: a) adversarial training-based approaches [44; 40; 38], which rely on a data augmentation strategy, and b) adversarial purification-based defenses, which rely on generative models [36; 21; 47]. The latter approach aims to filter out the noisy component of the corrupted image by projecting it onto the image manifold, parametrized by a pretrained deep generative model. The constant endeavor to develop reliable deep learning systems has led to a growing interest in methods that adopt a more holistic approach towards adversarial robustness, known as the _Reverse Engineering of Deceptions (RED)_ problem. The objective of RED is to go beyond mere defenses by simultaneously _defending against the attack_ and _inferring the deception strategy_ followed to corrupt the input, e.g., which \(\ell_{p}\) norm was used to generate the attack [13]. There are various practical methods to reverse engineer adversarial attacks.
These works either rely on deep representations of the adversarially corrupted signals that are then used to classify the attacks [28], or on complicated ad-hoc architectures and black-box models [13, 12]. Their effectiveness is only empirically verified, and there is a noticeable lack of theoretical guarantees for the RED problem. This inspired the work of [43], in which the authors propose the first principled approach for the RED problem. Specifically, for additive \(\ell_{p}\) attacks, they assume that both _the signal \(x\) and the attack \(\delta\) live in unions of linear subspaces_ spanned by the blocks of dictionaries \(D_{s}\) and \(D_{a}\) that correspond to the signal and the attack respectively, i.e., \(x=D_{s}c_{s}\) and \(\delta=D_{a}c_{a}\). These dictionaries are divided into blocks according to the classes of interest for \(x\) and \(\delta\) (i.e., the signal classification labels for \(x\) and the type of \(\ell_{p}\) threat model used for generating \(\delta\)). The specific form of \(D_{s}\) and \(D_{a}\) gives rise to block-sparse representations for the signal \(x\) and the attack \(\delta\) with respect to these dictionaries. This motivates their formulation of RED as an inverse optimization problem where the representation vectors \(c_{s}\) and \(c_{a}\) of the clean signal \(x\) and attack \(\delta\) are learned under a block-sparse promoting regularizer, i.e.,

\[\min_{c_{s},c_{a}}\|x^{\prime}-\underbrace{D_{s}c_{s}}_{x}-\underbrace{D_{a}c_{a}}_{\delta}\|_{2}+\lambda_{s}\|c_{s}\|_{1,2}+\lambda_{a}\|c_{a}\|_{1,2}. \tag{1}\]

Above, \(\|\cdot\|_{1,2}\) is a block-sparsity promoting \(\ell_{1}/\ell_{2}\) norm [42, 9, 10] (a small numerical illustration of this block structure and the resulting attack classification is given at the end of this section). To solve this problem, the authors of [43] use an alternating minimization algorithm for estimating \(c_{s}\) and \(c_{a}\) and accordingly provide theoretical recovery guarantees for the correctness of their approach. While these recent works undoubtedly demonstrate the importance of the problem of reverse engineering of deceptions (RED), there still exist several challenges.

**Challenges.** Existing approaches for RED bring to light a common predicament of whether to develop a practically useful method or a less effective, but theoretically grounded one. Specifically, black-box model-based approaches for RED [13] are shown to perform well on complex datasets, but lack performance guarantees. Conversely, the approach in [43] is theoretically sound1, but comes with strong assumptions on the data generative model, i.e., that the data live in a union of linear subspaces. It is apparent that this assumption is unrealistic for complex and high-dimensional datasets. Given the limitation of the signal model of [43], the main challenge that we aim to address is:

Footnote 1: As theoretically shown in [43], an \(\ell_{p}\)-bounded attack on a test sample can be reconstructed as a linear combination of attacks on training samples that compose the blocks of the attack dictionary \(D_{a}\).

_Can we relax the simplistic assumption on the generative model of the signal made in [43], without compromising the theoretical recovery guarantees for solving the RED problem?_

A natural step towards this objective is to _leverage the power of deep generative models_, thus building on adversarial purification approaches and suitably adjusting their formulation to the RED problem. However, in doing so, we are left with an inverse problem that is highly non-convex.
Namely, the signal reconstruction involves a projection step onto the manifold parameterized by a pretrained deep generative model. Even though this approach is a key ingredient in applications beyond RED, such as adversarial purification, the problem is yet to be theoretically understood. Further, RED involves finding _latent representations for both the signal and the attack_. An efficient way to deal with this is to use an _alternating minimization algorithm_, as in [43]. This leads to the following challenge for developing both practical and theoretically grounded algorithms:

_Can we provide theoretical guarantees for an alternating minimization algorithm that minimizes a non-convex and non-smooth RED objective?_

**Contributions.** In this work, we propose _a novel reverse engineering of deceptions approach_ that can be applied to _complex datasets_ and offers _theoretical guarantees_. We address the weakness of the work in [43] by leveraging the power of nonlinear deep generative models. Specifically, we replace the signal model \(x=D_{s}c_{s}\) in (1) with \(x=G(z)\), where \(G:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n},d\ll n\) is the generator of a Generative Adversarial Network (GAN). By using a pre-trained GAN generator, we can reconstruct the clean signal by projecting onto the signal manifold learned by the GAN, i.e., by estimating \(z\) such that \(G(z)\approx x\). Further, adversarial perturbations are modeled as in [43], i.e., as block-sparse vectors with respect to a predefined dictionary. The inverse problem we solve in this model is then:

\[\min_{z,c_{a}}\|x^{\prime}-\underbrace{G(z)}_{x}-\underbrace{D_{a}c_{a}}_{\delta}\|_{2}+\lambda\|c_{a}\|_{1,2}. \tag{2}\]

Our main contributions are the following:

* _A Linearly Convergent GAN inversion-based RED algorithm._ We address the main challenge above and provide recovery guarantees for the signal and attack in two regimes. First, we deal with the unregularized setting, i.e., \(\lambda=0\) in (2), where we alternate between updating \(z\), the latent representation of the estimate of the clean signal, and \(c_{a}\), the attack coefficient, via an alternating gradient descent algorithm. In this setting, we show _linear convergence of both iterates jointly to global optima._ Second, as in [43], we consider a regularized objective to learn the signal and attack latent variables. In this regime, for an alternating proximal gradient descent algorithm, we show _linear convergence in function values to global minima_.
* _A Linearly Convergent GAN inversion algorithm._ Next, we specialize our results for the clean signal reconstruction problem, known as GAN inversion, which is of independent interest. For the GAN inversion problem, we demonstrate linear convergence of a subgradient descent algorithm to the global minimizer of the objective function. Note that we rely on assumptions that only require _smoothness_ of the activation function and a _local error-bound_ condition. To the best of our knowledge, this is the _first result that analyzes the GAN inversion problem departing from the standard assumption of networks with randomized weights [14; 17]_.
* _SOTA Results for the RED problem._ Finally, we empirically verify our theoretical results on simulated data and also demonstrate new state-of-the-art results for the RED problem using our alternating algorithm on the MNIST, Fashion-MNIST and CIFAR-10 datasets.
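To fix ideas, the following Python toy is a minimal sketch of the block-sparse machinery behind (1) and (2): it evaluates the \(\ell_{1}/\ell_{2}\) norm over a block partition and infers the threat model from block energies. The block names and sizes are illustrative assumptions.

```python
import numpy as np

def block_l12_norm(c, blocks):
    """||c||_{1,2}: the sum of the l2 norms of the blocks of c."""
    return sum(np.linalg.norm(c[idx]) for idx in blocks.values())

def classify_attack(ca, blocks):
    """Infer the threat model as the block of c_a carrying the most energy."""
    return max(blocks, key=lambda name: np.linalg.norm(ca[blocks[name]]))

# Toy attack coefficient vector with one block of 10 atoms per l_p threat model.
rng = np.random.default_rng(0)
blocks = {"linf": slice(0, 10), "l2": slice(10, 20), "l1": slice(20, 30)}
ca = np.zeros(30)
ca[blocks["l2"]] = 0.1 * rng.standard_normal(10)  # only the l2 block is active

print(block_l12_norm(ca, blocks))   # small: a single active block
print(classify_attack(ca, blocks))  # -> "l2"
```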
## 2 Related Work

**Adversarial Defenses.** We restrict our discussion of adversarial attacks [5; 3] to the _white-box attack_ scenario, where adversaries have access to the network parameters and craft the attacks usually by solving a loss maximization problem. Adversarial training, a min-max optimization approach, has been the most popular defense strategy [44]. Adversarial purification methods are another popular strategy; these methods rely on pretrained deep generative models as a prior for denoising corrupted images [30; 36]. This problem is formulated as an inverse optimization problem [45], and the theoretical understanding of the optimization landscape of the problem is an active area of research [17; 14]. Our work leverages pretrained deep generative models for the RED problem and also aims to shed light on theoretical aspects of the corresponding inverse problems.

**Theoretical Analysis of GAN-inversion algorithms.** In our approach, we employ a GAN-inversion strategy for the RED problem. There is a rich history of deep generative models for inverse problems, such as compressed sensing [31; 16], super-resolution [27], and image inpainting [45]. However, efforts to provide a theoretical understanding of the landscape of the resulting optimization problem have restricted their attention to settings where the GAN has random or close-to-random weights [39; 14; 17; 22; 41]. For the first time in the literature, we depart from these assumptions to provide a more holistic analysis of the GAN inversion problem, instead leveraging recent optimization concepts, i.e., error-bound conditions and proximal Polyak-Lojasiewicz conditions [18; 11; 8].

**Reverse Engineering of Deceptions (RED).** RED is a recent framework to not only defend against attacks, but also reverse engineer and infer the type of attack used to corrupt an input. There are several practical methods proposed for the RED problem. In [12], the authors use a multi-class network trained to identify if an image is corrupted and predict attributes of the attack. In [13], a denoiser-based approach is proposed, where the denoiser weights are learned by aligning the predictions of the denoised input and the clean signal. The authors in [28] use pretrained self-supervised embeddings, e.g., SimCLR [6], to classify the attacks. The work most related to ours is [43], in which the authors show a provably correct block-sparse optimization approach for RED. Even though [43] is the first provably correct RED approach, their modelling assumption for the generative model of the clean signal is often violated in real-world datasets. Our work addresses this issue by developing a provable approach with more realistic modelling assumptions.

## 3 Problem Formulation

We build on the formulation of [43] to develop a model for an adversarial example \(x^{\prime}=x+\delta\), with \(x\) being the clean signal and \(\delta\) the adversarial perturbation. We replace the signal model of Equation (1) with a pretrained generator \(G\) of a GAN. Thus, the generative model we assume for \(x^{\prime}\) is given by

\[x^{\prime}\approx G(z)+D_{a}c_{a}. \tag{3}\]
We use generators \(G:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n_{L}}\), \(d\ll n_{L}\), which are \(L\)-layer networks of the form

\[G(z)=\sigma(W_{L}\sigma(W_{L-1}\cdots W_{2}\sigma(W_{1}z))) \tag{4}\]

where \(W_{i}\in\mathbb{R}^{n_{i}\times n_{i-1}}\) are the known GAN parameters with \(n_{0}=d\), \(\sigma\) is a nonlinear activation function, and \(D_{a}\in\mathbb{R}^{n_{L}\times k_{a}}\) is an attack dictionary (typically with \(k_{a}>n_{L}\)). As in [43], the attack dictionary \(D_{a}\) contains blocks corresponding to different \(\ell_{p}\) attacks (for varying \(p\)) computed on training samples of each class. The authors of [43] verify this modelling assumption by showing that for networks that use piecewise linear activations, \(\ell_{p}\) attacks evaluated on test examples can be expressed as linear combinations of \(\ell_{p}\) attacks evaluated on training examples.

Using the model in (3), we then formulate an inverse problem to learn \(z\) and \(c_{a}\):

\[\min_{z,c_{a}}\mathcal{L}(z,c_{a})\triangleq f(z,c_{a})+\lambda h(c_{a}), \tag{5}\]

where \(f(z,c_{a})=\|x^{\prime}-G(z)-D_{a}c_{a}\|_{2}^{2}\) denotes a reconstruction loss and \(h(c_{a})\) denotes a (non-smooth) convex regularizer on the coefficients \(c_{a}\). For example, in [43], the regularizer \(h(c_{a})\) is \(\left\|c_{a}\right\|_{1,2}\), which promotes block-sparsity on \(c_{a}\) according to the structure of \(D_{a}\). We note that our theoretical results do not assume this form for \(D_{a}\), but rather only that its spectrum can be bounded. A natural algorithm to learn both \(z\) and \(c_{a}\) is to alternate between updating \(z\) via subgradient descent and \(c_{a}\) via proximal gradient descent, as shown in Algorithm 1.

```
Given: x' ∈ R^{n_L}, G : R^d → R^{n_L}, D_a ∈ R^{n_L × k_a}
Initialize: z^0, c_a^0
Set: step size η and regularization parameter λ
for k = 0, 1, 2, ... do
    R_i ← diag(σ'(W_i z^k))  for i ∈ {1, ..., L}
    z^{k+1} ← z^k − η (W_1 R_1)^T (W_2 R_2)^T ⋯ (W_L R_L)^T (G(z^k) + D_a c_a^k − x')
    c_a^{k+1} ← prox_{λh}{ c_a^k − η D_a^T (G(z^k) + D_a c_a^k − x') }
end for
return z^{k+1}, c_a^{k+1}
```
**Algorithm 1** Proposed RED Algorithm
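For concreteness, here is a minimal NumPy sketch of the updates in Algorithm 1 for a toy generator. The softplus activation, the group soft-thresholding form of the proximal step (for \(h=\|\cdot\|_{1,2}\), with the standard step-size scaling \(\operatorname{prox}_{\eta\lambda h}\)), and all sizes are illustrative assumptions; the \(z\)-update backpropagates the residual through the generator via the chain rule.

```python
import numpy as np

def softplus(t):
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))  # derivative of softplus

def forward(z, weights):
    # Returns all layer inputs; h[-1] = G(z).
    h = [z]
    for W in weights:
        h.append(softplus(W @ h[-1]))
    return h

def grad_z(z, c_a, x_adv, weights, D_a):
    # Backpropagate the residual r = G(z) + D_a c_a - x' through G.
    h = forward(z, weights)
    r = h[-1] + D_a @ c_a - x_adv
    g = r
    for i in reversed(range(len(weights))):
        g = weights[i].T @ (sigmoid(weights[i] @ h[i]) * g)
    return g, r

def prox_group(v, blocks, tau):
    # prox of tau * ||.||_{1,2}: blockwise (group) soft-thresholding.
    out = v.copy()
    for b in blocks:
        nb = np.linalg.norm(v[b])
        out[b] = 0.0 if nb <= tau else (1.0 - tau / nb) * v[b]
    return out

def red_step(z, c_a, x_adv, weights, D_a, blocks, eta, lam):
    gz, r = grad_z(z, c_a, x_adv, weights, D_a)
    z_new = z - eta * gz
    c_new = prox_group(c_a - eta * D_a.T @ r, blocks, eta * lam)
    return z_new, c_new

# Toy usage (hypothetical sizes).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 4)) / 4, rng.standard_normal((32, 16)) / 4]
D_a = rng.standard_normal((32, 8)) / 8
blocks = [np.arange(0, 4), np.arange(4, 8)]
x_adv = rng.standard_normal(32)
z, c_a = np.zeros(4), np.zeros(8)
for _ in range(100):
    z, c_a = red_step(z, c_a, x_adv, weights, D_a, blocks, eta=0.05, lam=0.01)
```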
## 4 Main Results: Theoretical Guarantees for RED

In this section, we provide our main theoretical results for the RED problem with a deep generative model used for the clean data. We demonstrate the convergence of the iterates of Algorithm 1 to global optima. A priori, this is difficult due to the non-convexity of (5) introduced by the GAN generator \(G(z)\)[14]. To get around this issue, works such as [14] and [15] make certain assumptions to avoid spurious stationary points. However, these conditions essentially reduce to the GAN having weights that behave as a random network (see Definition 13 in the Appendix). In practice, especially for the RED problem, modelling real data often requires GANs with far-from-random weights, so there is a strong need for theoretical results in this setting. We draw inspiration from the theory of deep learning and the optimization literature, where several works have analyzed non-convex problems through the lens of Polyak-Lojasiewicz (PL) conditions or assumptions that lead to benign optimization landscapes [18; 35; 24].

Our goal is to depart from the randomized analysis of previous GAN inversion works to address the non-convexity of the problem. The main assumption we employ is a _local error bound_ condition. We conjecture this assumption holds true in practice for two reasons. First, we show that the random network conditions assumed in existing works [14; 15] already imply a local error bound condition (see Corollary 7); moreover, in Section 6.1, we give examples of non-random networks that also empirically satisfy the local error-bound condition, showing the generality of our assumption. Second, the empirical success of GAN inversion in various applications suggests that the optimization landscape is benign [45]. However, for the GAN inversion problem, traditional landscape properties such as a PL condition do not hold globally2. Nevertheless, we can use local properties of benign regions of the landscape to analyze convergence3. Our work serves as an initial step to analyze convergence of far-from-random networks, and an important avenue of future work is verifying the local error bound condition theoretically for certain classes of networks.

Footnote 2: We refer the reader to Section 3 of [24] for a simple explanation of this phenomenon.

Footnote 3: Note that similar local conditions to analyze convergence have been used in works analyzing the theory of deep learning, such as [24].

### 4.1 Reverse Engineering of Deceptions Optimization Problem without Regularization

We first consider the unregularized setting where, in Algorithm 1, we only minimize \(f(z,c_{a})\), i.e., \(\lambda=0\) and \(\text{prox}_{\lambda h}(\cdot)\) is the identity function. Suppose there exist \(z^{*}\) and \(c_{a}^{*}\) such that \(x^{\prime}=G(z^{*})+D_{a}c_{a}^{*}\), so \((z^{*},c_{a}^{*})\) are global minimizers of \(f(z,c_{a})\). Our first set of results will ensure convergence of the iterates \((z^{k},c_{a}^{k})\) to \((z^{*},c_{a}^{*})\). We will denote \(\left\|\Delta z^{k+1}\right\|_{2}\triangleq\left\|z^{k+1}-z^{*}\right\|_{2}\) and \(\left\|\Delta c_{a}^{k+1}\right\|_{2}\triangleq\left\|c_{a}^{k+1}-c_{a}^{*}\right\|_{2}\). To state our convergence results, we posit some assumptions on \(G\) and the iterates of the algorithm.

**Assumption 1**.: _(Activation Function) We assume that \(\sigma\) is twice differentiable and smooth._

Note that standard activation functions such as the sigmoid or smooth ReLU variants (softplus, GeLU, Swish, etc.) satisfy Assumption 1.

**Assumption 2**.: _(Local Error Bound Condition) For all \(z^{k}\) and \(c_{a}^{k}\) on the optimization trajectory, suppose that there exists a \(\mu>0\) such that_

\[\left\|\nabla_{z}f(z^{k},c_{a}^{k})\right\|_{2}^{2}+\left\|\nabla_{c_{a}}f(z^{k},c_{a}^{k})\right\|_{2}^{2}\geq\mu^{2}(\left\|\Delta z^{k}\right\|_{2}^{2}+\left\|\Delta c_{a}^{k}\right\|_{2}^{2}). \tag{6}\]
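As a sanity check, Assumption 2 can be probed numerically on a realizable instance where \((z^{*},c_{a}^{*})\) is known, by tracking the smallest ratio between the two sides of (6) along the optimization trajectory. The following self-contained sketch does this for a one-layer generator \(G(z)=\sigma(Wz)\) (the model also used in Section 6.1); the sigmoid activation, sizes, and step size are illustrative assumptions.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
d, m, k_a = 10, 60, 20
W = rng.standard_normal((m, d)) / np.sqrt(d)      # one-layer generator G(z) = sigmoid(Wz)
D_a = rng.standard_normal((m, k_a)) / np.sqrt(k_a)
z_star, c_star = rng.standard_normal(d), rng.standard_normal(k_a)
x_adv = sigmoid(W @ z_star) + D_a @ c_star        # realizable instance: known optimum

def grads(z, c):
    r = sigmoid(W @ z) + D_a @ c - x_adv
    s = sigmoid(W @ z)
    gz = W.T @ (s * (1 - s) * r)                  # chain rule through the sigmoid
    return gz, D_a.T @ r

z, c, eta, mu_sq = np.zeros(d), np.zeros(k_a), 0.5, np.inf
for _ in range(500):
    gz, gc = grads(z, c)
    dist_sq = np.linalg.norm(z - z_star) ** 2 + np.linalg.norm(c - c_star) ** 2
    if dist_sq > 1e-10:  # ratio ||grad f||^2 / distance^2 along the path
        mu_sq = min(mu_sq, (np.linalg.norm(gz) ** 2 + np.linalg.norm(gc) ** 2) / dist_sq)
    z, c = z - eta * gz, c - eta * gc
print("empirical mu along the trajectory:", np.sqrt(mu_sq))
```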
Under these assumptions, our main theorem demonstrates linear convergence of the iterates \(z^{k}\) and \(c_{a}^{k}\) to the global minimizers \(z^{*}\) and \(c_{a}^{*}\).

**Theorem 3**.: _Suppose that Assumption 1 holds for the nonlinear activation function and Assumption 2 holds with local error bound parameter \(\mu\). Let \(\rho\) and \(-\epsilon\) be the maximum and minimum eigenvalues of the Hessian of the loss. Further, assume that the step size satisfies \(\eta\leq\min\left\{\frac{1}{4\epsilon},\frac{3}{2\rho}\right\}\) and \(\eta\in\left(\frac{3\mu^{2}-\sqrt{9\mu^{4}-32\mu^{2}\rho\epsilon}}{4\mu^{2}\rho},\frac{3\mu^{2}+\sqrt{9\mu^{4}-32\mu^{2}\rho\epsilon}}{4\mu^{2}\rho}\right)\). Lastly, assume that \(\mu\gtrsim\sqrt{\rho\epsilon}\). Then, we have that the iterates converge linearly to the global optimum with the following rate in \((0,1)\):_

\[\left\|\Delta z^{k+1}\right\|_{2}^{2}+\left\|\Delta c_{a}^{k+1}\right\|_{2}^{2}\leq\left(1-4\eta^{2}\mu^{2}\left(\frac{3}{4}-\frac{\eta\rho}{2}\right)+4\eta\epsilon\right)(\left\|\Delta z^{k}\right\|_{2}^{2}+\left\|\Delta c_{a}^{k}\right\|_{2}^{2}) \tag{7}\]

The proof is deferred to the Appendix. Assumption 1 is crucial to our proof, since we show an almost co-coercivity of the gradient (Lemma 8 in the Appendix) that depends on bounding \(\rho\) and \(\epsilon\) for smooth and twice differentiable activation functions, similar to the proof strategy of [35]. Along with the step size \(\eta\), there are three problem-specific parameters that affect the convergence rate: the largest and the smallest eigenvalues of the Hessian of the loss, i.e., \(\rho\) and \(-\epsilon\) respectively, and the local error bound parameter \(\mu\). Note that because the problem is non-convex, the Hessian will have at least one negative eigenvalue. The rate becomes closer to \(1\) and convergence slows as \(\epsilon\) gets larger, because \(\epsilon\) controls the slack in the co-coercivity of the gradient in our proof. Similarly, if the operator norm of the weights is controlled, then \(\rho\) is smaller and the convergence rate is faster. Finally, the convergence rate speeds up as \(\mu\) increases, since each gradient descent iterate takes a larger step towards the minimizer. The condition \(\mu\gtrsim\sqrt{\rho\epsilon}\) ensures that the gradient norm is roughly larger than the negative curvature of the Hessian, so that progress towards the global minimizer can still be maintained. The quantity \(\sqrt{\rho\epsilon}\) is the geometric mean of the largest and smallest eigenvalues of the Hessian and can be thought of as a quantity capturing the range of the spectrum of the Hessian. Note that extending our results to non-smooth activation functions such as ReLU is nontrivial, since we would need to control \(\epsilon\). Moreover, due to the almost co-coercivity property of the gradient operator (see Lemma 8, Appendix), the step size of gradient descent needs to be bounded away from zero. However, for practical purposes, the regime that is most useful for ensuring fast convergence is when the step size is indeed sufficiently large.

### 4.2 Regularized Reverse Engineering of Deceptions Optimization Problem

We now consider the regularized problem, with \(\lambda\neq 0\). The analysis presented in Section 4.1 does not immediately extend to this setting because \((z^{*},c_{a}^{*})\) now denote minimizers of \(\mathcal{L}(z,c_{a})=f(z,c_{a})+\lambda h(c_{a})\), which is not necessarily the pair \((z^{*},c_{a}^{*})\) such that \(x^{\prime}=G(z^{*})+D_{a}c_{a}^{*}\). In order to demonstrate convergence, we appeal to well-known results that use the Polyak-Lojasiewicz (PL) condition. We assume a local proximal PL condition on the iterates \(c_{a}^{k}\), which can be thought of as a version of Assumption 2 but on the function values instead of the iterates [18]. This assumption also takes into account the proximal update step for \(c_{a}\)4.

Footnote 4: We refer the reader to [18] for intuition on the global proximal PL inequality.
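For the block-sparse choice \(h(c_{a})=\left\|c_{a}\right\|_{1,2}\), the proximal step in Algorithm 1 is available in closed form as blockwise (group) soft-thresholding; we record this standard fact here for completeness. Writing \(v=c_{a}^{k}-\eta D_{a}^{T}(G(z^{k})+D_{a}c_{a}^{k}-x^{\prime})\) for the gradient step, \(v_{b}\) for its restriction to block \(b\), and using the standard step-size scaling of proximal gradient descent,

\[\left(\operatorname{prox}_{\eta\lambda\left\|\cdot\right\|_{1,2}}(v)\right)_{b}=\max\left(0,\;1-\frac{\eta\lambda}{\left\|v_{b}\right\|_{2}}\right)v_{b}.\]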
**Assumption 4**.: _Let \(\rho\) denote the Lipschitz constant of the gradient of \(f\) with respect to both \(z\) and \(c_{a}\). For all \(z^{k}\) and \(c_{a}^{k}\) on the optimization trajectory, suppose that there exists a \(\mu>0\) such that_

\[2\rho\mathcal{D}(c_{a}^{k},\rho)+\left\|\nabla_{z}f(z^{k},c_{a}^{k})\right\|_{2}^{2}\geq\mu(\mathcal{L}(z^{k},c_{a}^{k})-\mathcal{L}(z^{*},c_{a}^{*})) \tag{8}\]

_where \(\mathcal{D}(c_{a}^{k},\rho)=-\min_{y}\left[\left\langle\nabla_{c_{a}}f(z^{k},c_{a}^{k}),y-c_{a}^{k}\right\rangle+\frac{\rho}{2}\left\|y-c_{a}^{k}\right\|_{2}^{2}+h(y)-h(c_{a}^{k})\right]\)_

**Theorem 5**.: _Suppose Assumption 4 holds with constant \(\mu>0\). Let \(\rho\) be the maximum eigenvalue of the Hessian of the loss. If \(h\) is convex and \(\eta=\frac{1}{\rho}\), then the function values converge linearly:_

\[\mathcal{L}(z^{k+1},c_{a}^{k+1})-\mathcal{L}(z^{*},c_{a}^{*})\leq\left(1-\frac{\mu}{2\rho}\right)(\mathcal{L}(z^{k},c_{a}^{k})-\mathcal{L}(z^{*},c_{a}^{*})) \tag{9}\]

The proof of this result is in the Appendix; we note that it is similar to the proof of Theorem 5 of [18].

## 5 Convergence Analysis of the GAN Inversion Problem

As a special case when there is no adversarial noise, our results also give us a convergence analysis for the realizable GAN inversion problem. This simply corresponds to finding the latent code \(z\) for an input \(x\) and a fixed GAN \(G\) such that \(G(z)=x\). We let \(f(z)\triangleq\left\|x-G(z)\right\|_{2}^{2}\). The following theorem is a specialization of Theorem 3 to the GAN inversion problem.

**Theorem 6**.: _Suppose that Assumption 1 holds. Further, assume a local error bound condition on the optimization trajectory of \(z^{k}\) with \(\mu>0\):_

\[\left\|\nabla_{z}f(z^{k})\right\|_{2}\geq\mu\left\|\Delta z^{k}\right\|_{2} \tag{10}\]

_Let \(\rho\) and \(-\epsilon\) be the maximum and minimum eigenvalues of the Hessian of the loss. Under the same assumptions on the step size \(\eta\) and the local error bound parameter \(\mu\) as Theorem 3, we have that the iterates converge linearly to the global optimum with the following rate in \((0,1)\):_

\[\left\|\Delta z^{k+1}\right\|_{2}^{2}\leq\left(1-4\eta^{2}\mu^{2}\left(\frac{3}{4}-\frac{\eta\rho}{2}\right)+4\eta\epsilon\right)\left\|\Delta z^{k}\right\|_{2}^{2} \tag{11}\]

The proof of this theorem is identical to the proof of Theorem 3 by taking \(c_{a}^{k}=c_{a}^{*}=0\).

### 5.1 Comparison to Existing Approaches

The works of [14] and [15] derive a condition on the weights of the GAN, which they call the Weight Distribution Condition (WDC), under which they can characterize the optimization landscape of the GAN inversion problem. The WDC ensures the weights of the network behave close to those of random networks (see Definition 13 in the Appendix). The authors of [14] show that under the WDC, there is only one spurious stationary point and the basin of attraction of that point is a small region. The following corollary provides a different viewpoint on this observation by demonstrating that the WDC implies a local error bound condition with parameter \(\mu\). This allows us to show a GAN inversion convergence result for subgradient descent.

**Corollary 7**.: _(GAN Inversion for Networks that satisfy WDC) Let \(\epsilon\) be fixed such that \(K_{1}L^{8}\epsilon^{1/4}\leq 1\), where \(L\) is the number of layers of the GAN generator and \(K_{1}\) is an absolute constant._
_Suppose that for all \(i\in[L]\), \(W_{i}\) satisfies the WDC with parameter \(\epsilon\). Suppose we initialize the iterates \(z^{0}\) of Algorithm 1 such that_

\[z^{0}\notin\mathcal{B}(z^{*},K_{2}L^{3}\epsilon^{1/4}\left\|z^{*}\right\|_{2})\cup\mathcal{B}(-\kappa z^{*},K_{2}L^{13}\epsilon^{1/4}\left\|z^{*}\right\|_{2})\cup\{0\} \tag{12}\]

_where \(\mathcal{B}(c,r)\) denotes an \(\ell_{2}\) ball with center \(c\) and radius \(r\), \(K_{2}\) denotes an absolute constant, and \(\kappa\in(0,1)\). Let \(\rho\) and \(-\epsilon\) be the maximum and minimum eigenvalues of the Hessian of the loss. Then, there exists \(\mu>0\) such that the local error bound condition holds. Under the same assumptions as Theorem 6, we also have that subgradient descent converges linearly to the global optimum with rate \(\left(1-4\eta^{2}\mu^{2}\left(\frac{3}{4}-\frac{\eta\rho}{2}\right)+4\eta\epsilon\right)\)._

To further illustrate the generality of the local error bound condition, we show in Section 6.1 that the local error bound condition can hold not only for random networks, but also for certain classes of non-random networks.

## 6 Experiments

In this section, we provide experiments to verify the local error bound condition, as well as demonstrate the success of our approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets.

### 6.1 Verification of the Local Error Bound Condition

By studying a realizable RED problem instance, we demonstrate that the local error bound condition holds for a variety of random and non-random GANs. First, we set up a binary classification task on data \(x\) generated from a one-layer GAN \(G(z)=\sigma(Wz)\) with \(W\in\mathbb{R}^{m\times d}\). For a fixed classification network \(\psi(x)\), we generate adversarial attacks. Since our problem is realizable, we can compute the error bound parameter \(\mu\) exactly. The full experimental setup is given in the Appendix.

**Random GAN.** We begin by verifying Corollary 7 when \(W\) is a random matrix. We run our alternating optimization algorithm for \(10\) test examples and observe that the iterates always converge to the global optimizer, corroborating our theoretical results. Moreover, Figure 1 shows the effect of the expansiveness of the GAN on the local error bound parameter \(\mu\). Many existing results on random GAN inversion assume expansiveness of the GAN (\(m\gg d\)) to prove a benign optimization landscape. By examining \(\mu\) instead, our results offer a different viewpoint. Recall that our convergence theory (Theorem 3) shows that as \(\mu\) increases, we expect a faster convergence rate. Thus, Figure 1 gives further evidence that expansiveness helps optimization and leads to a better landscape.

Figure 1: We show the output dimension \(m\) vs the computed \(\mu\) averaged over the optimization path of 10 test examples for a GAN with random weights and latent space dimension \(d=10\).

**Non-Random GAN.** To illustrate an example of a non-random network that can still satisfy the local error bound condition, consider a GAN with latent space dimension \(d=2\) and output dimension \(m=100\). Suppose that the rows of \(W\) are spanned by the two orthonormal vectors \(\left[-\sqrt{2}/2\quad\sqrt{2}/2\right]\) and \(\left[\sqrt{2}/2\quad\sqrt{2}/2\right]\). The distribution of these rows is far from the uniform distribution on the unit sphere, and also does not satisfy the Weight Distribution Condition (WDC) from Corollary 7 for small values of \(\epsilon\) (in [15], \(\epsilon\) must be less than \(\frac{1}{d^{0}}\), which is a very small number even for \(d=2\)). However, the optimization landscape is still benign, and we can reliably converge to the global optimum. For this problem, with an initialization of \(z\) as a standard normal random variable and \(c_{a}\) initialized to the all-zeros vector, we observe an average \(\mu\) value of \(0.013\) over different random initializations. Since \(d=2\), we can plot the landscape for the GAN inversion problem when we set \(c_{a}^{*}=c_{a}^{k}=0\); this is shown in Figure 2 and confirms the benign landscape.

Figure 2: The optimization landscape for a 2-D GAN inversion problem with weights spanned by two orthonormal vectors. See text for details.

Examples of more non-random networks and corresponding values of \(\mu\) can be found in the Appendix. A small sketch of this 2-D construction is given below.
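The following NumPy sketch gives one plausible instantiation of such a non-random weight matrix (each row a random sign times one of the two spanning vectors; this row assignment is our own assumption for illustration, and the precise construction used in the experiments is in the Appendix, as is the full setup). It evaluates the GAN inversion landscape \(f(z)=\|\sigma(Wz^{*})-\sigma(Wz)\|_{2}^{2}\) on a grid, which is how a plot like Figure 2 can be produced; the sigmoid activation is also an illustrative choice.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
d, m = 2, 100
u1 = np.array([-np.sqrt(2) / 2, np.sqrt(2) / 2])
u2 = np.array([np.sqrt(2) / 2, np.sqrt(2) / 2])
# Hypothetical instantiation: each row is a random sign times u1 or u2.
basis = np.stack([u1, u2])
W = rng.choice([-1.0, 1.0], size=(m, 1)) * basis[rng.integers(0, 2, size=m)]

z_star = rng.standard_normal(d)
x = sigmoid(W @ z_star)  # realizable target G(z*)

# Evaluate f(z) = ||x - sigmoid(W z)||^2 on a 2-D grid around the origin.
grid = np.linspace(-3, 3, 121)
landscape = np.array([[np.linalg.norm(x - sigmoid(W @ np.array([a, b]))) ** 2
                       for a in grid] for b in grid])
print("min over grid:", landscape.min())  # ~0 near z = z*
```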
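As an illustration of the BSD-GAN-AD selection rule, here is a minimal NumPy sketch; the block layout of \(D_{a}\) and the variable names are assumptions for this example, and `g_zk`, `c_k` denote \(G(z^{k})\) and \(c_{a}^{k}\) produced by Algorithm 1.

```python
import numpy as np

def classify_attack(x_adv, g_zk, D_a, c_k, blocks):
    """Return the attack-type index j minimizing the residual
    ||x' - G(z^k) - D_a[i][j] c_a[i][j]||_2 over class/attack blocks.

    `blocks[(i, j)]` holds the column indices of D_a for signal class i
    and attack type j (an assumed layout for this sketch)."""
    best, best_err = None, np.inf
    for (i, j), cols in blocks.items():
        err = np.linalg.norm(x_adv - g_zk - D_a[:, cols] @ c_k[cols])
        if err < best_err:
            best, best_err = (i, j), err
    return best[1]  # the attack label j-hat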
### 6.2 Reverse Engineering of Deceptions on Real Data

**Experimental Setup.** We consider the family of \(\{\ell_{1},\ell_{2},\ell_{\infty}\}\) PGD attacks; the full experimental details of the attacks and network architectures can be found in the Appendix. We use a pretrained DCGAN, Wasserstein-GAN, and StyleGAN-XL for the MNIST, Fashion-MNIST and CIFAR-10 datasets respectively [34; 2; 37; 20; 46; 19]. The attack dictionary \(D_{a}\) contains \(\ell_{p}\) attacks for \(p\in\{1,2,\infty\}\) evaluated on \(200\) training examples per class. It is divided into blocks where each block corresponds to a signal class and attack type pair, i.e., block \((i,j)\) of \(D_{a}\) denotes signal class \(i\) and \(\ell_{p}\) attack type \(j\).

**Signal Classification Baselines.** We consider a variety of baselines for the signal classification task. It is important to note that the main task in the RED problem is not to develop a better defense, but rather to correctly classify the threat model in a principled manner. Despite this, we compare to various adversarial training mechanisms designed to defend against a union of threat models. The first baselines are \(M_{1}\), \(M_{2}\) and \(M_{\infty}\), which are adversarial training algorithms for \(\ell_{1}\), \(\ell_{2}\) and \(\ell_{\infty}\) attacks respectively. We then compare to the SOTA specialized adversarial training algorithm, known as MSD [26; 44]. Lastly, we compare to the structured block-sparse classifier (SBSC) from [43], which relies on a union of linear subspaces assumption on the data.

**Attack Classification Baselines.** Even though the RED problem is understudied, the approach most related to our work is [43], which is denoted as the structured block-sparse attack detector (SBSAD).

**Algorithm.** To jointly classify the signal and attack for an adversarial example \(x^{\prime}\) computed on a classification network \(\psi\), we run Algorithm 1. We initialize \(z^{0}\) to the solution of the Defense-GAN method applied to \(x^{\prime}\), which runs GAN inversion on \(x^{\prime}\) directly [36]. Our methods are:

1. BSD-GAN (Block-Sparse Defense GAN): The signal classifier that runs Algorithm 1 and then uses \(G(z^{k})\) as input to the classification network \(\psi\) to generate a label.
2. BSD-GAN-AD (Block-Sparse Defense GAN Attack Detector): This method returns the block \(\hat{j}\) of the attack dictionary \(D_{a}\) that minimizes the reconstruction loss \(\left\|x^{\prime}-G(z^{k})-D_{a}[i][\hat{j}]c_{a}[i][\hat{j}]\right\|_{2}\) over all \(i\); a sketch of this selection rule is given below.

Further experimental details such as step sizes and initialization details can be found in the Appendix.
#### 6.2.1 MNIST and Fashion-MNIST

For both the MNIST and Fashion-MNIST datasets, we expect that the method from [43] will not work well, since the data does not lie in a union of linear subspaces. Tables 1 and 2 show the signal and attack classification results for the two datasets. Surprisingly, even for the MNIST dataset, the baselines from [43] are better than the adversarial training baselines at signal classification, and they are also able to successfully classify the attack on average. However, our approach improves upon this method, since the GAN is a better model of the clean data distribution. The improved data model results in not only higher signal classification accuracy on average, but also significantly higher attack classification accuracy, since the signal error is lower. We also observe that for attack classification, discerning between \(\ell_{2}\) and \(\ell_{1}\) attacks is difficult, which is a phenomenon consistent with other works on the RED problem [28; 43].

Table 1: Adversarial image and attack classification accuracy on digit classification of the MNIST dataset. SBSC denotes the structured block-sparse signal classifier and SBSAD denotes the structured block-sparse attack detector. BSD-GAN and BSD-GAN-AD are the Block-Sparse Defense GAN and Block-Sparse Defense GAN Attack Detector respectively.

| **MNIST** | CNN | \(M_{\infty}\) | \(M_{2}\) | \(M_{1}\) | MSD | SBSC | BSD-GAN | SBSAD | BSD-GAN-AD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Clean accuracy | 98.99% | 99.1% | 99.2% | 99.0% | 98.3% | 92% | 94% | - | - |
| \(\ell_{\infty}\) PGD (\(\epsilon=0.3\)) | 0.03% | **90.3%** | 0.4% | 0.0% | 62.7% | 77.27% | 75.3% | 73.2% | **92.3%** |
| \(\ell_{2}\) PGD (\(\epsilon=2.0\)) | 44.13% | 68.8% | 69.2% | 38.7% | 70.2% | 85.34% | **89.6%** | 46% | **63%** |
| \(\ell_{1}\) PGD (\(\epsilon=10.0\)) | 41.98% | 61.8% | 51.1% | 74.6% | 70.4% | 85.97% | **87.8%** | 36.6% | **95.8%** |
| Average | 28.71% | 73.63% | 40.23% | 37.77% | 67.76% | 82.82% | **84.23%** | 51.93% | **83.7%** |

Table 2: Adversarial image and attack classification accuracy on the Fashion-MNIST dataset. See Table 1 for column descriptions.

| **Fashion-MNIST** | CNN | SBSC | BSD-GAN | SBSAD | BSD-GAN-AD |
| --- | --- | --- | --- | --- | --- |
| \(\ell_{\infty}\) PGD (\(\epsilon=0.3\)) | 2% | 16% | **63%** | 30% | **42%** |
| \(\ell_{2}\) PGD (\(\epsilon=2.0\)) | 10% | 20% | **68%** | 55% | **59%** |
| \(\ell_{1}\) PGD (\(\epsilon=10.0\)) | 12% | 35% | **68%** | 15% | **48%** |
| Average | 8% | 23.67% | **66.33%** | 33.33% | **49.66%** |

#### 6.2.2 CIFAR-10

We use a class-conditional StyleGAN-XL to model the clean CIFAR-10 data and a WideResnet as the classification network, which achieves 96% clean test accuracy. As many works have observed the ease of inverting StyleGANs in the \(\mathcal{W}+\) space (the space generated after the mapping network), we invert in this space [1]. We initialize the iterates of the GAN inversion problem to a vector in \(\mathcal{W}+\) that is generated by the mapping network applied to a random \(z\) and a random class. Interestingly, the GAN inversion problem usually converges to an image of the correct class regardless of the class of the initialization, suggesting a benign landscape of the class-conditional StyleGAN. Our results in Table 3 show an approximately 60% improvement in signal classification accuracy on CIFAR-10 using the GAN model as opposed to the model from [43]. The attack classification accuracy also improves on average from 37% to 56% compared to the model that uses the linear subspace assumption for the data. However, for \(\ell_{\infty}\) and \(\ell_{1}\) attacks, we do not observe very high attack classification accuracy. We conjecture that this is due to the complexity of the underlying classification network, which is a WideResnet [48]. Namely, the results of [43] show that the attack dictionary model is valid only for fully connected locally linear networks. Extending the attack model to handle a wider class of networks is an important future direction.

Table 3: Adversarial image and attack classification accuracy on the CIFAR-10 dataset for 100 test examples. See Table 1 for column descriptions.

| **CIFAR-10** | CNN | SBSC | BSD-GAN | SBSAD | BSD-GAN-AD |
| --- | --- | --- | --- | --- | --- |
| \(\ell_{\infty}\) PGD (\(\epsilon=0.03\)) | 0% | 15% | **76%** | 14% | **48%** |
| \(\ell_{2}\) PGD (\(\epsilon=0.05\)) | 0% | 18% | **87%** | 36% | **77%** |
| \(\ell_{1}\) PGD (\(\epsilon=12.0\)) | 0% | 18% | **71%** | **63%** | 44% |
| Average | 0% | 17% | **78%** | 37.66% | **56%** |

## 7 Conclusion

In this paper, we proposed a GAN inversion-based approach to reverse engineering adversarial attacks with provable guarantees. In particular, we relax the assumption in prior work that clean data lies in a union of linear subspaces, and instead leverage the power of nonlinear deep generative models to model the data distribution. For the corresponding nonconvex inverse problem, under local error bound conditions, we demonstrated linear convergence to global optima. Finally, we empirically demonstrated the strength of our model on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. We believe our work has many promising future directions, such as verifying the local error bound conditions theoretically, as well as relaxing them further to understand the benign optimization landscape of inverting deep generative models.